CN108062785A - Face image processing method and apparatus, and computing device - Google Patents

Face image processing method and apparatus, and computing device

Info

Publication number
CN108062785A
CN108062785A (application CN201810146374.2A)
Authority
CN
China
Prior art keywords
facial
mesh point
image
face
three-dimensional mesh
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810146374.2A
Other languages
Chinese (zh)
Inventor
眭帆
眭一帆
张望
Current Assignee
Beijing Qihoo Technology Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd
Priority to CN201810146374.2A
Publication of CN108062785A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/04: Texture mapping

Abstract

The invention discloses a face image processing method and apparatus, and a computing device. The method includes: determining a texture mapping relationship between each first mesh point in a first facial three-dimensional mesh corresponding to a target face image and the texture pixel corresponding to that mesh point; determining a second facial three-dimensional mesh corresponding to an original face image to be processed, where each second mesh point in the second facial three-dimensional mesh corresponds one-to-one with a first mesh point in the first facial three-dimensional mesh; and generating, according to the texture mapping relationship and the correspondence between the second mesh points and the first mesh points, a processed face image corresponding to the second mesh points in the second facial three-dimensional mesh. With this method, the face in one frame image can be replaced automatically with the face in another frame image without manual operation, and the realism of the resulting image is greatly improved.

Description

Face image processing method and apparatus, and computing device
Technical field
The present invention relates to the field of image processing, and in particular to a face image processing method and apparatus, and a computing device.
Background technology
With the development of science and technology, image capture devices have improved continually. Video recorded with such devices has become clearer, and its resolution and display quality have improved greatly. To make video more entertaining and to meet user needs, it is sometimes necessary to replace the face in one frame image with the face in another frame image.
However, the inventors found during implementation of the present invention that, in the prior art, replacing the face in one frame image with the face in another frame image is generally done by retouching manually with PS (Adobe Photoshop) software. Images obtained in this way usually look distorted, and when the face in the image is at a special angle, such as a profile view or a face tilted up or down, the face in one frame image cannot be effectively replaced onto the face of another frame image.
Summary of the invention
In view of the above problems, the present invention is proposed to provide a face image processing method and apparatus, and a computing device, that overcome the above problems or at least partly solve them.
According to an aspect of the invention, there is provided a face image processing method, including:
determining a texture mapping relationship between each first mesh point in a first facial three-dimensional mesh corresponding to a target face image and the texture pixel corresponding to that mesh point;
determining a second facial three-dimensional mesh corresponding to an original face image to be processed, where each second mesh point in the second facial three-dimensional mesh corresponds one-to-one with a first mesh point in the first facial three-dimensional mesh; and
generating, according to the texture mapping relationship and the correspondence between the second mesh points and the first mesh points, a processed face image corresponding to the second mesh points in the second facial three-dimensional mesh.
Optionally, the step of determining the texture mapping relationship between each first mesh point in the first facial three-dimensional mesh corresponding to the target face image and its corresponding texture pixel specifically includes:
extracting each first key point contained in the target face image, and determining the first facial three-dimensional mesh corresponding to the target face image according to the first key points; and
determining, according to the target face image and the first facial three-dimensional mesh, the texture mapping relationship between each first mesh point in the first facial three-dimensional mesh and its corresponding texture pixel.
Optionally, the step of determining, according to the target face image and the first facial three-dimensional mesh, the texture mapping relationship between each first mesh point in the first facial three-dimensional mesh and its corresponding texture pixel specifically includes:
determining, respectively, a first mesh identifier for each first mesh point in the first facial three-dimensional mesh;
determining, according to the target face image, the pixel value and the texture coordinate value of the texture pixel corresponding to each first mesh identifier; and
generating the texture mapping relationship from the pixel value and texture coordinate value of the texture pixel corresponding to each first mesh identifier.
Optionally, the step of generating, according to the texture mapping relationship and the correspondence between the second mesh points and the first mesh points, the processed face image corresponding to the second mesh points in the second facial three-dimensional mesh specifically includes:
for each second mesh point in the second facial three-dimensional mesh, determining the second mesh identifier of that second mesh point and the first mesh identifier identical to that second mesh identifier; querying, according to the texture mapping relationship, the pixel value and texture coordinate value of the texture pixel corresponding to the first mesh identifier identical to that second mesh identifier; and applying the texture to that second mesh point according to the query result;
where the second mesh identifier of each second mesh point is identical to the first mesh identifier of its corresponding first mesh point.
Optionally, the first mesh points in the first facial three-dimensional mesh further form first preset-region mesh groups each composed of multiple first mesh points, and the second mesh points in the second facial three-dimensional mesh further form second preset-region mesh groups each composed of multiple second mesh points; moreover, each first preset-region mesh group corresponds one-to-one with a second preset-region mesh group;
the step of generating, according to the texture mapping relationship and the correspondence between the second mesh points and the first mesh points, the processed face image corresponding to the second mesh points in the second facial three-dimensional mesh then specifically includes:
for each second preset-region mesh group, determining the first preset-region mesh group corresponding to that second preset-region mesh group and the pixel values and texture coordinate values of the texture pixels corresponding to that first preset-region mesh group, and applying the texture to that second preset-region mesh group according to those pixel values and texture coordinate values;
where the first preset-region mesh group and/or the second preset-region mesh group include: a region mesh group corresponding to the face contour, and/or a region mesh group corresponding to the facial features.
Optionally, the first facial three-dimensional mesh and/or the second facial three-dimensional mesh include: a frontal-face three-dimensional mesh corresponding to a frontal face image, and/or a profile three-dimensional mesh carrying rotation angle information.
Optionally, before the step of determining the texture mapping relationship between each first mesh point in the first facial three-dimensional mesh corresponding to the target face image and its corresponding texture pixel, the method further comprises:
performing preset processing on the target face image, where the preset processing includes gradient processing and/or whitening processing.
Optionally, the step of performing preset processing on the target face image specifically includes:
extracting original pixel information of the original face image to be processed, and performing the preset processing on the target face image according to the original pixel information.
Optionally, the step of determining the second facial three-dimensional mesh corresponding to the original face image to be processed specifically includes:
extracting each second key point contained in the original face image to be processed, and determining the second facial three-dimensional mesh corresponding to the original face image to be processed according to the second key points.
Optionally, before the step of extracting each second key point contained in the original face image to be processed, the method further comprises:
acquiring in real time the current frame image contained in a live video stream, and extracting the original face image from the current frame image; or
acquiring in turn each frame image contained in a recorded video stream, and extracting the original face image from the current frame image acquired.
Optionally, after the step of generating the processed face image corresponding to the second mesh points in the second facial three-dimensional mesh, the method further comprises:
overlaying the processed face image on the current frame image to obtain a processed current frame image, and replacing the pre-processing current frame image with the processed current frame image.
Optionally, the number of first mesh points contained in the first facial three-dimensional mesh is identical to the number of second mesh points contained in the second facial three-dimensional mesh, and the first mesh point corresponding to each second mesh point has the same texture mapping relationship.
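The three claimed steps can be sketched end to end. This is a minimal illustration under assumed data structures (a mesh as a dict from mesh identifiers to 3D points, an image as a nested list of pixel values); all names are hypothetical, not from the patent:

```python
# A minimal sketch of the claimed pipeline, under assumed data structures.
# All names here are illustrative, not from the patent.

def build_texture_mapping(target_image, first_mesh):
    """Step 1: map each first-mesh identifier to its texture pixel and UV."""
    mapping = {}
    for mesh_id, (x, y, _z) in first_mesh.items():
        pixel = target_image[int(y)][int(x)]   # pixel value at projected point
        mapping[mesh_id] = (pixel, (x, y))     # (pixel value, texture coords)
    return mapping

def process_face(target_image, first_mesh, second_mesh):
    """Steps 2-3: texture every second-mesh point from its first-mesh twin."""
    mapping = build_texture_mapping(target_image, first_mesh)
    # Identifiers correspond one-to-one, so lookup is a plain dict access.
    return {mesh_id: mapping[mesh_id] for mesh_id in second_mesh}

# Toy 2x2 "image" and two-point meshes with matching identifiers.
image = [[10, 20], [30, 40]]
first = {"lip_left": (0, 0, 0.1), "lip_right": (1, 1, 0.1)}
second = {"lip_left": (0, 1, 0.2), "lip_right": (1, 0, 0.2)}
result = process_face(image, first, second)
print(result["lip_left"][0])   # prints 10
```

Note the second mesh points sit at different positions than their first-mesh twins, yet inherit identical pixel values, which is exactly the property the preceding paragraph states.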
According to another aspect of the present invention, there is provided a face image processing apparatus, including:
a first determining module, adapted to determine a texture mapping relationship between each first mesh point in a first facial three-dimensional mesh corresponding to a target face image and its corresponding texture pixel;
a second determining module, adapted to determine a second facial three-dimensional mesh corresponding to an original face image to be processed, where each second mesh point in the second facial three-dimensional mesh corresponds one-to-one with a first mesh point in the first facial three-dimensional mesh; and
a generating module, adapted to generate, according to the texture mapping relationship and the correspondence between the second mesh points and the first mesh points, a processed face image corresponding to the second mesh points in the second facial three-dimensional mesh.
Optionally, the first determining module is specifically adapted to:
extract each first key point contained in the target face image, and determine the first facial three-dimensional mesh corresponding to the target face image according to the first key points; and
determine, according to the target face image and the first facial three-dimensional mesh, the texture mapping relationship between each first mesh point in the first facial three-dimensional mesh and its corresponding texture pixel.
Optionally, the first determining module is specifically adapted to:
determine, respectively, a first mesh identifier for each first mesh point in the first facial three-dimensional mesh;
determine, according to the target face image, the pixel value and the texture coordinate value of the texture pixel corresponding to each first mesh identifier; and
generate the texture mapping relationship from the pixel value and texture coordinate value of the texture pixel corresponding to each first mesh identifier.
Optionally, the generating module is specifically adapted to:
for each second mesh point in the second facial three-dimensional mesh, determine the second mesh identifier of that second mesh point and the first mesh identifier identical to that second mesh identifier; query, according to the texture mapping relationship, the pixel value and texture coordinate value of the texture pixel corresponding to the first mesh identifier identical to that second mesh identifier; and apply the texture to that second mesh point according to the query result;
where the second mesh identifier of each second mesh point is identical to the first mesh identifier of its corresponding first mesh point.
Optionally, the first mesh points in the first facial three-dimensional mesh further form first preset-region mesh groups each composed of multiple first mesh points, and the second mesh points in the second facial three-dimensional mesh further form second preset-region mesh groups each composed of multiple second mesh points; moreover, each first preset-region mesh group corresponds one-to-one with a second preset-region mesh group;
the generating module is then specifically adapted to:
for each second preset-region mesh group, determine the first preset-region mesh group corresponding to that second preset-region mesh group and the pixel values and texture coordinate values of the texture pixels corresponding to that first preset-region mesh group, and apply the texture to that second preset-region mesh group according to those pixel values and texture coordinate values;
where the first preset-region mesh group and/or the second preset-region mesh group include: a region mesh group corresponding to the face contour, and/or a region mesh group corresponding to the facial features.
Optionally, the first facial three-dimensional mesh and/or the second facial three-dimensional mesh include: a frontal-face three-dimensional mesh corresponding to a frontal face image, and/or a profile three-dimensional mesh carrying rotation angle information.
Optionally, the apparatus further comprises a preset processing module, adapted to:
perform preset processing on the target face image, where the preset processing includes gradient processing and/or whitening processing.
Optionally, the preset processing module is specifically adapted to:
extract original pixel information of the original face image to be processed, and perform the preset processing on the target face image according to the original pixel information.
Optionally, the second determining module is specifically adapted to:
extract each second key point contained in the original face image to be processed, and determine the second facial three-dimensional mesh corresponding to the original face image to be processed according to the second key points.
Optionally, the second determining module is further adapted to:
acquire in real time the current frame image contained in a live video stream, and extract the original face image from the current frame image; or
acquire in turn each frame image contained in a recorded video stream, and extract the original face image from the current frame image acquired.
Optionally, the apparatus further comprises a replacing module, adapted to:
overlay the processed face image on the current frame image to obtain a processed current frame image, and replace the pre-processing current frame image with the processed current frame image.
Optionally, the number of first mesh points contained in the first facial three-dimensional mesh is identical to the number of second mesh points contained in the second facial three-dimensional mesh, and the first mesh point corresponding to each second mesh point has the same texture mapping relationship.
According to another aspect of the invention, there is provided a computing device, including a processor, a memory, a communication interface, and a communication bus, where the processor, the memory, and the communication interface communicate with one another via the communication bus;
the memory is adapted to store at least one executable instruction, and the executable instruction causes the processor to perform operations corresponding to the above face image processing method.
In accordance with a further aspect of the invention, there is provided a computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to perform operations corresponding to the above face image processing method.
With the face image processing method and apparatus and the computing device provided by this embodiment, the texture mapping relationship between each first mesh point in the first facial three-dimensional mesh corresponding to the target face image and its corresponding texture pixel is first determined; the second facial three-dimensional mesh corresponding to the original face image to be processed is then determined; and finally the processed face image corresponding to the second mesh points in the second facial three-dimensional mesh is generated according to the texture mapping relationship and the correspondence between the second mesh points and the first mesh points. With this method, the face in a frame image of picture data or of video data can be replaced automatically with the face in another frame image of video data or picture data, without manual operation, and the realism of the replaced image is greatly improved. Moreover, since the first facial three-dimensional mesh and the second facial three-dimensional mesh are both stereoscopic three-dimensional meshes with rotation angles, the method effectively solves the problem that, when the face in an image is at a special angle, such as a profile view or a face tilted up or down, the face in one frame image cannot be effectively replaced onto the face of another frame image.
The above description is only an overview of the technical solution of the present invention. To make the technical means of the present invention clearer so that it can be practiced according to the contents of the specification, and to make the above and other objects, features, and advantages of the present invention more comprehensible, specific embodiments of the present invention are set forth below.
Description of the drawings
By reading the following detailed description of the preferred embodiments, various other advantages and benefits will become clear to those of ordinary skill in the art. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered limiting of the present invention. Throughout the drawings, the same reference numerals denote the same parts. In the drawings:
Fig. 1 shows a flow chart of a face image processing method according to an embodiment of the present invention;
Fig. 2 shows a flow chart of a face image processing method according to another embodiment of the present invention;
Fig. 3 shows a functional block diagram of a face image processing apparatus according to an embodiment of the present invention;
Fig. 4 shows a structural diagram of a computing device according to an embodiment of the present invention.
Specific embodiment
Exemplary embodiments of the present disclosure are described in more detail below with reference to the drawings. Although the drawings show exemplary embodiments of the disclosure, it should be understood that the disclosure may be implemented in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided so that the disclosure will be understood more thoroughly and its scope conveyed completely to those skilled in the art.
Fig. 1 shows a flow chart of a face image processing method according to an embodiment of the present invention. As shown in Fig. 1, the face image processing method specifically comprises the following steps:
Step S101: determine a texture mapping relationship between each first mesh point in the first facial three-dimensional mesh corresponding to the target face image and its corresponding texture pixel.
The target face image may be a face image in picture data or in a frame image of video data. Specifically, each first key point contained in the target face image can be extracted, and the first facial three-dimensional mesh corresponding to the target face image can then be determined from those first key points. The first facial three-dimensional mesh is used to determine the stereo profile of key facial positions, and its concrete form can be set flexibly by those skilled in the art. A first mesh identifier is then added to each first mesh point in the first facial three-dimensional mesh, the pixel value and texture coordinate value corresponding to each first mesh identifier are obtained, and the texture mapping relationship is generated from the pixel value and texture coordinate value of the texture pixel corresponding to each first mesh identifier. The texture mapping relationship is used to determine the mapping between each first mesh point and the pixel value and texture coordinate value of its corresponding pixel. It can be seen that, through the texture mapping relationship, on the one hand, the pixel value of the pixel corresponding to each first mesh point can be determined (for example, the pixel value of the pixel corresponding to a first mesh point at the lip edge is determined to be a value corresponding to the lip color); on the other hand, the texture coordinate value of each first mesh point, i.e. its position coordinates, can be determined. In specific implementation, the texture coordinate value can be represented in diverse forms; for example, it can be represented by a coordinate value in a preset coordinate system, or by the relative positional relationship between the first mesh points in the first facial three-dimensional mesh. The present invention does not limit the concrete form of the texture coordinate value, as long as it reflects the positional information of the corresponding mesh point within the first facial three-dimensional mesh. Each first key point includes a characteristic point corresponding to the facial features and/or the face contour, and can be obtained by means of deep learning or in other ways. Besides the above way, the texture mapping relationship between each first mesh point in the first facial three-dimensional mesh corresponding to the target face image and its corresponding texture pixel can also be determined in other ways.
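As a concrete, hypothetical illustration of step S101, the sketch below assigns identifiers to the first mesh points and records, for each, the pixel value and a normalized texture coordinate (one of the "diverse forms" the text allows). The function and variable names are assumptions for illustration only:

```python
# Hypothetical sketch of step S101: assign an identifier to each first mesh
# point, then record its pixel value and a normalized texture coordinate.

def texture_mapping_relation(target_image, mesh_points):
    """mesh_points: list of (x, y) projections of first mesh points."""
    h, w = len(target_image), len(target_image[0])
    relation = {}
    for mesh_id, (x, y) in enumerate(mesh_points):
        pixel = target_image[y][x]
        uv = (x / (w - 1), y / (h - 1))   # texture coords normalized to [0, 1]
        relation[mesh_id] = {"pixel": pixel, "uv": uv}
    return relation

img = [[0, 50], [100, 150]]               # toy 2x2 grayscale "image"
rel = texture_mapping_relation(img, [(0, 0), (1, 1)])
print(rel[1])   # {'pixel': 150, 'uv': (1.0, 1.0)}
```

A preset-coordinate-system representation, the other form the text mentions, would simply store `(x, y)` unnormalized; only the `uv` line changes.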
Step S102: determine a second facial three-dimensional mesh corresponding to the original face image to be processed, where each second mesh point in the second facial three-dimensional mesh corresponds one-to-one with a first mesh point in the first facial three-dimensional mesh.
The original face image to be processed may be a frame image obtained, in real time or not, from acquired video data, or a frame image in picture data. Each second key point in the original face image to be processed can be extracted, and the second facial three-dimensional mesh corresponding to the original face image to be processed can then be determined from those second key points. Besides this way, the second facial three-dimensional mesh corresponding to the original face image to be processed can also be determined in other ways. Each second key point includes a characteristic point corresponding to the facial features and/or the face contour, and can be obtained by means of deep learning or in other ways. Moreover, each second mesh point in the second facial three-dimensional mesh corresponds one-to-one with a first mesh point in the first facial three-dimensional mesh. Optionally, the first mesh points in the first facial three-dimensional mesh can be scaled up or down so that the second mesh points in the second facial three-dimensional mesh correspond one-to-one with the first mesh points in the first facial three-dimensional mesh.
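The optional scaling mentioned above, which resizes the first mesh so its points align with the second mesh, can be sketched as uniform scaling about the mesh centroid. The patent leaves the alignment method open, so this is only one assumed realization:

```python
# Assumed realization of the optional scaling step: scale (x, y, z) first
# mesh points uniformly about their centroid by a chosen factor.

def scale_mesh(points, factor):
    """Scale a list of (x, y, z) points uniformly about their centroid."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    return [(cx + (x - cx) * factor,
             cy + (y - cy) * factor,
             cz + (z - cz) * factor) for x, y, z in points]

mesh = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (1.0, 2.0, 0.0)]
scaled = scale_mesh(mesh, 0.5)
print(scaled[0])   # first point pulled halfway toward the centroid
```

The factor could be chosen, for instance, from the ratio of the two faces' bounding-box sizes; the one-to-one identifier correspondence is unaffected because scaling does not reorder or renumber points.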
Step S103: generate, according to the texture mapping relationship and the correspondence between the second mesh points and the first mesh points, a processed face image corresponding to the second mesh points in the second facial three-dimensional mesh.
In this step, the correspondence between each first mesh point in the first facial three-dimensional mesh and each second mesh point in the second facial three-dimensional mesh can first be determined. Then, according to the texture mapping relationship between each first mesh point in the first facial three-dimensional mesh and its corresponding texture pixel, together with the correspondence between the first mesh points and the second mesh points, the texture pixel having a texture mapping relationship with each second mesh point in the second facial three-dimensional mesh is determined. Finally, each second mesh point is textured according to the texture mapping relationship between the second mesh points in the second facial three-dimensional mesh and their corresponding texture pixels, thereby generating the processed face image corresponding to the second mesh points in the second facial three-dimensional mesh. It can be seen that the first mesh point corresponding to each second mesh point in the second facial three-dimensional mesh has the same texture mapping relationship. In plain terms, relative to the first facial three-dimensional mesh, the position of each second mesh point in the second facial three-dimensional mesh may differ slightly from the position of its corresponding first mesh point, but the pixel value corresponding to each second mesh point is identical to the pixel value of its corresponding first mesh point.
Specifically, when determining the correspondence between the first mesh points in the first facial three-dimensional mesh and the second mesh points in the second facial three-dimensional mesh, identical or corresponding mesh identifiers can be set for each first mesh point and the second mesh point corresponding to it. For example, for the first mesh point corresponding to the left edge of the lips in the first facial three-dimensional mesh, the corresponding first mesh identifier is set to 10; correspondingly, for the second mesh point corresponding to the left edge of the lips in the second facial three-dimensional mesh, the corresponding second mesh identifier is set to 10'. It can be seen that, through the mesh identifiers, the first mesh point corresponding to each second mesh point can be determined quickly. In addition, the correspondence between the first mesh points in the first facial three-dimensional mesh and the second mesh points in the second facial three-dimensional mesh can also be determined in other ways; for example, a look-up table can be set for querying the first mesh point corresponding to each second mesh point.
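The identifier scheme in the example above (first identifier 10, second identifier 10') can be sketched as a small correspondence table plus a query into the texture mapping relationship; all identifiers and values here are illustrative:

```python
# Illustrative lookup: a second-mesh identifier resolves, via a
# correspondence table, to the first mesh point whose texture it inherits.

texture_relation = {10: {"pixel": (200, 120, 110), "uv": (0.31, 0.72)}}
correspondence = {"10'": 10}   # second-mesh id -> first-mesh id

def texture_for(second_id):
    first_id = correspondence[second_id]   # find the twin first mesh point
    return texture_relation[first_id]      # query the texture mapping relation

print(texture_for("10'")["uv"])   # (0.31, 0.72)
```

The alternative the text mentions, a full look-up table, is exactly the `correspondence` dict with one entry per second mesh point.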
With the face image processing method provided by this embodiment, the texture mapping relationship between each first mesh point in the first facial three-dimensional mesh corresponding to the target face image and its corresponding texture pixel is first determined; the second facial three-dimensional mesh corresponding to the original face image to be processed is then determined; and finally the processed face image corresponding to the second mesh points in the second facial three-dimensional mesh is generated according to the texture mapping relationship and the correspondence between the second mesh points and the first mesh points. With this method, the face in a frame image of picture data or of video data can be replaced automatically with the face in another frame image of video data or picture data, without manual operation, and the realism of the resulting image is greatly improved. Moreover, since the first facial three-dimensional mesh and the second facial three-dimensional mesh are both stereoscopic three-dimensional meshes with rotation angles, the method effectively solves the problem that, when the face in an image is at a special angle, such as a profile view or a face tilted up or down, the face in one frame image cannot be effectively replaced onto the face of another frame image.
Fig. 2 shows a flow chart of a face image processing method according to another embodiment of the present invention. As shown in Fig. 2, the face image processing method specifically comprises the following steps:
Step S201: perform preset processing on the target face image, where the preset processing includes gradient processing and/or whitening processing.
The target face image may be a face image in picture data or in a frame image of video data. Specifically, to make the target face image blend better with the image background of the original face image, to improve realism, and to beautify the target face image, the original pixel information of the original face image to be processed can be extracted, and the preset processing can be performed on the target face image according to that original pixel information. For example, the original pixel information around the face mask of the original face image can be extracted and gradient processing applied to the pixels around the face mask of the target face image; or the pixels of the frontal face in the original face image can be extracted and the corresponding pixels in the target face image replaced with those pixels of the original face image, so that the target face image and the original face image blend better and realism is improved. Optionally, to beautify the target face image further, whitening or various other beautifying processing can be applied to it. By performing this step, the target face image blends better with the background of the original face image and is further beautified.
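The gradient processing described above can be sketched on a one-dimensional strip of pixels standing in for the neighborhood of the face contour: target-face pixels are blended toward the original image's surrounding pixels with a weight that falls off near the mask edge. The blend rule is an assumed, common choice, not taken from the patent:

```python
# Toy sketch of the gradient ("fade") processing: blend target pixels toward
# the original image's pixels, weighted by distance from the mask edge.

def gradient_blend(target_row, original_row, edge_width=3):
    out = list(target_row)
    for i in range(min(edge_width, len(out))):
        a = (i + 1) / (edge_width + 1)         # weight grows moving inward
        out[i] = round(a * target_row[i] + (1 - a) * original_row[i])
    return out

target = [100, 100, 100, 100, 100]    # target-face pixels, edge at index 0
original = [20, 20, 20, 20, 20]       # surrounding original-image pixels
print(gradient_blend(target, original))   # [40, 60, 80, 100, 100]
```

A real implementation would apply this in 2D along the face-mask boundary; whitening would be a separate per-pixel brightness adjustment.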
Step S202: extract each first key point contained in the target facial image, and determine, according to the first key points, a first facial three-dimensional grid corresponding to the target facial image.
The first facial three-dimensional grid includes a frontal-face three-dimensional grid corresponding to a frontal face image, or a side-face three-dimensional grid carrying rotation angle information. The rotation angle information may be a rotation angle within a preset range, the specific degrees of which can be set by those skilled in the art according to actual conditions. Each first key point includes a feature point corresponding to the facial features and/or the face contour, and may be obtained by deep learning or in other ways. For example, 95 key points may be arranged over the face in the image, with several first key points placed at the cheek area, the eyes, the eyebrows, the mouth, the nose area, and along the face contour respectively. By obtaining the position information of each first key point and the relative position relations among the first key points, the first facial three-dimensional grid corresponding to the target facial image can be determined.
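A rough sketch of turning 95 landmarks into labelled 3D mesh points follows. The region index layout and the per-region depth values are hypothetical — the embodiment does not fix a landmark indexing scheme, and the depth assignment stands in for a real 3D face prior:

```python
# Hypothetical layout of the 95 landmarks, indexed by facial region.
REGIONS = {
    "contour": range(0, 17),
    "left_brow": range(17, 22),
    "right_brow": range(22, 27),
    "nose": range(27, 36),
    "left_eye": range(36, 42),
    "right_eye": range(42, 48),
    "mouth": range(48, 68),
    "cheeks": range(68, 95),
}

def keypoints_to_mesh(keypoints):
    """Turn 2D landmarks plus an estimated depth into labelled 3D mesh points.

    keypoints: list of (x, y) tuples, one per landmark index.
    Depth here is a crude placeholder: contour points sit further back
    than inner-face points, mimicking a coarse face prior.
    """
    mesh = {}
    for region, idxs in REGIONS.items():
        depth = 0.0 if region == "contour" else 1.0
        for i in idxs:
            x, y = keypoints[i]
            mesh[i] = (x, y, depth)   # mesh-point id -> 3D position
    return mesh

pts = [(float(i % 10), float(i // 10)) for i in range(95)]
mesh = keypoints_to_mesh(pts)
print(len(mesh), mesh[0][2], mesh[40][2])
```

A production implementation would instead fit a deformable 3D face model to the landmarks; the sketch only shows how per-region landmarks become identifiable mesh points.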
Further, the first mesh points in the first facial three-dimensional grid may form first preset-area grid groups, each composed of multiple first mesh points. A first preset-area grid group includes an area grid group corresponding to the face contour and/or an area grid group corresponding to the facial features. The first facial three-dimensional grid corresponding to the target facial image can then also be determined by first extracting the first preset-area grid groups and then determining the grid from them.
Step S203: according to the target facial image and the first facial three-dimensional grid, determine the texture mapping relation between the texture pixels corresponding to each first mesh point in the first facial three-dimensional grid.
Specifically, this step can be implemented by the following steps 1 to 3:
Step 1: determine, respectively, the first grid identifier of each first mesh point in the first facial three-dimensional grid.
Since a three-dimensional grid is made up of multiple mutually intersecting lines, their intersections form multiple mesh points. By adding a grid identifier to each mesh point, the mesh points can be distinguished and identified. The grid identifiers may be an Arabic-numeral sequence such as "1, 2, 3, 4", a letter sequence such as "A, B, C, D", or of course grid identifiers of other types, which are not enumerated one by one here. By adding a grid identifier to each first mesh point in the first facial three-dimensional grid, the first grid identifier of each first mesh point can be determined respectively, and by searching for each first grid identifier its corresponding first mesh point can be obtained.
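Assigning identifiers can be sketched in a few lines; the function name and the two label styles shown (numeric and alphabetic) follow the examples above, while the wrap-around scheme after "Z" is an assumption:

```python
def assign_marks(mesh_points, style="numeric"):
    """Give every mesh point a unique grid identifier for later lookup.

    style: "numeric" -> 1, 2, 3, ...; "alpha" -> A, B, C, ...
    (continuing AA, AB, ... after Z), matching the two label styles
    mentioned in the text.
    """
    def alpha(n):
        s = ""
        n += 1
        while n:
            n, r = divmod(n - 1, 26)
            s = chr(ord("A") + r) + s
        return s

    marks = {}
    for i, pt in enumerate(mesh_points):
        marks[i + 1 if style == "numeric" else alpha(i)] = pt
    return marks

m = assign_marks([(0, 0, 0), (1, 0, 0), (0, 1, 0)], style="alpha")
print(sorted(m))
```

The resulting dictionary supports exactly the lookup described: searching for an identifier yields its mesh point.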
Step 2: according to the target facial image, determine the pixel value and texture coordinate value of the texture pixel corresponding to each first grid identifier.
Since texture coordinates are a two-dimensional array, a texture coordinate value can be used to indicate the position of a texture pixel in texture space; expressed as coordinate values, it determines the texture coordinates of the texel corresponding to each facial key point in the facial image as well as its position in the 3D grid, thereby determining the position of each pixel value in the facial image. Each texture pixel has a unique address in the texture, which can be regarded as the value of a row and a column and can be represented by U and V. Since the original facial image is obtained by mapping the image texture onto a 3D surface mesh through UV coordinates, by analysing the texture of the original facial image according to this relation, the texture coordinate value corresponding to each first grid identifier and the pixel value of its corresponding texture pixel can be determined.
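The U/V addressing described above reduces to a texel lookup; a minimal nearest-neighbour sketch is below. The row/column convention chosen (row 0 as v = 0) is an assumption — real texture conventions vary:

```python
def sample_texture(texture, u, v):
    """Nearest-neighbour lookup of a texel from normalised UV coordinates.

    texture: 2D list of RGB tuples (rows of pixels); u and v in [0, 1].
    Row 0 is taken as v = 0 here; actual conventions differ per API.
    """
    h, w = len(texture), len(texture[0])
    col = min(int(u * w), w - 1)   # U addresses the column
    row = min(int(v * h), h - 1)   # V addresses the row
    return texture[row][col]

tex = [[(x, y, 0) for x in range(4)] for y in range(4)]
print(sample_texture(tex, 0.0, 0.0), sample_texture(tex, 0.99, 0.99))
```

Production renderers would usually interpolate (bilinear filtering) rather than snap to the nearest texel, but the addressing principle is the same.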
Step 3: generate the texture mapping relation according to the pixel value and texture coordinate value of the texture pixel corresponding to each first grid identifier.
Specifically, by determining the pixel value and texture coordinate value of the texture pixel corresponding to each first grid identifier, the texture mapping relation can be generated from them. According to this texture mapping relation, the pixel value and texture coordinate value of the corresponding texture pixel can be determined from each first grid identifier; conversely, the corresponding first grid identifier can be determined one-to-one from the pixel value and texture coordinate value of a texture pixel.
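The mapping of steps 1 to 3 can be sketched as a table keyed by the first grid identifier, with an inverse table mirroring the one-to-one property just described. The names and data layout here are assumptions for illustration:

```python
def build_texture_mapping(uv_for_mark, texture):
    """Build mark -> (uv, pixel) and the inverse (pixel, uv) -> mark.

    uv_for_mark: dict first-grid-identifier -> (u, v) in [0, 1], assumed
    to come from projecting the first facial 3D grid onto the target
    facial image. texture: 2D list of pixel tuples.
    """
    h, w = len(texture), len(texture[0])
    forward, inverse = {}, {}
    for mark, (u, v) in uv_for_mark.items():
        px = texture[min(int(v * h), h - 1)][min(int(u * w), w - 1)]
        forward[mark] = {"uv": (u, v), "pixel": px}
        inverse[(px, (u, v))] = mark        # one-to-one reverse lookup
    return forward, inverse

tex = [[(x + 10 * y,) for x in range(4)] for y in range(4)]
fwd, inv = build_texture_mapping({1: (0.0, 0.0), 2: (0.6, 0.3)}, tex)
print(fwd[2]["pixel"], inv[(fwd[1]["pixel"], (0.0, 0.0))])
```

Both directions of the relation stated in the text — identifier to texel, and texel back to identifier — are then single dictionary lookups.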
Step S204: determine a second facial three-dimensional grid corresponding to the pending original facial image, wherein each second mesh point in the second facial three-dimensional grid corresponds one-to-one with each first mesh point in the first facial three-dimensional grid.
Specifically, the current frame image contained in a live video stream may first be acquired in real time, and the original facial image extracted from that current frame image; alternatively, each frame image contained in a recorded video stream may be acquired in turn, and the original facial image extracted from the current frame image obtained. It can be seen that the extraction of the original facial image can be performed either in real time or non-real time, and can be based either on a live video stream or on a locally stored video stream. In short, the present invention does not limit the specific source of the original facial image.
Then, each second key point contained in the pending original facial image is extracted, and the second facial three-dimensional grid corresponding to the pending original facial image is determined according to those second key points. Each second key point includes a feature point corresponding to the facial features and/or the face contour, and may be obtained by deep learning or in other ways. For example, 95 key points may be arranged over the face region of the image, with several second key points placed at the cheek area, the eyes, the eyebrows, the mouth, the nose area, and along the face contour respectively. By obtaining the position information of each second key point and the relative position relations among the second key points, the second facial three-dimensional grid corresponding to the pending original facial image can be determined. The second facial three-dimensional grid includes a frontal-face three-dimensional grid corresponding to a frontal face image, and/or a side-face three-dimensional grid carrying rotation angle information, the specific angle value of which can be set by those skilled in the art according to actual conditions. Since the first facial three-dimensional grid and the second facial three-dimensional grid are both stereoscopic three-dimensional grids, and may be side-face grids carrying rotation angle information, this effectively solves the problem that, when the target facial image and/or the pending facial image is a side image or an image at a raised or lowered head angle, the face in one frame of image cannot be replaced onto another frame of image.
Further, the second mesh points in the second facial three-dimensional grid may form second preset-area grid groups, each composed of multiple second mesh points. A second preset-area grid group includes an area grid group corresponding to the face contour and/or an area grid group corresponding to the facial features; moreover, each first preset-area grid group corresponds one-to-one with a second preset-area grid group. The second facial three-dimensional grid corresponding to the pending original facial image can then also be determined by first extracting the second preset-area grid groups and then determining the grid from them.
Step S205: according to the texture mapping relation and the correspondence between each second mesh point and the first mesh points, generate the face processing image corresponding to each second mesh point in the second facial three-dimensional grid.
Specifically, this can be realised in either of two ways. Mode one: the number of first mesh points contained in the first facial three-dimensional grid is the same as the number of second mesh points contained in the second facial three-dimensional grid, and the first mesh point corresponding to each second mesh point has the same texture mapping relation. According to this correspondence, for each second mesh point in the second facial three-dimensional grid, the second grid identifier of that second mesh point and the first grid identifier identical to that second grid identifier are determined; according to the texture mapping relation, the pixel value and texture coordinate value of the texture pixel corresponding to the first grid identifier identical to the second grid identifier are queried, and texture pasting is performed on that second mesh point according to the query result. Here, the first grid identifier of the first mesh point corresponding to the second grid identifier of each second mesh point is identical to it. It can be seen that mode one focuses on texture pasting with the mesh point as the unit, which ensures that each mesh point in the first facial three-dimensional grid and the second facial three-dimensional grid corresponds exactly.
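Because the two grids share identical identifiers, mode one reduces to a direct lookup per mesh point. A minimal sketch, with the mapping layout assumed as illustrated earlier:

```python
def paste_mode_one(second_marks, texture_mapping):
    """Mode one: per-mesh-point pasting. Each second mesh point's grid
    identifier equals its first counterpart's, so its texel is found by
    a direct lookup in the texture mapping relation."""
    return {mark: texture_mapping[mark]["pixel"] for mark in second_marks}

mapping = {1: {"uv": (0.1, 0.1), "pixel": (255, 0, 0)},
           2: {"uv": (0.2, 0.2), "pixel": (0, 255, 0)}}
pasted = paste_mode_one([1, 2], mapping)
print(pasted)
```

Every second mesh point receives exactly the texel of its identically labelled first mesh point, which is the point-level correspondence the text describes.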
Mode two: since the first mesh points in the first facial three-dimensional grid further form first preset-area grid groups composed of multiple first mesh points, the second mesh points in the second facial three-dimensional grid further form second preset-area grid groups composed of multiple second mesh points, and each first preset-area grid group corresponds one-to-one with a second preset-area grid group, the following can be done: for each second preset-area grid group, determine the first preset-area grid group corresponding to that second preset-area grid group, together with the pixel values and texture coordinate values of the texture pixels corresponding to that first preset-area grid group; then perform texture pasting on the second preset-area grid group according to those pixel values and texture coordinate values. For example, for the area grid group corresponding to the eyes among the second preset-area grid groups, the area grid group of the eyes in the corresponding first preset-area grid group is determined, together with the pixel values and texture coordinate values of its texture pixels; texture pasting is then performed on that second preset-area grid group according to those pixel values and texture coordinate values.
It can be seen that mode two focuses on texture pasting with the area grid group as the unit, which ensures that each area grid group in the first facial three-dimensional grid and the second facial three-dimensional grid has an accurate correspondence. Since an area grid group corresponds to a specific facial feature (such as the nose or the mouth), mode two correspondingly ensures that each feature corresponds one-to-one. In this embodiment, either of the two modes may be chosen at will, and in order to improve the accuracy of the pasting, the two modes may also be used simultaneously.
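Mode two can be sketched as region-by-region pasting — eyes onto eyes, mouth onto mouth. The dictionary shapes and the assumption that region names and point order match across the two grids are illustrative choices:

```python
def paste_mode_two(second_regions, first_regions, texture_mapping):
    """Mode two: per-region pasting over one-to-one area grid groups.

    second_regions / first_regions: dict region name -> ordered list of
    mesh-point identifiers; region names and the point order inside each
    pair of groups are assumed to correspond one-to-one."""
    pasted = {}
    for region, second_marks in second_regions.items():
        first_marks = first_regions[region]   # matching area grid group
        for s_mark, f_mark in zip(second_marks, first_marks):
            pasted[s_mark] = texture_mapping[f_mark]["pixel"]
    return pasted

mapping = {10: {"pixel": (1, 1, 1)}, 11: {"pixel": (2, 2, 2)}}
out = paste_mode_two({"eyes": [100, 101]}, {"eyes": [10, 11]}, mapping)
print(out)
```

Grouping by feature keeps each feature aligned as a whole even when the two grids use different identifier numbering, which is the advantage the text attributes to mode two.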
Step S206: cover the current frame image with the face processing image to obtain a processed current frame image, and replace the pre-processing current frame image with the processed current frame image.
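The covering operation of this step can be sketched as a masked overwrite; the mask representation and function name are assumptions for illustration:

```python
def cover_frame(frame, face_image, face_mask):
    """Overwrite the current frame with the face processing image
    wherever the face mask is set; pixels outside the mask keep the
    original frame content."""
    return [
        [face_image[r][c] if face_mask[r][c] else frame[r][c]
         for c in range(len(frame[0]))]
        for r in range(len(frame))
    ]

frame = [["bg"] * 3 for _ in range(2)]
face = [["fc"] * 3 for _ in range(2)]
mask = [[0, 1, 0], [0, 1, 1]]
covered = cover_frame(frame, face, mask)
print(covered)
```

The processed frame then simply replaces the original frame in the output stream, which is what makes the substitution visible to the recording user in real time.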
By directly overwriting the current frame image with the face processing image, the processed current frame image is obtained, and the user who is recording can immediately see the processed current frame image. After the processed current frame image is obtained, it can be displayed in real time, so that the user directly sees the display effect of the processed video data. Optionally, the processed current frame image may also be uploaded to a cloud server. Specifically, the processed current frame image may be uploaded to a cloud video platform server, such as the iQIYI, Youku or Kuai Video cloud video platform servers, so that the cloud video platform server displays the video data on the cloud video platform. The processed current frame image may also be uploaded to a cloud live-broadcast server, so that the cloud live-broadcast server pushes the processed current frame image in real time to viewing-subscription clients. When a user with a live viewing terminal enters the cloud live-broadcast server to watch, the cloud live-broadcast server pushes the video data in real time to that viewing user's client. Alternatively, the processed current frame image may be uploaded to a cloud public-account server, so that the cloud public-account server pushes the processed current frame image to clients following the public account. When a user follows the public account, the cloud public-account server pushes the video data to the following clients; further, the cloud public-account server may, according to the viewing habits of the users following the public account, push video data that matches those habits to the following clients.
According to the facial image processing method provided in this embodiment, preset processing is first performed on the target facial image, so that the target facial image blends better with the background of the original facial image and is further beautified. Each first key point contained in the target facial image is then extracted, and the first facial three-dimensional grid corresponding to the target facial image is determined from the first key points; according to the target facial image and the first facial three-dimensional grid, the texture mapping relation between the texture pixels corresponding to each first mesh point in the first facial three-dimensional grid is determined, and the second facial three-dimensional grid corresponding to the pending original facial image is determined. According to the texture mapping relation and the correspondence between each second mesh point and the first mesh points, the face processing image corresponding to each second mesh point in the second facial three-dimensional grid is generated; finally, the current frame image is covered with the face processing image to obtain a processed current frame image, which replaces the pre-processing current frame image. According to this method, the face in one frame of picture data or video data can be automatically replaced onto the face in another frame of video data, without manual operation, and with greatly improved authenticity of the resulting image. Moreover, since the first facial three-dimensional grid and the second facial three-dimensional grid are both stereoscopic three-dimensional grids carrying rotation angles, the method effectively solves the problem that, when the face in an image is a side face or a face at a raised or lowered head angle, the face in one frame of image cannot be effectively replaced onto the face of another frame of image data.
Fig. 3 shows a functional block diagram of a facial image processing device according to an embodiment of the present invention. As shown in Fig. 3, the device includes: a preset processing module 31, a first determining module 32, a second determining module 33, a generation module 34, and a replacement module 35.
The first determining module 32 is adapted to determine a texture mapping relation between texture pixels corresponding to each first mesh point in a first facial three-dimensional grid corresponding to a target facial image;

the second determining module 33 is adapted to determine a second facial three-dimensional grid corresponding to a pending original facial image, wherein each second mesh point in the second facial three-dimensional grid corresponds one-to-one with each first mesh point in the first facial three-dimensional grid;

the generation module 34 is adapted to generate, according to the texture mapping relation and the correspondence between each second mesh point and the first mesh points, a face processing image corresponding to each second mesh point in the second facial three-dimensional grid.
In another embodiment, optionally, first determining module 32 is particularly adapted to:
extract each first key point contained in the target facial image, and determine, according to the first key points, the first facial three-dimensional grid corresponding to the target facial image;

according to the target facial image and the first facial three-dimensional grid, determine the texture mapping relation between the texture pixels corresponding to each first mesh point in the first facial three-dimensional grid.
Optionally, the first determining module 32 is particularly adapted to:

determine, respectively, the first grid identifier of each first mesh point in the first facial three-dimensional grid;

according to the target facial image, determine the pixel value and texture coordinate value of the texture pixel corresponding to each first grid identifier;

generate the texture mapping relation according to the pixel value and texture coordinate value of the texture pixel corresponding to each first grid identifier.
Optionally, the generation module 34 is particularly adapted to:

for each second mesh point in the second facial three-dimensional grid, determine the second grid identifier of that second mesh point and the first grid identifier identical to that second grid identifier; according to the texture mapping relation, query the pixel value and texture coordinate value of the texture pixel corresponding to the first grid identifier identical to that second grid identifier, and perform texture pasting on that second mesh point according to the query result;

wherein the first grid identifier of the first mesh point corresponding to the second grid identifier of each second mesh point is identical to it.
Optionally, the first mesh points in the first facial three-dimensional grid further form first preset-area grid groups composed of multiple first mesh points, the second mesh points in the second facial three-dimensional grid further form second preset-area grid groups composed of multiple second mesh points, and each first preset-area grid group corresponds one-to-one with a second preset-area grid group;

the generation module 34 is then particularly adapted to:

for each second preset-area grid group, determine the first preset-area grid group corresponding to that second preset-area grid group and the pixel values and texture coordinate values of the texture pixels corresponding to that first preset-area grid group; perform texture pasting on the second preset-area grid group according to those pixel values and texture coordinate values;

wherein the first preset-area grid group and/or the second preset-area grid group includes: an area grid group corresponding to the face contour, and/or an area grid group corresponding to the facial features.
Optionally, the first facial three-dimensional grid and/or the second facial three-dimensional grid includes: a frontal-face three-dimensional grid corresponding to a frontal face image, and/or a side-face three-dimensional grid carrying rotation angle information.
Optionally, the device further comprises a preset processing module 31, adapted to:

perform preset processing on the target facial image, wherein the preset processing includes gradient processing and/or whitening processing.
Optionally, the preset processing module 31 is particularly adapted to:

extract the original pixel information of the pending original facial image, and perform the preset processing on the target facial image according to that original pixel information.
Optionally, the second determining module 33 is particularly adapted to:

extract each second key point contained in the pending original facial image, and determine, according to the second key points, the second facial three-dimensional grid corresponding to the pending original facial image.
Optionally, the second determining module 33 is further adapted to:

acquire in real time the current frame image contained in a live video stream, and extract the original facial image from that current frame image; or,

acquire in turn each frame image contained in a recorded video stream, and extract the original facial image from the current frame image obtained.
Optionally, the device further comprises a replacement module 35, adapted to:

cover the current frame image with the face processing image to obtain a processed current frame image, and replace the pre-processing current frame image with the processed current frame image.
The concrete structure and working principle of the above modules can be found in the description of the corresponding parts of the method embodiments, and are not repeated here.
Fig. 4 shows a structure diagram of a computing device according to an embodiment of the present invention; the specific embodiments of the present invention do not limit the specific implementation of the computing device.
As shown in Fig. 4, the computing device may include: a processor (processor) 402, a communication interface (Communications Interface) 404, a memory (memory) 406, and a communication bus 408.
Wherein:
The processor 402, the communication interface 404 and the memory 406 communicate with one another through the communication bus 408.
The communication interface 404 is used to communicate with network elements of other equipment, such as clients or other servers.
The processor 402 is used to execute a program 410, and can specifically perform the relevant steps in the above facial image processing method embodiments.
Specifically, the program 410 may include program code, which includes computer operation instructions.
The processor 402 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC, Application Specific Integrated Circuit), or one or more integrated circuits configured to implement the embodiments of the present invention. The one or more processors included in the computing device may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
The memory 406 is used to store the program 410. The memory 406 may include a high-speed RAM memory, and may further include a non-volatile memory (non-volatile memory), for example at least one disk memory.
The program 410 may specifically be used to cause the processor 402 to perform the following operations:
determine a texture mapping relation between texture pixels corresponding to each first mesh point in a first facial three-dimensional grid corresponding to a target facial image;

determine a second facial three-dimensional grid corresponding to a pending original facial image, wherein each second mesh point in the second facial three-dimensional grid corresponds one-to-one with each first mesh point in the first facial three-dimensional grid;

according to the texture mapping relation and the correspondence between each second mesh point and the first mesh points, generate a face processing image corresponding to each second mesh point in the second facial three-dimensional grid.
In an optional mode, the program 410 may specifically be further used to cause the processor 402 to perform the following operations:

extract each first key point contained in the target facial image, and determine, according to the first key points, the first facial three-dimensional grid corresponding to the target facial image;

according to the target facial image and the first facial three-dimensional grid, determine the texture mapping relation between the texture pixels corresponding to each first mesh point in the first facial three-dimensional grid.
In an optional mode, the program 410 may specifically be further used to cause the processor 402 to perform the following operations:

determine, respectively, the first grid identifier of each first mesh point in the first facial three-dimensional grid;

according to the target facial image, determine the pixel value and texture coordinate value of the texture pixel corresponding to each first grid identifier;

generate the texture mapping relation according to the pixel value and texture coordinate value of the texture pixel corresponding to each first grid identifier.
In an optional mode, the program 410 may specifically be further used to cause the processor 402 to perform the following operations:

for each second mesh point in the second facial three-dimensional grid, determine the second grid identifier of that second mesh point and the first grid identifier identical to that second grid identifier; according to the texture mapping relation, query the pixel value and texture coordinate value of the texture pixel corresponding to the first grid identifier identical to that second grid identifier, and perform texture pasting on that second mesh point according to the query result;

wherein the first grid identifier of the first mesh point corresponding to the second grid identifier of each second mesh point is identical to it.
In an optional mode, the first mesh points in the first facial three-dimensional grid further form first preset-area grid groups composed of multiple first mesh points, the second mesh points in the second facial three-dimensional grid further form second preset-area grid groups composed of multiple second mesh points, and each first preset-area grid group corresponds one-to-one with a second preset-area grid group;

the program 410 may then specifically be further used to cause the processor 402 to perform the following operations:

for each second preset-area grid group, determine the first preset-area grid group corresponding to that second preset-area grid group and the pixel values and texture coordinate values of the texture pixels corresponding to that first preset-area grid group; perform texture pasting on the second preset-area grid group according to those pixel values and texture coordinate values;

wherein the first preset-area grid group and/or the second preset-area grid group includes: an area grid group corresponding to the face contour, and/or an area grid group corresponding to the facial features.
In an optional mode, the first facial three-dimensional grid and/or the second facial three-dimensional grid includes: a frontal-face three-dimensional grid corresponding to a frontal face image, and/or a side-face three-dimensional grid carrying rotation angle information.
In an optional mode, the program 410 may specifically be further used to cause the processor 402 to perform the following operations:

perform preset processing on the target facial image, wherein the preset processing includes gradient processing and/or whitening processing.
In an optional mode, the program 410 may specifically be further used to cause the processor 402 to perform the following operations:

extract the original pixel information of the pending original facial image, and perform the preset processing on the target facial image according to that original pixel information.
In an optional mode, the program 410 may specifically be further used to cause the processor 402 to perform the following operations:

extract each second key point contained in the pending original facial image, and determine, according to the second key points, the second facial three-dimensional grid corresponding to the pending original facial image.
In an optional mode, the program 410 may specifically be further used to cause the processor 402 to perform the following operations:

acquire in real time the current frame image contained in a live video stream, and extract the original facial image from that current frame image; or,

acquire in turn each frame image contained in a recorded video stream, and extract the original facial image from the current frame image obtained.
In an optional mode, the program 410 may specifically be further used to cause the processor 402 to perform the following operations:

cover the current frame image with the face processing image to obtain a processed current frame image, and replace the pre-processing current frame image with the processed current frame image.
The algorithms and displays provided herein are not inherently related to any particular computer, virtual system or other equipment. Various general-purpose systems may also be used with the teachings herein. From the description above, the structure required to construct such systems is obvious. Moreover, the present invention is not directed to any particular programming language. It should be understood that various programming languages may be used to implement the content of the invention described herein, and the above description of specific languages is given to disclose the best mode of the invention.
In the specification provided here, numerous specific details are set forth. It is to be understood, however, that embodiments of the present invention may be practised without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to streamline the disclosure and aid the understanding of one or more of the various inventive aspects, in the above description of exemplary embodiments of the invention, features of the invention are sometimes grouped together into a single embodiment, figure, or description thereof. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the specific embodiments are hereby expressly incorporated into those specific embodiments, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art will appreciate that the modules in the device of an embodiment may be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units, or components of an embodiment may be combined into one module, unit, or component, and may furthermore be divided into a plurality of sub-modules, sub-units, or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, an equivalent, or a similar purpose.
Furthermore, those skilled in the art will appreciate that although some embodiments described herein include certain features that are included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the present invention may be implemented in hardware, as software modules running on one or more processors, or as a combination thereof. Those skilled in the art will understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the real-time video data processing device according to embodiments of the present invention. The present invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the methods described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-described embodiments illustrate rather than limit the invention, and that those skilled in the art may design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names.

Claims (10)

1. A method for processing a face image, comprising:
determining a texture mapping relation between each first mesh point in a first facial three-dimensional grid corresponding to a target facial image and the corresponding texture pixel;
determining a second facial three-dimensional grid corresponding to an original facial image to be processed; wherein each second mesh point in the second facial three-dimensional grid corresponds one-to-one with each first mesh point in the first facial three-dimensional grid; and
generating, according to the texture mapping relation and the correspondence between each second mesh point and the first mesh points, a face processing image corresponding to each second mesh point in the second facial three-dimensional grid.
2. The method according to claim 1, wherein the step of determining the texture mapping relation between each first mesh point in the first facial three-dimensional grid corresponding to the target facial image and the corresponding texture pixel comprises:
extracting each first key point contained in the target facial image, and determining, according to the first key points, the first facial three-dimensional grid corresponding to the target facial image; and
determining, according to the target facial image and the first facial three-dimensional grid, the texture mapping relation between each first mesh point in the first facial three-dimensional grid and the corresponding texture pixel.
3. The method according to claim 2, wherein the step of determining, according to the target facial image and the first facial three-dimensional grid, the texture mapping relation between each first mesh point in the first facial three-dimensional grid and the corresponding texture pixel comprises:
determining, respectively, a first mesh identifier of each first mesh point in the first facial three-dimensional grid;
determining, according to the target facial image, a pixel value and a texture coordinate value of the texture pixel corresponding to each first mesh identifier; and
generating the texture mapping relation according to the pixel value and the texture coordinate value of the texture pixel corresponding to each first mesh identifier.
4. The method according to claim 3, wherein the step of generating, according to the texture mapping relation and the correspondence between each second mesh point and the first mesh points, the face processing image corresponding to each second mesh point in the second facial three-dimensional grid comprises:
for each second mesh point in the second facial three-dimensional grid, determining a second mesh identifier of the second mesh point and the first mesh identifier identical to the second mesh identifier; querying, according to the texture mapping relation, the pixel value and the texture coordinate value of the texture pixel corresponding to the first mesh identifier identical to the second mesh identifier; and performing texture mapping processing on the second mesh point according to the query result;
wherein the second mesh identifier of each second mesh point is identical to the first mesh identifier of its corresponding first mesh point.
5. The method according to any one of claims 1-4, wherein the first mesh points in the first facial three-dimensional grid further comprise: a first preset-region mesh group composed of a plurality of first mesh points; the second mesh points in the second facial three-dimensional grid further comprise: a second preset-region mesh group composed of a plurality of second mesh points; and each first preset-region mesh group corresponds one-to-one with each second preset-region mesh group;
the step of generating, according to the texture mapping relation and the correspondence between each second mesh point and the first mesh points, the face processing image corresponding to each second mesh point in the second facial three-dimensional grid then comprises:
for each second preset-region mesh group, determining the first preset-region mesh group corresponding to the second preset-region mesh group, together with the pixel values and texture coordinate values of the texture pixels corresponding to the first preset-region mesh group; and performing texture mapping processing on the second preset-region mesh group according to the pixel values and texture coordinate values of the texture pixels corresponding to the first preset-region mesh group;
wherein the first preset-region mesh group and/or the second preset-region mesh group comprise: a region mesh group corresponding to a face contour, and/or a region mesh group corresponding to the facial features.
6. The method according to any one of claims 1-5, wherein the first facial three-dimensional grid and/or the second facial three-dimensional grid comprise: a frontal facial three-dimensional grid corresponding to a frontal face image, and/or a profile facial three-dimensional grid carrying rotation angle information.
7. The method according to any one of claims 1-6, further comprising, before the step of determining the texture mapping relation between each first mesh point in the first facial three-dimensional grid corresponding to the target facial image and the corresponding texture pixel:
performing preset processing on the target facial image, wherein the preset processing comprises: gradient processing and/or whitening processing.
8. An apparatus for processing a face image, comprising:
a first determining module, adapted to determine a texture mapping relation between each first mesh point in a first facial three-dimensional grid corresponding to a target facial image and the corresponding texture pixel;
a second determining module, adapted to determine a second facial three-dimensional grid corresponding to an original facial image to be processed; wherein each second mesh point in the second facial three-dimensional grid corresponds one-to-one with each first mesh point in the first facial three-dimensional grid; and
a generating module, adapted to generate, according to the texture mapping relation and the correspondence between each second mesh point and the first mesh points, a face processing image corresponding to each second mesh point in the second facial three-dimensional grid.
9. A computing device, comprising: a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with each other via the communication bus;
the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform operations corresponding to the method for processing a face image according to any one of claims 1-7.
10. A computer storage medium, wherein at least one executable instruction is stored in the storage medium, and the executable instruction causes a processor to perform operations corresponding to the method for processing a face image according to any one of claims 1-7.
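The mapping scheme of claims 1-4 can be read as two steps: record, for each first mesh identifier, the pixel value and texture coordinate of its texture pixel in the target facial image; then, because corresponding first and second mesh points share the same identifier, texture each second mesh point by a direct lookup. The Python sketch below illustrates that reading under the assumptions that texture coordinates are normalized UV pairs in [0, 1] and that a single nearest texel is sampled per mesh point; all function names and data layouts are illustrative, not taken from the specification.

```python
import numpy as np

def build_texture_mapping(first_mesh_uv, target_facial_image):
    """Claims 2-3: for each first mesh identifier, record the pixel value
    and texture coordinate value of its texture pixel in the target image.

    first_mesh_uv: {mesh_id: (u, v)} with u, v in [0, 1].
    target_facial_image: H x W x 3 uint8 array.
    """
    h, w = target_facial_image.shape[:2]
    mapping = {}
    for mesh_id, (u, v) in first_mesh_uv.items():
        # nearest-texel sampling of the normalized texture coordinate
        x = int(round(u * (w - 1)))
        y = int(round(v * (h - 1)))
        mapping[mesh_id] = (target_facial_image[y, x].copy(), (u, v))
    return mapping

def texture_second_mesh(second_mesh_ids, texture_mapping):
    """Claim 4: each second mesh point carries the same identifier as its
    corresponding first mesh point, so its texture pixel is obtained by a
    direct query of the texture mapping relation."""
    return {mesh_id: texture_mapping[mesh_id] for mesh_id in second_mesh_ids}
```

Keying the relation by mesh identifier rather than by image position is what lets the target face's texture follow the second grid even when the original face has a different pose or expression: only the grid geometry changes, while the identifier-to-texel lookup stays fixed.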
CN201810146374.2A 2018-02-12 2018-02-12 The processing method and processing device of face-image, computing device Pending CN108062785A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810146374.2A CN108062785A (en) 2018-02-12 2018-02-12 The processing method and processing device of face-image, computing device


Publications (1)

Publication Number Publication Date
CN108062785A true CN108062785A (en) 2018-05-22

Family

ID=62134509

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810146374.2A Pending CN108062785A (en) 2018-02-12 2018-02-12 The processing method and processing device of face-image, computing device

Country Status (1)

Country Link
CN (1) CN108062785A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003017206A1 (en) * 2001-08-14 2003-02-27 Pulse Entertainment, Inc. Automatic 3d modeling system and method
CN103854306A (en) * 2012-12-07 2014-06-11 山东财经大学 High-reality dynamic expression modeling method
CN105184249A (en) * 2015-08-28 2015-12-23 百度在线网络技术(北京)有限公司 Method and device for processing face image
CN106447604A (en) * 2016-09-30 2017-02-22 北京奇虎科技有限公司 Method and device for transforming facial frames in videos
CN106570822A (en) * 2016-10-25 2017-04-19 宇龙计算机通信科技(深圳)有限公司 Human face mapping method and device
CN107564086A (en) * 2017-09-08 2018-01-09 北京奇虎科技有限公司 Video data handling procedure and device, computing device
CN107610209A (en) * 2017-08-17 2018-01-19 上海交通大学 Human face countenance synthesis method, device, storage medium and computer equipment


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHU ZHAN ET AL.: "Real-time 3D face modeling based on 3D face imaging", Neurocomputing *
SUN CHEN (孙晨): "Performance-driven real-time facial expression animation synthesis", China Masters' Theses Full-text Database, Information Science and Technology (Monthly) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110764934A (en) * 2019-10-24 2020-02-07 清华大学 Parallel communication method, device and system for numerical model and storage medium
CN110764934B (en) * 2019-10-24 2020-11-27 清华大学 Parallel communication method, device and system for numerical model and storage medium
CN112734930A (en) * 2020-12-30 2021-04-30 长沙眸瑞网络科技有限公司 Three-dimensional model weight reduction method, system, storage medium, and image processing apparatus


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180522
