CN110675489B - Image processing method, device, electronic equipment and storage medium - Google Patents
Image processing method, device, electronic equipment and storage medium
- Publication number: CN110675489B (application CN201910911254.1A)
- Authority: CN (China)
- Prior art keywords: face, image, three-dimensional model, shaping, information
- Prior art date: 2019-09-25
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL; G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL; G06T15/00—3D [Three Dimensional] image rendering; G06T15/04—Texture mapping
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING; G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data; G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands; G06V40/16—Human faces, e.g. facial parts, sketches or expressions; G06V40/161—Detection; Localisation; Normalisation
Abstract
The disclosure relates to an image processing method, an image processing apparatus, an electronic device and a storage medium, belonging to the technical field of image processing. The method comprises the following steps: receiving a request for three-dimensionally shaping a face in an image, wherein the request carries identification information of a shaping mode; carrying out three-dimensional reconstruction on the face in the image according to detected face key points and a standard face three-dimensional model; acquiring, according to the identification information, shaping information pre-labeled on the standard face three-dimensional model in the corresponding shaping mode; shaping the reconstructed face three-dimensional model according to the shaping information; and determining the shaped image according to the shaped face three-dimensional model. In this way, the face in the image is three-dimensionally reconstructed by means of the standard face three-dimensional model, and the reconstructed face three-dimensional model is shaped according to the shaping information of the standard model in the corresponding shaping mode. Because the meshes on the face three-dimensional model are dense, fine shaping is easily achieved, and the face in the finally obtained image remains realistic.
Description
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, an image processing device, an electronic device, and a storage medium.
Background
Currently, social applications are increasingly varied, including short-video applications, selfie beautification applications, and the like, and three-dimensional (3D) shaping for beautifying faces in images is widely used in such applications.
In the related art, 3D shaping of a face in an image is actually achieved by deforming the face in a 2D image: some face feature points are determined in the 2D image, the deformed 2D coordinate position is calculated for each face feature point, and the deformed face image is constructed from those deformed 2D coordinate positions. Because such deformation operates only on sparse 2D feature points, the shaping precision is low and the face in the resulting image can look unnatural.
Disclosure of Invention
The disclosure provides an image processing method, an image processing apparatus, an electronic device and a storage medium, which at least solve the problems in the related art of low shaping precision and poor realism of the face in the finally obtained image when shaping a face in an image. The technical scheme of the present disclosure is as follows:
according to a first aspect of an embodiment of the present disclosure, there is provided an image processing method including:
receiving a request for three-dimensionally shaping a face in an image, wherein the request carries identification information of a shaping mode, and the shaping mode indicates the face part to be shaped and the manner of shaping adopted for that face part;
performing key point detection on the face in the image, and carrying out three-dimensional reconstruction on the face in the image according to the detected face key points and a standard face three-dimensional model, to obtain a three-dimensional model of the face in the image;
acquiring, according to the identification information, shaping information pre-labeled on the standard face three-dimensional model in the corresponding shaping mode, and shaping the three-dimensional model of the face in the image according to the shaping information;
and determining the shaped image according to the shaped three-dimensional model of the face in the image.
Optionally, shaping the three-dimensional model of the face in the image according to the shaping information includes:
for each first vertex in the standard face three-dimensional model recorded in the shaping information, determining the second vertex corresponding to the first vertex in the three-dimensional model of the face in the image;
and adjusting the three-dimensional coordinates of the second vertex according to the three-dimensional coordinate adjustment information corresponding to the first vertex.
Optionally, determining the shaped image according to the shaped three-dimensional model of the face in the image includes:
rendering a three-dimensional model of a face in the image to obtain face information after shaping the face part;
and replacing the face information in the image with the shaped face information to obtain the image with the shaped face.
Optionally, rendering the three-dimensional model of the face in the image to obtain face information after shaping the face part, including:
determining, for each mesh in the three-dimensional model of the face in the image, the pixel area corresponding to the mesh on a two-dimensional plane;
and performing texture filling on the pixel area according to the face texture information corresponding to the mesh, to obtain the face information of the pixel area.
Optionally, performing three-dimensional reconstruction on the face in the image according to the detected face key points and the standard face three-dimensional model includes:
according to the detected face key points, determining the pose information of the face in the image at the time the device captured the image, wherein the pose information at least comprises the position and the orientation of the face;
determining the coordinates of each vertex of the standard face three-dimensional model projected onto a two-dimensional plane under the corresponding pose, according to the pose information of the face in the image, the projection parameters of the device, and the three-dimensional coordinates of each vertex in the standard face three-dimensional model;
and performing texture mapping on the standard face three-dimensional model according to the coordinates of the projected vertices on the two-dimensional plane and the image, to obtain the three-dimensional model of the face in the image.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus including:
a receiving module configured to receive a request for three-dimensionally shaping a face in an image, wherein the request carries identification information of a shaping mode, and the shaping mode indicates the face part to be shaped and the manner of shaping adopted for that face part;
a reconstruction module configured to perform key point detection on the face in the image, and perform three-dimensional reconstruction on the face in the image according to the detected face key points and a standard face three-dimensional model, to obtain a three-dimensional model of the face in the image;
a shaping module configured to acquire, according to the identification information, shaping information pre-labeled on the standard face three-dimensional model in the corresponding shaping mode, and to shape the three-dimensional model of the face in the image according to the shaping information;
and a determining module configured to determine the shaped image according to the shaped three-dimensional model of the face in the image.
Optionally, the shaping module is specifically configured to perform:
for each vertex in the three-dimensional model of the face in the image, searching the shaping information for the three-dimensional coordinate adjustment information corresponding to the vertex;
and adjusting the three-dimensional coordinates of the vertex according to the three-dimensional coordinate adjustment information.
Optionally, the determining module is specifically configured to perform:
rendering a three-dimensional model of a face in the image to obtain face information after shaping the face part;
and replacing the face information in the image with the shaped face information to obtain the image with the shaped face.
Optionally, the determining module is specifically configured to perform:
determining, for each mesh in the three-dimensional model of the face in the image, the pixel area corresponding to the mesh on a two-dimensional plane;
and performing texture filling on the pixel area according to the face texture information corresponding to the mesh, to obtain the face information of the pixel area.
Optionally, the reconstruction module is specifically configured to perform:
according to the detected face key points, determining the pose information of the face in the image at the time the device captured the image, wherein the pose information at least comprises the position and the orientation of the face;
determining the coordinates of each vertex of the standard face three-dimensional model projected onto a two-dimensional plane under the corresponding pose, according to the pose information of the face in the image, the projection parameters of the device, and the three-dimensional coordinates of each vertex in the standard face three-dimensional model;
and performing texture mapping on the standard face three-dimensional model according to the coordinates of the projected vertices on the two-dimensional plane and the image, to obtain the three-dimensional model of the face in the image.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device, comprising: at least one processor, and a memory communicatively coupled to the at least one processor, wherein:
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any one of the image processing methods described above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a storage medium storing instructions which, when executed by a processor of an electronic device, enable the electronic device to perform any one of the image processing methods described above.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product which, when run on an electronic device, causes the electronic device to perform any one of the image processing methods described above.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
the method comprises the steps of receiving a request for three-dimensionally shaping a face in an image, carrying identification information of a shaping mode in the request, carrying out key point detection on the face in the image, carrying out three-dimensional reconstruction on the face in the image according to the detected key points of the face and a standard face three-dimensional model, obtaining a three-dimensional model of the face in the image, further obtaining shaping information pre-labeled on the standard face three-dimensional model in a corresponding shaping mode according to the identification information, shaping the three-dimensional model of the face in the image according to the shaping information, determining the shaped image according to the three-dimensional model of the face in the shaped image, thus, when the face in the image is shaped, establishing the three-dimensional model of the face in the image by means of the standard face three-dimensional model, and shaping the established face three-dimensional model according to the shaping information of the standard face three-dimensional model in the corresponding shaping mode.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
Fig. 1 is a schematic diagram of a computing device, according to an exemplary embodiment.
Fig. 2 is a schematic diagram of a standard three-dimensional model of a face, according to an example embodiment.
Fig. 3 is a flow chart illustrating an image processing method according to an exemplary embodiment.
Fig. 4 is a flowchart illustrating an image processing method according to an exemplary embodiment.
Fig. 5 is a flowchart illustrating a method for three-dimensional reconstruction of a face in an image, according to an exemplary embodiment.
Fig. 6 is a set of pre-shaping images shown according to an exemplary embodiment.
Fig. 7 is an image after shaping the image shown in fig. 6, according to an exemplary embodiment.
Fig. 8 is a block diagram of an image processing apparatus according to an exemplary embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and in the claims of the present disclosure are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
The image processing method provided in the present disclosure may be applied to a variety of computing devices, and fig. 1 shows a schematic configuration of a computing device, where the computing device 10 shown in fig. 1 is only an example, and does not impose any limitation on the functions and application scope of the embodiments of the present disclosure.
As shown in fig. 1, computing device 10 is embodied in the form of a general-purpose computing device. The components of computing device 10 may include, but are not limited to: at least one processing unit 101, at least one memory unit 102, and a bus 103 connecting the different system components (including the memory unit 102 and the processing unit 101).
Bus 103 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, and a processor or local bus using any of a variety of bus architectures.
The storage unit 102 may include readable media in the form of volatile memory, such as Random Access Memory (RAM) 1021 and/or cache memory 1022, and may further include Read Only Memory (ROM) 1023.
Storage unit 102 may also include program/utility 1025 having a set (at least one) of program modules 1024, such program modules 1024 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
The computing device 10 may also communicate with one or more external devices 104 (e.g., a keyboard, a pointing device, etc.), with one or more devices that enable a user to interact with the computing device 10, and/or with any devices (e.g., routers, modems, etc.) that enable the computing device 10 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 105. Moreover, computing device 10 may also communicate with one or more networks such as a local area network (LAN), a wide area network (WAN) and/or a public network such as the Internet, through network adapter 106. As shown in fig. 1, network adapter 106 communicates with the other modules of computing device 10 over bus 103. It should be appreciated that, although not shown in fig. 1, other hardware and/or software modules may be used in conjunction with computing device 10, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
It will be appreciated by those skilled in the art that FIG. 1 is merely an example of a computing device and is not intended to be limiting of the computing device, and may include more or fewer components than shown, or may combine certain components, or different components.
To facilitate an understanding of the present disclosure, the standard face three-dimensional model is first described. Fig. 2 is a schematic diagram of a standard face three-dimensional model according to an exemplary embodiment. The model includes a plurality of triangular meshes; each triangle has three vertices, and each vertex may be shared by several triangles. Assuming that the number of vertices in the standard face three-dimensional model is n, the position of the i-th vertex in three-dimensional space is (x_i, y_i, z_i).
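To make this mesh structure concrete, the following is a minimal sketch in Python; the class name and the toy vertex and triangle data are illustrative assumptions, not the patent's actual model.

```python
import numpy as np

class FaceMesh:
    """Vertices plus the triangle topology G described above."""
    def __init__(self, vertices, triangles):
        # vertices: (n, 3) float array; row i is (x_i, y_i, z_i)
        # triangles: (m, 3) int array; row k holds the indices of the three
        # vertices of triangle k, so a vertex can be shared by several triangles
        self.vertices = np.asarray(vertices, dtype=np.float64)
        self.triangles = np.asarray(triangles, dtype=np.int64)

# A toy model with 4 vertices and 2 triangles sharing the edge (1, 2):
standard_model = FaceMesh(
    vertices=[[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0.2]],
    triangles=[[0, 1, 2], [1, 3, 2]],
)
```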
In practice, several face parts to be shaped, such as the nose, eyes, mouth and cheekbones, are selected in advance, and at least one manner of shaping is determined for each face part; one shaping mode is then obtained by pairing a face part with a manner of shaping, for example nose thinning, nose augmentation, eye enlarging, eye-bag removal, eye-wrinkle removal, lip thinning, or cheekbone heightening.
Further, the shaping information of the standard face three-dimensional model in each shaping mode is determined. Specifically, for each shaping mode, the three-dimensional coordinates of the relevant vertices in the standard face three-dimensional model are repeatedly adjusted, according to the face part indicated by the shaping mode and the manner of shaping adopted for that part, so as to change the 3D shape of the face part in the standard model. When a satisfactory 3D shaping effect is obtained, the identification information and the three-dimensional coordinate adjustment information of each vertex that moved between the pre- and post-adjustment standard models are recorded, and this per-vertex information serves as the pre-labeled shaping information of the standard face three-dimensional model in that shaping mode.
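As an illustrative sketch of how such shaping information could be recorded, the helper below diffs the standard model before and after the manual adjustment and keeps only the vertices that actually moved; the function name and tolerance are assumptions made for this example.

```python
import numpy as np

def record_shaping_info(original, adjusted, eps=1e-9):
    """Map each moved vertex's identification (its index) to its coordinate delta."""
    deltas = adjusted - original                   # (n, 3) per-vertex displacement
    moved = np.linalg.norm(deltas, axis=1) > eps   # vertices that actually moved
    return {int(i): deltas[i] for i in np.nonzero(moved)[0]}

# shaping_info = record_shaping_info(standard_vertices, adjusted_vertices)
# would then be stored as the pre-labeled shaping information for one shaping mode.
```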
Subsequently, a plurality of selectable shaping modes can be provided. Whichever shaping mode the user selects, the face in the image is three-dimensionally reconstructed according to the standard face three-dimensional model, the reconstructed face three-dimensional model is shaped according to the shaping information of the standard model in the corresponding shaping mode, and the shaped two-dimensional image is determined from the shaped face three-dimensional model. This achieves 3D shaping in a real sense and improves the realism of the face in the finally obtained two-dimensional image.
Fig. 3 is a flowchart of an image processing method according to an exemplary embodiment, which specifically includes the following steps:
Firstly, an image containing a face is acquired and input into an electronic device, and the electronic device can provide various shaping modes for the image. If the user selects a certain shaping mode S, face detection is performed on the image, face key point detection is performed within the detected face region, and the face in the image is then three-dimensionally reconstructed according to the detected face key points and the standard face three-dimensional model, yielding the face mesh information, a face texture image, and a transformation matrix M from the face mesh to the two-dimensional face.
Assume that the standard face three-dimensional model is (V, G) and the reconstructed face three-dimensional model is (U, G), where V = {V_1, V_2, …, V_n} is the set of n vertices in the standard face three-dimensional model, V_i (1 ≤ i ≤ n) being the three-dimensional coordinates of the i-th vertex; U = {U_1, U_2, …, U_n} is the set of n vertices in the reconstructed face three-dimensional model, i.e., the face mesh information, U_i (1 ≤ i ≤ n) being the three-dimensional coordinates of the i-th vertex; and G denotes the mesh topology of the standard face three-dimensional model, i.e., the connection relationships among the vertices. The mesh topology of the reconstructed face three-dimensional model is consistent with that of the standard face three-dimensional model.
In particular, the transformation matrix M may be a 4×4 matrix. Suppose a vertex at a specific location (e.g., the nose tip or the left eye corner) in the reconstructed face three-dimensional model is U_i = (x_i, y_i, z_i), and the coordinates of that location in the two-dimensional image are B_i = (a_i, b_i); then M satisfies M·(x_i, y_i, z_i, 1)^T = (a_i, b_i, c_i, 1)^T, where c_i is a placeholder.
In addition, if the face texture image is denoted J, the reconstructed face three-dimensional model satisfies, within the face region, Raster(U, G, M, J) = I, where I is the input image and Raster(·) denotes the rasterization operator: it transforms the three-dimensional model (U, G) using the transformation matrix M, rasterizes it, computes color values using the face texture image J as the texture map, and colors the pixel regions obtained after rasterization according to the computed color values, thereby rendering a two-dimensional image.
Secondly, according to the predetermined shaping information of the standard face three-dimensional model in the shaping mode S, the reconstructed face three-dimensional model is shaped, and the shaped face three-dimensional model (W, G) is obtained.
Suppose that in the shaping mode S, the shaping information ΔV of the standard face three-dimensional model is ΔV = {ΔV_1, ΔV_2, …, ΔV_n}, where ΔV_i denotes the three-dimensional coordinate adjustment information of the i-th vertex in the standard face three-dimensional model. Then W = U + ΔV.
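A minimal sketch of this update, assuming the shaping information is stored as the vertex-index-to-delta mapping sketched earlier; because the reconstructed model shares the standard model's topology, the delta recorded for vertex i applies directly to vertex i of U.

```python
import numpy as np

def shape_model(U, shaping_info):
    """Return W = U + ΔV; vertices without a recorded delta stay unchanged."""
    W = np.array(U, dtype=np.float64, copy=True)
    for i, delta in shaping_info.items():
        W[i] += delta   # adjust the vertex matching first vertex i
    return W
```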
Then, the shaped face region image is rendered according to the shaped face three-dimensional model (W, G), the face texture image J and the transformation matrix M, i.e., Raster(W, G, M, J) is computed.
Finally, the input image and the shaped face region image are combined to obtain the final shaped image.
Specifically, the pixels in the shaped face region image replace the pixels at the same positions in the input image.
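A sketch of this replacement step, assuming a rendered face-region image the same size as the input and a boolean mask marking the pixels covered by the rasterized face; both names are assumptions for illustration.

```python
import numpy as np

def composite(input_image, face_render, face_mask):
    """Replace masked pixels of the input with the shaped face render."""
    out = input_image.copy()
    out[face_mask] = face_render[face_mask]  # background pixels are untouched
    return out
```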
It should be noted that if the image includes multiple faces, each face is sequentially processed according to the above process, which is not described herein.
Fig. 4 is a flowchart illustrating an image processing method according to an exemplary embodiment, including the following steps.
S401: receiving a request for three-dimensionally shaping a face in an image, wherein the request carries identification information of a shaping mode.
Here, the shaping mode indicates the face part to be shaped and the manner of shaping adopted for that part. For example, nose thinning indicates that the face part to be shaped is the nose and that the manner of shaping adopted for the nose is shrinking; eye-corner opening indicates that the face part to be shaped is the eye corner and that the manner of shaping adopted for it is expanding.
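For illustration only, the correspondence between identification information and a shaping mode could be kept in a simple registry such as the one below; every key and name is hypothetical, since the text only requires that each mode pair a face part with a manner of shaping.

```python
# Hypothetical shaping-mode registry, keyed by the identification
# information carried in the request.
SHAPING_MODES = {
    "thin_nose": {"part": "nose", "manner": "shrink"},
    "open_eye_corner": {"part": "eye_corner", "manner": "expand"},
    "thin_lips": {"part": "lips", "manner": "shrink"},
}
```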
S402: detecting key points of the face in the image.
In specific implementation, a face region may be detected in the image, and key point detection may then be performed within the face region; the detected face key points may be, for example, face contour points, eye contour points, nose contour points, eyebrow contour points, forehead contour points, upper lip contour points, lower lip contour points, and the like.
S403: carrying out three-dimensional reconstruction on the face in the image according to the detected face key points and the standard face three-dimensional model, to obtain the three-dimensional model of the face in the image.
In specific implementation, the pose information of the face in the image at the time the device captured the image, such as the position and orientation of the face, can be determined according to the detected face key points. The three-dimensional coordinates of each vertex in the standard face three-dimensional model are then converted into two-dimensional coordinates according to the pose information of the face and the projection parameters of the device, and texture mapping is performed on the standard face three-dimensional model according to these two-dimensional coordinates and the image, so as to obtain the three-dimensional model of the face in the image.
Fig. 5 is a flowchart illustrating a method for three-dimensional reconstruction of a face in an image, according to an exemplary embodiment, including the following steps.
S501a: determining, according to the detected face key points, the pose information of the face in the image at the time the device captured the image, wherein the pose information at least comprises the position and the orientation of the face in the image.
Here, the position and orientation of the face refer to the position and orientation of the face in three-dimensional space at the time the device captured the image. For example, the face may be positioned toward the left, toward the right, or in the middle of the device's field of view, and the face may be oriented frontally, turned to the left or right, looking up, or looking down.
In specific implementation, a displacement matrix and a rotation matrix can be determined according to the detected face key points. The displacement matrix represents the position of the face in three-dimensional space when the device captured the image; the rotation matrix represents the orientation of the face in three-dimensional space at that time. The matrix obtained by multiplying the displacement matrix and the rotation matrix (denoted the pose matrix) can be used as the pose information of the face in the image.
S502a: determining the coordinates of each vertex of the standard face three-dimensional model projected onto a two-dimensional plane under the corresponding pose, according to the pose information of the face in the image, the projection parameters of the device, and the three-dimensional coordinates of each vertex in the standard face three-dimensional model.
Wherein the projection parameters of the device comprise a projection matrix.
In specific implementation, for each vertex, the three-dimensional coordinates of the vertex are multiplied by the pose matrix and the projection matrix to obtain the coordinates of the vertex projected onto the two-dimensional plane.
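A sketch of this projection in Python; the final homogeneous divide is an assumption appropriate for a perspective projection matrix, since the text itself only specifies the two matrix multiplications.

```python
import numpy as np

def project_vertices(vertices, pose, projection):
    """Project (n, 3) model vertices to (n, 2) image-plane coordinates."""
    n = vertices.shape[0]
    homo = np.hstack([vertices, np.ones((n, 1))])  # lift to (x, y, z, 1)
    proj = (projection @ pose @ homo.T).T          # apply pose, then projection
    return proj[:, :2] / proj[:, 3:4]              # homogeneous divide (assumed)
```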
S503a: performing texture mapping on the standard face three-dimensional model according to the coordinates of each vertex projected onto the two-dimensional plane and the image, to obtain the three-dimensional model of the face in the image.
In specific implementation, texture information can be collected from the image according to the coordinates of each vertex's projection onto the two-dimensional plane, and texture mapping is then performed on the standard face three-dimensional model according to the collected texture information, so that the three-dimensional model of the face in the image is obtained.
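As a sketch of collecting texture information at the projected coordinates, the following uses nearest-pixel sampling; that sampling choice is an assumption made for brevity (bilinear interpolation would be the natural refinement).

```python
import numpy as np

def sample_vertex_colors(image, coords_2d):
    """Pick one texture color per vertex from the image at its projection."""
    h, w = image.shape[:2]
    cols = np.clip(np.round(coords_2d[:, 0]).astype(int), 0, w - 1)
    rows = np.clip(np.round(coords_2d[:, 1]).astype(int), 0, h - 1)
    return image[rows, cols]  # (n, 3) per-vertex colors
```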
The three-dimensional model of the face in the image has the same number of vertices as the standard face three-dimensional model. If the vertices in the standard face three-dimensional model are collectively called first vertices and the vertices in the three-dimensional model of the face in the image are collectively called second vertices, then each first vertex in the standard model corresponds to one second vertex in the face model, and the two correspond to the same position on the face.
S404: acquiring, according to the identification information, the shaping information pre-labeled on the standard face three-dimensional model in the corresponding shaping mode.
S405: shaping the three-dimensional model of the face in the image according to the shaping information pre-labeled on the standard face three-dimensional model in the corresponding shaping mode.
In specific implementation, for each first vertex in the standard face three-dimensional model recorded in the shaping information, the second vertex corresponding to that first vertex in the three-dimensional model of the face in the image is determined, and the three-dimensional coordinates of the second vertex are then adjusted according to the three-dimensional coordinate adjustment information corresponding to the first vertex. The face three-dimensional model in the image is thus adjusted vertex by vertex, the shaping fineness is higher, and the realism of the face in the finally obtained shaped image is better.
S406: rendering the three-dimensional model of the face in the image to obtain face information after shaping the face part.
Specifically, for each mesh in the three-dimensional model of the face in the image, the pixel area corresponding to the mesh on a two-dimensional plane is determined, for example by rasterizing the mesh. Texture filling is then performed on that pixel area according to the face texture information corresponding to the mesh, yielding the face information of the pixel area. Once the face information of the pixel area corresponding to every mesh has been obtained, the face information after shaping the face part is obtained.
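A compact sketch of rasterizing and filling one triangular mesh, using barycentric interpolation of per-vertex colors as a stand-in for the texture filling described above; the interpolation scheme is an assumption, as the text does not fix one.

```python
import numpy as np

def rasterize_triangle(canvas, pts, colors):
    """Fill the pixels inside a triangle (pts: (3, 2), colors: (3, 3))."""
    (x0, y0), (x1, y1) = pts.min(axis=0), pts.max(axis=0)  # bounding box
    T = np.array([[pts[0, 0] - pts[2, 0], pts[1, 0] - pts[2, 0]],
                  [pts[0, 1] - pts[2, 1], pts[1, 1] - pts[2, 1]]])
    Tinv = np.linalg.inv(T)
    for y in range(int(y0), int(y1) + 1):
        for x in range(int(x0), int(x1) + 1):
            if not (0 <= y < canvas.shape[0] and 0 <= x < canvas.shape[1]):
                continue
            l12 = Tinv @ (np.array([x, y], dtype=float) - pts[2])
            bary = np.array([l12[0], l12[1], 1.0 - l12.sum()])
            if (bary >= 0).all():             # pixel lies inside the triangle
                canvas[y, x] = bary @ colors  # barycentric color blend
    return canvas
```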
S407: replacing the face information in the image with the face information after shaping the face part, to obtain the image with the shaped face.
In specific implementation, rendering the three-dimensional model of the face in the image yields only the shaped face information, whereas the image usually also contains non-person content such as cars and cups in addition to the face. The shaped image is therefore obtained by replacing the face information in the original image with the shaped face information.
The technical solution of the present disclosure is described below in conjunction with a specific example.
Assume that the image shown in fig. 6 is obtained and the user issues a request for nose-thinning processing on the face in the image. A face region can then be detected in the image, key point detection is performed within the face region, and the face in the image is three-dimensionally reconstructed according to the detected face key points and the standard face three-dimensional model, to obtain the three-dimensional model of the face in the image.
Further, the shaping information pre-labeled on the standard face three-dimensional model for nose-thinning processing is obtained from the shaping information base; for each first vertex in the standard face three-dimensional model recorded in the shaping information, the corresponding second vertex in the three-dimensional model of the face in the image is determined, and the three-dimensional coordinates of the second vertex are adjusted according to the three-dimensional coordinate adjustment information corresponding to the first vertex.
Then, the three-dimensional model of the face in the image is rendered to obtain the shaped face region image, and each pixel at the corresponding position in the input image is replaced with the pixel of the shaped face region image, so that the shaped image is obtained; see fig. 7, which shows the shaped image.
When the method provided in the embodiments of the present disclosure is implemented in software or hardware or a combination of software and hardware, a plurality of functional modules may be included in the electronic device, and each functional module may include software, hardware, or a combination thereof.
Fig. 8 is a block diagram of an image processing apparatus according to an exemplary embodiment, which includes a receiving module 801, a reconstructing module 802, a shaping module 803, and a determining module 804.
a receiving module 801 configured to receive a request for three-dimensionally shaping a face in an image, wherein the request carries identification information of a shaping mode, and the shaping mode indicates the face part to be shaped and the manner of shaping adopted for that face part;
a reconstruction module 802 configured to perform key point detection on the face in the image, and perform three-dimensional reconstruction on the face in the image according to the detected face key points and a standard face three-dimensional model, to obtain a three-dimensional model of the face in the image;
a shaping module 803 configured to acquire, according to the identification information, shaping information pre-labeled on the standard face three-dimensional model in the corresponding shaping mode, and to shape the three-dimensional model of the face in the image according to the shaping information;
and a determining module 804 configured to determine the shaped image according to the shaped three-dimensional model of the face in the image.
Optionally, the shaping module 803 is specifically configured to perform:
for each vertex in the three-dimensional model of the face in the image, searching the shaping information for the three-dimensional coordinate adjustment information corresponding to the vertex;
and adjusting the three-dimensional coordinates of the vertex according to the three-dimensional coordinate adjustment information.
Optionally, the determining module 804 is specifically configured to perform:
rendering a three-dimensional model of a face in the image to obtain face information after shaping the face part;
and replacing the face information in the image with the shaped face information to obtain the image with the shaped face.
Optionally, the determining module 804 is specifically configured to perform:
determining, for each mesh in the three-dimensional model of the face in the image, the pixel area corresponding to the mesh on a two-dimensional plane;
and performing texture filling on the pixel area according to the face texture information corresponding to the mesh, to obtain the face information of the pixel area.
Optionally, the reconstruction module 802 is specifically configured to perform:
according to the detected face key points, determining the pose information of the face in the image at the time the device captured the image, wherein the pose information at least comprises the position and the orientation of the face;
determining the coordinates of each vertex of the standard face three-dimensional model projected onto a two-dimensional plane under the corresponding pose, according to the pose information of the face in the image, the projection parameters of the device, and the three-dimensional coordinates of each vertex in the standard face three-dimensional model;
and performing texture mapping on the standard face three-dimensional model according to the coordinates of the projected vertices on the two-dimensional plane and the image, to obtain the three-dimensional model of the face in the image.
The specific manner in which the various modules perform the operations in the apparatus of the above embodiments have been described in detail in connection with the embodiments of the method, and will not be described in detail herein.
The division of the modules in the embodiments of the present disclosure is schematic and is merely a division by logical function; other division manners are possible in actual implementation. In addition, the functional modules in the embodiments of the present disclosure may be integrated in one processor, may exist separately and physically, or two or more modules may be integrated in one module. The coupling of the individual modules to each other may be achieved through interfaces, which are typically electrical communication interfaces, although mechanical interfaces or other forms of interface are not excluded. Thus, modules illustrated as separate components may or may not be physically separate, and may be located in one place or distributed in different locations on the same or different devices. The integrated modules may be implemented in hardware or as software functional modules.
The embodiment of the disclosure also provides an electronic device, including: at least one processor, and a memory communicatively coupled to the at least one processor, wherein:
the memory stores instructions executable by the at least one processor, and when the instructions are executed by the at least one processor, the at least one processor is capable of performing the steps in any one of the image processing methods described above.
The disclosed embodiments also provide a storage medium; when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is capable of performing the steps of any one of the image processing methods described above.
In some possible embodiments, aspects of the image processing method provided in the present disclosure may also be implemented in the form of a computer program product, which when run on an electronic device, causes the electronic device to perform the steps of any of the image processing methods described above.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product for image processing of embodiments of the present disclosure may employ a portable compact disc read only memory (CD-ROM) and include program code and may run on a computing device. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such a division is merely exemplary and not mandatory. Indeed, the features and functions of two or more of the units described above may be embodied in one unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one unit described above may be further divided into a plurality of units to be embodied.
Furthermore, although the operations of the methods of the present disclosure are depicted in the drawings in a particular order, this is not required to or suggested that these operations must be performed in this particular order or that all of the illustrated operations must be performed in order to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform.
It will be apparent to those skilled in the art that embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present disclosure have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the disclosure.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present disclosure without departing from the spirit or scope of the disclosure. Thus, the present disclosure is intended to include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
Claims (10)
1. An image processing method, comprising:
receiving a request for three-dimensionally shaping a face in an image, wherein the request carries identification information of a shaping mode, and the shaping mode indicates the face part to be shaped and the manner of shaping adopted for that face part;
detecting key points of the faces in the image, and carrying out three-dimensional reconstruction on each face in the image according to the detected face key points and the standard face three-dimensional model, to obtain a three-dimensional model of each face in the image;
acquiring, according to the identification information, shaping information pre-labeled on the standard face three-dimensional model in the corresponding shaping mode, and shaping the three-dimensional model of each face in the image according to the shaping information;
determining a shaped image according to the shaped three-dimensional model of each face in the image;
wherein shaping the three-dimensional model of each face in the image according to the shaping information comprises:
for each first vertex in the standard face three-dimensional model recorded in the shaping information, determining the second vertex corresponding to the first vertex in the three-dimensional model of each face in the image;
and adjusting the three-dimensional coordinates of the second vertex according to the three-dimensional coordinate adjustment information corresponding to the first vertex, so as to reshape the three-dimensional model of the face.
2. The method of claim 1, wherein determining the shaped image according to the shaped three-dimensional model of each face in the image comprises:
rendering a three-dimensional model of each face in the image to obtain face information after shaping the face part;
and replacing the face information of each face in the image with the shaped face information to obtain the image with the shaped face.
3. The method of claim 2, wherein rendering the three-dimensional model of each face in the image to obtain the face information after shaping the face part comprises:
determining, for each mesh in the three-dimensional model of each face in the image, the pixel area corresponding to the mesh on a two-dimensional plane;
and performing texture filling on the pixel area according to the face texture information corresponding to the mesh, to obtain the face information of the pixel area.
4. The method according to claim 1, wherein the three-dimensional reconstruction of each face in the image based on the detected face keypoints and a standard face three-dimensional model comprises:
according to the detected face key points, determining the pose information of each face in the image at the time the device captured the image, wherein the pose information at least comprises the position and the orientation of the face;
determining the coordinates of each vertex of the standard face three-dimensional model projected onto a two-dimensional plane under the corresponding pose, according to the pose information of each face in the image, the projection parameters of the device, and the three-dimensional coordinates of each vertex in the standard face three-dimensional model;
and performing texture mapping on the standard face three-dimensional model according to the coordinates of the projected vertices on the two-dimensional plane and the image, to obtain the three-dimensional model of the face in the image.
5. An image processing apparatus, comprising:
a receiving module configured to receive a request for three-dimensionally shaping a face in an image, wherein the request carries identification information of a shaping mode, and the shaping mode indicates the face part to be shaped and the manner of shaping adopted for that face part;
a reconstruction module configured to perform key point detection on the faces in the image, and perform three-dimensional reconstruction on each face in the image according to the detected face key points and the standard face three-dimensional model, to obtain a three-dimensional model of each face in the image;
a shaping module configured to acquire, according to the identification information, shaping information pre-labeled on the standard face three-dimensional model in the corresponding shaping mode, and to shape the three-dimensional model of each face in the image according to the shaping information;
a determining module configured to determine a shaped image according to the shaped three-dimensional model of each face in the image;
wherein the shaping module is specifically configured to: for each vertex in the three-dimensional model of each face in the image, search the shaping information for the three-dimensional coordinate adjustment information corresponding to the vertex; and adjust the three-dimensional coordinates of the vertex according to the three-dimensional coordinate adjustment information, so as to reshape the three-dimensional model of the face.
6. The apparatus of claim 5, wherein the determining module is specifically configured to perform:
rendering a three-dimensional model of each face in the image to obtain face information after shaping the face part;
and replacing the face information of each face in the image with the shaped face information to obtain the image with the shaped face.
7. The apparatus of claim 6, wherein the determining module is specifically configured to perform:
determining, for each mesh in the three-dimensional model of each face in the image, the pixel area corresponding to the mesh on a two-dimensional plane;
and performing texture filling on the pixel area according to the face texture information corresponding to the mesh, to obtain the face information of the pixel area.
8. The apparatus of claim 5, wherein the reconstruction module is specifically configured to perform:
according to the detected face key points, determining the pose information of each face in the image at the time the device captured the image, wherein the pose information at least comprises the position and the orientation of the face;
determining the coordinates of each vertex of the standard face three-dimensional model projected onto a two-dimensional plane under the corresponding pose, according to the pose information of each face in the image, the projection parameters of the device, and the three-dimensional coordinates of each vertex in the standard face three-dimensional model;
and performing texture mapping on the standard face three-dimensional model according to the coordinates of the projected vertices on the two-dimensional plane and the image, to obtain the three-dimensional model of the face in the image.
9. An electronic device, comprising: at least one processor, and a memory communicatively coupled to the at least one processor, wherein:
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 4.
10. A storage medium, wherein, when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the method of any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910911254.1A CN110675489B (en) | 2019-09-25 | 2019-09-25 | Image processing method, device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110675489A (en) | 2020-01-10 |
CN110675489B (en) | 2024-01-23 |
Family
ID=69079453
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910911254.1A Active CN110675489B (en) | 2019-09-25 | 2019-09-25 | Image processing method, device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110675489B (en) |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111274916B (en) * | 2020-01-16 | 2024-02-02 | 华为技术有限公司 | Face recognition method and face recognition device |
CN113409409A (en) * | 2020-03-17 | 2021-09-17 | 腾讯科技(深圳)有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
CN113570634B (en) * | 2020-04-28 | 2024-07-12 | 北京达佳互联信息技术有限公司 | Object three-dimensional reconstruction method, device, electronic equipment and storage medium |
CN113850888A (en) * | 2020-06-28 | 2021-12-28 | 北京达佳互联信息技术有限公司 | Image processing method and device, electronic equipment and storage medium |
CN112241933B (en) * | 2020-07-15 | 2024-07-19 | 北京沃东天骏信息技术有限公司 | Face image processing method and device, storage medium and electronic equipment |
CN112669198A (en) * | 2020-10-29 | 2021-04-16 | 北京达佳互联信息技术有限公司 | Image special effect processing method and device, electronic equipment and storage medium |
CN112308955A (en) * | 2020-10-30 | 2021-02-02 | 北京字跳网络技术有限公司 | Texture filling method, device and equipment based on image and storage medium |
CN112257657B (en) * | 2020-11-11 | 2024-02-27 | 网易(杭州)网络有限公司 | Face image fusion method and device, storage medium and electronic equipment |
CN112669447B (en) * | 2020-12-30 | 2023-06-30 | 网易(杭州)网络有限公司 | Model head portrait creation method and device, electronic equipment and storage medium |
CN113240814B (en) * | 2021-05-12 | 2024-10-01 | 北京达佳互联信息技术有限公司 | Image processing method, device, electronic equipment and storage medium |
CN113256797A (en) * | 2021-06-03 | 2021-08-13 | 广州虎牙科技有限公司 | Semantic point determining method and device, electronic equipment and computer-readable storage medium |
CN113343879A (en) * | 2021-06-18 | 2021-09-03 | 厦门美图之家科技有限公司 | Method and device for manufacturing panoramic facial image, electronic equipment and storage medium |
CN113657357B (en) * | 2021-10-20 | 2022-02-25 | 北京市商汤科技开发有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN114187408B (en) * | 2021-12-15 | 2023-04-07 | 中国电信股份有限公司 | Three-dimensional face model reconstruction method and device, electronic equipment and storage medium |
CN114266693A (en) * | 2021-12-16 | 2022-04-01 | 阿里巴巴(中国)有限公司 | Image processing method, model generation method and equipment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5751363A (en) * | 1995-02-15 | 1998-05-12 | Nec Corporation | System and method for coding and/or decoding image-adaptive split region of motion picture |
CN106920277A (en) * | 2017-03-01 | 2017-07-04 | 浙江神造科技有限公司 | Simulation beauty and shaping effect visualizes the method and system of online scope of freedom carving |
CN108447017A (en) * | 2018-05-31 | 2018-08-24 | Oppo广东移动通信有限公司 | Face virtual face-lifting method and device |
CN108765273A (en) * | 2018-05-31 | 2018-11-06 | Oppo广东移动通信有限公司 | The virtual lift face method and apparatus that face is taken pictures |
CN108765351A (en) * | 2018-05-31 | 2018-11-06 | Oppo广东移动通信有限公司 | Image processing method, device, electronic equipment and storage medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB0117157D0 (en) * | 2001-07-16 | 2001-09-05 | Imec Inter Uni Micro Electr | Extraction, hierarchical representation and flexible compression of surface meshes derived from 3D data |
2019-09-25: CN CN201910911254.1A filed; published as CN110675489B (en); status: Active
Also Published As
Publication number | Publication date |
---|---|
CN110675489A (en) | 2020-01-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110675489B (en) | Image processing method, device, electronic equipment and storage medium | |
CN109859296B (en) | Training method of SMPL parameter prediction model, server and storage medium | |
CN108648269B (en) | Method and system for singulating three-dimensional building models | |
US11395715B2 (en) | Methods and systems for generating and using 3D images in surgical settings | |
CN104574267B (en) | Bootstrap technique and information processing equipment | |
US10762704B2 (en) | Method for establishing a deformable 3D model of an element, and associated system | |
US6434278B1 (en) | Generating three-dimensional models of objects defined by two-dimensional image data | |
US8884947B2 (en) | Image processing apparatus and image processing method | |
KR101744079B1 (en) | The face model generation method for the Dental procedure simulation | |
CN104376594A (en) | Three-dimensional face modeling method and device | |
KR20150107063A (en) | 3d scanning system using facial plastic surgery simulation | |
JP2002197443A (en) | Generator of three-dimensional form data | |
WO2018190805A1 (en) | Depth image pose search with a bootstrapped-created database | |
CN108182663A (en) | A kind of millimeter-wave image effect reinforcing method, equipment and readable storage medium storing program for executing | |
CN110599535A (en) | High-resolution human body real-time dynamic reconstruction method and device based on hash table | |
CN111382618A (en) | Illumination detection method, device, equipment and storage medium for face image | |
CN108655571A (en) | A kind of digital-control laser engraving machine, control system and control method, computer | |
CN114863061A (en) | Three-dimensional reconstruction method and system for remote monitoring medical image processing | |
CN117422802B (en) | Three-dimensional figure digital reconstruction method, device, terminal equipment and storage medium | |
KR20210147647A (en) | Apparatus and method for color synthesis of face images | |
CN112634439B (en) | 3D information display method and device | |
CN112561784B (en) | Image synthesis method, device, electronic equipment and storage medium | |
CN113781653A (en) | Object model generation method and device, electronic equipment and storage medium | |
CN112967329A (en) | Image data optimization method and device, electronic equipment and storage medium | |
CN117635814B (en) | Drivable 3D digital human body modeling method, system and equipment based on RGBD data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||