WO2018076437A1 - Face mapping method and apparatus - Google Patents

Face mapping method and apparatus

Info

Publication number
WO2018076437A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
dimensional
face image
dimensional face
key area
Prior art date
Application number
PCT/CN2016/107806
Other languages
English (en)
French (fr)
Inventor
朱洪达
刘亚辉
裴鸿刚
Original Assignee
宇龙计算机通信科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 宇龙计算机通信科技(深圳)有限公司
Publication of WO2018076437A1 publication Critical patent/WO2018076437A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/08 Projecting images onto non-planar surfaces, e.g. geodetic screens

Definitions

  • the present invention relates to the field of image processing technologies, and in particular, to a face mapping method and apparatus.
  • nowadays, three-dimensional character models are widely used in various "3D applications" and "3D game applications"; for example, the increasingly popular "virtual reality" and "augmented reality" are inseparable from 3D scene and 3D character modeling technology.
  • the 3D character avatar models in the "3D applications" and "3D game applications" of mobile terminals are currently preset in the mobile terminal application client or imported into the client over the network, and these 3D models are usually created on a computer.
  • to create a 3D character avatar model on the computer side, a model is first created from an ordinary "hand drawing" or "photograph"; once the 3D character avatar model is completed it has no texture map, no material quality, and no lighting effects, and after rendering it looks like a sculpture, as shown in FIG. 1; then a real object surface (a skin surface) is attached to the 3D character avatar model as a material so that the model has a realistic appearance, as shown in FIG. 2; finally, lighting effects are added by lighting the model to make the 3D character avatar model more realistic.
  • although multiple 3D character avatar models can be preset in this way, the user cannot substantially edit them on the mobile terminal, so the user experience is poor.
  • an embodiment of the present invention provides a face mapping method and apparatus to solve the problem that, in the prior art, the three-dimensional character avatar models in the various applications on a mobile terminal cannot be extensively edited.
  • the embodiment of the present invention adopts the following technical solutions:
  • a face mapping method includes: acquiring a three-dimensional avatar model to be texture-processed; acquiring a UV topology mesh unfolded from the three-dimensional avatar model; acquiring, by a mobile terminal, a two-dimensional face image as a texture material; acquiring a mapping relationship between the two-dimensional face image and the UV topology mesh; and attaching the two-dimensional face image to the three-dimensional avatar model according to the mapping relationship.
  • acquiring the three-dimensional avatar model to be texture-processed comprises: receiving a model selection instruction; and selecting a corresponding three-dimensional avatar model from a preset model library according to the model selection instruction, where the model library stores a plurality of three-dimensional avatar models of different geographical ranges, age groups, and genders.
  • acquiring the two-dimensional face image as the texture material by the mobile terminal comprises: separately collecting, by an image acquisition module of the mobile terminal, a left side image, a front image, and a right side image of the human face; and performing image synthesis processing on the collected left, front, and right images to obtain the two-dimensional face image as the texture material.
  • acquiring the mapping relationship between the two-dimensional face image and the UV topology mesh includes: dividing the two-dimensional face image into a plurality of units, one unit being one texture element (texel); and acquiring the texels of key areas on the two-dimensional face image and associating them with the grids in the corresponding key areas on the UV topology mesh to form a key-area mapping relationship, where the key areas include at least one of a binocular area, a binaural area, a nose area, and a mouth area.
  • associating the key areas on the two-dimensional face image with the grids in the corresponding key areas of the UV topology mesh to form the key-area mapping relationship includes: overlaying the UV topology mesh on the two-dimensional face image as a semi-transparent layer and moving the two-dimensional face image according to an image movement instruction, so that the texels of the key areas on the two-dimensional face image are aligned with the grids in the corresponding key areas on the UV topology mesh.
  • acquiring the mapping relationship between the two-dimensional face image and the UV topology mesh further includes: determining, according to the key-area mapping relationship between the texels of the key areas on the two-dimensional face image and the grids in the corresponding key areas of the UV topology mesh, a mapping relationship between the texels in the other face regions on the two-dimensional face image and the grids in the other corresponding regions of the UV topology mesh.
  • a face mapping apparatus, applied to a mobile terminal, comprises: a model selection module, configured to acquire a three-dimensional avatar model to be texture-processed; a grid acquisition module, configured to acquire a UV topology mesh unfolded from the three-dimensional avatar model; a texture material acquisition module, configured to acquire, by the mobile terminal, a two-dimensional face image as a texture material; a processing module, configured to acquire a mapping relationship between the two-dimensional face image and the UV topology mesh; and an execution module, configured to paste the two-dimensional face image onto the three-dimensional avatar model according to the mapping relationship.
  • the model selection module includes: an instruction receiving unit, configured to receive a model selection instruction; and a selection unit, configured to select a corresponding three-dimensional avatar model from a preset model library according to the model selection instruction, where the model library stores a plurality of three-dimensional avatar models of different geographical ranges, age groups, and genders.
  • the texture material acquisition module includes: an image acquisition control unit, configured to control an image acquisition module of the mobile terminal to separately collect a left side image, a front image, and a right side image of the human face; and an image synthesis processing unit, configured to perform image synthesis processing on the left, front, and right images to obtain a two-dimensional face image as the texture material.
  • the processing module includes: a dividing unit, configured to divide the two-dimensional face image into a plurality of units, one unit being one texture element; and a key area mapping unit, configured to acquire the texels of the key areas on the two-dimensional face image and associate them with the grids in the corresponding key areas on the UV topology mesh to form a key-area mapping relationship; the key areas include at least one of a binocular area, a binaural area, a nose area, and a mouth area.
  • in the face mapping method and apparatus provided by the embodiments of the present invention, the mapping relationship between the UV topology mesh unfolded from the three-dimensional avatar model to be texture-processed and a two-dimensional face image is acquired, where the two-dimensional face image is acquired by the mobile terminal, and the two-dimensional face image is attached to the three-dimensional avatar model according to the mapping relationship. This solves the prior-art problem that the three-dimensional character avatar models of the applications on a mobile terminal cannot be extensively edited, achieves the effect that the user can edit the 3D character avatar model in a mobile terminal application to the greatest extent, adds application functions to the 3D avatar model, and improves the satisfaction of the user experience.
  • FIG. 1 is a schematic diagram of a 3D character avatar model provided by the present invention.
  • FIG. 2 is a schematic diagram of a 3D character avatar model map provided by the present invention.
  • FIG. 3 is a schematic flowchart of a face mapping method according to Embodiment 1 of the present invention.
  • FIG. 4 is a schematic diagram of expanding a three-dimensional avatar model into a UV topology grid according to Embodiment 1 of the present invention
  • FIG. 5 is a schematic diagram of dividing a two-dimensional face image into a plurality of texture elements according to Embodiment 1 of the present invention.
  • FIG. 6-1 is a schematic diagram of a unit grid of the UV topology mesh according to Embodiment 1 of the present invention.
  • FIG. 6-2 is a first schematic diagram of the texture elements of a two-dimensional face image according to Embodiment 1 of the present invention.
  • FIG. 6-3 is a second schematic diagram of the texture elements of a two-dimensional face image according to Embodiment 1 of the present invention.
  • FIG. 7 is a schematic flowchart diagram of a face mapping method according to Embodiment 2 of the present invention.
  • FIG. 8 is a schematic diagram of a user box selection key area according to Embodiment 2 of the present invention.
  • FIG. 9 is a schematic structural diagram 1 of a face mapping apparatus according to Embodiment 3 of the present invention.
  • FIG. 10 is a second schematic structural diagram of a face mapping apparatus according to Embodiment 3 of the present invention.
  • FIG. 11 is a schematic structural diagram 3 of a face mapping apparatus according to Embodiment 3 of the present invention.
  • FIG. 12 is a schematic structural diagram 4 of a face mapping apparatus according to Embodiment 3 of the present invention.
  • FIG. 13 is a schematic structural diagram of a mobile terminal according to Embodiment 4 of the present invention.
  • the present invention is applicable to all terminals, including, for example, mobile phones, tablets, and the like.
  • the present invention will be further described in detail below with reference to the accompanying drawings.
  • Embodiment 1:
  • in order to enable the user to edit the three-dimensional character avatar model in a terminal application to the greatest extent, this embodiment provides a face mapping method, as shown in FIG. 3, including:
  • S301 Acquire a three-dimensional avatar model to be subjected to texture processing.
  • obtaining the three-dimensional avatar model to be texture-processed may include: receiving a model selection instruction, and selecting a corresponding three-dimensional avatar model from a preset model library according to the model selection instruction, where the model library contains a plurality of different types of three-dimensional avatar models.
  • in general, the three-dimensional avatar models of different geographical ranges, age groups, and genders are different, so the different types of three-dimensional avatar models in the model library can be divided according to geographical range, age group, and gender; for example, a model library may include three-dimensional avatar models for China, India, the United States, and so on, or three-dimensional avatar models for Europe and America, Asia, Africa, and the like.
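  • As a purely hypothetical illustration (the patent does not specify any data structure), such a preset model library keyed by geographical range, age group, and gender could be as simple as a lookup table; all names and file paths below are invented for the example.

```python
# Hypothetical preset model library; keys and file paths are invented examples.
MODEL_LIBRARY = {
    ("Asia", "adult", "female"): "models/asia_adult_female.obj",
    ("Asia", "adult", "male"): "models/asia_adult_male.obj",
    ("Europe", "child", "male"): "models/europe_child_male.obj",
}

def select_model(region: str, age_group: str, gender: str) -> str:
    """Return the preset 3D avatar model matching a model selection instruction."""
    return MODEL_LIBRARY[(region, age_group, gender)]
```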
  • S302 Acquire a UV topology mesh of the 3D avatar model to be mapped.
  • a schematic diagram of unfolding the three-dimensional avatar model to be processed into a UV topology mesh is shown in FIG. 4.
  • S303 Acquire a two-dimensional face image as a texture material by using the mobile terminal.
  • the two-dimensional face image used as the texture material can be obtained from the network through the mobile terminal.
  • the two-dimensional face image used as the texture material can also be obtained directly from local storage on the mobile terminal, or through the image acquisition module of the mobile terminal.
  • when the two-dimensional face image used as the texture material is obtained through the image acquisition module of the mobile terminal, the image acquisition module may separately collect the left side image, the front image, and the right side image of the face, and then perform image synthesis processing on the collected left, front, and right images to obtain the two-dimensional face image as the texture material.
  • specifically, the left side image, the front image, and the right side image of the human face may be obtained by three separate shots and then combined to obtain the two-dimensional face image as the texture material, or the three images may be obtained in a single panoramic shot and then combined. It should be understood that, in some cases, when the two-dimensional face image used as the texture material is obtained through the image acquisition module of the mobile terminal, the image acquisition module may also collect only the front image of the face; the specific shooting rules can be set flexibly by developers.
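  • As a rough sketch of the synthesis step (not the patent's actual algorithm), the three captured views could be resized to a common height and concatenated; a real implementation would also blend seams and correct perspective. OpenCV and NumPy are assumed to be available.

```python
import cv2
import numpy as np

def compose_face_texture(left_path, front_path, right_path, height=512):
    """Naive synthesis: resize the left, front, and right views to one height
    and place them side by side as a single texture-material image."""
    views = []
    for path in (left_path, front_path, right_path):
        img = cv2.imread(path)
        scale = height / img.shape[0]
        views.append(cv2.resize(img, (int(img.shape[1] * scale), height)))
    return np.hstack(views)

# texture = compose_face_texture("left.jpg", "front.jpg", "right.jpg")
```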
  • it should be noted that there is no strict timing restriction between these steps: the two-dimensional face image used as the texture material may be acquired by the mobile terminal first, the three-dimensional avatar model to be texture-processed acquired next, and the UV topology mesh of that model acquired last;
  • alternatively, the three-dimensional avatar model to be texture-processed may be acquired first, the two-dimensional face image used as the texture material then acquired by the mobile terminal, and the UV topology mesh of the model acquired last.
  • S304 Acquire a mapping relationship between the two-dimensional face image and the topology mesh.
  • mapping relationship between the two-dimensional face image and the topology mesh may include:
  • the two-dimensional face image is divided into a plurality of units, one unit being one texture element (texel); as shown in FIG. 5, the upper left corner of the two-dimensional avatar corresponds to the (0, 0) square unit, and the unit squares in FIG. 5 are the texels in this embodiment.
  • it should be understood that the same two-dimensional face image may be divided in multiple ways, and the specific division manner can be set flexibly by the developer;
  • the texels of the key areas on the two-dimensional face image are then acquired and associated with the grids in the corresponding key areas on the UV topology mesh to form a key-area mapping relationship, where the key areas include at least one of the binocular area, the binaural area, the nose area, and the mouth area.
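  • A minimal sketch of this division step, assuming a developer-chosen grid size (the patent leaves the grid size to the developer):

```python
import numpy as np

def split_into_texels(image: np.ndarray, rows: int, cols: int) -> dict:
    """Divide a 2D face image into a rows x cols grid of units (texels),
    indexed from (0, 0) at the upper-left corner, as in FIG. 5."""
    h, w = image.shape[:2]
    texels = {}
    for r in range(rows):
        for c in range(cols):
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            texels[(r, c)] = image[y0:y1, x0:x1]
    return texels
```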
  • mapping the key area on the two-dimensional face image to the grid in the corresponding key area in the UV topology mesh to form the key area mapping relationship may further include:
  • the UV topology mesh is overlaid on the two-dimensional face image as a semi-transparent layer, and the two-dimensional face image is moved according to an image movement instruction so that the texels of the key areas on the two-dimensional face image are aligned with the grids in the corresponding key areas on the UV topology mesh; or, the key areas on the two-dimensional face image are automatically identified by image recognition, the grids in the corresponding key areas on the UV topology mesh are aligned with the texels in the key areas on the two-dimensional face image, and the mesh is overlaid on the two-dimensional face image as a semi-transparent layer; the texels in the key areas on the two-dimensional face image and the grid corresponding to each texel are then acquired according to a key-area division instruction, and the texel serving as the center point of each key area and the center grid corresponding to that center point are calculated.
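  • A hedged sketch of the semi-transparent overlay and image-movement step, using OpenCV; the offset (dx, dy) stands in for the image movement instruction, and both images are assumed to have the same size and type.

```python
import cv2
import numpy as np

def overlay_uv_mesh(face_img, mesh_img, alpha=0.5, dx=0, dy=0):
    """Shift the face image by (dx, dy) -- mimicking an image movement
    instruction -- and overlay the UV topology mesh as a semi-transparent layer."""
    h, w = face_img.shape[:2]
    shift = np.float32([[1, 0, dx], [0, 1, dy]])
    moved = cv2.warpAffine(face_img, shift, (w, h))
    return cv2.addWeighted(moved, 1.0 - alpha, mesh_img, alpha, 0)
```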
  • the key-area division instruction in this embodiment may be issued by the user; that is, the texels in the key areas on the two-dimensional face image and the grid corresponding to each texel are obtained from the user's selection instruction. For example, if the user box-selects the nose region on the terminal screen, the texels in the box-selected nose region on the two-dimensional face image and the grid corresponding to each of those texels are acquired accordingly, and the texel serving as the center point of the box-selected nose region and the center grid corresponding to that center point are calculated.
  • the key-area division instruction in this embodiment may also be preset by the developer: when the texels of a key area on the two-dimensional face image are aligned with the grids in the corresponding key area on the UV topology mesh, and the UV topology mesh is overlaid on the two-dimensional face image as a semi-transparent layer, the key area can be automatically identified by image recognition as the area to be mapped, and the texels in that key area on the two-dimensional face image and the grid corresponding to each texel are then acquired automatically.
  • for example, if the preset key-area division instruction is to take the binocular area as the key area for mapping, then once image recognition determines that an area is the binocular area, the texels in the binocular area on the two-dimensional face image and the grid corresponding to each texel are acquired, and the texel serving as the center point of the binocular area and the center grid corresponding to that center point are calculated. It should be understood that the center point in this embodiment can be calculated by a program preset in the terminal.
  • the association of the texels of a key area on the two-dimensional face image with the grids in the corresponding key area on the UV topology mesh, and the derivation of the related center points, are further illustrated as follows: FIG. 6-1 shows one unit grid of the UV topology mesh, FIG. 6-2 shows the texels of two-dimensional face image X, and FIG. 6-3 shows the texels of two-dimensional face image Y. Suppose the grid of FIG. 6-1 is mapped to image X of FIG. 6-2 and to image Y of FIG. 6-3 respectively: because the UV topology mesh has u and v coordinate axes of length 1 and its center coordinate is (0.5, 0.5), the corresponding center point of image X is texel (2, 2), and the corresponding center point of image Y is texel (3, 3).
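  • The center-point correspondence can be reproduced with a simple coordinate conversion; the grid sizes below (5 × 5 for image X, 7 × 7 for image Y) are assumptions, since the patent does not state how many texels each image contains.

```python
import math

def uv_to_texel(u: float, v: float, cols: int, rows: int):
    """Map a UV coordinate (u, v in [0, 1]) to the index of the texel it falls
    in, with texel (0, 0) at the upper-left corner."""
    col = min(int(math.floor(u * cols)), cols - 1)
    row = min(int(math.floor(v * rows)), rows - 1)
    return col, row

print(uv_to_texel(0.5, 0.5, 5, 5))  # (2, 2) -- assumed center texel of image X
print(uv_to_texel(0.5, 0.5, 7, 7))  # (3, 3) -- assumed center texel of image Y
```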
  • acquiring the mapping relationship between the two-dimensional face image and the topology mesh in S304 of this embodiment may further include: determining, according to the key-area mapping relationship between the texels of the key areas on the two-dimensional face image and the grids in the corresponding key areas of the UV topology mesh, a mapping relationship between the texels in the other face regions on the two-dimensional face image and the grids in the other corresponding regions of the UV topology mesh.
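  • The patent only states that the remaining mapping is determined "according to" the key-area mapping relationship; one possible realisation (an assumption, not the patent's stated method) is to fit an affine map from the key-area texel centers to their matched UV-grid centers and apply it to texels in other face regions. The correspondences below are invented example values.

```python
import numpy as np

def fit_region_mapping(key_texels, key_grids):
    """Fit a least-squares affine map from key-area texel centers (image grid
    coordinates) to their matched UV-grid centers, and return a function that
    predicts the grid position of any other texel."""
    src = np.hstack([np.asarray(key_texels, float), np.ones((len(key_texels), 1))])
    dst = np.asarray(key_grids, float)
    params, *_ = np.linalg.lstsq(src, dst, rcond=None)  # 3 x 2 affine parameters
    return lambda texel: tuple(np.append(np.asarray(texel, float), 1.0) @ params)

# Invented example correspondences (eye, eye, nose, mouth centers):
to_grid = fit_region_mapping([(3, 4), (9, 4), (6, 7), (6, 10)],
                             [(2, 3), (8, 3), (5, 6), (5, 9)])
print(to_grid((6, 5)))  # grid cell predicted for a cheek texel, approx. (5.0, 4.0)
```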
  • S305 Paste the two-dimensional face image onto the three-dimensional avatar model according to the mapping relationship.
  • in this embodiment, lighting effects may be applied to the texture-processed three-dimensional avatar model based on a simple illumination model, and rendering may then be performed to obtain the 3D avatar model presented on the terminal; the user can observe the rendering effect of the 3D avatar model and then "fine-tune" it as needed to correct flaws and make it more realistic.
  • for example, the user can customize the skin color of the 3D avatar model to hide unrealistic areas, or cover imperfections by selecting from a variety of preset 3D avatar hairstyles.
  • in order to make the final 3D model more realistic, BUMP (bump) texture processing may also be performed after the two-dimensional face image is attached to the three-dimensional avatar model according to the mapping relationship; that is, a further layer of texture is mapped onto the three-dimensional avatar model already carrying the two-dimensional face image, where the mapped texture has the same content as the two-dimensional face image but is slightly offset in position, so as to better represent bump details such as pores and wrinkles.
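  • The offset re-mapping described here can be mimicked, very roughly, by blending the texture with a slightly shifted copy of itself; this is only an illustrative sketch (modern pipelines would typically use a normal or height map instead).

```python
import numpy as np

def offset_bump_layer(texture: np.ndarray, dx: int = 1, dy: int = 1, strength: float = 0.3):
    """Blend the face texture with a copy of itself shifted by (dx, dy) so that
    edges such as pores and wrinkles read as slight relief."""
    shifted = np.roll(texture.astype(np.float32), shift=(dy, dx), axis=(0, 1))
    blended = (1.0 - strength) * texture.astype(np.float32) + strength * shifted
    return np.clip(blended, 0, 255).astype(np.uint8)
```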
  • with the face mapping method provided by this embodiment, the mapping relationship between the UV topology mesh unfolded from the three-dimensional avatar model to be texture-processed and the two-dimensional face image is acquired, and the two-dimensional face image is then attached to the three-dimensional avatar model according to the mapping relationship, so that the user can edit the three-dimensional character avatar model in a mobile terminal application to the greatest extent and obtain a better experience.
  • Embodiment 2:
  • to better illustrate the present invention, this embodiment provides a more specific face mapping method, as shown in FIG. 7, including:
  • S701: Select a three-dimensional avatar model to be texture-processed from the preset model library according to a model selection instruction, and unfold the UV topology mesh of the selected three-dimensional avatar model.
  • the model selection instruction may be delivered by the user through the terminal, and the model library includes a plurality of different types of three-dimensional avatar models.
  • the three-dimensional avatar models of different geographical ranges, age groups, and genders are different. Therefore, different types of three-dimensional avatar models in the model library can be divided according to different geographical ranges, age groups, and genders.
  • S702 Acquire a two-dimensional face image as a texture material by using an image acquisition module of the mobile terminal.
  • the image acquisition module in S702 of this embodiment may separately collect the left side image, the front image, and the right side image of the human face, and then perform image synthesis processing on the collected left, front, and right images to obtain the two-dimensional face image used as the texture material. Specifically, the three images may be obtained by three separate shots and then combined into the two-dimensional face image, or they may be obtained in a single panoramic shot and then combined. It should be understood that, in some cases, when the two-dimensional face image used as the texture material is obtained through the image acquisition module of the mobile terminal, the image acquisition module may also collect only the front image of the face; the specific shooting rules can be set flexibly by developers.
  • S703: Divide the two-dimensional face image into a plurality of units, one unit being one texture element, and acquire the texels of the key areas on the two-dimensional face image.
  • the key area may include at least one of a binocular area, a binaural area, a nasal area, and a mouth area.
  • in this embodiment, in order to make the finally obtained three-dimensional avatar model more realistic, the key areas are the binocular area, the binaural area, the nose area, and the mouth area.
  • S704: Overlay the UV topology mesh on the two-dimensional face image as a semi-transparent layer, and move the two-dimensional face image according to an image movement instruction so that the texels of the key areas on the two-dimensional face image are aligned with the grids in the corresponding key areas on the UV topology mesh.
  • S705: Acquire, according to a key-area division instruction issued by the user, the texels in the key areas on the two-dimensional face image and the grid corresponding to each texel, and calculate the texel serving as the center point of each key area and the center grid corresponding to that center point.
  • the key-area division instruction issued by the user in this embodiment may be the user selecting, on the terminal screen, the texels in the key areas on the two-dimensional face image and the grid corresponding to each texel. For example, the user box-selects the binocular area, the binaural area, the nose area, and the mouth area on the terminal screen, as shown in FIG. 8; the texels in the box-selected binocular, binaural, nose, and mouth areas on the two-dimensional face image and the grids corresponding to those texels are then acquired accordingly, and the texel serving as the center point of each box-selected area and the center grid corresponding to that center point are calculated.
  • the key-area division instruction in this embodiment may also be preset by the developer: when the texels of a key area on the two-dimensional face image are aligned with the grids in the corresponding key area on the UV topology mesh, and the UV topology mesh is overlaid on the two-dimensional face image as a semi-transparent layer, the key area can be automatically identified by image recognition as the area to be mapped, and the texels in that key area on the two-dimensional face image and the grid corresponding to each texel are then acquired automatically.
  • for example, if the preset key-area division instruction is to take the binocular area, the binaural area, the nose area, and the mouth area as the key areas for mapping, then once image recognition determines that an area is any one of these areas, the texels in the corresponding area on the two-dimensional face image and the grid corresponding to each texel are acquired, and the texel serving as the center point of the corresponding area and the center grid corresponding to that center point are calculated. It should be understood that the center point in this embodiment can be calculated by a program preset in the terminal.
  • S706: Paste the two-dimensional face image onto the three-dimensional avatar model, render it to obtain the 3D avatar model, and present it to the end user.
  • S707 Receive an instruction issued by the user, and modify the 3D avatar model according to the instruction.
  • in S707 of this embodiment, the user can issue adjustment instructions according to the rendering effect of the 3D avatar model presented on the terminal, so that the final 3D avatar model is more realistic; that is, a skin color adjustment instruction issued by the user can be received, and the skin color of the 3D avatar model adjusted to hide unrealistic areas.
  • the user can also issue a 3D hairstyle selection instruction according to the rendering effect of the 3D avatar model presented on the terminal to select a suitable hairstyle to cover blemishes; that is, a 3D hairstyle selection instruction issued by the user can be received, and a preset 3D hairstyle selected to cover imperfect areas such as the hairline, "seams", and ears.
  • with the face mapping method provided by this embodiment, the user can edit the three-dimensional character avatar model in a mobile terminal application to the greatest extent and thus obtain a better experience.
  • Embodiment 3:
  • in order to optimize the three-dimensional character avatar model in mobile terminal applications so that the user can extensively edit it, this embodiment provides a face mapping apparatus, as shown in FIG. 9, which is applied to a mobile terminal and includes: a model selection module 91, a grid acquisition module 92, a texture material acquisition module 93, a processing module 94, and an execution module 95.
  • the model selection module 91 is configured to acquire a three-dimensional avatar model to be texture-processed; the grid acquisition module 92 is configured to acquire the UV topology mesh unfolded from the three-dimensional avatar model; the texture material acquisition module 93 is configured to acquire, by the mobile terminal, a two-dimensional face image as a texture material; the processing module 94 is configured to acquire a mapping relationship between the two-dimensional face image and the UV topology mesh; and the execution module 95 is configured to paste the two-dimensional face image onto the three-dimensional avatar model according to the mapping relationship.
  • the model selection module 91 may include an instruction receiving unit 911 and a selection unit 912, as shown in FIG. 10. The instruction receiving unit 911 is configured to receive a model selection instruction, and the selection unit 912 is configured to select a corresponding three-dimensional avatar model from the preset model library according to the model selection instruction, where the model library contains a plurality of different types of three-dimensional avatar models.
  • in general, the three-dimensional avatar models of different geographical ranges, age groups, and genders are different, so the different types of three-dimensional avatar models in the model library can be divided according to geographical range, age group, and gender; for example, a model library may include three-dimensional avatar models for China, India, the United States, and so on, or three-dimensional avatar models for Europe and America, Asia, Africa, and the like.
  • the texture material acquisition module 93 in this embodiment may include an image acquisition control unit 931 and an image synthesis processing unit 932, as shown in FIG. 11. The image acquisition control unit 931 is configured to control the image acquisition module of the mobile terminal to separately collect the left side image, the front image, and the right side image of the human face; the image synthesis processing unit 932 is configured to perform image synthesis processing on the left, front, and right images to obtain a two-dimensional face image as the texture material.
  • the texture material obtaining module 93 in this embodiment can also directly acquire a two-dimensional face image as a texture material from the network or locally.
  • it should be understood that the image acquisition control unit 931 in this embodiment may control the image acquisition module to obtain the left, front, and right images of the human face by three separate shots; of course, the image acquisition control unit 931 may also control the image acquisition module to obtain the three images in a single panoramic shot; the image synthesis processing unit 932 then performs image synthesis processing on the left, front, and right images to obtain a two-dimensional face image as the texture material.
  • the processing module 94 in this embodiment includes a dividing unit 941 and a key area mapping unit 942.
  • the dividing unit 941 is configured to divide the two-dimensional face image into a plurality of units, and one unit is a texture element.
  • the key area mapping unit 942 is configured to acquire the texel of the key area on the two-dimensional face image and associate it with the grid in the corresponding key area on the UV topology mesh to form a key area mapping relationship; the key area includes the binocular area At least one of a binaural region, a nasal region, and a mouth region.
  • the key area mapping unit 942 in this embodiment may further include a first alignment subunit or a second alignment subunit. The first alignment subunit is configured to overlay the UV topology mesh on the two-dimensional face image as a semi-transparent layer and to move the two-dimensional face image according to an image movement instruction, so that the texels of the key areas on the two-dimensional face image are aligned with the grids in the corresponding key areas on the UV topology mesh.
  • the second alignment subunit is configured to automatically identify the key areas on the two-dimensional face image by image recognition, align the grids in the corresponding key areas on the UV topology mesh with the texels in the key areas on the two-dimensional face image, and overlay the mesh on the two-dimensional face image as a semi-transparent layer.
  • in this case, the key area mapping unit 942 in this embodiment further includes a calculation subunit, configured to acquire, according to the key-area division instruction, the texels in the key areas on the two-dimensional face image and the grid corresponding to each texel, and to calculate the texel serving as the center point of each key area and the center grid corresponding to that center point.
  • the key-area division instruction in this embodiment may be issued by the user; that is, the texels in the key areas on the two-dimensional face image and the grid corresponding to each texel are obtained from the user's selection instruction. For example, if the user box-selects the nose region on the terminal screen, the texels in the box-selected nose region on the two-dimensional face image and the grid corresponding to each of those texels are acquired accordingly, and the texel serving as the center point of the box-selected nose region and the center grid corresponding to that center point are calculated.
  • the center point can be obtained by an algorithm in the terminal system; for example, if the user box-selects the nose region with a rectangular frame, the intersection of the diagonals of that rectangle can be used as the center point.
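  • A small sketch of this center-point rule, with an invented 512 × 512 image and an 8 × 8 texel grid (both are assumptions, since the patent leaves these sizes to the developer):

```python
def rect_center_texel(x0, y0, x1, y1, img_w, img_h, cols, rows):
    """Return the intersection of the selection rectangle's diagonals (its
    center point) and the index of the texel containing that point."""
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0  # diagonal intersection
    col = min(int(cx * cols / img_w), cols - 1)
    row = min(int(cy * rows / img_h), rows - 1)
    return (cx, cy), (col, row)

print(rect_center_texel(200, 230, 312, 330, 512, 512, 8, 8))
# ((256.0, 280.0), (4, 4)) -- box-selected nose region on an invented example
```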
  • the key-area division instruction in this embodiment may also be preset by the developer: when the texels of a key area on the two-dimensional face image are aligned with the grids in the corresponding key area on the UV topology mesh, and the UV topology mesh is overlaid on the two-dimensional face image as a semi-transparent layer, the key area can be automatically identified by image recognition as the area to be mapped, and the texels in that key area on the two-dimensional face image and the grid corresponding to each texel are then acquired automatically.
  • for example, if the preset key-area division instruction is to take the binocular area as the key area for mapping, then once image recognition determines that an area is the binocular area, the texels in the binocular area on the two-dimensional face image and the grid corresponding to each texel are acquired, and the texel serving as the center point of the binocular area and the center grid corresponding to that center point are calculated. It should be understood that the center point in this embodiment can be calculated by a program preset in the terminal.
  • it should also be noted that the processing module 94 in this embodiment may further include a common area mapping unit, configured to determine, according to the key-area mapping relationship formed by the key area mapping unit 942 between the texels of the key areas on the two-dimensional face image and the grids in the corresponding key areas on the UV topology mesh, the mapping relationship between the texels in the other face regions on the two-dimensional face image and the grids in the other corresponding regions of the UV topology mesh.
  • in this embodiment, after the execution module 95 pastes the two-dimensional face image onto the three-dimensional avatar model according to the mapping relationship, an illumination module and a rendering module on the mobile terminal may further perform subsequent processing on the texture-processed three-dimensional avatar model so that the user obtains a better visual effect: the illumination module may apply lighting effects to the texture-processed three-dimensional avatar model based on a simple illumination model, and the rendering module may perform rendering processing on the texture-processed three-dimensional avatar model.
  • it should also be noted that, in this embodiment, the user can control the specific effect of the 3D avatar model presented on the screen of the mobile terminal by operating on that screen; for example, the user can customize the skin color of the 3D avatar model to hide unrealistic areas, or cover imperfections by selecting from a variety of preset 3D avatar hairstyles.
  • in this embodiment, in order to make the final 3D model more realistic, BUMP (bump) texture processing may also be performed after the execution module 95 pastes the two-dimensional face image onto the three-dimensional avatar model according to the mapping relationship; that is, a further layer of texture is mapped onto the three-dimensional avatar model already carrying the two-dimensional face image, where the mapped texture has the same content as the two-dimensional face image but is slightly offset in position, so as to better represent bump details such as pores and wrinkles.
  • in the face mapping apparatus provided by this embodiment, the processing module acquires the mapping relationship between the UV topology mesh unfolded from the three-dimensional avatar model to be texture-processed and the two-dimensional face image, and the execution module then pastes the two-dimensional face image onto the three-dimensional avatar model according to the mapping relationship, so that the user can edit the three-dimensional character avatar model in a mobile terminal application to the greatest extent and obtain a better experience.
  • Embodiment 4:
  • the present embodiment provides a mobile terminal.
  • a partial structural block diagram of the mobile terminal is shown in FIG. 13; the mobile terminal includes at least one processor 135 and a storage device, which may specifically be a memory 134 or a hard disk, as well as an input unit 131, a display unit 132, a power supply 133, and other components.
  • it should be noted that the mobile terminal structure illustrated in FIG. 13 does not constitute a limitation of the mobile terminal, which may include more or fewer components than those illustrated.
  • the input unit 131 can be configured to receive various information input and generate signal inputs related to user settings and function control of the mobile phone, for example, information of a three-dimensional avatar model selected by the user in the "three-dimensional application" can be received.
  • the input unit 131 may include a touch screen 1311 and other input devices 1312.
  • the touch screen 1311 may include a touch detection device and a touch controller: the touch detection device detects the user's touch position and the signal produced by the touch operation and passes the signal to the touch controller, and the touch controller converts the received touch information into contact coordinates, sends them to the processor 135, and can receive and execute commands sent by the processor 135; the other input devices 1312 may include, but are not limited to, a physical keyboard, function keys, a mouse, and the like.
  • the display unit 132 is configured to display information input by the user or information provided to the user and various menus of the mobile terminal.
  • the display unit 132 includes but is not limited to the display panel 1321, and the display panel 1321 may be configured in the form of a liquid crystal display, a light emitting diode, or the like.
  • the three-dimensional avatar model can be correspondingly displayed on the display panel 1321 according to the three-dimensional avatar model selected by the user.
  • the touch screen 1311 may cover the display panel 1321; when the touch screen 1311 detects a touch operation on or near it, the operation is passed to the processor 135 to determine the type of the touch event, and the processor 135 then provides a corresponding visual output on the display panel 1321 according to the type of the touch event.
  • the touch screen 1311 can also be integrated with the display panel 1321 to implement input and output of the mobile terminal.
  • the mobile terminal further includes a power source 133 for supplying power to various components, such as a battery.
  • the power supply 133 can be connected to the processor 135 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
  • the memory 134 can store software programs and various modules, and the processor 135 executes various functional applications and data processing of the mobile terminal by running software programs and modules stored in the memory 134.
  • the memory 134 may specifically include a non-volatile memory 134, a volatile memory 134, and the like.
  • a plurality of instructions are stored in the memory 134 to implement the face mapping method of the present invention.
  • the processor 135 is the control center of the mobile terminal; it connects the various parts of the entire mobile terminal using various interfaces and lines, and, by running or executing the software programs or modules stored in the memory 134 and calling the data stored in the memory 134, executes the functions and data processing of the mobile terminal, thereby monitoring the mobile terminal as a whole.
  • processor 135 executes instructions within memory 134 to:
  • the processor 135 acquires a three-dimensional avatar model to be subjected to texture processing and a two-dimensional face image as a texture material;
  • the processor 135 expands the acquired three-dimensional avatar model to obtain a UV topology mesh.
  • the processor 135 acquires a mapping relationship between the two-dimensional face image and the UV topology mesh;
  • according to the mapping relationship, the processor 135 pastes the two-dimensional face image onto the three-dimensional avatar model.
  • the acquiring, by the processor 135, the three-dimensional avatar model to be subjected to the mapping processing further includes:
  • the processor 135 receives a model selection instruction and, according to the model selection instruction, selects a corresponding three-dimensional avatar model from the preset model library, wherein the model library stores a plurality of three-dimensional avatar models of different geographical ranges, age groups, and genders.
  • the processor 135 acquires a two-dimensional face image as a texture material, including:
  • the left side image, the front side image, and the right side image of the human face are respectively collected by the image acquisition module of the mobile terminal;
  • the collected left image, front image, and right image are subjected to image synthesis processing to obtain a two-dimensional face image as a texture material.
  • the processor 135 acquiring the mapping relationship between the two-dimensional face image and the UV topology mesh further includes: dividing the two-dimensional face image into a plurality of units, one unit being one texture element;
  • acquiring the texels of the key areas on the two-dimensional face image and associating them with the grids in the corresponding key areas on the UV topology mesh to form a key-area mapping relationship, where the key areas include at least one of the binocular area, the binaural area, the nose area, and the mouth area.
  • the processor 135 maps the key area on the two-dimensional face image to the grid in the corresponding key area in the UV topology mesh to form a key area mapping relationship, including:
  • the UV topology mesh is overlaid on the two-dimensional face image as a semi-transparent layer, and the two-dimensional face image is moved according to an image movement instruction so that the texels of the key areas on the two-dimensional face image are aligned with the grids in the corresponding key areas on the UV topology mesh; or, the key areas on the two-dimensional face image are automatically identified by image recognition, the grids in the corresponding key areas on the UV topology mesh are aligned with the texels in the key areas on the two-dimensional face image, and the mesh is overlaid on the two-dimensional face image as a semi-transparent layer;
  • the texture elements in the key area on the two-dimensional face image and the grid corresponding to each texture element are obtained, and the texture element as the center point in the key area and the center grid corresponding to the center point are calculated.
  • the mobile terminal provided by the embodiment can enable the user to independently edit the three-dimensional avatar model in the mobile terminal application to achieve the effect desired by the user, and make the function of the mobile terminal more diversified and improve the user experience.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A face mapping method and apparatus. A mapping relationship is acquired between a two-dimensional face image and the UV topology mesh unfolded from a three-dimensional avatar model to be texture-processed, wherein the two-dimensional face image is acquired by a mobile terminal, and the two-dimensional face image is pasted onto the three-dimensional avatar model according to the mapping relationship. This achieves the effect that a user can edit, to the greatest extent, the three-dimensional character avatar model in a mobile terminal application, makes the functional applications of the three-dimensional avatar model more diverse, allows the user to customize the three-dimensional head model, and improves the satisfaction of the user experience.

Description

Face mapping method and apparatus
This application claims priority to Chinese patent application No. 201610939875.7, entitled "一种人脸贴图方法及装置" ("Face mapping method and apparatus"), filed with the Chinese Patent Office on October 25, 2016, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of image processing technologies, and in particular to a face mapping method and apparatus.
Background
With continuous advances in technology, three-dimensional character models are now widely used in various "3D applications" and "3D game applications"; for example, the increasingly popular "virtual reality" and "augmented reality" are inseparable from 3D scene and 3D character modeling technology. At present, however, the 3D character avatar models in the "3D applications" and "3D game applications" of mobile terminals are either preset in the mobile application client or imported into it over the network, and these 3D models are usually created on a computer. Taking the creation of a 3D character avatar model on a computer as an example, a 3D character avatar model is first created from an ordinary "hand drawing" or "photograph"; once completed, it has no texture map, no material quality, and no lighting effects, and after rendering it looks like a sculpture, as shown in FIG. 1. A real object surface (a skin surface) is then attached to the 3D character avatar model as a material so that the model has a realistic appearance, as shown in FIG. 2; finally, lighting effects are added by lighting the model to make it even more realistic. Although multiple 3D character avatar models can be preset in this way, the user cannot substantially edit these models on the mobile terminal, so the user experience is poor.
发明内容
本发明实施例提供一种人脸贴图方法及装置,以解决现有技术中无法对移动终端上各种应用中的三维人物头像模型进行大幅度编辑的问题。
为解决上述技术问题,本发明实施例采用以下技术方案:
一种人脸贴图方法,包括:获取待进行贴图处理的三维头像模型;获取所述三维头像模型展开的UV拓扑网格;通过移动终端获取作为贴图材料的二维人脸图像;获取所述二维人脸图像与所述UV拓扑网格的映射关系;根据所述映射关系将所述二维人脸图像贴到所述三维头像模型上。
进一步地,获取待进行贴图处理的三维头像模型包括:接收模型选择指令;根据所述模型选择指令从预设的模型库中选择对应的三维头像模型,所述模型库中存储有多个不同地域范围、年龄段以及性别的三维头像模型。
进一步地,通过移动终端获取作为贴图材料的二维人脸图像包括:通过所述移动终端的图像采集模块分别采集人脸的左侧图像、正面图像以及右侧图像;将采集的所述左侧图像、正面图像以及右侧图像进行图像合成处理得到作为贴图材料的二维人脸图像。
进一步地,获取所述二维人脸图像与所述UV拓扑网格的映射关系包括:
将所述二维人脸图像划分为多个单元,一个单元为一个纹理元素;获 取所述二维人脸图像上关键区域的纹理元素,并将其与所述UV拓扑网格上对应关键区域内的网格进行对应形成关键区域映射关系;所述关键区域包括双眼区域、双耳区域、鼻区域、嘴区域中的至少一个。
进一步地,将所述二维人脸图像上关键区域与所述UV拓扑网格中对应关键区域内的网格进行对应形成关键区域映射关系包括:将所述UV拓扑网格以半透明图层方式覆盖在所述二维人脸图像上,根据图像移动指令将所述二维人脸图像进行位置移动,以将所述二维人脸图像上的关键区域的纹理元素与所述UV拓扑网格上对应的关键区域内的网格进行对准;或,通过图像识别方式自动识别所述二维人脸图像上的关键区域,将所述UV拓扑网格上相应关键区域内的网格与所述二维人脸图像上的关键区域内的纹理元素进行对准,并以半透明图层方式覆盖在所述二维人脸图像上;根据关键区域划分指令获取所述二维人脸图像上的关键区域内的纹理元素以及各纹理元素对应的网格,并计算得到关键区域内作为中心点的纹理元素以及该中心点对应的中心网格。
进一步地,获取所述二维人脸图像与所述UV拓扑网格的映射关系还包括:根据所述二维人脸图像上关键区域的纹理元素与所述UV拓扑网格中对应关键区域内的网格的关键区域映射关系,确定所述二维人脸图像上其他人脸区域中的纹理元素与所述UV拓扑网格中其他对应区域内的网格的映射关系。
一种人脸贴图装置,应用于移动终端,包括:模型选择模块,设置为获取待进行贴图处理的三维头像模型;网格获取模块,设置为获取所述三维头像模型展开的UV拓扑网格;贴图材料获取模块,设置为通过移动终端获取作为贴图材料的二维人脸图像;处理模块,设置为获取所述二维人脸图像与所述UV拓扑网格的映射关系;执行模 块,设置为根据所述映射关系将所述二维人脸图像贴到所述三维头像模型上。
进一步地,所述模型选择模块包括:指令接收单元,设置为接收模型选择指令;选择单元,设置为根据所述模型选择指令从预设的模型库中选择对应的三维头像模型,所述模型库中存储有多个不同地域范围、年龄段以及性别的三维头像模型。
进一步地,所述贴图材料获取模块包括:图像采集控制单元,设置为控制所述移动终端的图像采集模块分别采集人脸的左侧图像、正面图像以及右侧图像;图像合成处理单元,设置为将所述左侧图像、正面图像以及右侧图像进行图像合成处理得到作为贴图材料的二维人脸图像。
进一步地,所述处理模块包括:划分单元,设置为将所述二维人脸图像划分为多个单元,一个单元为一个纹理元素;关键区映射单元,设置为获取所述二维人脸图像上关键区域的纹理元素,并将其与所述UV拓扑网格上对应关键区域内的网格进行对应形成关键区域映射关系;所述关键区域包括双眼区域、双耳区域、鼻区域、嘴区域中的至少一个。
有益效果
本发明实施例提供的一种人脸贴图方法及装置,通过获取待进行贴图处理的三维头像模型展开的UV拓扑网格与二维人脸图像的映射关系,其中,二维人脸图像通过移动终端获取,根据映射关系将二维人脸图像贴到三维头像模型上,解决现有技术中无法对移动终端上各应用三维人物头像模型进行大幅度编辑的问题,达到了使用户可以对移动终端应用中的三维人物头像模型进行最大程度编辑的效果,增 加了三维头像模型的应用功能,提升了用户体验的满意度。
附图说明
图1为本发明提供的3D人物头像模型示意图;
图2为本发明提供的在3D人物头像模型贴图的示意图;
图3为本发明实施例一提供的人脸贴图方法的流程示意图;
图4为本发明实施例一提供的三维头像模型展开为UV拓扑网格的示意图;
图5为本发明实施例一提供的二维人脸图像划分为多个纹理元素的示意图;
图6-1为本发明实施例一提供的UV拓扑单元网格示意图;
图6-2为本发明实施例一提供的二维人脸图像纹理元素示意图一;
图6-3为本发明实施例一提供的二维人脸图像纹理元素示意图二;
图7为本发明实施例二提供的人脸贴图方法的流程示意图;
图8为本发明实施例二提供的用户框选关键区域的示意图;
图9为本发明实施例三提供的人脸贴图装置的结构示意图一;
图10为本发明实施例三提供的人脸贴图装置的结构示意图二;
图11为本发明实施例三提供的人脸贴图装置的结构示意图三;
图12为本发明实施例三提供的人脸贴图装置的结构示意图四;
图13为本发明实施例四提供的移动终端的结构示意图。
具体实施方式
本发明适用于所有终端,包括如手机、平板等。下面通过具体实施方式结合附图对本发明作进一步详细说明。
实施例一:
为了使用户可以对终端应用中的三维人物头像模型进行最大程度的编辑,本实施例提供一种人脸贴图方法,参见图3所示,包括:
S301:获取待进行贴图处理的三维头像模型。
需要说明的是,获取待进行贴图处理的三维头像模型可以包括:接收模型选择指令,根据模型选择指令从预设的模型库中选择对应的三维头像模型,其中,模型库中包含多个不同种类的三维头像模型。一般情况下,不同地域范围、年龄段以及性别的三维头像模型是不同的,所以,模型库中的不同种类的三维头像模型可以根据不同的地域范围、年龄段以及性别划分,例如,模型库中可以包括中国、印度、美国等的三维头像模型,也可以包括欧美、亚洲、非洲等的三维头像模型。
S302:获取待进行贴图处理的三维头像模型展开的UV拓扑网格。
其中,将待进行贴图处理的三维头像模型展开为UV拓扑网格的示意图可以参见图4所示。
S303:通过移动终端获取作为贴图材料的二维人脸图像。
其中,可以通过移动终端从网络上获取作为贴图材料的二维人脸 图像,也可以通过移动终端直接从本地获取作为贴图材料的二维人脸图像,还可以通过移动终端的图像采集模块获得作为贴图材料的二维人脸图像。当通过移动终端的图像采集模块获得作为贴图材料的二维人脸图像时,移动终端的图像采集模块可以分别采集人脸的左侧图像、正面图像以及右侧图像,然后将采集的左侧图像、正面图像以及右侧图像进行图像合成处理得到作为贴图材料的二维人脸图像,具体的,可以是通过三次单独的拍摄获得人脸的左侧图像、正面图像以及右侧图像这三面图像,然后将这三面图像进行合成处理得到作为贴图材料的二维人脸图像,还可以是通过全景拍摄,在一次拍摄的过程中获得人脸的左侧图像、正面图像以及右侧图像这三面图像,然后进行合成处理得到作为贴图材料的二维人脸图像。应当理解的是,在有些情况下,当通过移动终端的图像采集模块获得作为贴图材料的二维人脸图像时,图像采集模块也可以只采集人脸的正面图像,具体的拍摄规则,开发人员可以灵活设置。
在此,需要说明的是,本实施例中S301步骤与S303步骤之间,S302步骤与S303步骤之间并无严格的时序限制。例如,可以是先通过移动终端获取作为贴图材料的二维人脸图像,再获取待进行贴图处理的三维头像模型,然后再获取待进行贴图处理的三维头像模型展开的UV拓扑网格;还可以是先获取待进行贴图处理的三维头像模型,再通过移动终端获取作为贴图材料的二维人脸图像,然后再获取待进行贴图处理的三维头像模型展开的UV拓扑网格。
S304:获取二维人脸图像与拓扑网格的映射关系。
需要说明的是,获取二维人脸图像与拓扑网格的映射关系可以包括:
将二维人脸图像划分为多个单元,一个单元为一个纹理元素;具体可以参见图5所示,二维头像“左上角”对应的就是(0.0)方块单元,图5中的单元方块即为本实施例中的纹理元素,应当理解的是,同样的二维人脸图像可以有多种划分方式,具体的划分方式可以由开发人员灵活设置;
获取二维人脸图像上关键区域的纹理元素,并将其与UV拓扑网格上对应关键区域内的网格进行对应形成关键区域映射关系;其中,关键区域包括双眼区域、双耳区域、鼻区域、嘴区域中的至少一个。
此外,将二维人脸图像上关键区域与UV拓扑网格中对应关键区域内的网格进行对应形成关键区域映射关系还可以包括:
将UV拓扑网格以半透明图层方式覆盖在二维人脸图像上,根据图像移动指令将二维人脸图像进行位置移动,以将二维人脸图像上的关键区域的纹理元素与UV拓扑网格上对应的关键区域内的网格进行对准;或,通过图像识别方式自动识别二维人脸图像上的关键区域,将UV拓扑网格上相应关键区域内的网格与二维人脸图像上的关键区域内的纹理元素进行对准,并以半透明图层方式覆盖在二维人脸图像上;根据关键区域划分指令获取二维人脸图像上的关键区域内的纹理元素以及各纹理元素对应的网格,并计算得到关键区域内作为中心点的纹理元素以及该中心点对应的中心网格。其中,本实施例中的关键区域划分指令可以是通过用户下发的,也即是通过用户的选择指令获取二维人脸图像上的关键区域内的纹理元素以及各纹理元素对应的网格,例如用户在终端屏幕上框选出鼻子区域,则相应的就获取二维人脸图像上框选出来的鼻子区域内的纹理元素以及该纹理元素对 应的网格,并计算得到框选出来的鼻子区域内作为中心点的纹理元素以及该中心点对应的中心网格。当然了,本实施例中的关键区域划分指令还可以是开发人员预先设置的,当二维人脸图像上的关键区域的纹理元素与UV拓扑网格上对应的关键区域内的网格对准,且UV拓扑网格以半透明图层方式覆盖在二维人脸图像上时,可以通过图像识别方式自动识别该关键区域为要进行贴图的区域,则自动获取二维人脸图像上的该关键区域内的纹理元素以及各纹理元素对应的网格,例如,预先设置的关键区域划分指令是获取双眼区域为关键区域以进行贴图,则可以通过图像识别方式确定该区域为双眼区域后,获取二维人脸图像上的双眼区域内的纹理元素以及各纹理元素对应的网格,并计算得到双眼区域内作为中心点的纹理元素以及该中心点对应的中心网格。应当理解的是,本实施例中的中心点可以通过预先设置在终端内的程序计算得到。
下面对将二维人脸图像上关键区域的纹理元素与UV拓扑网格上对应的关键区域内的网格对应,并得到相关中心点进行进一步的具体说明,图6-1所示为UV拓扑网格的一个单元网格,图6-2所示为二维人脸图像X的纹理元素,图6-3所示为二维人脸图像Y的纹理元素,假设图6-1的网格分别与图6-2的图像X和图6-3的图像Y贴片,因为该UV拓扑网格是u、v坐标轴为1的长度,中心坐标是(0.5,0.5),那么它对应图像X的中心点是(2,2);对应图像Y的中心点是(3,3)。
此外需要说明的是,本实施例S304中获取二维人脸图像与拓扑网格的映射关系还可以包括:根据二维人脸图像上关键区域的纹理元素与UV拓扑网格中对应关键区域内的网格的关键区域映射关系,确 定二维人脸图像上其他人脸区域中的纹理元素与UV拓扑网格中其他对应区域内的网格的映射关系。
S305:根据映射关系将二维人脸图像贴到三维头像模型上。
本实施例中可以基于简单光照模型对该经贴图处理的三维头像模型做点亮特效,并进行渲染处理得到终端上呈现的3D头像模型,用户还可以观察该3D头像模型的渲染效果,然后可以根据具体情况对该3D头像模型进行“微调”以完善瑕疵使其更逼真。例如,用户可以自定义3D头像模型的肤色以隐藏不真实的地方,或者可以通过选择多种预先设置的3D头像模型发型遮盖不完善的地方。
在此,需要说明的是,在本实施例中为了使最终得到的3D模型更为逼真,还可以在根据映射关系将二维人脸图像贴到三维头像模型上后再进行BUMP(凹凸)贴图处理,也即是在贴有二维人脸头像的三维头像模型上再映射一层纹理,映射的纹理和二维人脸头像的内容相同,但是位置相错,以此更好的表现凹凸的细节,比如毛孔、皱纹等。
本实施例提供的人脸贴图方法,通过获取待进行贴图处理的三维头像模型展开的UV拓扑网格与二维人脸图像的映射关系,再根据映射关系将二维人脸图像贴到三维头像模型上,使用户可以对移动终端应用中的三维人物头像模型进行最大程度的编辑,使用户得到更好的体验。
实施例二:
为了更好的理解本发明,本实施例提供一种更加具体的人脸贴图 方法,参见图7所示,包括:
S701:根据模型选择指令从预设的模型库中选择待进行贴片处理的三维头像模型,并展开该待进行贴片处理的三维头像模型的UV拓扑网格。
其中,模型选择指令可以由用户通过终端下发,模型库中包含多个不同种类的三维头像模型。一般情况下,不同地域范围、年龄段以及性别的三维头像模型是不同的,所以,模型库中的不同种类的三维头像模型可以根据不同的地域范围、年龄段以及性别划分。
S702:通过移动终端的图像采集模块获取作为贴图材料的二维人脸图像。
本实施例S702中的图像采集模块可以分别采集人脸的左侧图像、正面图像以及右侧图像,然后将采集的左侧图像、正面图像以及右侧图像进行图像合成处理得到作为贴图材料的二维人脸图像,具体的,可以是通过三次单独的拍摄获得人脸的左侧图像、正面图像以及右侧图像这三面图像,然后将这三面图像进行合成处理得到作为贴图材料的二维人脸图像,还可以是通过全景拍摄,在一次拍摄的过程中获得人脸的左侧图像、正面图像以及右侧图像这三面图像,然后进行合成处理得到作为贴图材料的二维人脸图像。应当理解的是,在有些情况下,当通过移动终端的图像采集模块获得作为贴图材料的二维人脸图像时,图像采集模块也可以只采集人脸的正面图像,具体的拍摄规则,开发人员可以灵活设置。
在此需要说明的是,本实施例S701步骤和S702步骤之间没有严 格的时序限制。
S703:将二维人脸图像划分为多个单元,一个单元为一个纹理元素,获取二维人脸图像上关键区域的纹理元素。
关键区域可以包括双眼区域、双耳区域、鼻区域、嘴区域中的至少一个,在本实施例中为了使最终得到的三维头像模型更加逼真,该关键区域为双眼区域、双耳区域、鼻区域、嘴区域。
S704:将UV拓扑网格以半透明图层方式覆盖在二维人脸图像上,根据图像移动指令将二维人脸图像进行位置移动,以将二维人脸图像上的关键区域的纹理元素与UV拓扑网格上对应的关键区域内的网格进行对准。
S705:根据用户下发的关键区域划分指令获取二维人脸图像上的关键区域内的纹理元素以及各纹理元素对应的网格,并计算得到关键区域内作为中心点的纹理元素以及该中心点对应的中心网格。
其中,本实施例中用户下发的关键区域划分指令可以是用户在终端屏幕上选择二维人脸图像上的关键区域内的纹理元素以及各纹理元素对应的网格,例如,用户在终端屏幕上框选出双眼区域、双耳区域、鼻区域、嘴区域,具体可以参见图8所示,则相应的就获取二维人脸图像上框选出来的双眼区域、双耳区域、鼻区域、嘴区域中的纹理元素以及该纹理元素对应的网格,并分别计算得到框选出来的双眼区域、双耳区域、鼻区域、嘴区域内作为中心点的纹理元素以及该中心点对应的中心网格。当然了,本实施例中的关键区域划分指令还可以是开发人员预先设置的,当二维人脸图像上的关键区域的纹理元素 与UV拓扑网格上对应的关键区域内的网格对准,且UV拓扑网格以半透明图层方式覆盖在二维人脸图像上时,可以通过图像识别方式自动识别该关键区域为要进行贴图的区域,此时则自动获取二维人脸图像上的该关键区域内的纹理元素以及各纹理元素对应的网格,例如,预先设置的关键区域划分指令是获取双眼区域、双耳区域、鼻区域、嘴区域为关键区域以进行贴图,则可以通过图像识别方式确定该区域为双眼区域、双耳区域、鼻区域、嘴区域中的任何一个区域后,获取二维人脸图像上的相应区域内的纹理元素以及各纹理元素对应的网格,并计算得到相应区域内作为中心点的纹理元素以及该中心点对应的中心网格。应当理解的是,本实施例中的中心点可以通过预先设置在终端内的程序计算得到。
S706:将二维人脸图像贴到三维头像模型上,经渲染后得到3D头像模型并呈现给终端用户。
S707:接收用户下发的指令,并根据该指令对3D头像模型进行修饰。
本实施例S707中,用户可以根据终端上呈现的3D头像模型的渲染效果来下发调节指令以使最终得到的3D头像模型更逼真,也即是,可以接收用户下发的肤色调节指令,对该3D头像模型进行肤色调节以隐藏不真实的地方。当然,用户还可以根据终端上呈现的3D头像模型的渲染效果来下发3D发型选择指令以选择合适的发型来遮盖瑕疵,也即是,可以接收用户下发的3D发型选择指令,选择预置的3D发型来遮盖发迹线、“缝合处”、耳朵等不完善的地方。
通过本实施例提供的人脸贴图方法,用户可以对移动终端应用中的三维人物头像模型进行最大程度的编辑,使用户可以得到更好的体验。
实施例三:
为了优化移动终端应用中的三维人物头像模型,使用户可以对该三维人物头像模型进行大幅度的编辑,本实施例提供一种人脸贴图装置,可以参见图9所示,应用于移动终端,包括:模型选择模块91、网格获取模块92、贴图材料获取模块93、处理模块94和执行模块95。其中,模型选择模块91设置为获取待进行贴图处理的三维头像模型;网格获取模块92设置为获取三维头像模型展开的UV拓扑网格;贴图材料获取模块93设置为通过移动终端获取作为贴图材料的二维人脸图像;处理模块94设置为获取二维人脸图像与UV拓扑网格的映射关系;执行模块95设置为根据映射关系将二维人脸图像贴到三维头像模型上。
其中,模型选择模块91可以包括:指令接收单元911和选择单元912,具体可以参见图10所示,指令接收单元911设置为接收模型选择指令;选择单元912设置为根据模型选择指令从预设的模型库中选择对应的三维头像模型,其中,模型库中包含多个不同种类的三维头像模型。一般情况下,不同地域范围、年龄段以及性别的三维头像模型是不同的,所以,模型库中的不同种类的三维头像模型可以根据不同的地域范围、年龄段以及性别划分,例如,模型库中可以包括中国、印度、美国等的三维头像模型,也可以包括欧美、亚洲、非洲等的三维头像模型。
本实施例中的贴图材料获取模块93可以包括:图像采集控制单 元931和图像合成处理单元932,具体可以参见图11所示,其中,图像采集控制单元931设置为控制移动终端的图像采集模块分别采集人脸的左侧图像、正面图像以及右侧图像;图像合成处理单元932设置为将左侧图像、正面图像以及右侧图像进行图像合成处理得到作为贴图材料的二维人脸图像。当然,本实施例中的贴图材料获取模块93还可以直接从网络上或者本地获取作为贴图材料的二维人脸图像。
应当理解的是,本实施例中的图像采集控制单元931可以控制图像采集模块通过三次单独的拍摄获得人脸的左侧图像、正面图像以及右侧图像这三面图像;当然,本实施例中的图像采集控制单元931还可以控制图像采集模块通过全景拍摄,在一次拍摄的过程中获得人脸的左侧图像、正面图像以及右侧图像这三面图像;然后图像合成处理单元932将左侧图像、正面图像以及右侧图像进行图像合成处理得到作为贴图材料的二维人脸图像。
本实施例中的处理模块94包括划分单元941和关键区映射单元942,可以参见图12所示,其中划分单元941设置为将二维人脸图像划分为多个单元,一个单元为一个纹理元素;关键区映射单元942设置为获取二维人脸图像上关键区域的纹理元素,并将其与UV拓扑网格上对应关键区域内的网格进行对应形成关键区域映射关系;关键区域包括双眼区域、双耳区域、鼻区域、嘴区域中的至少一个。
本实施例中的关键区映射单元942还可以包括:第一对准子单元或第二对准子单元,其中,第一对准子单元设置为将UV拓扑网格以半透明图层方式覆盖在二维人脸图像上,根据图像移动指令将二维人脸图像进行位置移动,以将二维人脸图像上的关键区域的纹理元素与UV拓扑网格上对应的关键区域内的网格进行对准;第二对准子单元设置为通过图像识别方式自动识别二维人脸图像上的关键区域,将 UV拓扑网格上相应关键区域内的网格与二维人脸图像上的关键区域内的纹理元素进行对准,并以半透明图层方式覆盖在二维人脸图像上;此时,本实施例中的关键区映射单元942还包括计算子单元,设置为根据关键区域划分指令获取二维人脸图像上的关键区域内的纹理元素以及各纹理元素对应的网格,并计算得到关键区域内作为中心点的纹理元素以及该中心点对应的中心网格。其中,本实施例中的关键区域划分指令可以是通过用户下发的,也即是通过用户的选择指令获取二维人脸图像上的关键区域内的纹理元素以及各纹理元素对应的网格,例如用户在终端屏幕上框选出鼻子区域,则相应的就获取二维人脸图像上框选出来的鼻子区域内的纹理元素以及该纹理元素对应的网格,并计算得到框选出来的鼻子区域内作为中心点的纹理元素以及该中心点对应的中心网格,该中心点可以通过终端系统内的算法得到,比如若用户用矩形框框选出鼻子区域,则该矩形框的对角线的交点可以作为中心点。当然了,本实施例中的关键区域划分指令还可以是开发人员预先设置的,当二维人脸图像上的关键区域的纹理元素与UV拓扑网格上对应的关键区域内的网格对准,且UV拓扑网格以半透明图层方式覆盖在二维人脸图像上时,可以通过图像识别方式自动识别该关键区域为要进行贴图的区域,则自动获取二维人脸图像上的该关键区域内的纹理元素以及各纹理元素对应的网格,例如,预先设置的关键区域划分指令是获取双眼区域为关键区域以进行贴图,则可以通过图像识别方式确定该区域为双眼区域后,获取二维人脸图像上的双眼区域内的纹理元素以及各纹理元素对应的网格,并计算得到双眼区域内作为中心点的纹理元素以及该中心点对应的中心网格。应当理解的是,本实施例中的中心点可以通过预先设置在终端内的程序计算得到。
此外需要说明的是,本实施例中的处理模块94还可以包括:普通区映射单元,设置为根据关键区映射单元942获取的二维人脸图像上关键区域的纹理元素与UV拓扑网格上对应关键区域内的网格形成的关键区域映射关系,确定二维人脸图像上其他人脸区域中的纹理元素与UV拓扑网格中其他对应区域内的网格的映射关系。
本实施例中,在执行模块95根据映射关系将二维人脸图像贴到三维头像模型上后,移动终端上的光照模块、渲染模块还可以对该经贴图处理的三维头像模型做后续的处理,以使用户获得更好的视觉效果,本实施例中的光照模块可以基于简单光照模型对该经贴图处理的三维头像模型做点亮特效,渲染模块可以对该经贴图处理的三维头像模型做渲染处理。此外需要说明的是,本实施例中,用户可以通过在移动终端终端屏幕上的操作控制移动终端屏幕上呈现的3D头像模型的具体效果,例如,用户可以自定义3D头像模型的肤色以隐藏不真实的地方,或者可以通过选择多种预先设置的3D头像模型发型遮盖不完善的地方。
在此,还需要说明的是,在本实施例中为了使最终得到的3D模型更为逼真,还可以在执行模块95根据映射关系将二维人脸图像贴到三维头像模型上后再进行BUMP(凹凸)贴图处理,也即是在贴有二维人脸头像的三维头像模型上再映射一层纹理,映射的纹理和二维人脸头像的内容相同,但是位置相错,以此更好的表现凹凸的细节,比如毛孔、皱纹等。
本实施例提供的人脸贴图装置,通过处理模块获取待进行贴图处理的三维头像模型展开的UV拓扑网格与二维人脸图像的映射关系, 再由执行模块根据映射关系将二维人脸图像贴到三维头像模型上,使用户可以对移动终端应用中的三维人物头像模型进行最大程度的编辑,使用户得到更好的体验。
实施例四:
本实施例提供一种移动终端,移动终端的部分结构框图参见图13所示,包括:至少一个处理器135,以及存储装置,存储装置具体可以为存储器134或者硬盘,输入单元131,显示单元132,电源133等部件。应当说明的是,图13示出的移动终端结构不构成对移动终端的限定,可以包括比图示更多或者更少的部件。
结合图13对移动终端的各个构成部件进行具体的介绍:
输入单元131可设置为接收输入的各种信息,并产生与手机的用户设置以及功能控制有关的信号输入,例如,可以接收用户在“三维应用”中选中的三维头像模型的信息。具体的,输入单元131可以包括触摸屏1311以及其他输入设备1312。其中,触摸屏1311可以包括触摸检测装置和触摸控制器,触摸检测装置检测用户的触摸方位并检测触摸操作带来的信号,将信号传递给触摸控制器,触摸控制器将接收到的触摸信息转换成触点坐标并发送给处理器135,并能接收处理器135发来的命令并加以执行;其他输入设备1312可以包括但不限于物理键盘、功能键、鼠标等。
显示单元132设置为显示由用户输入的信息或提供给用户的信息以及移动终端的各种菜单,显示单元132包括但不限于显示面板1321,可以采用液晶显示器、发光二极管等形式来配置显示面板1321, 例如,可以根据用户选择的三维头像模型在显示面板1321上相应显示该三维头像模型。其中,触摸屏1311可以覆盖显示面板1321,当触摸屏1311检测到在其上或附近的触摸操作后,传递给处理器135以确定触摸事件的类型,然后处理器135根据触摸事件的类型在显示面板1321上提供相应的视觉输出,当然,还可以将触摸屏1311与显示面板1321集成而实现移动终端的输入和输出。
移动终端上还包括给各个部件供电的电源133,例如电池,电源133可以通过电源133管理系统与处理器135相连,从而通过电源133管理系统实现管理充电、放电及功耗管理等功能。
存储器134可以存储软件程序以及各种模块,处理器135通过运行存储在存储器134的软件程序以及模块从而执行移动终端的各种功能应用以及数据处理。其中,存储器134具体可以包括非易失性存储器134、易失性存储器134等。在本实施例中,存储器134中存储有多个指令以实现本发明的人脸贴图方法。
处理器135是移动终端的控制中心,利用各种接口和线路连接整个移动终端的各个部分,通过运行或者执行存储在存储器134内的软件程序或者模块,以及调用存储在存储器134中数据,执行移动终端的各功能和数据处理,从而对移动终端进行整体监控。在本实施例中,处理器135执行存储器134内的指令实现以下操作:
处理器135获取待进行贴图处理的三维头像模型和作为贴图材料的二维人脸图像;
处理器135将获取的三维头像模型展开得到UV拓扑网格;
处理器135获取二维人脸图像和UV拓扑结构的映射关系;
根据映射关系,处理器135将二维人脸图像贴到三维头像模型上。
进一步地,处理器135获取待进行贴图处理的三维头像模型还包括:
处理器135接收模型选择指令,根据模型选择指令处理器135从预设的模型库中选择对应的三维头像模型,其中,模型库中存储有多个不同地域范围、年龄段以及性别的三维头像模型。
进一步地,处理器135获取作为贴图材料的二维人脸图像包括:
通过移动终端的图像采集模块分别采集人脸的左侧图像、正面图像以及右侧图像;
将采集的左侧图像、正面图像以及右侧图像进行图像合成处理得到作为贴图材料的二维人脸图像。
进一步地,处理器135获取二维人脸图像和UV拓扑结构的映射关系还包括:
将二维人脸图像划分为多个单元,一个单元为一个纹理元素;
获取二维人脸图像上关键区域的纹理元素,并将其与UV拓扑网格上对应关键区域内的网格进行对应形成关键区域映射关系;关键区域包括双眼区域、双耳区域、鼻区域、嘴区域中的至少一个。
进一步地,处理器135将二维人脸图像上关键区域与UV拓扑网格中对应关键区域内的网格进行对应形成关键区域映射关系包括:
将UV拓扑网格以半透明图层方式覆盖在二维人脸图像上,根据图像移动指令将二维人脸图像进行位置移动,以将二维人脸图像上的关键区域的纹理元素与UV拓扑网格上对应的关键区域内的网格进行对准;或,通过图像识别方式自动识别二维人脸图像上的关键区域, 将UV拓扑网格上相应关键区域内的网格与二维人脸图像上的关键区域内的纹理元素进行对准,并以半透明图层方式覆盖在所述二维人脸图像上;
根据关键区域划分指令获取二维人脸图像上的关键区域内的纹理元素以及各纹理元素对应的网格,并计算得到关键区域内作为中心点的纹理元素以及该中心点对应的中心网格。
本实施例提供的移动终端,能够使用户对移动终端应用中的三维头像模型进行自主编辑,以达到用户想要的效果,使移动终端的功能更加多样化,提升了用户体验。
以上内容是结合具体的实施方式对本发明所作的进一步详细说明,不能认定本发明的具体实施只局限于这些说明。对于本发明所属技术领域的普通技术人员来说,在不脱离本发明构思的前提下,还可以做出若干简单推演或替换,都应当视为属于本发明的保护范围。

Claims (10)

  1. 一种人脸贴图方法,其特征在于,包括:
    获取待进行贴图处理的三维头像模型;
    获取所述三维头像模型展开的UV拓扑网格;
    通过移动终端获取作为贴图材料的二维人脸图像;
    获取所述二维人脸图像与所述UV拓扑网格的映射关系;
    根据所述映射关系将所述二维人脸图像贴到所述三维头像模型上。
  2. 如权利要求1所述的人脸贴图方法,其特征在于,获取待进行贴图处理的三维头像模型包括:
    接收模型选择指令;
    根据所述模型选择指令从预设的模型库中选择对应的三维头像模型,所述模型库中存储有多个不同地域范围、年龄段以及性别的三维头像模型。
  3. 如权利要求1所述的人脸贴图方法,其特征在于,通过移动终端获取作为贴图材料的二维人脸图像包括:
    通过所述移动终端的图像采集模块分别采集人脸的左侧图像、正面图像以及右侧图像;
    将采集的所述左侧图像、正面图像以及右侧图像进行图像合成处理得到作为贴图材料的二维人脸图像。
  4. 如权利要求1-3任一项所述的人脸贴图方法,其特征在于,获取所述二维人脸图像与所述UV拓扑网格的映射关系包括:
    将所述二维人脸图像划分为多个单元,一个单元为一个纹理元 素;
    获取所述二维人脸图像上关键区域的纹理元素,并将其与所述UV拓扑网格上对应关键区域内的网格进行对应形成关键区域映射关系;所述关键区域包括双眼区域、双耳区域、鼻区域、嘴区域中的至少一个。
  5. The face mapping method according to claim 4, wherein associating the key area on the two-dimensional face image with the grid cells in the corresponding key area of the UV topology mesh to form the key area mapping relationship comprises:
    overlaying the UV topology mesh on the two-dimensional face image as a semi-transparent layer and moving the two-dimensional face image according to an image movement instruction, so that the texture elements of the key area on the two-dimensional face image are aligned with the grid cells in the corresponding key area of the UV topology mesh; or automatically recognizing the key area on the two-dimensional face image through image recognition, aligning the grid cells in the corresponding key area of the UV topology mesh with the texture elements in the key area of the two-dimensional face image, and overlaying the mesh on the two-dimensional face image as a semi-transparent layer; and
    acquiring, according to a key area division instruction, the texture elements in the key area of the two-dimensional face image and the grid cell corresponding to each texture element, and calculating the texture element serving as a center point of the key area and the center grid cell corresponding to the center point.
  6. The face mapping method according to claim 4, wherein acquiring the mapping relationship between the two-dimensional face image and the UV topology mesh further comprises:
    determining, according to the key area mapping relationship between the texture elements of the key area on the two-dimensional face image and the grid cells in the corresponding key area of the UV topology mesh, a mapping relationship between texture elements in other face areas of the two-dimensional face image and grid cells in other corresponding areas of the UV topology mesh.
  7. A face mapping apparatus, applied to a mobile terminal, comprising:
    a model selection module, configured to acquire a three-dimensional avatar model to be subjected to texture processing;
    a mesh acquisition module, configured to acquire a UV topology mesh unwrapped from the three-dimensional avatar model;
    a texture material acquisition module, configured to acquire, by the mobile terminal, a two-dimensional face image serving as a texture material;
    a processing module, configured to acquire a mapping relationship between the two-dimensional face image and the UV topology mesh; and
    an execution module, configured to attach the two-dimensional face image to the three-dimensional avatar model according to the mapping relationship.
  8. The face mapping apparatus according to claim 7, wherein the model selection module comprises:
    an instruction receiving unit, configured to receive a model selection instruction; and
    a selection unit, configured to select a corresponding three-dimensional avatar model from a preset model library according to the model selection instruction, wherein the model library stores a plurality of three-dimensional avatar models of different geographic regions, age groups, and genders.
  9. The face mapping apparatus according to claim 7, wherein the texture material acquisition module comprises:
    an image acquisition control unit, configured to control an image acquisition module of the mobile terminal to capture a left-side image, a frontal image, and a right-side image of a face separately; and
    an image synthesis processing unit, configured to perform image synthesis on the left-side image, the frontal image, and the right-side image to obtain the two-dimensional face image serving as the texture material.
  10. The face mapping apparatus according to any one of claims 7 to 9, wherein the processing module comprises:
    a division unit, configured to divide the two-dimensional face image into a plurality of units, each unit being one texture element; and
    a key area mapping unit, configured to acquire texture elements of a key area on the two-dimensional face image and associate them with grid cells in a corresponding key area of the UV topology mesh to form a key area mapping relationship, wherein the key area comprises at least one of an eye area, an ear area, a nose area, and a mouth area.
PCT/CN2016/107806 2016-10-25 2016-11-30 一种人脸贴图方法及装置 WO2018076437A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610939875.7 2016-10-25
CN201610939875.7A CN106570822B (zh) 2016-10-25 2016-10-25 一种人脸贴图方法及装置

Publications (1)

Publication Number Publication Date
WO2018076437A1 true WO2018076437A1 (zh) 2018-05-03

Family

ID=58536342

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/107806 WO2018076437A1 (zh) 2016-10-25 2016-11-30 一种人脸贴图方法及装置

Country Status (2)

Country Link
CN (1) CN106570822B (zh)
WO (1) WO2018076437A1 (zh)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108876921B (zh) * 2017-05-08 2021-09-17 腾讯科技(深圳)有限公司 三维装扮模型处理方法、装置、计算机设备和存储介质
CN109547766B (zh) * 2017-08-03 2020-08-14 杭州海康威视数字技术股份有限公司 一种全景图像生成方法及装置
CN108416835B (zh) * 2018-01-31 2022-07-05 福建天晴在线互动科技有限公司 一种脸部特效的实现方法及终端
CN108062785A (zh) * 2018-02-12 2018-05-22 北京奇虎科技有限公司 面部图像的处理方法及装置、计算设备
CN108305309B (zh) * 2018-04-13 2021-07-20 腾讯科技(成都)有限公司 基于立体动画的人脸表情生成方法和装置
CN108596827B (zh) * 2018-04-18 2022-06-17 太平洋未来科技(深圳)有限公司 三维人脸模型生成方法、装置及电子设备
CN108833791B (zh) * 2018-08-17 2021-08-06 维沃移动通信有限公司 一种拍摄方法和装置
CN109242961B (zh) 2018-09-26 2021-08-10 北京旷视科技有限公司 一种脸部建模方法、装置、电子设备和计算机可读介质
CN111243099B (zh) * 2018-11-12 2023-10-27 联想新视界(天津)科技有限公司 一种处理图像的方法和装置以及在ar设备中显示图像的方法和装置
CN109785448B (zh) * 2018-12-06 2023-07-04 广州西山居网络科技有限公司 一种三维模型表面附加印花的方法
CN111383351B (zh) * 2018-12-29 2023-10-20 上海联泰科技股份有限公司 三维纹理贴图方法及装置、计算机可读存储介质
CN109847360B (zh) * 2019-03-14 2023-03-21 网易(杭州)网络有限公司 游戏道具的3d效果处理方法、装置、电子设备及介质
CN110288680A (zh) * 2019-05-30 2019-09-27 盎锐(上海)信息科技有限公司 影像生成方法及移动终端
CN110276348B (zh) * 2019-06-20 2022-11-25 腾讯科技(深圳)有限公司 一种图像定位方法、装置、服务器及存储介质
CN111274916B (zh) * 2020-01-16 2024-02-02 华为技术有限公司 人脸识别方法和人脸识别装置
CN111324250B (zh) * 2020-01-22 2021-06-18 腾讯科技(深圳)有限公司 三维形象的调整方法、装置、设备及可读存储介质
CN111627106B (zh) * 2020-05-29 2023-04-28 北京字节跳动网络技术有限公司 人脸模型重构方法、装置、介质和设备
CN111862287A (zh) * 2020-07-20 2020-10-30 广州市百果园信息技术有限公司 眼部纹理图像生成方法、纹理贴图方法、装置和电子设备
CN113144614A (zh) * 2021-05-21 2021-07-23 苏州仙峰网络科技股份有限公司 基于Tiled Map的纹理采样贴图计算方法及装置
CN113628095B (zh) * 2021-08-04 2022-11-01 展讯通信(上海)有限公司 人像区域网格点信息存储方法及相关产品
CN114549284A (zh) * 2022-01-14 2022-05-27 北京有竹居网络技术有限公司 图像信息处理方法、装置和电子设备

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104036276A (zh) * 2014-05-29 2014-09-10 无锡天脉聚源传媒科技有限公司 人脸识别方法及装置
CN105117712A (zh) * 2015-09-15 2015-12-02 北京天创征腾信息科技有限公司 兼容人脸老化识别的单样本人脸识别方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6664956B1 (en) * 2000-10-12 2003-12-16 Momentum Bilgisayar, Yazilim, Danismanlik, Ticaret A. S. Method for generating a personalized 3-D face model
CN101339669A (zh) * 2008-07-29 2009-01-07 上海师范大学 基于正侧面影像的三维人脸建模方法
CN102663820A (zh) * 2012-04-28 2012-09-12 清华大学 三维头部模型重建方法
CN103646416A (zh) * 2013-12-18 2014-03-19 中国科学院计算技术研究所 一种三维卡通人脸纹理生成方法及设备
CN104318603A (zh) * 2014-09-12 2015-01-28 上海明穆电子科技有限公司 从手机相册调取照片生成3d模型的方法及系统
CN104376594A (zh) * 2014-11-25 2015-02-25 福建天晴数码有限公司 三维人脸建模方法和装置

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112330824A (zh) * 2018-05-31 2021-02-05 Oppo广东移动通信有限公司 图像处理方法、装置、电子设备和存储介质
CN108876713A (zh) * 2018-06-28 2018-11-23 北京字节跳动网络技术有限公司 二维模板图像的映射方法、装置、终端设备和存储介质
CN108876713B (zh) * 2018-06-28 2022-07-22 北京字节跳动网络技术有限公司 二维模板图像的映射方法、装置、终端设备和存储介质
CN109242760A (zh) * 2018-08-16 2019-01-18 Oppo广东移动通信有限公司 人脸图像的处理方法、装置和电子设备
CN109242760B (zh) * 2018-08-16 2023-02-28 Oppo广东移动通信有限公司 人脸图像的处理方法、装置和电子设备
CN109410298A (zh) * 2018-11-02 2019-03-01 北京恒信彩虹科技有限公司 一种虚拟模型的制作方法及表情变化方法
CN109410298B (zh) * 2018-11-02 2023-11-17 北京恒信彩虹科技有限公司 一种虚拟模型的制作方法及表情变化方法
CN112132785A (zh) * 2020-08-25 2020-12-25 华东师范大学 一种二维材料的透射电镜图像识别、分析方法及系统
CN112132785B (zh) * 2020-08-25 2023-12-15 华东师范大学 一种二维材料的透射电镜图像识别、分析方法及系统
CN114863030A (zh) * 2022-05-23 2022-08-05 广州数舜数字化科技有限公司 基于人脸识别和图像处理技术生成自定义3d模型的方法
CN115797535A (zh) * 2023-01-05 2023-03-14 深圳思谋信息科技有限公司 一种三维模型纹理贴图方法及相关装置

Also Published As

Publication number Publication date
CN106570822A (zh) 2017-04-19
CN106570822B (zh) 2020-10-16

Similar Documents

Publication Publication Date Title
WO2018076437A1 (zh) 一种人脸贴图方法及装置
US10854017B2 (en) Three-dimensional virtual image display method and apparatus, terminal, and storage medium
JP7190042B2 (ja) シャドウレンダリング方法、装置、コンピュータデバイスおよびコンピュータプログラム
WO2020207191A1 (zh) 虚拟物体被遮挡的区域确定方法、装置及终端设备
US11393154B2 (en) Hair rendering method, device, electronic apparatus, and storage medium
TWI678099B (zh) 視頻處理方法、裝置和儲存介質
CN107484428B (zh) 用于显示对象的方法
WO2020133862A1 (zh) 游戏角色模型的生成方法、装置、处理器及终端
CA3090747C (en) Automatic rig creation process
JP7483301B2 (ja) 画像処理及び画像合成方法、装置及びコンピュータプログラム
CN110442245A (zh) 基于物理键盘的显示方法、装置、终端设备及存储介质
WO2020233403A1 (zh) 三维角色的个性化脸部显示方法、装置、设备及存储介质
EP3819752A1 (en) Personalized scene image processing method and apparatus, and storage medium
WO2012097556A1 (zh) 3d图标的处理方法、装置及移动终端
CN112348937A (zh) 人脸图像处理方法及电子设备
US8643679B2 (en) Storage medium storing image conversion program and image conversion apparatus
CN105913496B (zh) 一种将真实服饰快速转换为三维虚拟服饰的方法及系统
CN109829982A (zh) 模型匹配方法、装置、终端设备及存储介质
CN113398583A (zh) 游戏模型的贴花渲染方法、装置、存储介质及电子设备
US20210287330A1 (en) Information processing system, method of information processing, and program
WO2017152848A1 (zh) 人物面部模型的编辑方法及装置
CN116452745A (zh) 手部建模、手部模型处理方法、设备和介质
CN110211214A (zh) 三维地图的纹理叠加方法、装置和存储介质
CN109669541A (zh) 一种用于配置增强现实内容的方法与设备
WO2018151612A1 (en) Texture mapping system and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16920034

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16920034

Country of ref document: EP

Kind code of ref document: A1