CN108416835A - Method and terminal for realizing facial special effects - Google Patents

Method and terminal for realizing facial special effects

Info

Publication number
CN108416835A
CN108416835A (application CN201810093430.0A)
Authority
CN
China
Prior art keywords
face
face models
special effects
coordinates
files
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810093430.0A
Other languages
Chinese (zh)
Other versions
CN108416835B (en)
Inventor
刘德建
杨洪
靳勍
陈宏展
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian Tianqing Online Interactive Technology Co Ltd
Original Assignee
Fujian Tianqing Online Interactive Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian Tianqing Online Interactive Technology Co Ltd filed Critical Fujian Tianqing Online Interactive Technology Co Ltd
Priority to CN201810093430.0A priority Critical patent/CN108416835B/en
Publication of CN108416835A publication Critical patent/CN108416835A/en
Application granted granted Critical
Publication of CN108416835B publication Critical patent/CN108416835B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention provides a method and terminal for realizing facial special effects. Face scan data of a real person, read by a motion-sensing sensor, are received; from the face scan data, the vertex coordinates, normal vectors, and UV coordinates of the corresponding face are separately computed and a first 3D face model is generated; the first 3D face model is exported to a 3D file; facial special effects are drawn according to the 3D file to generate a corresponding face texture map; the face texture map is then mapped onto a second 3D face model generated in real time from the face scan data of the real person read by the motion-sensing sensor. The ordering of the vertex coordinates on the first 3D face model and the second 3D face model is identical and remains unchanged. Facial special effects on a real person are thereby simulated in a manner that is not only simple to operate but also lifelike, enhancing the sense of realism.

Description

Method and terminal for realizing facial special effects
Technical field
The present invention relates to the field of augmented reality (AR), and in particular to a method and terminal for realizing facial special effects.
Background technology
Existing Kinect-based face detection can generate a 3D face model matching an individual's face, attach it to the facial image, and simulate facial expressions. However, current technology is only applicable to simulating the facial expressions of a virtual character, i.e., making a virtual character imitate the facial movements of a real person. For simulating facial special effects on a real person, such as forehead wrinkles, festering skin, or digits circling the face, no good solution yet exists.
Invention content
The technical problem to be solved by the present invention is to provide a method and terminal for realizing facial special effects that can simulate facial special effects on a real person and enhance the sense of realism.
To solve the above technical problem, one technical solution adopted by the present invention is:
A method for realizing facial special effects, comprising the steps of:
S1. Receiving face scan data of a real person read by a motion-sensing sensor;
S2. According to the face scan data, separately computing the vertex coordinates, normal vectors, and UV coordinates of the corresponding face, and generating a first 3D face model;
S3. Exporting the first 3D face model to a 3D file;
S4. Drawing facial special effects according to the 3D file to generate a corresponding face texture map;
S5. Mapping the face texture map onto a second 3D face model generated in real time from the face scan data of the real person read by the motion-sensing sensor, the ordering of the vertex coordinates on the first 3D face model and the second 3D face model being identical and remaining unchanged.
To solve the above technical problem, another technical solution adopted by the present invention is:
A terminal for realizing facial special effects, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor; the processor implements the following steps when executing the computer program:
S1. Receiving face scan data of a real person read by a motion-sensing sensor;
S2. According to the face scan data, separately computing the vertex coordinates, normal vectors, and UV coordinates of the corresponding face, and generating a first 3D face model;
S3. Exporting the first 3D face model to a 3D file;
S4. Drawing facial special effects according to the 3D file to generate a corresponding face texture map;
S5. Mapping the face texture map onto a second 3D face model generated in real time from the face scan data of the real person read by the motion-sensing sensor, the ordering of the vertex coordinates on the first 3D face model and the second 3D face model being identical and remaining unchanged.
The beneficial effects of the present invention are: the first 3D face model corresponding to the real face is exported to a 3D file; facial special effects are drawn according to the 3D file to generate a corresponding face texture map; the face texture map is mapped onto a second 3D face model generated in real time; and the ordering of the vertex coordinates on the first 3D face model and the second 3D face model is identical and remains unchanged. Facial special effects on a real person are thereby simulated in a manner that is not only simple to operate but also lifelike, enhancing the sense of realism.
Description of the drawings
Fig. 1 is a flowchart of a method for realizing facial special effects according to an embodiment of the present invention;
Fig. 2 is a structural diagram of a terminal for realizing facial special effects according to an embodiment of the present invention;
Reference numerals:
1, terminal for realizing facial special effects; 2, memory; 3, processor.
Detailed description of embodiments
To explain in detail the technical content, objectives, and effects of the present invention, the following description is given in conjunction with the embodiments and the accompanying drawings.
The key concept of the present invention is: exporting a first 3D face model corresponding to the real face to a 3D file, generating a face texture map corresponding to the facial special effects, and mapping the face texture map onto a second 3D face model generated in real time, the ordering of the vertex coordinates on the first 3D face model and the second 3D face model being identical and remaining unchanged.
Referring to Fig. 1, a method for realizing facial special effects comprises the steps of:
S1. Receiving face scan data of a real person read by a motion-sensing sensor;
S2. According to the face scan data, separately computing the vertex coordinates, normal vectors, and UV coordinates of the corresponding face, and generating a first 3D face model;
S3. Exporting the first 3D face model to a 3D file;
S4. Drawing facial special effects according to the 3D file to generate a corresponding face texture map;
S5. Mapping the face texture map onto a second 3D face model generated in real time from the face scan data of the real person read by the motion-sensing sensor, the ordering of the vertex coordinates on the first 3D face model and the second 3D face model being identical and remaining unchanged.
As can be seen from the above description, the beneficial effects of the present invention are: the first 3D face model corresponding to the real face is exported to a 3D file; facial special effects are drawn according to the 3D file to generate a corresponding face texture map; the face texture map is mapped onto a second 3D face model generated in real time; and the ordering of the vertex coordinates on the first 3D face model and the second 3D face model is identical and remains unchanged. Facial special effects on a real person are thereby simulated in a manner that is not only simple to operate but also lifelike, enhancing the sense of realism.
Further, step S3 specifically comprises:
According to the Obj file format, converting the vertex coordinates, normal vectors, and UV coordinates into text format and writing them to an Obj file.
As can be seen from the above description, the Obj file is chosen as the 3D file; its simple text format is easy to process, allows the 3D face model to be exported quickly and conveniently, and is widely supported.
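As a rough illustration of this export step, the Obj text format can be written with a few lines of code. The triangle data below is a hypothetical stand-in for the scanned face geometry, not actual sensor output; the `v`/`vt`/`vn`/`f` record layout follows the standard Wavefront Obj convention.

```python
# Minimal sketch of the Obj export described above (step S3).
# The vertex, normal, UV, and face arrays are hypothetical stand-ins
# for the data computed from the face scan.

def write_obj(path, vertices, normals, uvs, faces):
    """Write a 3D face model to a Wavefront Obj text file.

    vertices: list of (x, y, z); normals: list of (nx, ny, nz);
    uvs: list of (u, v); faces: list of vertex-index triples (0-based).
    Vertex order is preserved exactly as given, which is what lets the
    texture painted for the first model be reused on the second one.
    """
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for u, v in uvs:
            f.write(f"vt {u} {v}\n")
        for nx, ny, nz in normals:
            f.write(f"vn {nx} {ny} {nz}\n")
        for a, b, c in faces:
            # Obj face indices are 1-based; v/vt/vn share one index here
            f.write(f"f {a+1}/{a+1}/{a+1} {b+1}/{b+1}/{b+1} {c+1}/{c+1}/{c+1}\n")

# Example with a single triangle:
write_obj("face.obj",
          vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
          normals=[(0, 0, 1)] * 3,
          uvs=[(0, 0), (1, 0), (0, 1)],
          faces=[(0, 1, 2)])
```

Because the format is plain text, the exported file can be opened directly in any 3D modeling tool for the texture-painting step that follows.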
Further, step S4 specifically comprises:
Importing the 3D file into a 3D modeling tool, unwrapping the UV coordinates of the first 3D face model, matching the UV coordinates to the facial features of the real face, drawing the facial special effects on the UV coordinates, and generating the corresponding face texture map.
As can be seen from the above description, by importing the 3D file into a 3D modeling tool, various facial special effects can be drawn as needed and a face texture map generated for each of them, providing high flexibility.
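In the patent the drawing happens interactively in the modeling tool, but the underlying idea of painting an effect at a known UV location can be sketched in code. The grid size, the forehead UV position, and the colors below are illustrative assumptions, not values from the patent.

```python
# Hedged sketch: painting a "special effect" into a texture in UV space.
# A filled circle (e.g. a blemish mark) is drawn into a pixel grid at a
# chosen UV position; a real pipeline would paint onto an image file.

def paint_effect(texture, uv, radius, color):
    """Paint a filled circle into `texture` (H x W grid of RGB tuples).

    uv: (u, v) in [0, 1]^2, mapped to pixel coordinates; e.g. the UV of
    the forehead region found by unwrapping the face model.
    """
    h, w = len(texture), len(texture[0])
    cx, cy = uv[0] * (w - 1), uv[1] * (h - 1)
    for y in range(h):
        for x in range(w):
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                texture[y][x] = color

size = 64
texture = [[(255, 220, 200)] * size for _ in range(size)]  # skin-tone base
# Hypothetical forehead UV; a real value comes from the unwrapped model
paint_effect(texture, uv=(0.5, 0.25), radius=5, color=(120, 40, 40))
```

Because the effect is stored in UV space, the same texture map stays valid for any later pose of the face, as long as the UV layout is unchanged.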
Further, in step S5, the UV coordinates of the 3D face model are kept unchanged during mapping.
As can be seen from the above description, keeping the UV coordinates of the 3D face model unchanged during mapping ensures that the face texture map is correctly mapped onto the real face.
Further, the motion-sensing sensor is a Kinect sensor.
As can be seen from the above description, using a Kinect sensor allows the facial data of a real person to be scanned and read accurately in real time.
Referring to Fig. 2, a terminal for realizing facial special effects comprises a memory, a processor, and a computer program stored on the memory and executable on the processor; the processor implements the following steps when executing the computer program:
S1. Receiving face scan data of a real person read by a motion-sensing sensor;
S2. According to the face scan data, separately computing the vertex coordinates, normal vectors, and UV coordinates of the corresponding face, and generating a first 3D face model;
S3. Exporting the first 3D face model to a 3D file;
S4. Drawing facial special effects according to the 3D file to generate a corresponding face texture map;
S5. Mapping the face texture map onto a second 3D face model generated in real time from the face scan data of the real person read by the motion-sensing sensor, the ordering of the vertex coordinates on the first 3D face model and the second 3D face model being identical and remaining unchanged.
As can be seen from the above description, the beneficial effects of the present invention are: the first 3D face model corresponding to the real face is exported to a 3D file; facial special effects are drawn according to the 3D file to generate a corresponding face texture map; the face texture map is mapped onto a second 3D face model generated in real time; and the ordering of the vertex coordinates on the first 3D face model and the second 3D face model is identical and remains unchanged. Facial special effects on a real person are thereby simulated in a manner that is not only simple to operate but also lifelike, enhancing the sense of realism.
Further, step S3 specifically comprises:
According to the Obj file format, converting the vertex coordinates, normal vectors, and UV coordinates into text format and writing them to an Obj file.
As can be seen from the above description, the Obj file is chosen as the 3D file; its simple text format is easy to process, allows the 3D face model to be exported quickly and conveniently, and is widely supported.
Further, step S4 specifically comprises:
Importing the 3D file into a 3D modeling tool, unwrapping the UV coordinates of the first 3D face model, matching the UV coordinates to the facial features of the real face, drawing the facial special effects on the UV coordinates, and generating the corresponding face texture map.
As can be seen from the above description, by importing the 3D file into a 3D modeling tool, various facial special effects can be drawn as needed and a face texture map generated for each of them, providing high flexibility.
Further, in step S5, the UV coordinates of the 3D face model are kept unchanged during mapping.
As can be seen from the above description, keeping the UV coordinates of the 3D face model unchanged during mapping ensures that the face texture map is correctly mapped onto the real face.
Further, the motion-sensing sensor is a Kinect sensor.
As can be seen from the above description, using a Kinect sensor allows the facial data of a real person to be scanned and read accurately in real time.
Embodiment one
Referring to Fig. 1, a method for realizing facial special effects comprises the steps of:
S1. Receiving face scan data of a real person read by a motion-sensing sensor;
The motion-sensing sensor may be a Kinect sensor;
S2. According to the face scan data, separately computing the vertex coordinates, normal vectors, and UV coordinates of the corresponding face, and generating a first 3D face model;
S3. Exporting the first 3D face model to a 3D file;
Specifically, according to the Obj file format, the vertex coordinates, normal vectors, and UV coordinates are converted into text format and written to an Obj file;
S4. Drawing facial special effects according to the 3D file to generate a corresponding face texture map;
Specifically, the 3D file is imported into a 3D modeling tool, the UV coordinates of the first 3D face model are unwrapped and matched to the facial features of the real face, the facial special effects are drawn on the UV coordinates, and the corresponding face texture map is generated; the 3D modeling tool may be 3D Max;
S5. Mapping the face texture map onto a second 3D face model generated in real time from the face scan data of the real person read by the motion-sensing sensor, the ordering of the vertex coordinates on the first 3D face model and the second 3D face model being identical and remaining unchanged;
Here, the UV coordinates of the 3D face model are kept unchanged during mapping;
Since the position and expression of the face are constantly changing and must be acquired dynamically, the 3D face model must be generated in real time to ensure that it stays fitted to the face.
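The real-time constraint above can be sketched as follows: each frame replaces the vertex positions while the UV coordinates and vertex ordering stay fixed, which is what keeps the painted texture attached to the moving face. The `FaceModel` class and the frame data are hypothetical stand-ins for the live sensor stream, not part of the patent.

```python
# Sketch of step S5's invariant: per frame, the sensor delivers new vertex
# positions, but the UV coordinates and the vertex ordering are never
# touched, so the texture painted for the first model remains valid.

class FaceModel:
    def __init__(self, vertices, uvs):
        self.vertices = list(vertices)  # updated every frame
        self.uvs = list(uvs)            # fixed: same layout as the Obj export

    def update(self, new_vertices):
        # The second model must have the same vertex count and ordering
        # as the first; only the positions change as the face moves.
        assert len(new_vertices) == len(self.vertices)
        self.vertices = list(new_vertices)

uvs = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
model = FaceModel([(0, 0, 0), (1, 0, 0), (0.5, 1, 0)], uvs)

frames = [  # stand-in for per-frame scan data from the sensor
    [(0, 0, 0.1), (1, 0, 0.1), (0.5, 1.1, 0.1)],
    [(0, 0.05, 0.2), (1, 0.05, 0.2), (0.5, 1.2, 0.2)],
]
for verts in frames:
    model.update(verts)
    assert model.uvs == uvs  # UV coordinates constant during mapping
```

If the vertex ordering were allowed to change between the exported model and the real-time model, the same UV layout would index different points on the face and the effect would slide off its intended position.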
Embodiment two
Referring to Fig. 2, a terminal 1 for realizing facial special effects comprises a memory 2, a processor 3, and a computer program stored on the memory 2 and executable on the processor 3; the processor 3 implements the following steps when executing the computer program:
S1. Receiving face scan data of a real person read by a motion-sensing sensor;
The motion-sensing sensor may be a Kinect sensor;
S2. According to the face scan data, separately computing the vertex coordinates, normal vectors, and UV coordinates of the corresponding face, and generating a first 3D face model;
S3. Exporting the first 3D face model to a 3D file;
Specifically, according to the Obj file format, the vertex coordinates, normal vectors, and UV coordinates are converted into text format and written to an Obj file;
S4. Drawing facial special effects according to the 3D file to generate a corresponding face texture map;
Specifically, the 3D file is imported into a 3D modeling tool, the UV coordinates of the first 3D face model are unwrapped and matched to the facial features of the real face, the facial special effects are drawn on the UV coordinates, and the corresponding face texture map is generated; the 3D modeling tool may be 3D Max;
S5. Mapping the face texture map onto a second 3D face model generated in real time from the face scan data of the real person read by the motion-sensing sensor, the ordering of the vertex coordinates on the first 3D face model and the second 3D face model being identical and remaining unchanged;
Here, the UV coordinates of the 3D face model are kept unchanged during mapping;
Since the position and expression of the face are constantly changing and must be acquired dynamically, the 3D face model must be generated in real time to ensure that it stays fitted to the face.
In summary, in the method and terminal for realizing facial special effects provided by the present invention, the first 3D face model corresponding to the real face is exported to a 3D file; facial special effects are drawn according to the 3D file to generate a corresponding face texture map; the face texture map is mapped onto a second 3D face model generated in real time; and the ordering of the vertex coordinates on the first 3D face model and the second 3D face model is identical and remains unchanged. Facial special effects on a real person are thereby simulated in a manner that is not only simple to operate but also lifelike, enhancing the sense of realism.
The above are only embodiments of the present invention and are not intended to limit the scope of the invention. Any equivalent modifications made using the contents of the specification and drawings of the present invention, applied directly or indirectly in related technical fields, likewise fall within the scope of patent protection of the present invention.

Claims (10)

1. A method for realizing facial special effects, characterized by comprising the steps of:
S1. Receiving face scan data of a real person read by a motion-sensing sensor;
S2. According to the face scan data, separately computing the vertex coordinates, normal vectors, and UV coordinates of the corresponding face, and generating a first 3D face model;
S3. Exporting the first 3D face model to a 3D file;
S4. Drawing facial special effects according to the 3D file to generate a corresponding face texture map;
S5. Mapping the face texture map onto a second 3D face model generated in real time from the face scan data of the real person read by the motion-sensing sensor, the ordering of the vertex coordinates on the first 3D face model and the second 3D face model being identical and remaining unchanged.
2. The method for realizing facial special effects according to claim 1, characterized in that step S3 specifically comprises:
According to the Obj file format, converting the vertex coordinates, normal vectors, and UV coordinates into text format and writing them to an Obj file.
3. The method for realizing facial special effects according to claim 1, characterized in that step S4 specifically comprises:
Importing the 3D file into a 3D modeling tool, unwrapping the UV coordinates of the first 3D face model, matching the UV coordinates to the facial features of the real face, drawing the facial special effects on the UV coordinates, and generating the corresponding face texture map.
4. The method for realizing facial special effects according to claim 1, characterized in that the UV coordinates of the 3D face model are kept unchanged during mapping in step S5.
5. The method for realizing facial special effects according to claim 1, characterized in that the motion-sensing sensor is a Kinect sensor.
6. A terminal for realizing facial special effects, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the following steps when executing the computer program:
S1. Receiving face scan data of a real person read by a motion-sensing sensor;
S2. According to the face scan data, separately computing the vertex coordinates, normal vectors, and UV coordinates of the corresponding face, and generating a first 3D face model;
S3. Exporting the first 3D face model to a 3D file;
S4. Drawing facial special effects according to the 3D file to generate a corresponding face texture map;
S5. Mapping the face texture map onto a second 3D face model generated in real time from the face scan data of the real person read by the motion-sensing sensor, the ordering of the vertex coordinates on the first 3D face model and the second 3D face model being identical and remaining unchanged.
7. The terminal for realizing facial special effects according to claim 6, characterized in that step S3 specifically comprises:
According to the Obj file format, converting the vertex coordinates, normal vectors, and UV coordinates into text format and writing them to an Obj file.
8. The terminal for realizing facial special effects according to claim 6, characterized in that step S4 specifically comprises:
Importing the 3D file into a 3D modeling tool, unwrapping the UV coordinates of the first 3D face model, matching the UV coordinates to the facial features of the real face, drawing the facial special effects on the UV coordinates, and generating the corresponding face texture map.
9. The terminal for realizing facial special effects according to claim 6, characterized in that the UV coordinates of the 3D face model are kept unchanged during mapping in step S5.
10. The terminal for realizing facial special effects according to claim 6, characterized in that the motion-sensing sensor is a Kinect sensor.
CN201810093430.0A 2018-01-31 2018-01-31 Method and terminal for realizing special face effect Active CN108416835B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810093430.0A CN108416835B (en) 2018-01-31 2018-01-31 Method and terminal for realizing special face effect

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810093430.0A CN108416835B (en) 2018-01-31 2018-01-31 Method and terminal for realizing special face effect

Publications (2)

Publication Number Publication Date
CN108416835A true CN108416835A (en) 2018-08-17
CN108416835B CN108416835B (en) 2022-07-05

Family

ID=63127327

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810093430.0A Active CN108416835B (en) 2018-01-31 2018-01-31 Method and terminal for realizing special face effect

Country Status (1)

Country Link
CN (1) CN108416835B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111553835A (en) * 2020-04-10 2020-08-18 上海完美时空软件有限公司 Method and device for generating face pinching data of user
WO2021121291A1 (en) * 2019-12-18 2021-06-24 北京字节跳动网络技术有限公司 Image processing method and apparatus, electronic device and computer-readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103473804A (en) * 2013-08-29 2013-12-25 小米科技有限责任公司 Image processing method, device and terminal equipment
CN106570822A (en) * 2016-10-25 2017-04-19 宇龙计算机通信科技(深圳)有限公司 Human face mapping method and device
CN106652037A (en) * 2015-10-30 2017-05-10 深圳超多维光电子有限公司 Face mapping processing method and apparatus

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103473804A (en) * 2013-08-29 2013-12-25 小米科技有限责任公司 Image processing method, device and terminal equipment
CN106652037A (en) * 2015-10-30 2017-05-10 深圳超多维光电子有限公司 Face mapping processing method and apparatus
CN106570822A (en) * 2016-10-25 2017-04-19 宇龙计算机通信科技(深圳)有限公司 Human face mapping method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
惠阳 (Hui Yang): "Research on Character Expression Design in Online Motion-Sensing Games Based on Chinese Peking Opera Facial Makeup", China Master's Theses Full-text Database (Information Science and Technology) *
梁海燕 (Liang Haiyan): "Real-time Simulation of Subtle 3D Facial Expressions Driven by Kinect Motion", China Master's Theses Full-text Database (Information Science and Technology) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021121291A1 (en) * 2019-12-18 2021-06-24 北京字节跳动网络技术有限公司 Image processing method and apparatus, electronic device and computer-readable storage medium
US11651529B2 (en) 2019-12-18 2023-05-16 Beijing Bytedance Network Technology Co., Ltd. Image processing method, apparatus, electronic device and computer readable storage medium
CN111553835A (en) * 2020-04-10 2020-08-18 上海完美时空软件有限公司 Method and device for generating face pinching data of user
CN111553835B (en) * 2020-04-10 2024-03-26 上海完美时空软件有限公司 Method and device for generating pinching face data of user

Also Published As

Publication number Publication date
CN108416835B (en) 2022-07-05

Similar Documents

Publication Publication Date Title
CN110163054B (en) Method and device for generating human face three-dimensional image
Bando et al. Animating hair with loosely connected particles
Konukseven et al. Development of a visio‐haptic integrated dental training simulation system
US10521970B2 (en) Refining local parameterizations for applying two-dimensional images to three-dimensional models
WO2016045016A1 (en) Furry avatar animation
CN108133220A (en) Model training, crucial point location and image processing method, system and electronic equipment
KR101743763B1 (en) Method for providng smart learning education based on sensitivity avatar emoticon, and smart learning education device for the same
CN106683193B (en) Design method and design device of three-dimensional model
CN106200960A (en) The content display method of electronic interactive product and device
JP4842242B2 (en) Method and apparatus for real-time expression of skin wrinkles during character animation
US20110254839A1 (en) Systems and Methods for Creating Near Real-Time Embossed Meshes
CN108416835A (en) A kind of implementation method and terminal of face's special efficacy
CN104268921A (en) 3D face expression control method and system
CN109696953A (en) The method, apparatus and virtual reality device of virtual reality text importing
Fratarcangeli Position‐based facial animation synthesis
Mosegaard et al. Real-time Deformation of Detailed Geometry Based on Mappings to a Less Detailed Physical Simulation on the GPU.
KR102026857B1 (en) 3D printing system using 3D modeling authoring tool based on VR technology
CN103678888A (en) Cardiac blood flowing indicating and displaying method based on Euler fluid simulation algorithm
CN104813282B (en) Automatic pipeline forms
JP2006323512A (en) Image generation system, program, and information storage medium
Elden Implementation and initial assessment of VR for scientific visualisation: extending unreal engine 4 to visualise scientific data on the HTC Vive
US20220277586A1 (en) Modeling method, device, and system for three-dimensional head model, and storage medium
CN112132912B (en) Method and device for establishing face generation model and generating face image
CN115631516A (en) Face image processing method, device and equipment and computer readable storage medium
Tytarenko Optimizing Immersion: Analyzing Graphics and Performance Considerations in Unity3D VR Development

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant