CN108648251A - 3D expressions production method and system - Google Patents

3D expressions production method and system

Info

Publication number
CN108648251A
CN108648251A (application CN201810462877.0A)
Authority
CN
China
Prior art keywords
image
expression
face
infrared
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810462877.0A
Other languages
Chinese (zh)
Other versions
CN108648251B (en)
Inventor
许星
钟亮洪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Orbbec Co Ltd
Original Assignee
Shenzhen Orbbec Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Orbbec Co Ltd filed Critical Shenzhen Orbbec Co Ltd
Priority to CN201810462877.0A priority Critical patent/CN108648251B/en
Publication of CN108648251A publication Critical patent/CN108648251A/en
Application granted granted Critical
Publication of CN108648251B publication Critical patent/CN108648251B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention provides a 3D expression making method and system. The method includes the following steps: S1: providing a standard expression model; S2: acquiring a 2D face image and a face depth image of a user under a current expression; S3: driving the standard expression model to deform based on the 2D face image and the face depth image, and generating an expression animation consistent with the facial expression. The method and system of the present invention improve user experience and make chatting more entertaining.

Description

3D expression making method and system
Technical field
The present invention relates to the technical field of image processing, and in particular to a 3D expression making method and system.
Background
Communication over the Internet has become an essential part of people's daily life. During a chat, sending expressions (stickers) can convey content that is difficult to put into words and liven up the atmosphere. Various expression packs are made in advance for users to download and use; at present, most expression packs are made by third-party professionals, and ordinary users can only choose from what is offered.
To allow users to define their own expression packs, some techniques let the user select a photo, after which production software recognizes and segments the photo content to generate a corresponding expression. Although such expression packs are easy to make, they are 2D expressions, so the user experience is poor.
Summary of the invention
To solve the problem that prior-art expression packs are 2D expressions with poor user experience, the present invention provides a 3D expression making method and system.
To solve the above problems, the technical solution adopted by the present invention is as follows:
A 3D expression making method includes the following steps: S1: providing a standard expression model; S2: acquiring a 2D face image and a face depth image of a user under a current expression; S3: driving the standard expression model to deform based on the 2D face image and the face depth image, and generating an expression animation consistent with the facial expression.
In one embodiment of the present invention, the 2D face image includes an infrared image, and the infrared image and the face depth image are acquired in an interleaved manner.
In another embodiment of the present invention, the driving in step S3 includes: obtaining orientation feature parameters and expression feature parameters from the face depth image and the 2D face image; driving the standard expression model to deform according to the expression feature parameters and the orientation feature parameters; and obtaining, based on the expression feature parameters, text and/or voice corresponding to the expression animation consistent with the facial expression.
In yet another embodiment of the present invention, the method further includes: synchronously acquiring the user's voice, recognizing the voice, and obtaining text corresponding to the voice; matching a corresponding original voice from a preset original voice library according to the recognized voice; and fusing the text and/or voice, or the original voice, with the expression model.
The present invention further provides a 3D expression making system, including: a depth camera for obtaining 2D images and depth images; a memory for storing data; and a processor connected to the depth camera and the memory, for executing any of the 3D expression making methods described above.
In one embodiment of the present invention, the 2D image includes an infrared image, and the depth camera includes: an infrared floodlight for providing infrared illumination; an infrared structured-light projector for projecting a structured-light image onto a target; an infrared camera that acquires the infrared image when only the infrared floodlight is on, and acquires the structured-light image when only the infrared structured-light projector is on; and a depth computation processor that performs depth computation on the structured-light image to obtain a depth image.
In another embodiment of the present invention, the 2D image includes a color image, and the depth camera further includes a color camera for acquiring a color image of the target under visible-light illumination.
The beneficial effects of the present invention are: a 3D expression making method and system are provided, in which a pre-stored standard expression model is driven to deform by the 2D face image and face depth image of the user acquired under the current expression, generating an expression animation consistent with the facial expression, thereby improving user experience and making chatting more entertaining.
Description of the drawings
Fig. 1 is a schematic diagram of a 3D expression making system according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a 3D expression making method according to an embodiment of the present invention.
In the figures: 100 - depth camera; 110 - processor; 120 - memory; 130 - display; 140 - input/output interface; 150 - microphone; 101 - infrared structured-light projector; 102 - infrared floodlight; 103 - infrared camera; 104 - depth computation processor.
Detailed description of embodiments
The present invention is described in detail below through specific embodiments with reference to the accompanying drawings, for a better understanding of the invention; the following embodiments, however, do not limit the scope of the invention. In addition, it should be noted that the diagrams provided in the following embodiments only schematically illustrate the basic concept of the invention. The drawings show only the components related to the invention rather than the actual number, shapes, and sizes of the components in a real implementation; in practice, the shape, number, and proportion of each component may vary arbitrarily, and the component layout may be considerably more complex.
As shown in Fig. 1, the present invention provides a 3D expression making system. The system includes a depth camera 100, a memory 120, and a processor 110, and may further include a display 130, an input/output interface 140, a microphone 150, and so on.
The depth camera 100 is used to obtain 2D images and depth images (video) of a target. In one embodiment, the depth camera 100 is based on structured-light technology and includes an infrared structured-light projector 101, an infrared camera 103, and a depth computation processor 104. The infrared structured-light projector 101 projects a structured-light image onto the target; the infrared camera 103 acquires the structured-light image and transmits it to the depth computation processor, which performs depth computation on the structured-light image to obtain a depth image. The depth camera 100 further includes an infrared floodlight 102 providing infrared illumination; when the infrared floodlight 102 is on and the infrared structured-light projector 101 is off, the infrared camera 103 acquires a 2D image (infrared image). Therefore, by operating the infrared floodlight 102 and the infrared structured-light projector 101 alternately, the depth camera 100 can acquire 2D infrared images and depth images in an alternating manner. The infrared floodlight 102 and the infrared structured-light projector 101 may also be integrated into a single device that provides both floodlighting and structured-light projection, saving space and cost. In one embodiment, the depth camera 100 further includes a color camera (not shown in the figure) for acquiring a 2D color image of the target under visible-light illumination. Because there is a positional offset between the color camera and the infrared camera, there is parallax between the depth image and the color image; in some applications, the depth image and the color image must be registered to eliminate the parallax. Unless otherwise stated, the description below assumes no parallax between the depth image and the 2D image. In some embodiments, the 2D image may also include other images, such as thermal infrared images or ultraviolet images.
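The alternating-illumination scheme above can be sketched as a capture loop that toggles which emitter is active each frame. This is an illustrative sketch only; the `Frame` class and device control are hypothetical stand-ins for a real depth-camera SDK.

```python
# Sketch of the alternating floodlight / structured-light capture loop.
# Device control and the Frame class are hypothetical; a real depth camera
# would expose its own driver API.
from dataclasses import dataclass

@dataclass
class Frame:
    kind: str   # "ir" (floodlight on -> plain infrared image)
                # or "depth" (projector on -> structured light -> depth)
    index: int

def interleaved_capture(n_frames: int):
    """Alternate the floodlight and the structured-light projector each frame."""
    frames = []
    for i in range(n_frames):
        if i % 2 == 0:
            # infrared floodlight on, projector off: 2D infrared image
            frames.append(Frame("ir", i))
        else:
            # projector on, floodlight off: structured-light image,
            # converted to a depth image by the depth computation processor
            frames.append(Frame("depth", i))
    return frames

stream = interleaved_capture(6)
# A 60 fps interleaved stream yields 30 fps of each image type.
```

This is why an interleaved 60 fps stream produces 30 fps of infrared frames and 30 fps of depth frames, as the embodiment below describes.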
The memory 120 stores data such as the parameter data of the infrared camera 103 or the color camera in the depth camera 100, the reference image data used to compute depth images, data generated during 3D expression making, and temporary data. There may be one or more memories 120 distributed in different locations of the system; for example, a flash memory may be embedded in the depth camera, and memories such as RAM and ROM may additionally be provided in the system.
The processor 110 is connected to the memory 120, the depth camera 100, and so on, for control and data processing. In some embodiments, the processor 110 includes at least two sub-processors, one of which is embedded in the depth camera to perform the depth-image computation, i.e. the depth computation processor. The processor 110 executes relevant instructions by calling programs in the memory. In one embodiment, a 3D expression making program is stored in the memory 120; after the 3D expression task is activated, the processor calls the program and executes the following 3D expression making method, shown in Fig. 2:
(1) Providing a standard expression model.
The standard expression model is set in advance and stored in the memory to be called by the processor. It may be made with 3D animation software, or an existing model may be selected, such as the Candide-3 model or the MPEG-4 model. The standard expression model may be a single model or may include reference models under multiple different expressions. The standard expression model contains feature parameters that can drive deformation; for example, the feature parameters of the Candide-3 model include multiple AU (Action Unit) parameters, and those of the MPEG-4 model include multiple static parameters (FDPs) and dynamic parameters (FAPs). In some embodiments, the standard expression model may also include animal models, plant models, virtual-object models, and so on.
The standard expression model deforms under the drive of the feature parameters. In one embodiment, the feature parameters are divided into orientation feature parameters and expression feature parameters, used respectively to control the orientation and the expression of the standard expression model.
(2) Acquiring a 2D face image and a face depth image of the user under the current expression.
The processor sends an activation signal to the depth camera; upon receiving it, the depth camera captures images of objects in its field of view, for example acquiring infrared images and depth images in an interleaved manner at a frame rate of 60 fps, thereby obtaining 30 fps of infrared images and 30 fps of depth images. Because acquisition is interleaved, obtaining a 2D image and a depth image at the same instant requires further processing, such as interpolating between adjacent depth frames; alternatively, since the interval between adjacent infrared and depth frames is very short, they may be approximately regarded as acquired at the same instant. When the 2D image is a color image, the color image and the depth image can be acquired synchronously, and only an additional registration computation is needed afterwards.
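The interpolation step mentioned above can be sketched as follows: a depth map at the infrared frame's timestamp is estimated by linear interpolation between the two neighboring depth frames. The timestamps and array shapes below are illustrative, not taken from the patent.

```python
# Sketch of synchronizing an interleaved stream: estimate the depth at the
# IR frame's timestamp by linearly interpolating the two adjacent depth frames.
import numpy as np

def interpolate_depth(d_prev, t_prev, d_next, t_next, t_ir):
    """Linearly interpolate two depth maps to the IR frame's timestamp t_ir."""
    w = (t_ir - t_prev) / (t_next - t_prev)   # interpolation weight in [0, 1]
    return (1.0 - w) * d_prev + w * d_next

d0 = np.full((2, 2), 100.0)   # depth frame at t = 0.0 ms
d1 = np.full((2, 2), 120.0)   # depth frame at t = 33.3 ms (30 fps spacing)
d_ir = interpolate_depth(d0, 0.0, d1, 33.3, 16.65)  # IR frame midway
# midway between 100 and 120 -> 110 everywhere
```

Per-pixel linear interpolation is a reasonable approximation only when motion between adjacent depth frames is small, which matches the patent's remark that the inter-frame interval is very short.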
After the 2D image and the depth image of the target are acquired, face detection and tracking are performed on the 2D image to obtain the 2D face image and the face depth image. For example, the Viola-Jones face detection algorithm detects the current face, and the Mean Shift algorithm tracks the face across subsequent frames to obtain the 2D face image in each frame; from the correspondence between the 2D image and the depth image, the corresponding face depth image can then be obtained. In one embodiment, the face depth image is further processed into a higher-precision model, such as a face mesh model; for uniformity, all of these are collectively referred to as the face depth image in the description below.
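The correspondence between the 2D image and the depth image means that once a face bounding box is found in the 2D image (e.g. by a Viola-Jones detector), the same pixel region can be cropped from the depth image. A minimal sketch, assuming the parallax-free alignment stated earlier; the bounding box here is hard-coded where a detector would supply it:

```python
# Sketch: crop the face region from both the 2D image and the depth image
# using one shared bounding box, assuming the two images are pixel-aligned.
import numpy as np

def crop_face(image_2d, depth, box):
    """box = (x, y, w, h) in pixel coordinates shared by both images."""
    x, y, w, h = box
    return image_2d[y:y + h, x:x + w], depth[y:y + h, x:x + w]

ir = np.arange(100).reshape(10, 10)       # toy 10x10 infrared image
depth = ir * 10                           # toy aligned depth image
face_ir, face_depth = crop_face(ir, depth, (2, 3, 4, 4))
# both crops are 4x4 and remain pixel-aligned
```

In a real pipeline the box would come from a detector such as OpenCV's Haar-cascade implementation of Viola-Jones, with Mean Shift (or similar) updating it on subsequent frames.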
(3) Driving the standard expression model to deform based on the 2D face image and the face depth image, and generating an expression animation consistent with the facial expression.
After the 2D face image and the face depth image are obtained, features are extracted from each to obtain the feature parameters that drive the standard expression model to deform. In one embodiment, the first step is to compute the orientation vector of the current face from the face depth image, for example by first extracting face key-point coordinates. In this embodiment, the key points include the forehead, the mouth corners, and the nose tip: the forehead and mouth-corner key points determine the plane of the face, and the nose-tip point determines the normal vector of the face, i.e. the orientation vector, from which the orientation parameters driving the standard expression model are obtained. The second step is to run a feature-point extraction algorithm, such as the Active Appearance Model (AAM) algorithm, on the 2D face image; the feature points include the facial contour, eyes, nose, mouth, and so on, and the required feature points depend on the chosen standard expression model. Likewise, owing to the correspondence between the 2D image and the depth image, the 3D coordinates of these feature points can be obtained directly. The third step registers the standard expression model with the current face depth image, for example using the ICP registration algorithm to align and deform the standard expression model to the current face depth image. Specifically, in one embodiment, the standard expression model is expressed by the following formula:
M = R(m + Sα + Aβ) + T
where m denotes the neutral standard expression model; R and T respectively denote the rotation matrix and translation matrix reflecting the face orientation; S and A are respectively the static and dynamic deformation matrices; and α and β respectively denote the static and dynamic parameters.
The purpose of registration is to solve for R, T, α, and β in the standard expression model by iteratively minimizing an energy function. R and T constitute the orientation feature parameters, and α and β constitute the expression feature parameters; based on these parameters, the expression animation consistent with the current facial expression can then be computed. The energy function reflects the difference between the face depth image (denoted D) and the corresponding points of the standard expression model M.
During registration, the choice of the initial iteration value directly affects the iteration speed and the registration precision. In one embodiment, using the orientation vector and the feature-point coordinates obtained in the previous steps as the initial value greatly reduces the number of iterations and improves the registration accuracy.
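The model equation and the energy it is fitted under can be sketched numerically. This is a minimal sketch with toy matrix sizes, not the patent's solver: it evaluates M = R(m + Sα + Aβ) + T for a tiny vertex set and a sum-of-squared-distances energy against a depth point set D; a real implementation would minimize this energy iteratively (e.g. ICP-style alternation over (R, T) and (α, β)).

```python
# Sketch of the deformable model M = R(m + S·alpha + A·beta) + T and a
# point-to-point energy against a depth point set D. Sizes are illustrative.
import numpy as np

def deform(m, S, alpha, A, beta, R, T):
    """Evaluate M = R(m + S@alpha + A@beta) + T for N vertices, m of shape (N, 3)."""
    shaped = m + (S @ alpha).reshape(m.shape) + (A @ beta).reshape(m.shape)
    return shaped @ R.T + T          # apply rotation then translation

def energy(M, D):
    """Sum of squared distances between corresponding model and depth points."""
    return float(np.sum((M - D) ** 2))

N = 4                                 # 4 toy vertices
m = np.zeros((N, 3))                  # neutral model at the origin
S = np.ones((N * 3, 1))               # one static deformation basis vector
A = np.ones((N * 3, 1))               # one dynamic deformation basis vector
alpha = np.array([0.2])               # static parameter
beta = np.array([0.3])                # dynamic parameter
M = deform(m, S, alpha, A, beta, np.eye(3), np.zeros(3))
# every coordinate equals 0.2 + 0.3 = 0.5, and energy(M, M) is 0
```

In the patent's setting, m, S, and A are fixed by the chosen standard expression model, and only (R, T, α, β) are optimized against the face depth image D.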
An expression animation consisting of images alone is often not lively enough; combining some text or voice with the expression animation can increase its appeal.
In one embodiment, based on the expression feature parameters obtained in the preceding steps, text and/or voice matching those parameters are selected from a preset text and/or voice library. For example, for expression feature parameters reflecting pain, the matched text and/or voice likewise expresses pain.
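One simple way to realize this selection is to map the expression feature parameters to a discrete label and key the preset library by that label. The labels, threshold, and library entries below are hypothetical, chosen only to illustrate the lookup:

```python
# Illustrative sketch: derive an expression label from an expression feature
# parameter, then select matching caption text from a preset library.
def classify_expression(beta: float) -> str:
    """Toy rule: a single dynamic parameter decides the label."""
    return "pain" if beta > 0.7 else "neutral"

TEXT_LIBRARY = {"pain": "Ouch!", "neutral": "..."}  # hypothetical entries

def match_text(beta: float) -> str:
    return TEXT_LIBRARY[classify_expression(beta)]
```

A production system would classify over the full parameter vector (all AUs or FAPs) rather than a single scalar, but the library-lookup structure is the same.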
In some embodiments, the user's voice is acquired synchronously with a microphone or similar device while the depth camera captures images. The user's voice can be used directly as the dubbing of the expression animation. In one embodiment, to increase the fun, the user's voice is processed with voice-changing techniques, for example into a voice with a different timbre or manner of expression, and this processed voice is used as the dubbing.
In some embodiments, on the basis of the synchronously acquired user voice, the voice is recognized, and a similar original voice, such as a movie soundtrack, TV original sound, or animation original sound, is further searched for according to the recognized speech, which can significantly increase the fun. To achieve this, an original voice library generally needs to be established in advance, and the voices in the library are recognized and tagged with corresponding feature labels, including duration, character count, content, and so on. After the user's voice is recognized, at least one of its features, such as duration, character count, or content, is obtained, and the original voice library is searched based on these features to match a suitable original voice, which finally serves as the dubbing of the expression animation. In one embodiment, the content of the user's voice is recognized, and an original voice whose content matches that of the user's speech is selected.
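The library search described above can be sketched as a scoring function over the tagged features. The entries, field names, and the scoring rule (exact content match first, otherwise nearest duration and character count) are illustrative assumptions, not the patent's algorithm:

```python
# Hedged sketch: match an utterance against a pre-tagged original voice
# library by (duration, character count, content).
def match_original_voice(library, duration, char_count, content):
    """Prefer an exact content match; otherwise take the entry closest in
    duration and character count."""
    exact = [e for e in library if e["content"] == content]
    if exact:
        return exact[0]
    return min(library,
               key=lambda e: abs(e["duration"] - duration)
                             + abs(e["chars"] - char_count))

library = [  # hypothetical pre-tagged entries
    {"name": "clip_a", "duration": 2.0, "chars": 11, "content": "hello there"},
    {"name": "clip_b", "duration": 1.0, "chars": 4, "content": "ouch"},
]
best = match_original_voice(library, 1.1, 4, "ow")
# no exact content match, so the closest duration/char count wins: clip_b
```

A real system would use speech recognition to produce `content` and would likely score semantic similarity rather than exact string equality; this sketch only shows the feature-indexed lookup the patent describes.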
After the corresponding text and/or voice, or original voice, is obtained, it is fused with each frame of the expression animation to obtain the final expression animation.
In some embodiments, after the expression animation is completed, it is saved in the memory to form a 3D expression pack that can be called at any time afterwards.
Any process or method description in Fig. 2 or otherwise described herein may be understood as representing a module, segment, or portion of code of executable instructions including one or more steps for implementing specific logical functions or processes, and the scope of the preferred embodiments of the present invention includes other implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in Fig. 2 or otherwise described herein, for example an ordered list of executable instructions for implementing logical functions, may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transport the program for use by, or in connection with, the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection portion with one or more wires (an electronic device), a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). The computer-readable medium may even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that parts of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one of the following technologies known in the art, or a combination thereof, may be used: a discrete logic circuit with logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
Those skilled in the art will appreciate that all or part of the steps of the above embodiment methods may be completed by instructing relevant hardware through a program, which may be stored in a computer-readable storage medium and which, when executed, includes one of the steps of the method embodiments or a combination thereof.
In addition, each functional unit in the embodiments of the present invention may be integrated into one processing module, or each unit may exist physically alone, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and is sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The above content is a further detailed description of the present invention in conjunction with specific preferred embodiments, and the specific implementation of the invention is not limited to these descriptions. For those of ordinary skill in the art to which the invention belongs, several equivalent substitutions or obvious modifications with identical performance or use may be made without departing from the concept of the invention, and all of these should be regarded as falling within the scope of protection of the invention.

Claims (10)

1. A 3D expression making method, characterized by comprising the following steps:
S1: providing a standard expression model;
S2: acquiring a 2D face image and a face depth image of a user under a current expression;
S3: driving the standard expression model to deform based on the 2D face image and the face depth image, and generating an expression animation consistent with the facial expression.
2. The 3D expression making method of claim 1, wherein the 2D face image includes an infrared image, and the infrared image and the face depth image are acquired in an interleaved manner.
3. The 3D expression making method of claim 1, wherein the driving in step S3 includes:
obtaining orientation feature parameters and expression feature parameters from the face depth image and the 2D face image; and driving the standard expression model to deform according to the expression feature parameters and the orientation feature parameters.
4. The 3D expression making method of claim 3, further comprising: obtaining, based on the expression feature parameters, text and/or voice corresponding to the expression animation consistent with the facial expression.
5. The 3D expression making method of claim 1, further comprising: synchronously acquiring the user's voice, recognizing the voice, and obtaining text corresponding to the voice.
6. The 3D expression making method of claim 5, wherein a corresponding original voice is matched from a preset original voice library according to the recognized voice.
7. The 3D expression making method of any one of claims 4-6, further comprising fusing the text and/or voice, or the original voice, with the expression model.
8. A 3D expression making system, characterized by comprising:
a depth camera for obtaining 2D images and depth images;
a memory for storing data; and
a processor connected to the depth camera and the memory, for executing the 3D expression making method of any one of claims 1-7.
9. The 3D expression making system of claim 8, wherein the 2D image includes an infrared image, and the depth camera includes:
an infrared floodlight for providing infrared illumination;
an infrared structured-light projector for projecting a structured-light image onto a target;
an infrared camera that acquires the infrared image when only the infrared floodlight is on, and acquires a structured-light image when only the infrared structured-light projector is on; and
a depth computation processor that performs depth computation on the structured-light image to obtain a depth image.
10. The 3D expression making system of claim 9, wherein the 2D image includes a color image, and the depth camera further includes a color camera for acquiring a color image of a target under visible-light illumination.
CN201810462877.0A 2018-05-15 2018-05-15 3D expression making method and system Active CN108648251B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810462877.0A CN108648251B (en) 2018-05-15 2018-05-15 3D expression making method and system


Publications (2)

Publication Number Publication Date
CN108648251A true CN108648251A (en) 2018-10-12
CN108648251B CN108648251B (en) 2022-05-24

Family

ID=63755712

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810462877.0A Active CN108648251B (en) 2018-05-15 2018-05-15 3D expression making method and system

Country Status (1)

Country Link
CN (1) CN108648251B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109087644A (en) * 2018-10-22 2018-12-25 奇酷互联网络科技(深圳)有限公司 Electronic equipment and its exchange method of voice assistant, the device with store function
CN109447927A (en) * 2018-10-15 2019-03-08 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium
CN110321009A (en) * 2019-07-04 2019-10-11 北京百度网讯科技有限公司 AR expression processing method, device, equipment and storage medium
CN111530087A (en) * 2020-04-17 2020-08-14 完美世界(重庆)互动科技有限公司 Method and device for generating real-time expression package in game
CN111530088A (en) * 2020-04-17 2020-08-14 完美世界(重庆)互动科技有限公司 Method and device for generating real-time expression picture of game role
CN111530086A (en) * 2020-04-17 2020-08-14 完美世界(重庆)互动科技有限公司 Method and device for generating expression of game role

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0674315A1 (en) * 1994-03-18 1995-09-27 AT&T Corp. Audio visual dubbing system and method
US20070009180A1 (en) * 2005-07-11 2007-01-11 Ying Huang Real-time face synthesis systems
CN103093490A (en) * 2013-02-02 2013-05-08 Zhejiang University Real-time facial animation method based on a single video camera
US20130147788A1 (en) * 2011-12-12 2013-06-13 Thibaut WEISE Method for facial animation
US20130195428A1 (en) * 2012-01-31 2013-08-01 Golden Monkey Entertainment d/b/a Drawbridge Films Method and System of Presenting Foreign Films in a Native Language
CN104780339A (en) * 2015-04-16 2015-07-15 Zhangying Information Technology Co., Ltd. (USA) Method and electronic device for loading expression effect animation in instant video
CN105528805A (en) * 2015-12-25 2016-04-27 Suzhou Liduo Digital Technology Co., Ltd. Virtual face animation synthesis method
CN105551071A (en) * 2015-12-02 2016-05-04 Institute of Computing Technology, Chinese Academy of Sciences Method and system for text- and speech-driven facial animation generation
CN105608726A (en) * 2015-12-17 2016-05-25 Suzhou Liduo Digital Technology Co., Ltd. Three-dimensional interactive chatting method
US20160163084A1 (en) * 2012-03-06 2016-06-09 Adobe Systems Incorporated Systems and methods for creating and distributing modifiable animated video messages
US20160350958A1 (en) * 2013-06-07 2016-12-01 Faceshift Ag Online modeling for real-time facial animation
CN106504308A (en) * 2016-10-27 2017-03-15 Tianjin University Three-dimensional facial animation generation method based on the MPEG-4 standard
US20170243387A1 (en) * 2016-02-18 2017-08-24 Pinscreen, Inc. High-fidelity facial and speech animation for virtual reality head mounted displays
CN107105217A (en) * 2017-04-17 2017-08-29 Shenzhen Orbbec Co., Ltd. Multi-mode depth computation processor and 3D image device
CN107204027A (en) * 2016-03-16 2017-09-26 Casio Computer Co., Ltd. Image processing device, display device, animation generation method and animation display method
CN107330371A (en) * 2017-06-02 2017-11-07 Shenzhen Orbbec Co., Ltd. Method, device and storage device for acquiring facial expressions of a 3D face model
CN107484016A (en) * 2017-09-05 2017-12-15 Shenzhen TCL New Technology Co., Ltd. Video dubbing switching method, television and computer-readable storage medium
CN107886558A (en) * 2017-11-13 2018-04-06 University of Electronic Science and Technology of China Facial expression animation driving method based on RealSense
CN107945255A (en) * 2017-11-24 2018-04-20 Beijing Dehuo New Media Technology Co., Ltd. Virtual actor facial expression driving method and system


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Wan Xianmei et al.: "Research Progress of Realistic 3D Facial Expression Synthesis", Journal of Computer-Aided Design & Computer Graphics *
He Qinzheng et al.: "Research on a Kinect-Based Facial Expression Capture and Animation Simulation System", Journal of Graphics *
Li Junlong et al.: "Research on Kinect-Driven Facial Animation Synthesis Technology", Computer Engineering *
Wang Xun et al.: "Design and Implementation of a 3D Speech Animation Chat Room", Computer Engineering and Applications *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109447927A (en) * 2018-10-15 2019-03-08 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and apparatus, electronic device, and computer-readable storage medium
CN109087644A (en) * 2018-10-22 2018-12-25 Qiku Internet Network Technology (Shenzhen) Co., Ltd. Electronic device, voice assistant interaction method therefor, and device with storage function
CN110321009A (en) * 2019-07-04 2019-10-11 Beijing Baidu Netcom Science and Technology Co., Ltd. AR expression processing method, apparatus, device and storage medium
CN111530087A (en) * 2020-04-17 2020-08-14 Perfect World (Chongqing) Interactive Technology Co., Ltd. Method and device for generating real-time expression packages in a game
CN111530088A (en) * 2020-04-17 2020-08-14 Perfect World (Chongqing) Interactive Technology Co., Ltd. Method and device for generating real-time expression pictures of a game character
CN111530086A (en) * 2020-04-17 2020-08-14 Perfect World (Chongqing) Interactive Technology Co., Ltd. Method and device for generating expressions for a game character
CN111530087B (en) * 2020-04-17 2021-12-21 Perfect World (Chongqing) Interactive Technology Co., Ltd. Method and device for generating real-time expression packages in a game
CN111530086B (en) * 2020-04-17 2022-04-22 Perfect World (Chongqing) Interactive Technology Co., Ltd. Method and device for generating expressions for a game character
CN111530088B (en) * 2020-04-17 2022-04-22 Perfect World (Chongqing) Interactive Technology Co., Ltd. Method and device for generating real-time expression pictures of a game character

Also Published As

Publication number Publication date
CN108648251B (en) 2022-05-24

Similar Documents

Publication Publication Date Title
CN108648251A (en) 3D expression production method and system
US20210390767A1 (en) Computing images of head mounted display wearer
CN113287118A (en) System and method for face reproduction
US20180350123A1 (en) Generating a layered animatable puppet using a content stream
CN103731583B (en) Intelligent synthesis and print processing method for photography
CN111971713A (en) 3D face capture and modification using image and time tracking neural networks
JP2020529084A (en) Image processing method, device and storage medium
KR102491140B1 (en) Method and apparatus for generating virtual avatar
JP4872135B2 (en) Technology to create face animation using face mesh
CN113228625A (en) Video conference supporting composite video streams
CN112199016B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN110555507B (en) Interaction method and device for virtual robot, electronic equipment and storage medium
CN108876732A (en) Face beautification method and device
CN108986190A (en) Method and system for a virtual newscaster based on a humanoid non-human character in three-dimensional animation
CN105389090B (en) Method and device for game interaction interface display, mobile terminal and computer terminal
CN113362263B (en) Method, apparatus, medium and program product for transforming an image of a virtual idol
WO2022252866A1 (en) Interaction processing method and apparatus, terminal and medium
WO2023035897A1 (en) Video data generation method and apparatus, electronic device, and readable storage medium
CN111638784A (en) Facial expression interaction method, interaction device and computer storage medium
CN107656611A (en) Motion-sensing game implementation method and device, and terminal device
Malleson et al. Rapid one-shot acquisition of dynamic VR avatars
CN111530086A (en) Method and device for generating expressions for a game character
CN113313631B (en) Image rendering method and device
US20240163527A1 (en) Video generation method and apparatus, computer device, and storage medium
Fu et al. Real-time multimodal human–avatar interaction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 11-13 / F, joint headquarters building, high tech Zone, 63 Xuefu Road, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000

Applicant after: Orbbec Technology Group Co., Ltd.

Address before: 12 / F, joint headquarters building, high tech Zone, 63 Xuefu Road, Nanshan District, Shenzhen, Guangdong 518000

Applicant before: SHENZHEN ORBBEC Co.,Ltd.

GR01 Patent grant