CN112799510B - Automatic generation method and device for multi-style natural eyeball movement

Info

Publication number
CN112799510B
CN112799510B (application CN202110113677.6A)
Authority
CN
China
Prior art keywords
neural network
control information
deep neural
eyeball
motion parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110113677.6A
Other languages
Chinese (zh)
Other versions
CN112799510A (en)
Inventor
徐枫 (Xu Feng)
吕军锋 (Lyu Junfeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN202110113677.6A
Publication of CN112799510A
Application granted
Publication of CN112799510B
Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 Eye tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/012 Head tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides an automatic generation method and device for multi-style natural eyeball movement, wherein the method comprises the following steps: extracting head motion parameters from a human face motion video stream; obtaining style control information; inputting the head motion parameters and the style control information into a pre-trained deep neural network; and acquiring natural eyeball motion parameters of a corresponding style output by the pre-trained deep neural network. Therefore, a segment of natural eyeball motion can be automatically generated, saving the production cost of eyeball animation.

Description

Automatic generation method and device for multi-style natural eyeball motion
Technical Field
The invention relates to the technical field of computer graphics, and in particular to an automatic generation method and device for multi-style natural eyeball motion.
Background
Natural eyeball motion is important for the realism of computer animation, but because eyeball motion is subtle, traditional methods of producing it consume a large amount of manual labor. In contrast, head movement is far less expensive to produce.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, a first objective of the present invention is to provide an automatic generation method for multi-style natural eyeball movement, so as to automatically generate a segment of natural eye movement and save the production cost of eyeball animation.
A second objective of the invention is to provide an automatic generation device for multi-style natural eyeball movement.
A third object of the invention is to propose a computer device.
A fourth object of the invention is to propose a non-transitory computer-readable storage medium.
A fifth object of the invention is to propose a computer program product.
In order to achieve the above objects, an embodiment of a first aspect of the present invention provides an automatic generation method for multi-style natural eye movement, including: extracting head motion parameters from a human face motion video stream; obtaining style control information; inputting the head motion parameters and the style control information into a pre-trained deep neural network; and acquiring natural eyeball motion parameters of a corresponding style output by the pre-trained deep neural network.
In order to achieve the above objects, an embodiment of a second aspect of the present invention provides an automatic generation device for multi-style natural eye movement, including: an extraction module for extracting head motion parameters from the face motion video stream; a first acquisition module for acquiring the style control information; a first input module for inputting the head motion parameters and the style control information into a pre-trained deep neural network; and a second acquisition module for acquiring the natural eyeball motion parameters of the corresponding style output by the pre-trained deep neural network.
To achieve the above objects, an embodiment of a third aspect of the present invention provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, it implements the automatic generation method for multi-style natural eye movement according to the first-aspect embodiment.
In order to achieve the above objects, an embodiment of a fourth aspect of the present invention proposes a non-transitory computer-readable storage medium on which a computer program is stored; the computer program, when executed by a processor, implements the method for automatically generating multi-style natural eye movement as described in the first-aspect embodiment above.
In order to achieve the above objects, an embodiment of a fifth aspect of the present invention provides a computer program product; when the instructions in the computer program product are executed by a processor, the method for automatically generating multi-style natural eye movement according to the first-aspect embodiment is implemented.
The embodiment of the invention at least has the following technical effects:
the production cost of natural eyeball movement can be greatly reduced, and the style of the generated eyeball movement can be controlled artificially.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of an automatic generation method for multi-style natural eye movement according to an embodiment of the present invention;
fig. 2 is a schematic diagram of network training according to an embodiment of the present invention;
fig. 3 is a schematic diagram of using the trained network according to an embodiment of the present invention;
fig. 4 is a block diagram of an apparatus for automatically generating multi-style natural eye movements according to an embodiment of the present invention; and
fig. 5 is a block diagram of another apparatus for automatically generating multi-style natural eye movements according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary, are intended to illustrate the invention, and are not to be construed as limiting the invention.
Eye movement refers to the rotation of the eyeball. A segment of natural eyeball motion parameters plays a great role in improving the realism of computer animation. Traditional eyeball animation production requires a professional animator to make adjustments based on experience, which has a high technical threshold and is time-consuming and labor-intensive. The eyeball motion generation method provided by the invention can automatically generate a segment of natural eyeball motion, saving the production cost of eyeball animation. Meanwhile, the method provided by the invention can also control the style of the eyeball motion, increasing the richness of the generated eyeball motion.
Related medical studies (for example, of the vestibulo-ocular reflex) show that there is some correlation between human head movement and eye movement. Applying deep learning, a deep neural network can be trained on a large number of head-eyeball motion parameter pairs. The trained deep neural network captures the relationship between head and eyeball motion, and can automatically generate a segment of eyeball motion matched with a given segment of head motion.
The following describes the method and apparatus for automatically generating multi-style natural eye movement according to embodiments of the present invention with reference to the drawings.
Fig. 1 is a schematic flow chart of an automatic generation method of multi-style natural eye movement according to an embodiment of the present invention.
As shown in fig. 1, the automatic generation method of multi-style natural eye movement includes the following steps:
step 101, extracting a head motion sequence in a face motion video stream.
In this embodiment, a motion video stream of a real face may be captured by a camera or similar device, and a head motion sequence may then be extracted from the face motion video stream, that is, motion parameters such as the rotation and translation of the head in the image frames containing head motion.
In one embodiment of the invention, a three-dimensional reconstruction technique can be used to extract the head motion parameters in the human face motion video stream, where the head motion parameters comprise the rotation r of the head and the translation t of the head.
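As an illustration, the following is a minimal sketch of per-frame head pose extraction, assuming 2D facial landmarks are already available and using OpenCV's solvePnP against a generic 3D face template. The landmark source and the template coordinates are assumptions for illustration; the patent states only that a three-dimensional reconstruction technique is used.

```python
# A minimal sketch, not the patent's actual pipeline: head pose [r, t] per
# frame from six 2D facial landmarks, via OpenCV solvePnP and a 3D template.
import cv2
import numpy as np

# Generic 3D template (nose tip, chin, eye corners, mouth corners), in mm.
# The coordinates are illustrative assumptions, not values from the patent.
TEMPLATE_3D = np.array([
    [0.0, 0.0, 0.0],        # nose tip
    [0.0, -63.6, -12.5],    # chin
    [-43.3, 32.7, -26.0],   # left eye outer corner
    [43.3, 32.7, -26.0],    # right eye outer corner
    [-28.9, -28.9, -24.1],  # left mouth corner
    [28.9, -28.9, -24.1],   # right mouth corner
], dtype=np.float64)

def head_pose(landmarks_2d, frame_w, frame_h):
    """Return (r, t): head rotation vector and translation for one frame."""
    focal = float(frame_w)  # crude pinhole-camera approximation
    cam = np.array([[focal, 0.0, frame_w / 2.0],
                    [0.0, focal, frame_h / 2.0],
                    [0.0, 0.0, 1.0]])
    ok, r, t = cv2.solvePnP(TEMPLATE_3D, landmarks_2d, cam, None,
                            flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("pose estimation failed for this frame")
    return r.ravel(), t.ravel()
```

Running this over every frame of the video stream yields the head motion sequence [r, t] used below.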
step 102, style control information is obtained.
The style control information includes the intensity of the motion, the frequency of viewpoint changes, and the like.
step 103, inputting the head motion parameters and the style control information into a pre-trained deep neural network.
In an embodiment of the present invention, before the head motion parameters and the style control information are input into the pre-trained deep neural network, sample head motion parameters and the standard eyeball motion parameters corresponding to them are acquired, and sample style control information of the eyeball corresponding to the sample head motion parameters is determined. The sample head motion parameters and the sample style control information are then input into an initial deep neural network, the reference eyeball motion parameters output by the initial deep neural network are acquired, and a loss value between the reference eyeball motion parameters and the standard eyeball motion parameters is calculated. When the loss value is greater than a preset threshold, the network parameters of the initial deep neural network are adjusted; training of the initial deep neural network is complete once the loss value is less than or equal to the preset threshold.
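A minimal sketch of this training loop follows, assuming PyTorch, an Adam optimizer, and a mean-squared-error loss as placeholders; the patent specifies only that a loss between the reference and standard eyeball motion parameters is driven to or below a preset threshold (the flow-based model described later would use a negative log-likelihood instead).

```python
# Hedged training-loop sketch; the network interface, optimizer, and MSE loss
# are illustrative assumptions, not the patent's specification.
import torch
from torch import nn

def train(net: nn.Module, loader, threshold: float = 1e-3, lr: float = 1e-4):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    while True:
        total = 0.0
        for head_params, style_info, standard_eye in loader:
            reference_eye = net(head_params, style_info)  # reference eye motion
            loss = loss_fn(reference_eye, standard_eye)   # reference vs. standard
            opt.zero_grad()
            loss.backward()
            opt.step()
            total += loss.item() * head_params.shape[0]
        # stop once the loss is less than or equal to the preset threshold
        if total / len(loader.dataset) <= threshold:
            break
```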
In this embodiment, a data set is created using a three-dimensional face reconstruction technique, and head motion sequences and eyeball motion sequences are obtained from a large number of videos of naturally moving faces. The input is color video containing a human face, and the output is the head motion parameters and the eyeball motion parameters.
When training the deep neural network, as shown in fig. 2, a flow-based deep neural network is trained. The input is the head movement [r, t] and the style control information [s_θ, s_φ], and the eye movement [θ_l, θ_r, φ_l, φ_r] constrains the network output so that the likelihood estimate of the true eye movement is maximized.
step 104, acquiring natural eyeball motion parameters of a corresponding style output by the pre-trained deep neural network.
In this embodiment, the natural eyeball motion parameters of the corresponding style output by the pre-trained deep neural network are acquired, specifically the rotation parameters of both eyes. The natural eye movement parameters include the respective nutation angles θ_l, θ_r and precession angles φ_l, φ_r of the two eyes, and the eye movement is represented by the sequence of these angles. The average nutation angle variation s_θ and the average precession angle variation s_φ represent the style control information.
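For illustration, here is a sketch of how the style control information could be computed from an eyeball motion sequence, assuming s_θ and s_φ are the mean absolute frame-to-frame changes of the nutation and precession angles; the exact definition of "average variation" is not spelled out in the patent.

```python
# Hedged sketch: style control information from one eye's angle sequences.
import numpy as np

def style_control(theta, phi):
    """theta, phi: per-frame nutation/precession angles, shape (T,).
    Returns (s_theta, s_phi): average per-frame angle variation."""
    s_theta = float(np.abs(np.diff(theta)).mean())
    s_phi = float(np.abs(np.diff(phi)).mean())
    return s_theta, s_phi
```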
Referring to fig. 3, the trained deep neural network is used. The input is a head motion [r′, t′], artificially specified style control information [s_θ, s_φ], and a randomly sampled hidden variable; the output is the natural eyeball motion [θ′_l, θ′_r, φ′_l, φ′_r] of the corresponding style.
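A minimal inference sketch under the same assumptions follows: the latent dimension and the `inverse` method of the flow are placeholders for whatever the trained network actually exposes.

```python
# Hedged inference sketch: generate styled eye motion from head motion,
# style control information, and a randomly sampled hidden variable.
import torch

@torch.no_grad()
def generate(flow, head_motion, s_theta, s_phi):
    """head_motion: (T, 6) tensor of per-frame [r', t'] parameters."""
    T = head_motion.shape[0]
    style = torch.tensor([s_theta, s_phi]).expand(T, 2)
    z = torch.randn(T, 4)  # randomly sampled hidden variable, one per frame
    cond = torch.cat([head_motion, style], dim=-1)
    # The flow's inverse maps latents to eye motion [θ_l, θ_r, φ_l, φ_r].
    return flow.inverse(z, cond)
```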
Therefore, the invention aims to make the production of eyeball animation convenient. The input of the system is a segment of head motion parameters (specifically, the rotation and translation of the head) and style control information (such as the intensity of motion and the change frequency of the viewpoint), and the output is a segment of eyeball motion parameters with a specific style (specifically, the rotation of both eyes). The invention uses deep learning to automatically generate eyeball motion parameters of the corresponding style from the input head motion parameters and style control information. The method of the invention has at least the following characteristics:
first, probabilistic eye movement generation using flow-based neural network techniques is greatly limited in its expressive power if paired head-eye movement data is used, i.e., all head movements in the training set have corresponding true outputs. With the flow-based neural network, only the probability that the output is constrained to be the true value is the maximum, and probabilistic eye movement generation can be performed. That is, the generated eye movement is not necessarily a true value, but satisfies a distribution of true values as much as possible.
Second, style control information is added when training the neural network, so that the final neural network has style control capability.
If style control information were not added during training, the finally obtained neural network could not be intervened upon manually, and the generated result would be determined entirely by the network's training set. By adding style control information in the training stage, the neural network is forced to mine the regularities in the data beyond the style information, and human intervention becomes possible in the use stage by inputting different style control information, thereby achieving the style control effect.
Further, in an embodiment of the present invention, animation display may be controlled directly based on the natural eyeball motion parameters: the moving track parameters of a preset eyeball animation are determined from the natural eyeball motion parameters, and the display of the preset eyeball animation is controlled according to the moving track parameters. In summary, the automatic generation method for multi-style natural eye movement of the embodiment of the invention can greatly reduce the production cost of natural eye movement, and the style of the generated eye movement can be controlled artificially.
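As one possible way to drive a rig from these parameters, the sketch below converts a nutation/precession angle pair into a unit gaze direction; the axis convention (z pointing "straight ahead") is an assumption, not part of the patent.

```python
# Hedged sketch: nutation/precession angles of one eye -> unit gaze vector.
import numpy as np

def gaze_direction(theta, phi):
    """theta: nutation (tilt from the straight-ahead axis); phi: precession."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])
```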
In order to implement the above embodiments, the present invention further provides an automatic generation device for multi-style natural eye movement.
Fig. 4 is a schematic structural diagram of an automatic generation device for multi-style natural eye movement according to an embodiment of the present invention.
As shown in fig. 4, the apparatus for automatically generating multi-style natural eye movement includes: an extraction module 410, a first acquisition module 420, a first input module 430, and a second acquisition module 440.
The extracting module 410 is configured to extract a head motion sequence in the face motion video stream;
a first obtaining module 420, configured to obtain style control information;
a first input module 430, configured to input the head movement parameter and the style control information into a deep neural network obtained through pre-training;
and a second obtaining module 440, configured to obtain the natural eyeball motion parameters of the corresponding style output by the deep neural network obtained through the pre-training.
In an embodiment of the present invention, as shown in fig. 5, on the basis of fig. 4, the apparatus further includes: a third acquisition module 450, a determination module 460, a second input module 470, a fourth acquisition module 480, a calculation module 490, and an adjustment module 4100, wherein:
a third obtaining module 450, configured to obtain a sample head motion parameter and a standard eyeball motion parameter corresponding to the sample head motion parameter;
a determining module 460, configured to determine sample style control information of an eyeball corresponding to the sample head movement parameter;
a second input module 470, configured to input the sample head motion parameter and sample style control information of an eyeball corresponding to the sample head motion parameter into an initial deep neural network;
a fourth obtaining module 480, configured to obtain a reference eye movement parameter output by the initial deep neural network;
a calculating module 490, for calculating the loss values of the reference eye movement parameter and the standard eye movement parameter;
an adjusting module 4100, configured to adjust a network parameter of the initial deep neural network when the loss value is greater than a preset threshold, until the loss value is less than or equal to the preset threshold, complete training of the initial deep neural network.
It should be noted that the explanation of the foregoing embodiment of the method for automatically generating multi-style natural eye movement also applies to the apparatus for automatically generating multi-style natural eye movement of this embodiment, and details are not repeated here.
In order to implement the foregoing embodiments, the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, it implements the automatic generation method for multi-style natural eye movement described in the foregoing embodiments.
In order to achieve the above embodiments, the present invention also proposes a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method for automatic generation of multi-style natural eye movement as described in the above embodiments.
In order to implement the above embodiments, the present invention further provides a computer program product; when the instructions in the computer program product are executed by a processor, the automatic generation method for multi-style natural eye movement described in the above embodiments is implemented.
In the description of the specification, reference to the description of "one embodiment," "some embodiments," "an example," "a specific example," or "some examples" or the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are well known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.
Although embodiments of the present invention have been shown and described above, it is to be understood that the above embodiments are exemplary and should not be construed as limiting the present invention; variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (7)

1. An automatic generation method of multi-style natural eye movement is characterized by comprising the following steps:
extracting head motion parameters in the face motion video stream;
obtaining style control information, wherein the style control information comprises the intensity of movement and the change frequency of a viewpoint;
inputting the head motion parameters and the style control information into a deep neural network obtained by pre-training;
acquiring natural eyeball motion parameters of corresponding styles output by the deep neural network obtained by pre-training;
before the inputting the head movement parameters and the style control information into the deep neural network obtained by pre-training, the method further comprises the following steps:
acquiring a sample head motion parameter and a standard eyeball motion parameter corresponding to the sample head motion parameter;
determining sample style control information of an eyeball corresponding to the sample head movement parameters;
inputting the sample head motion parameters and sample style control information of the eyeballs corresponding to the sample head motion parameters into an initial deep neural network;
acquiring a reference eyeball movement parameter output by the initial deep neural network;
calculating loss values of the reference eyeball movement parameter and the standard eyeball movement parameter;
and when the loss value is greater than a preset threshold value, adjusting the network parameters of the initial deep neural network until the loss value is less than or equal to the preset threshold value, and finishing the training of the initial deep neural network.
2. The method of claim 1, wherein inputting the head motion parameters and the style control information into a pre-trained deep neural network comprises:
randomly sampling hidden variables; and inputting the head motion parameters, the hidden variables and the style control information into a deep neural network obtained by pre-training.
3. The method of claim 1, wherein the natural eye movement parameters comprise:
the respective nutation and precession angles of the eyes;
the style control information includes a mean of the nutation angles and a mean of the precession angles.
4. The method of claim 1, further comprising:
determining a running track parameter of a preset eyeball animation according to the natural eyeball motion parameter;
and controlling the display of the preset eyeball animation according to the running track parameters.
5. An automatic generation device for multi-style natural eye movement, comprising:
the extraction module is used for extracting head motion parameters in the face motion video stream;
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring style control information, and the style control information comprises the intensity of movement and the change frequency of a viewpoint;
the first input module is used for inputting the head motion parameters and the style control information into a deep neural network obtained by pre-training;
the second acquisition module is used for acquiring natural eyeball motion parameters of corresponding styles output by the deep neural network obtained by pre-training;
wherein the apparatus further comprises:
the third acquisition module is used for acquiring the sample head motion parameters and the standard eyeball motion parameters corresponding to the sample head motion parameters;
the determining module is used for determining sample style control information of eyeballs corresponding to the sample head motion parameters;
the second input module is used for inputting the sample head motion parameters and sample style control information of eyeballs corresponding to the sample head motion parameters into an initial deep neural network;
the fourth acquisition module is used for acquiring the reference eyeball motion parameters output by the initial deep neural network;
the calculation module is used for calculating the loss values of the reference eyeball motion parameters and the standard eyeball motion parameters;
and the adjusting module is used for adjusting the network parameters of the initial deep neural network when the loss value is greater than a preset threshold value until the loss value is less than or equal to the preset threshold value, and then finishing the training of the initial deep neural network.
6. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the method according to any one of claims 1-4 when executing the computer program.
7. A non-transitory computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the method of any one of claims 1-4.
CN202110113677.6A 2021-01-27 2021-01-27 Automatic generation method and device for multi-style natural eyeball movement Active CN112799510B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110113677.6A CN112799510B (en) 2021-01-27 2021-01-27 Automatic generation method and device for multi-style natural eyeball movement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110113677.6A CN112799510B (en) 2021-01-27 2021-01-27 Automatic generation method and device for multi-style natural eyeball movement

Publications (2)

Publication Number Publication Date
CN112799510A (en) 2021-05-14
CN112799510B (en) 2022-09-09

Family

ID=75812242

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110113677.6A Active CN112799510B (en) 2021-01-27 2021-01-27 Automatic generation method and device for multi-style natural eyeball movement

Country Status (1)

Country Link
CN (1) CN112799510B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11225967A (en) * 1998-02-12 1999-08-24 Morita Mfg Co Ltd Oculogyration analysis method, oculogyration display and diagnosis device, and recording medium in which private diagnosis file is recorded
CN107977605A (en) * 2017-11-08 2018-05-01 清华大学 Ocular Boundary characteristic extraction method and device based on deep learning
CN109359512A (en) * 2018-08-28 2019-02-19 深圳壹账通智能科技有限公司 Eyeball position method for tracing, device, terminal and computer readable storage medium
CN110807364A (en) * 2019-09-27 2020-02-18 中国科学院计算技术研究所 Modeling and capturing method and system for three-dimensional face and eyeball motion
CN110780742A (en) * 2019-10-31 2020-02-11 Oppo广东移动通信有限公司 Eyeball tracking processing method and related device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Bionic model for head-eye coordinated movement control; Mao Xiaobo et al.; Journal of Biomedical Engineering (《生物医学工程学杂志》); 2011-10-31; Vol. 28, No. 05; pp. 895-900 *

Also Published As

Publication number Publication date
CN112799510A (en) 2021-05-14

Similar Documents

Publication Publication Date Title
US11055828B2 (en) Video inpainting with deep internal learning
Song et al. Geometry-aware face completion and editing
US10600171B2 (en) Image-blending via alignment or photometric adjustments computed by a neural network
CN111542861A (en) System and method for rendering an avatar using a depth appearance model
US20180025749A1 (en) Automatic generation of semantic-based cinemagraphs
Cheng et al. Adaptively-realistic image generation from stroke and sketch with diffusion model
US10515456B2 (en) Synthesizing hair features in image content based on orientation data from user guidance
CN108665494A (en) Depth of field real-time rendering method based on quick guiding filtering
CN112221145B (en) Game face model generation method and device, storage medium and electronic equipment
Wang et al. Rewriting geometric rules of a GAN.
CN110796593A (en) Image processing method, device, medium and electronic equipment based on artificial intelligence
CN113033442B (en) StyleGAN-based high-freedom face driving method and device
DE102022106057A1 (en) AUTHENTICATOR-INTEGRATED GENERATIVE ADVERSARIAL NETWORK (GAN) FOR SECURE DEEPFAKE GENERATION
Dogan et al. Semi-supervised image attribute editing using generative adversarial networks
CN115689869A (en) Video makeup migration method and system
GB2612881A (en) Techniques for re-aging faces in images and video frames
CN114241102A (en) Method and device for reconstructing and editing human face details based on parameterized model
CN113870315A (en) Training method of action migration model and action migration method
Gai et al. Artistic low poly rendering for images
CN115100334A (en) Image edge drawing and animation method, device and storage medium
KR20230110787A (en) Methods and systems for forming personalized 3D head and face models
CN112799510B (en) Automatic generation method and device for multi-style natural eyeball movement
KR20230060726A (en) Method for providing face synthesis service and apparatus for same
CN113688882A (en) Training method and device of memory-enhanced continuous learning neural network model
CN116091705A (en) Variable topology dynamic scene reconstruction and editing method and device based on nerve radiation field

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant