CN111105484A - Paperless 2D (two-dimensional) string frame optimization method - Google Patents

Paperless 2D (two-dimensional) string frame optimization method

Info

Publication number
CN111105484A
Authority
CN
China
Prior art keywords
model
animation
data information
frame
fusion model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911220980.5A
Other languages
Chinese (zh)
Other versions
CN111105484B (en)
Inventor
熊可
陈睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Shimei Jingdian Film Co Ltd
Original Assignee
Beijing Shimei Jingdian Film Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Shimei Jingdian Film Co Ltd
Priority to CN201911220980.5A
Publication of CN111105484A
Application granted
Publication of CN111105484B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00: Animation
    • G06T13/80: 2D [Two Dimensional] animation, e.g. using sprites
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00: Animation
    • G06T13/20: 3D [Three Dimensional] animation
    • G06T13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a paperless 2D string frame optimization method, which comprises the following steps: S1: constructing a 2D model by a preset method; S2: performing frame-stringing processing on the 2D model, a 3D scene model and a 3D character model by a preset method to obtain a fusion model; S3: performing real-time 3D rendering on the fusion model by a preset method to obtain a preliminary 2D animation; S4: performing post-synthesis processing on the preliminary 2D animation by a preset method to obtain the final 2D animation. Advantageous effects: the method combines two-dimensional and three-dimensional techniques, makes full use of the two-dimensional and three-dimensional resources in the animation production process, elegantly realizes animation expression between three-dimensional scenes and the plane, and generates animation with more vivid and richer pictures.

Description

Paperless 2D (two-dimensional) string frame optimization method
Technical Field
The invention relates to the technical field of animation production, in particular to a paperless 2D (two-dimensional) string frame optimization method.
Background
As living standards continue to improve, people's forms of leisure and entertainment have become increasingly rich. Animation, as a comprehensive art that fuses numerous artistic genres, is deeply loved by a wide audience, especially children, because of the freedom of its motion and the exaggeration of its imagery.
At present, two-dimensional animation is applied in many fields, including film and television, entertainment, education and advertising. Many animated films and most television animations are produced with two-dimensional animation techniques. Traditional two-dimensional animation is a planar animation produced frame by frame; it offers strong flexibility, rich colors and relatively low cost, and it can be played over networks, on television and on various video-playing carriers. However, existing two-dimensional animation has a single mode of lens expression, so the generated pictures lack a corresponding sense of spatial depth and cannot yield more vivid and rich animation, which greatly affects the quality of the two-dimensional animation.
An effective solution to the problems in the related art has not been proposed yet.
Disclosure of Invention
Technical problem to be solved
Aiming at the defects of the prior art, the invention provides a paperless 2D string frame optimization method which generates animation with vivid and rich pictures and improves the quality of two-dimensional animation, thereby solving the problems described in the background art.
(II) technical scheme
In order to generate animation with vivid and rich pictures and to improve the quality of two-dimensional animation, the invention adopts the following specific technical scheme:
a paperless 2D string frame optimization method comprises the following steps:
S1: constructing a 2D model by a preset method;
S2: performing frame-stringing processing on the 2D model, a 3D scene model and a 3D character model by a preset method to obtain a fusion model;
S3: performing real-time 3D rendering on the fusion model by a preset method to obtain a preliminary 2D animation;
S4: performing post-synthesis processing on the preliminary 2D animation by a preset method to obtain the final 2D animation.
Further, in order to complete the construction of the 2D model, the step S1 of constructing the 2D model by a preset method specifically includes the following step: completing the construction of the 2D animation model with animation software.
Further, in order to improve the light, shadow and stereoscopic impression of the pictures of the 2D animation, the step S2 of performing frame-stringing processing on the 2D model, the 3D scene model and the 3D character model by a preset method to obtain the fusion model specifically includes the following steps:
S21: performing frame-stringing processing on the 2D model and the 3D scene model by a preset method to obtain a preliminary fusion model;
S22: performing frame-stringing processing on the preliminary fusion model and the 3D character model by a preset method to obtain the fusion model.
Further, in order to obtain a 3D scene effect in the 2D model, the step S21 of performing frame-stringing processing on the 2D model and the 3D scene model by a preset method to obtain the preliminary fusion model specifically includes the following steps:
S211: determining a reference feature point of a scene and a reference target area where the scene is located, based on each frame of picture in the 2D model;
S212: starting from the first frame, taking each frame of picture of the 2D model as the current frame, and acquiring a target area of each current frame according to the reference feature point and the reference target area;
S213: extracting target data of each frame of picture in the 2D model according to the target area;
S214: virtually generating 3D scene animation data information combined with each animation node of the target data information;
S215: linking the 3D scene animation data information with the two-dimensional data of the 2D model, and matching the 3D scene animation data information with the two-dimensional data information of the 2D model by a preset method, to obtain the preliminary fusion model.
Further, in order to better match the 3D scene animation data information with the two-dimensional data information, the step S215 further includes the following step: performing transition processing on the joint through edge-tracing (stroke) processing, and processing the color and self-luminous data information in the 3D scene animation data information to weaken its three-dimensional appearance and match it with the two-dimensional data information.
Further, in order to obtain a 3D character effect in the preliminary fusion model, the step S22 of performing frame-stringing processing on the preliminary fusion model and the 3D character model by a preset method to obtain the fusion model specifically includes the following steps:
S221: determining a reference feature point of a character and a reference target area where the character is located, based on each frame of picture in the preliminary fusion model;
S222: starting from the first frame, taking each frame of picture of the preliminary fusion model as the current frame, and acquiring a target area of each current frame according to the reference feature point and the reference target area;
S223: extracting target data of each frame of picture in the preliminary fusion model according to the target area;
S224: virtually generating 3D character animation data information combined with each animation node of the target data information;
S225: linking the 3D character animation data information with the two-dimensional data of the preliminary fusion model, and matching the 3D character animation data information with the two-dimensional data information of the preliminary fusion model by a preset method, to obtain the fusion model.
Further, in order to better match the 3D character animation data information with the two-dimensional data information, the step S225 further includes the following step: performing transition processing on the joint through edge-tracing (stroke) processing, and processing the color and self-luminous data information in the 3D character animation data information to weaken its three-dimensional appearance and match it with the two-dimensional data information.
Further, in order to obtain the preliminary 2D animation and ensure the quality of the 2D animation, the step S3 of performing real-time 3D rendering on the fusion model by a preset method to obtain the preliminary 2D animation specifically includes the following steps:
S31: performing de-photorealization processing on the illumination effect of the fusion model by a preset method;
S32: rendering a black outline slightly larger than the object in the fusion model by a preset method;
S33: disabling back-face detection in the fusion model and drawing the backward-facing surfaces in black;
S34: processing the static shadows and the dynamic shadows in the fusion model separately to obtain the preliminary 2D animation.
Further, in order to ensure that the subsequent 3D rendering proceeds smoothly, the step S3 of performing real-time 3D rendering on the fusion model by a preset method to obtain the preliminary 2D animation further includes the following step: writing a GPU-based image operation program in C++, CG and HLSL on the DirectX11 graphics architecture.
Further, in order to obtain the final high-quality 2D animation, the step S4 of performing post-synthesis processing on the preliminary 2D animation by a preset method to obtain the 2D animation specifically includes the following steps:
S41: editing the preliminary 2D animation by a preset method and producing special effects;
S42: dubbing the special-effect-processed 2D animation to obtain the final 2D animation.
(III) advantageous effects
Compared with the prior art, the paperless 2D string frame optimization method provided by the invention has the following beneficial effects: it combines two-dimensional and three-dimensional techniques, adds three-dimensional scenes and three-dimensional characters to the two-dimensional animation, makes full use of the two-dimensional and three-dimensional resources in the animation production process, elegantly realizes animation expression between solid and plane, generates animation with more vivid and richer pictures, and, through 3D rendering, effectively increases the light, shadow and stereoscopic impression of the pictures in the two-dimensional animation.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings described below relate only to some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of a paperless 2D string frame optimization method according to an embodiment of the present invention.
Detailed Description
For further explanation of the various embodiments, drawings are provided that form a part of this disclosure and of this specification. The drawings illustrate the embodiments and, together with the description, serve to explain their principles of operation, enabling those of ordinary skill in the art to understand the various embodiments and the advantages of the invention. The figures are not drawn to scale, and like reference numerals generally denote like elements.
According to an embodiment of the invention, a paperless 2D string frame optimization method is provided.
The invention is further explained below with reference to the drawings and the detailed description. As shown in fig. 1, a paperless 2D string frame optimization method according to an embodiment of the invention includes the following steps:
S1: constructing a 2D model by a preset method; specifically, in step S1, animation software is used to complete the construction of the 2D animation model through a 2D animation drawing process;
S2: performing frame-stringing processing on the 2D model, the 3D scene model and the 3D character model by a preset method to obtain a fusion model;
Wherein, step S2 specifically includes the following steps:
S21: performing frame-stringing processing on the 2D model and the 3D scene model by a preset method to obtain a preliminary fusion model;
Specifically, step S21 includes the following steps:
S211: determining a reference feature point of a scene and a reference target area where the scene is located, based on each frame of picture in the 2D model; preferably, determining the reference feature point of the scene includes the following steps: calibrating one or more specific points of the scene in each frame of picture in the 2D model as reference feature points, and acquiring the coordinate values of those reference feature points; determining the reference target area where the scene is located includes the following steps: drawing a rectangular area around the scene in each frame of picture in the 2D model as the reference target area, and acquiring the boundary coordinate values of the rectangular area;
S212: starting from the first frame, taking each frame of picture of the 2D model as the current frame, and acquiring a target area of each current frame according to the reference feature point and the reference target area; preferably, acquiring the target area of the current frame picture includes the following steps: first identifying the feature points of the scene in the current frame picture, then calculating the motion deviation of those feature points relative to the reference feature points, and finally obtaining the target area of the current frame picture from the motion deviation and the reference target area, as sketched below;
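The patent does not prescribe an implementation for this tracking step; the following C++ fragment is a minimal sketch of one way steps S211 and S212 could be realized. The structures, the function name TargetAreaForFrame, and the use of the mean per-point offset as the motion deviation are illustrative assumptions, not requirements of the method.

```cpp
#include <vector>

// A 2D point and an axis-aligned rectangle, as used for the reference
// feature points and the reference target area (hypothetical layout).
struct Point2D { float x, y; };
struct Rect    { float left, top, right, bottom; };

// Given the feature points identified in the current frame and the
// calibrated reference feature points, estimate the motion deviation
// as the mean offset and translate the reference target area by it.
Rect TargetAreaForFrame(const std::vector<Point2D>& currentPts,
                        const std::vector<Point2D>& referencePts,
                        const Rect& referenceArea)
{
    float dx = 0.0f, dy = 0.0f;
    const size_t n = currentPts.size();   // assumed equal to referencePts.size()
    for (size_t i = 0; i < n; ++i) {
        dx += currentPts[i].x - referencePts[i].x;
        dy += currentPts[i].y - referencePts[i].y;
    }
    if (n > 0) { dx /= n; dy /= n; }      // mean motion deviation

    // The target area of the current frame is the reference target area
    // shifted by the motion deviation (translation-only assumption).
    return { referenceArea.left  + dx, referenceArea.top    + dy,
             referenceArea.right + dx, referenceArea.bottom + dy };
}
```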
S213: extracting target data of each frame of picture in the 2D model according to the target area;
S214: virtually generating 3D scene animation data information combined with each animation node of the target data information; preferably, the 3D scene animation data information includes the animation data of a cube entity model, the contour-line model animation data of an animation entity, independent model contour-line stroke animation data, and the animation data of an animation entity's view-angle transition, for which a hypothetical layout is sketched below;
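The patent only names these four kinds of data without giving them a concrete layout; the C++ struct below is a purely hypothetical grouping, with a placeholder keyframe-track type, intended only to make the enumeration concrete.

```cpp
#include <vector>

// Placeholder keyframe track: times and flattened per-key values.
struct KeyframeTrack {
    std::vector<float> times;   // keyframe times (seconds)
    std::vector<float> values;  // flattened per-key values
};

// Hypothetical grouping of the four kinds of 3D scene animation data
// named in step S214; member names and types are illustrative only.
struct Scene3DAnimationData {
    KeyframeTrack cubeEntityModel;     // animation data of the cube entity model
    KeyframeTrack entityContourModel;  // contour-line model animation of the animation entity
    KeyframeTrack contourStroke;       // independent model contour-line stroke animation
    KeyframeTrack viewAngleTransition; // animation of the entity's view-angle transition
};
```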
S215: linking the 3D scene animation data information with the two-dimensional data of the 2D model, and matching the 3D scene animation data information with the two-dimensional data information of the 2D model by a preset method, to obtain the preliminary fusion model; preferably, the joint is given a transition through edge-tracing (stroke) processing, and the color and self-luminous data information in the 3D scene animation data information is processed to weaken its three-dimensional appearance and match it with the two-dimensional data information, as in the sketch below.
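One plausible reading of this color and self-luminance processing is a desaturation of the rendered 3D color toward its luminance combined with a damped emissive term. The function below is a sketch under that assumption; the blend weights and the Rec. 709 luma coefficients are choices of this illustration, not values from the patent.

```cpp
#include <algorithm>

struct ColorRGB { float r, g, b; };  // linear color, components in [0, 1]

// Weaken the three-dimensional appearance of a rendered 3D sample so it
// sits closer to the flat 2D artwork: pull the color toward its luminance
// (desaturation) and scale down the self-luminous (emissive) contribution.
ColorRGB FlattenFor2DMatch(ColorRGB base, ColorRGB emissive,
                           float desaturate /*0..1*/, float emissiveScale /*0..1*/)
{
    // Rec. 709 luma as the grey reference for desaturation.
    const float luma = 0.2126f * base.r + 0.7152f * base.g + 0.0722f * base.b;
    ColorRGB out;
    out.r = base.r + desaturate * (luma - base.r) + emissiveScale * emissive.r;
    out.g = base.g + desaturate * (luma - base.g) + emissiveScale * emissive.g;
    out.b = base.b + desaturate * (luma - base.b) + emissiveScale * emissive.b;
    out.r = std::min(out.r, 1.0f);  // clamp after adding the damped emissive
    out.g = std::min(out.g, 1.0f);
    out.b = std::min(out.b, 1.0f);
    return out;
}
```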
S22: performing frame-stringing processing on the preliminary fusion model and the 3D character model by a preset method to obtain the fusion model.
Specifically, step S22 includes the following steps:
S221: determining a reference feature point of a character and a reference target area where the character is located, based on each frame of picture in the preliminary fusion model; preferably, determining the reference feature point of the character includes the following steps: calibrating one or more specific points of the character in each frame of picture in the preliminary fusion model as reference feature points, and acquiring the coordinate values of those reference feature points; determining the reference target area where the character is located includes the following steps: drawing a rectangular area around the character in each frame of picture in the preliminary fusion model as the reference target area, and acquiring the boundary coordinate values of the rectangular area;
S222: starting from the first frame, taking each frame of picture of the preliminary fusion model as the current frame, and acquiring a target area of each current frame according to the reference feature point and the reference target area; preferably, acquiring the target area of the current frame picture includes the following steps: first identifying the feature points of the character in the current frame picture, then calculating the motion deviation of those feature points relative to the reference feature points, and finally obtaining the target area of the current frame picture from the motion deviation and the reference target area;
S223: extracting target data of each frame of picture in the preliminary fusion model according to the target area;
S224: virtually generating 3D character animation data information combined with each animation node of the target data information; preferably, the 3D character animation data information includes the animation data of a cube entity model, the contour-line model animation data of an animation entity, independent model contour-line stroke animation data, and the animation data of an animation entity's view-angle transition;
S225: linking the 3D character animation data information with the two-dimensional data of the preliminary fusion model, and matching the 3D character animation data information with the two-dimensional data information of the preliminary fusion model by a preset method, to obtain the fusion model. Preferably, the joint is given a transition through edge-tracing (stroke) processing, and the color and self-luminous data information in the 3D character animation data information is processed to weaken its three-dimensional appearance and match it with the two-dimensional data information.
S3: performing real-time 3D rendering on the fusion model by a preset method to obtain a preliminary 2D animation; specifically, step S3 further includes writing a GPU-based image operation program in C++, CG and HLSL on the DirectX11 graphics architecture, of which a minimal starting point is sketched below.
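The patent names DirectX11 with C++ as the host language; the sketch below shows the device and immediate-context creation that such a GPU image-operation program would begin with, using only the documented D3D11CreateDevice call from the Windows SDK. The helper name and the single requested feature level are assumptions of this illustration.

```cpp
#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")  // MSVC-style library link

// Create the Direct3D 11 device and immediate context through which a
// GPU-based image operation program would issue its rendering work.
bool CreateD3D11Device(ID3D11Device** device, ID3D11DeviceContext** context)
{
    const D3D_FEATURE_LEVEL requested = D3D_FEATURE_LEVEL_11_0;
    D3D_FEATURE_LEVEL obtained = {};
    HRESULT hr = D3D11CreateDevice(
        nullptr,                   // default adapter
        D3D_DRIVER_TYPE_HARDWARE,  // hardware rasterizer
        nullptr,                   // no software rasterizer module
        0,                         // no creation flags
        &requested, 1,             // feature levels to try
        D3D11_SDK_VERSION,
        device, &obtained, context);
    return SUCCEEDED(hr) && obtained == D3D_FEATURE_LEVEL_11_0;
}
```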
Wherein, the S3 specifically includes the following steps:
S31: performing de-photorealization processing on the illumination effect of the fusion model by a preset method; specifically, in step S31, de-photorealization of the illumination effect includes calculating the value of the conventional light source pixel by pixel and projecting that value onto independent bright and dark regions, as in the sketch below;
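This pixel-by-pixel projection onto discrete bright and dark regions is, in effect, cel (toon) shading. The sketch below quantizes a conventional N·L diffuse term into three bands; the band count and level values are illustrative assumptions, and the input vectors are assumed normalized.

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Conventional per-pixel diffuse term, then projected onto independent
// bright/dark regions instead of a continuous photorealistic gradient.
// Three bands with hand-picked levels are an illustrative assumption.
float CelShade(Vec3 normal, Vec3 lightDir)
{
    float d = std::max(0.0f, Dot(normal, lightDir));  // standard N.L lighting
    if (d > 0.66f) return 1.0f;   // fully lit region
    if (d > 0.33f) return 0.6f;   // mid-tone region
    return 0.25f;                 // dark region
}
```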
S32: rendering a black outline slightly larger than the object in the fusion model by a preset method;
S33: disabling back-face detection in the fusion model and drawing the backward-facing surfaces in black; one common way to realize S32 and S33 together is sketched below;
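Steps S32 and S33 together match the classic inverted-hull outline technique: a copy of the mesh, slightly enlarged along its vertex normals, is drawn in flat black with the culling direction reversed, so only its backward-facing surfaces appear as a black line around the object. The CPU-side sketch below builds the enlarged shell; the vertex layout and the outline width are assumptions of this illustration, not details fixed by the patent.

```cpp
#include <vector>

struct Vertex { float px, py, pz; float nx, ny, nz; };  // position + unit normal

// Build the "inverted hull" used for the black outline: every vertex is
// pushed outward along its normal so the shell is slightly larger than
// the object. Rendered after the base mesh with front-face culling (so
// only the backward surfaces are drawn) in flat black, it appears as a
// black side line around the object.
std::vector<Vertex> BuildOutlineShell(const std::vector<Vertex>& mesh,
                                      float outlineWidth /* e.g. 0.02f, assumed */)
{
    std::vector<Vertex> shell = mesh;
    for (Vertex& v : shell) {
        v.px += v.nx * outlineWidth;
        v.py += v.ny * outlineWidth;
        v.pz += v.nz * outlineWidth;
    }
    return shell;  // draw with reversed culling, in black
}
```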
S34: processing the static shadows and the dynamic shadows in the fusion model separately to obtain the preliminary 2D animation. Specifically, in step S34, this processing includes generating continuously increasing or decreasing scattering values on the surface of the object and cutting the surface of the object with those scattering values, as in the sketch below.
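One way to read the continuously increasing or decreasing scattering values that "cut" the surface is as a ramp compared against a shadow threshold, so each surface point is either fully shadowed or fully lit and the shadow keeps a hard, hand-drawn edge. The sketch below illustrates that reading; the ramp source and the darkening factor are assumptions, not values from the patent.

```cpp
// Cut the object surface into shadowed and lit parts using a scattering
// value that increases or decreases continuously across the surface.
// Comparing the ramp against a threshold yields a hard, cartoon-style
// shadow edge instead of a soft photorealistic falloff.
// 'scatter' in [0,1] is assumed to come from a ramp or noise texture.
float HardShadowCut(float scatter, float shadowAmount /*0..1*/)
{
    // As shadowAmount grows, more of the ramp falls below the cut.
    return (scatter < shadowAmount) ? 0.25f : 1.0f;  // dark vs lit factor
}
```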
S4: performing post-synthesis processing on the preliminary 2D animation by a preset method to obtain the final 2D animation.
Wherein, step S4 specifically includes the following steps:
S41: editing the preliminary 2D animation by a preset method and producing special effects;
S42: dubbing the special-effect-processed 2D animation to obtain the final 2D animation.
In summary, the technical scheme of the invention combines two-dimensional and three-dimensional techniques: three-dimensional scenes and three-dimensional characters are added to the two-dimensional animation, the two-dimensional and three-dimensional resources in the animation production process are fully utilized, animation expression between solid and plane is elegantly realized, animation with more vivid and richer pictures is generated, and the 3D rendering processing effectively increases the light, shadow and stereoscopic impression of the pictures in the two-dimensional animation.
The above embodiments are only preferred embodiments of the present invention and do not limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (10)

1. A paperless 2D string frame optimization method, characterized by comprising the following steps:
S1: constructing a 2D model by a preset method;
S2: performing frame-stringing processing on the 2D model, a 3D scene model and a 3D character model by a preset method to obtain a fusion model;
S3: performing real-time 3D rendering on the fusion model by a preset method to obtain a preliminary 2D animation;
S4: performing post-synthesis processing on the preliminary 2D animation by a preset method to obtain the final 2D animation.
2. The paperless 2D string frame optimization method according to claim 1, wherein the step S1 of constructing the 2D model by a preset method specifically includes the following step: completing the construction of the 2D animation model with animation software.
3. The paperless 2D string frame optimization method according to claim 1, wherein the step S2 of performing frame-stringing processing on the 2D model, the 3D scene model and the 3D character model by a preset method to obtain the fusion model specifically includes the following steps:
S21: performing frame-stringing processing on the 2D model and the 3D scene model by a preset method to obtain a preliminary fusion model;
S22: performing frame-stringing processing on the preliminary fusion model and the 3D character model by a preset method to obtain the fusion model.
4. The paperless 2D string frame optimization method according to claim 3, wherein the step S21 of performing frame-stringing processing on the 2D model and the 3D scene model by a preset method to obtain the preliminary fusion model specifically includes the following steps:
S211: determining a reference feature point of a scene and a reference target area where the scene is located, based on each frame of picture in the 2D model;
S212: starting from the first frame, taking each frame of picture of the 2D model as the current frame, and acquiring a target area of each current frame according to the reference feature point and the reference target area;
S213: extracting target data of each frame of picture in the 2D model according to the target area;
S214: virtually generating 3D scene animation data information combined with each animation node of the target data information;
S215: linking the 3D scene animation data information with the two-dimensional data of the 2D model, and matching the 3D scene animation data information with the two-dimensional data information of the 2D model by a preset method, to obtain the preliminary fusion model.
5. The paperless 2D string frame optimization method according to claim 4, wherein the step S215 of linking the 3D scene animation data information with the two-dimensional data of the 2D model and matching them by a preset method to obtain the preliminary fusion model further includes the following step: performing transition processing on the joint through edge-tracing (stroke) processing, and processing the color and self-luminous data information in the 3D scene animation data information to weaken its three-dimensional appearance and match it with the two-dimensional data information.
6. The paperless 2D string frame optimization method according to claim 3, wherein the step S22 of performing frame-stringing processing on the preliminary fusion model and the 3D character model by a preset method to obtain the fusion model specifically includes the following steps:
S221: determining a reference feature point of a character and a reference target area where the character is located, based on each frame of picture in the preliminary fusion model;
S222: starting from the first frame, taking each frame of picture of the preliminary fusion model as the current frame, and acquiring a target area of each current frame according to the reference feature point and the reference target area;
S223: extracting target data of each frame of picture in the preliminary fusion model according to the target area;
S224: virtually generating 3D character animation data information combined with each animation node of the target data information;
S225: linking the 3D character animation data information with the two-dimensional data of the preliminary fusion model, and matching the 3D character animation data information with the two-dimensional data information of the preliminary fusion model by a preset method, to obtain the fusion model.
7. The paperless 2D string frame optimization method according to claim 6, wherein the step S225 of linking the 3D character animation data information with the two-dimensional data of the preliminary fusion model and matching them by a preset method to obtain the fusion model further includes the following step: performing transition processing on the joint through edge-tracing (stroke) processing, and processing the color and self-luminous data information in the 3D character animation data information to weaken its three-dimensional appearance and match it with the two-dimensional data information.
8. The paperless 2D string frame optimization method according to claim 1, wherein the step S3 of performing real-time 3D rendering on the fusion model by a preset method to obtain the preliminary 2D animation specifically includes the following steps:
S31: performing de-photorealization processing on the illumination effect of the fusion model by a preset method;
S32: rendering a black outline slightly larger than the object in the fusion model by a preset method;
S33: disabling back-face detection in the fusion model and drawing the backward-facing surfaces in black;
S34: processing the static shadows and the dynamic shadows in the fusion model separately to obtain the preliminary 2D animation.
9. The paperless 2D string frame optimization method according to claim 1, wherein the step S3 of performing real-time 3D rendering on the fusion model by a preset method to obtain the preliminary 2D animation further includes the following step: writing a GPU-based image operation program in C++, CG and HLSL on the DirectX11 graphics architecture.
10. The paperless 2D string frame optimization method according to claim 1, wherein the step S4 of performing post-synthesis processing on the preliminary 2D animation by a preset method to obtain the 2D animation specifically includes the following steps:
S41: editing the preliminary 2D animation by a preset method and producing special effects;
S42: dubbing the special-effect-processed 2D animation to obtain the final 2D animation.
CN201911220980.5A 2019-12-03 2019-12-03 Paperless 2D serial frame optimization method Active CN111105484B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911220980.5A CN111105484B (en) 2019-12-03 2019-12-03 Paperless 2D serial frame optimization method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911220980.5A CN111105484B (en) 2019-12-03 2019-12-03 Paperless 2D serial frame optimization method

Publications (2)

Publication Number Publication Date
CN111105484A (en) 2020-05-05
CN111105484B (en) 2023-08-29

Family

ID=70420940

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911220980.5A Active CN111105484B (en) 2019-12-03 2019-12-03 Paperless 2D serial frame optimization method

Country Status (1)

Country Link
CN (1) CN111105484B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120075284A1 (en) * 2010-09-24 2012-03-29 Alec Rivers Computer Method and Apparatus for Rotating 2D Cartoons Using 2.5D Cartoon Models
US20130293537A1 (en) * 2011-01-05 2013-11-07 Cisco Technology Inc. Coordinated 2-Dimensional and 3-Dimensional Graphics Processing
CN106415667A (en) * 2014-04-25 2017-02-15 索尼互动娱乐美国有限责任公司 Computer graphics with enhanced depth effect
CN104268918A (en) * 2014-10-09 2015-01-07 佛山精鹰传媒股份有限公司 Method for blending two-dimensional animation and three-dimensional animation
CN104599305A (en) * 2014-12-22 2015-05-06 浙江大学 Two-dimension and three-dimension combined animation generation method

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112190943A (en) * 2020-11-09 2021-01-08 网易(杭州)网络有限公司 Game display method and device, processor and electronic equipment

Also Published As

Publication number Publication date
CN111105484B (en) 2023-08-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant