CN115797851B - Cartoon video processing method and system

Info

Publication number
CN115797851B
Authority
CN
China
Prior art keywords
data
cartoon
character
video
frame image
Prior art date
Legal status
Active
Application number
CN202310084827.4A
Other languages
Chinese (zh)
Other versions
CN115797851A (en)
Inventor
江学如
江峰
Current Assignee
Anhui Miyu Technology Co ltd
Original Assignee
Anhui Miyu Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Anhui Miyu Technology Co ltd
Priority to CN202310084827.4A
Publication of CN115797851A
Application granted
Publication of CN115797851B

Abstract

The invention relates to the technical field of cartoon video processing, and discloses a cartoon video processing method and a cartoon video processing system. The cartoon video processing method comprises the following steps: extracting action data of a first cartoon object and of a second cartoon object; extracting key character pose data based on the action data of the first cartoon object and on the action data of the second cartoon object respectively; generating fragment data based on the key character pose data; matching the fragment data of the second cartoon video with the fragment data of the first cartoon video; generating an original frame image set for the fragment data of the first cartoon video; generating a recommended frame image set for the fragment data of the second cartoon video; and mapping the original frame image set to the recommended frame image set and recommending both to the user together. According to the invention, continuous character action images that can serve as 2D drawing references are mined from 2D cartoon video, providing references for drawing personnel and reducing the demands on their imagination and experience.

Description

Cartoon video processing method and system
Technical Field
The invention relates to the technical field of cartoon video processing, and in particular to a cartoon video processing method and system.
Background
The "three-render-two" (rendering 3D as 2D) animation production technology renders a finished 3D model into a 2D picture effect; its purpose is to reduce the production cost of 2D animation. Three-render-two is an art style of non-photorealistic rendering: the rendered result looks 2D, but it still belongs essentially to the 3D category and cannot depart from the actual data of the 3D model, so the super-realistic artistry of animation cannot be expressed. Although this problem can be solved by hand-drawing 2D keyframes for part of the footage after the three-render-two rendering, the hand-drawn portion is produced exactly as in traditional 2D animation and places high demands on the imagination and experience of the drawing personnel.
Disclosure of Invention
The invention provides a cartoon video processing method, which solves the technical problem in the related art that hand-drawing 2D keyframes after three-render-two rendering places high demands on the imagination and experience of drawing personnel.
According to one aspect of the present invention, there is provided a cartoon video processing method including the steps of:
step 301, extracting a moving first cartoon object and action data of the first cartoon object from a first cartoon video; and extracting a moving second cartoon object and action data of the second cartoon object from a second cartoon video;
step 302, extracting key figure gesture data based on the action data of the first cartoon object and the action data of the second cartoon object respectively;
step 303, generating fragment data based on the key character gesture data;
step 304, matching the fragment data of the second cartoon video with the fragment data of the first cartoon video;
step 305, extracting frame images of the first cartoon video corresponding to the key character gesture data in the fragment data for the fragment data of the first cartoon video to generate an original frame image set;
extracting frame images of the second cartoon video corresponding to the key character gesture data in the fragment data for the fragment data of the second cartoon video to generate a recommended frame image set;
step 306, mapping the original frame image set to the recommended frame image set, and recommending the original frame image set and the recommended frame image set having the mapping relation to the user together.
Wherein the method for extracting the key character pose data in step 302 comprises:
step 101, selecting the character gesture data with the earliest time node as the original character gesture data;
step 102, starting from the original character pose data, selecting character pose data one by one in time order as marked character pose data, and stopping the selection when the first distance between the currently selected marked character pose data and the original character pose data is larger than a set first distance threshold;
or stopping the selection when the difference between the time node corresponding to the currently selected marked character pose data and the time node of the previous character pose data is larger than a first time threshold, and recording this marked character pose data as breakpoint character pose data;
step 103, recording the marked character pose data when the selection is stopped in step 102 as key character pose data, and updating the marked character pose data when the selection is stopped in step 102 as new original character pose data;
step 104, steps 102 and 103 are iteratively performed until all of the character pose data is selected as the marker character pose data.
Further, the method for generating fragment data comprises the following steps:
step 201, traversing back from the character gesture data with the earliest time node to find the breakpoint character gesture data, and generating a gesture data set as fragment data from the key character gesture data between the character gesture data with the earliest time node and the breakpoint character gesture data, wherein the gesture data set comprises the character gesture data with the earliest time node and the breakpoint character gesture data;
step 202, traversing backwards from the breakpoint character pose data at which the previous traversal terminated to search for breakpoint character pose data, terminating the traversal once the first breakpoint character pose data is found, and generating, as fragment data, a pose data set from the key character pose data between the time node of the breakpoint character pose data at which the previous traversal terminated and the time node of the newly found breakpoint character pose data, wherein the pose data set includes the breakpoint character pose data at which the previous traversal terminated;
step 203, iteratively executing step 202 until all breakpoint character pose data is traversed.
Further, the method for matching the fragment data of the first cartoon video with the fragment data of the second cartoon video comprises the following steps:
calculating a first similarity of the fragment data of the first cartoon video and the fragment data of the second cartoon video;
and matching, with the fragment data of the first cartoon video, those fragment data of the second cartoon video whose first similarity with the fragment data of the first cartoon video is larger than the set first similarity threshold.
Further, the method for calculating the first similarity comprises the following steps:
given a weighted bipartite graph G = (X, Y), where X = {x1, x2, …, xn} and Y = {y1, y2, …, ym}; the vertices of the set X respectively represent the key character pose data of the fragment data of the first cartoon video, and the vertices of the set Y respectively represent the key character pose data of the fragment data of the second cartoon video; the maximum-weight perfect matching of the weighted bipartite graph G is solved by the Kuhn-Munkres algorithm;
and obtaining the weight sum after matching based on the maximum weight perfect matching as the first similarity.
Further, the initial top-scalar value when solving for a perfect match of the maximum weights of the weighted bipartite graph G is determined as follows:
the top label of the vertex of the set Y is assigned as 0, and the top label of the vertex of the set X is assigned as the maximum value of the second similarity of the key figure gesture data mapped by the vertex and the key figure gesture data mapped by the vertex of the set Y;
the second similarity S2 is calculated by the following formula:
[formula image: Figure GDA0004137363840000041]
where Ai represents the angle of the i-th limb of the character skeleton corresponding to the key character pose data mapped by the vertices of the set X, Bi represents the angle of the i-th limb of the character skeleton corresponding to the key character pose data mapped by the vertices of the set Y, and n is the number of limbs of the character skeleton.
According to an aspect of the present invention, there is provided a cartoon video processing system for executing the above-described cartoon video processing method, the cartoon video processing system comprising:
the first cartoon object extraction module is used for extracting a moving first cartoon object and action data of the first cartoon object from the first cartoon video;
the second cartoon object extraction module is used for extracting a second frame image from the second cartoon video; then, a second cartoon object and action data of the second cartoon object are obtained from the second frame image;
the key gesture extraction module is used for respectively extracting key character gesture data based on the action data of the first cartoon object and the action data of the second cartoon object;
a character segment data generation module that generates segment data based on the key character pose data;
the fragment data matching module is used for calculating first similarity of fragment data of the first cartoon video and fragment data of the second cartoon video;
the fragment data screening module is used for screening and matching the fragment data of the first cartoon video and the fragment data of the second cartoon video;
the screening conditions were: only the fragment data of the second cartoon video, the first similarity of which with the fragment data of the first cartoon video is larger than a set first similarity threshold value, are reserved;
the first frame image extraction module is used for extracting frame images of a second cartoon video corresponding to the key character gesture data in the fragment data based on the fragment data reserved by the fragment data screening module, and generating a recommended frame image set by the frame images corresponding to the fragment data;
the second frame image extraction module is used for extracting frame images of the first cartoon video corresponding to the key character gesture data in the fragment data based on the fragment data of the first cartoon video, and generating an original frame image set by the frame images corresponding to the fragment data;
and the recommending module is used for mapping the original frame image set and the recommended frame image set and recommending the original frame image set and the recommended frame image set with the mapping relation to the user together.
Further, the first cartoon object extraction module directly acquires the first cartoon object and action data of the first cartoon object through data in the first cartoon video production process.
Further, the first cartoon object extraction module extracts a first frame image from the first cartoon video, and then obtains the first cartoon object and action data of the first cartoon object from the first frame image.
Further, the first cartoon video is a 3D cartoon video, and the second cartoon video is a 2D cartoon video; the first and second cartoon objects refer to character objects in a cartoon video.
The invention has the beneficial effects that:
the 2D cartoon drawing work is concentrated on the drawing of cartoon characters, character extraction and action data processing are carried out on an original 3D cartoon video, character continuous action images which can be used as 2D drawing references are mined from the 2D cartoon video, references are provided for drawing staff, the drawing staff can adapt to corresponding drawing skills and expression methods through frame images of the referenced 2D cartoon video, and the requirements on imagination and experience of the drawing staff are reduced.
Drawings
FIG. 1 is a block diagram of a cartoon video processing system of the present invention;
FIG. 2 is a flow chart of a method of extracting key character pose data of the present invention;
FIG. 3 is a flow chart of a method of generating fragment data of the present invention;
fig. 4 is a flow chart of a cartoon video processing method of the present invention.
In the figure: the system comprises a first cartoon object extraction module 101, a second cartoon object extraction module 102, a key gesture extraction module 103, a character segment data generation module 104, a segment data matching module 105, a segment data screening module 106, a first frame image extraction module 107, a second frame image extraction module 108 and a recommendation module 109.
Detailed Description
The subject matter described herein will now be discussed with reference to example embodiments. It is to be understood that these embodiments are merely discussed so that those skilled in the art may better understand and implement the subject matter described herein and that changes may be made in the function and arrangement of the elements discussed without departing from the scope of the disclosure herein. Various examples may omit, replace, or add various procedures or components as desired. In addition, features described with respect to some examples may be combined in other examples as well.
Example 1
As shown in fig. 1-3, a cartoon video processing system, comprising:
a first cartoon object extraction module 101, configured to extract a moving first cartoon object and motion data of the first cartoon object from a first cartoon video;
in one embodiment of the invention, the first cartoon video is a 3D cartoon video for which the original modeling models, skeleton data and motion trajectory data exist; the first cartoon object and its action data can therefore generally be obtained directly from the data generated during production of the first cartoon video;
in one embodiment of the invention, the motion of the character skeleton is described by taking the skeleton as the base and taking the displacement and rotation of each skeleton node relative to that base as the parameters of the node; for example, the parameters of a skeleton node can be represented by Euler angles or quaternions.
The character pose of the first cartoon object obtained directly in this way is a three-dimensional representation; therefore, in order to unify it with the character pose representation of the second cartoon object, new two-dimensional character pose data need to be obtained by projecting the original character pose of the first cartoon object onto a two-dimensional plane, as sketched below;
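By way of illustration only, the following is a minimal sketch of such a projection, assuming the 3D pose is available as per-node coordinates in camera space and that a simple orthographic projection (dropping the depth axis) is sufficient; the function names and the limb definition are assumptions introduced here, not terms from the patent.

```python
# Minimal sketch: orthographic projection of 3D skeleton nodes onto the image
# plane, plus per-limb angles relative to the x coordinate axis (in degrees).
# "limbs" is an assumed list of (parent_node, child_node) index pairs.
import numpy as np

def project_pose_to_2d(joints_3d: np.ndarray) -> np.ndarray:
    """joints_3d: (N, 3) node positions in camera space -> (N, 2) image-plane positions."""
    return joints_3d[:, :2].copy()

def limb_angles(joints_2d: np.ndarray, limbs: list[tuple[int, int]]) -> np.ndarray:
    """Angle of every limb (parent -> child) relative to the x axis, in degrees."""
    angles = []
    for parent, child in limbs:
        dx, dy = joints_2d[child] - joints_2d[parent]
        angles.append(np.degrees(np.arctan2(dy, dx)))
    return np.asarray(angles)
```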
in another embodiment of the present invention, the first cartoon object extraction module 101 extracts a first frame image from a first cartoon video, and then acquires a first cartoon object and motion data of the first cartoon object from the first frame image.
A second cartoon object extraction module 102 for extracting a second frame image from a second cartoon video; then, a second cartoon object and action data of the second cartoon object are obtained from the second frame image;
the first cartoon video is a 3D cartoon video, and the second cartoon video is a 2D cartoon video; the first and second cartoon objects generally refer to character objects in a cartoon video;
identifying persons and person poses through image data processing is a conventional technical means in the field of image processing, and may optionally, but not exclusively, be implemented with the following algorithms: the OpenPose algorithm, deep-learning-based pose estimation algorithms, and the Mask R-CNN algorithm.
In summary, the action data of the first cartoon object and of the second cartoon object should each be character pose data comprising a plurality of poses with their corresponding time nodes; an illustrative data layout is sketched below;
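Purely as an illustration of the data layout implied above, the later sketches in this description assume a pose record such as the following; the class and field names are hypothetical and do not come from the patent.

```python
# Assumed per-time-node pose record: a time node, the 2D skeleton node
# coordinates, and the per-limb angles used for similarity computation.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class CharacterPose:
    time_node: float                                          # e.g. milliseconds or frame index
    joint_xy: List[Tuple[float, float]]                       # 2D coordinates of skeleton nodes
    limb_angles: List[float] = field(default_factory=list)    # angle of each limb vs. the x axis

# Action data of one cartoon object: a time-ordered list of such poses.
ActionData = List[CharacterPose]
```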
a key gesture extraction module 103 that extracts key character gesture data based on the motion data of the first cartoon object and the motion data of the second cartoon object, respectively;
the method for extracting the key character gesture data comprises the following steps:
step 101, selecting the character gesture data with the earliest time node as the original character gesture data;
step 102, starting from the original character pose data, selecting character pose data one by one in time order as marked character pose data, and stopping the selection when the first distance between the currently selected marked character pose data and the original character pose data is larger than a set first distance threshold;
or stopping the selection when the difference between the time node corresponding to the currently selected marked character pose data and the time node of the previous character pose data is larger than a first time threshold, and recording this marked character pose data as breakpoint character pose data;
the first time threshold may be in ms or in frames, in which a cartoon object may not appear continuously, in a cartoon video, in which the cartoon object appears in one or more segments, and the first time threshold is set to determine whether the segment in which the cartoon object is located is ended.
Step 103, recording the marked character pose data when the selection is stopped in step 102 as key character pose data, and updating the marked character pose data when the selection is stopped in step 102 as new original character pose data;
step 104, steps 102 and 103 are performed iteratively until all of the character pose data has been selected as marked character pose data; a sketch of this selection procedure is given below.
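The following sketch illustrates one possible reading of steps 101 to 104, using the assumed CharacterPose record above; the patent does not specify how the "first distance" between two poses is computed, so the mean absolute difference of the 2D joint coordinates is used here only as a placeholder.

```python
# Sketch of key-pose selection (steps 101-104). The pose distance is an assumed
# placeholder; the thresholds correspond to the first distance threshold and
# first time threshold described above.
import numpy as np

def pose_distance(a: CharacterPose, b: CharacterPose) -> float:
    return float(np.mean(np.abs(np.asarray(a.joint_xy) - np.asarray(b.joint_xy))))

def extract_key_poses(action_data: ActionData,
                      first_distance_threshold: float,
                      first_time_threshold: float):
    """Return (key_poses, breakpoint_poses) selected from time-ordered action data."""
    key_poses, breakpoints = [], []
    if not action_data:
        return key_poses, breakpoints
    origin = action_data[0]                                   # step 101: earliest pose
    for prev, current in zip(action_data, action_data[1:]):   # step 102: scan in time order
        is_breakpoint = (current.time_node - prev.time_node) > first_time_threshold
        too_far = pose_distance(current, origin) > first_distance_threshold
        if too_far or is_breakpoint:
            key_poses.append(current)                         # step 103: record key pose
            if is_breakpoint:
                breakpoints.append(current)
            origin = current                                  # step 103: new origin pose
    return key_poses, breakpoints                             # step 104: all poses scanned
```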
A character segment data generation module 104 that generates segment data based on the key character pose data;
the extraction of the key figure gesture data of the action data of the first cartoon object and the extraction of the key figure gesture data of the action data of the second cartoon object are respectively carried out;
the method for generating the fragment data comprises the following steps:
step 201, traversing back from the character gesture data with the earliest time node to find the breakpoint character gesture data, and generating a gesture data set as fragment data from the key character gesture data between the character gesture data with the earliest time node and the breakpoint character gesture data, wherein the gesture data set comprises the character gesture data with the earliest time node and the breakpoint character gesture data;
step 202, traversing backwards from the breakpoint character pose data at which the previous traversal terminated to search for breakpoint character pose data, terminating the traversal once the first breakpoint character pose data is found, and generating, as fragment data, a pose data set from the key character pose data between the time node of the breakpoint character pose data at which the previous traversal terminated and the time node of the newly found breakpoint character pose data, wherein the pose data set includes the breakpoint character pose data at which the previous traversal terminated;
step 203, iteratively executing step 202 until all of the breakpoint character pose data has been traversed; a sketch of this segmentation procedure is given below.
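The sketch below illustrates one reading of steps 201 to 203: the time-ordered key poses are cut into pose data sets at the breakpoint poses. Whether a breakpoint pose belongs to one or to both adjacent fragments is not entirely unambiguous in the text, so the boundary handling here (the breakpoint both closes a fragment and opens the next) is an assumption.

```python
# Sketch of fragment generation (steps 201-203): split the key poses into
# fragments wherever a breakpoint pose occurs.
def generate_fragments(key_poses: ActionData, breakpoints: ActionData) -> list:
    breakpoint_times = {p.time_node for p in breakpoints}
    fragments, current = [], []
    for pose in key_poses:                        # key poses are assumed time-ordered
        current.append(pose)
        if pose.time_node in breakpoint_times:    # a breakpoint closes the current fragment
            fragments.append(current)
            current = [pose]                      # ...and also opens the next one (assumption)
    if len(current) > 1 or (current and not fragments):
        fragments.append(current)                 # poses after the last breakpoint
    return fragments
```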
A segment data matching module 105, configured to calculate a first similarity between segment data of the first cartoon video and segment data of the second cartoon video;
in one embodiment of the present invention, the method for calculating the first similarity includes:
given a weighted bipartite graph G = (X, Y), where X = {x1, x2, …, xn} and Y = {y1, y2, …, ym}; the vertices of the set X respectively represent the key character pose data of the fragment data of the first cartoon video, and the vertices of the set Y respectively represent the key character pose data of the fragment data of the second cartoon video; the maximum-weight perfect matching of the weighted bipartite graph G is solved by the Kuhn-Munkres algorithm;
the initial top-scalar value when solving for a perfect match of the maximum weights of the weighted bipartite graph G is determined as follows:
the top label of the vertex of the set Y is assigned as 0, and the top label of the vertex of the set X is assigned as the maximum value of the second similarity of the key figure gesture data mapped by the vertex and the key figure gesture data mapped by the vertex of the set Y;
obtaining the matched weight sum based on maximum weight perfect matching and taking the weight sum as a first similarity;
it should be noted that a second similarity needs to be calculated between the key character pose data mapped by each vertex of the set X and the key character pose data mapped by every vertex of the set Y;
the second similarity S2 is calculated by the following formula:
[formula image: Figure GDA0004137363840000111]
where Ai represents the angle of the i-th limb of the character skeleton corresponding to the key character pose data mapped by the vertices of the set X, Bi represents the angle of the i-th limb of the character skeleton corresponding to the key character pose data mapped by the vertices of the set Y, and n is the number of limbs of the character skeleton;
the angle of the limb is the angle relative to the X coordinate axis in the plane.
It should be noted that the numbers of nodes and of limbs of the character skeletons corresponding to the fragment data of the first cartoon video and of the second cartoon video that participate in matching are identical; in other words, the skeleton configuration is the same and the limbs are numbered in the same way. A sketch of this similarity and matching computation is given below;
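The sketch below puts the matching together. The second-similarity formula itself is reproduced in the publication only as an image, so the expression used here (one minus the normalized mean absolute limb-angle difference) is a stand-in assumption that merely uses the same inputs Ai, Bi and n described above; SciPy's linear_sum_assignment is used as an off-the-shelf implementation of the Hungarian method, equivalent to the Kuhn-Munkres matching named in the text.

```python
# Sketch of the first-similarity computation between one fragment of the first
# cartoon video and one fragment of the second cartoon video.
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian / Kuhn-Munkres equivalent

def second_similarity(angles_x: np.ndarray, angles_y: np.ndarray) -> float:
    """Assumed placeholder for the image-only formula: compares the per-limb
    angles A_i and B_i of two key poses over n limbs."""
    diff = np.abs(angles_x - angles_y) % 360.0
    diff = np.minimum(diff, 360.0 - diff)          # wrap differences into [0, 180] degrees
    return float(1.0 - np.mean(diff) / 180.0)      # 1.0 = identical, 0.0 = opposite

def first_similarity(fragment_x: ActionData, fragment_y: ActionData) -> float:
    """Build the weight matrix of pairwise second similarities, solve the
    maximum-weight matching, and return the matched weight sum."""
    weights = np.array([[second_similarity(np.asarray(px.limb_angles),
                                           np.asarray(py.limb_angles))
                         for py in fragment_y] for px in fragment_x])
    rows, cols = linear_sum_assignment(-weights)   # negate to maximize the total weight
    return float(weights[rows, cols].sum())
```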
a segment data screening module 106, configured to screen and match segment data of the first cartoon video and segment data of the second cartoon video;
the screening conditions were: only the fragment data of the second cartoon video, the first similarity of which with the fragment data of the first cartoon video is larger than a set first similarity threshold value, are reserved;
for the fragment data of a first cartoon video, the retained fragment data of the second cartoon video are matched with it; if the number of retained fragment data of the second cartoon video is 0, the N fragment data of the second cartoon video with the largest first similarity may instead be matched with the fragment data of the first cartoon video, where N ≥ 1; this screening rule is sketched below;
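A minimal sketch of this screening rule, including the fallback to the N most similar fragments; the function name and container layout are assumptions.

```python
# Sketch of the screening: keep second-video fragments whose first similarity
# exceeds the threshold; if none qualifies, fall back to the top-N most similar.
def screen_fragments(similarities: list, threshold: float, top_n: int = 1) -> list:
    """similarities[i]: first similarity between the first-video fragment and the
    i-th second-video fragment. Returns the indices of the retained fragments."""
    kept = [i for i, s in enumerate(similarities) if s > threshold]
    if not kept:
        kept = sorted(range(len(similarities)),
                      key=lambda i: similarities[i], reverse=True)[:top_n]
    return kept
```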
a first frame image extraction module 107, which extracts frame images of the second cartoon video corresponding to the key character gesture data in the fragment data based on the fragment data retained by the fragment data screening module 106, and generates a recommended frame image set from the frame images corresponding to one fragment data;
a second frame image extraction module 108, which extracts frame images of the first cartoon video corresponding to the key character gesture data in the fragment data based on the fragment data of the first cartoon video, and generates an original frame image set by the frame images corresponding to the fragment data;
a recommending module 109, configured to map the original frame image set to the recommended frame image set, and to recommend the original frame image set and the recommended frame image set having the mapping relation to the user together, thereby assisting the user in the 2D re-drawing of the first cartoon video.
The mapping relation between the original frame image set and the recommended frame image set is established based on the matching relation between the corresponding fragment data of the first cartoon video and the corresponding fragment data of the second cartoon video.
Although the second cartoon video contains some super-realistic character pose expression, realistic character poses are still the main part. In the above embodiment, matching is performed on the limited key character pose data of the cartoon objects: the realistic character poses of the second cartoon object participate in the matching with the first cartoon object, while the super-realistic character poses of the second cartoon object serve as references to assist the 2D conversion of the first cartoon video.
As shown in fig. 4, based on the above-mentioned cartoon video processing system, the present invention provides a cartoon video processing method, which includes the following steps:
step 301, extracting a moving first cartoon object and action data of the first cartoon object from a first cartoon video; and extracting a moving second cartoon object and action data of the second cartoon object from a second cartoon video;
step 302, extracting key figure gesture data based on the action data of the first cartoon object and the action data of the second cartoon object respectively;
step 303, generating fragment data based on the key character gesture data;
step 304, matching the fragment data of the second cartoon video with the fragment data of the first cartoon video;
the method for matching the fragment data of the second cartoon video with the fragment data of the first cartoon video comprises the following steps:
calculating a first similarity of the fragment data of the first cartoon video and the fragment data of the second cartoon video;
matching, with the fragment data of the first cartoon video, those fragment data of the second cartoon video whose first similarity with the fragment data of the first cartoon video is larger than a set first similarity threshold;
step 305, extracting frame images of the first cartoon video corresponding to the key character gesture data in the fragment data for the fragment data of the first cartoon video to generate an original frame image set;
extracting frame images of the second cartoon video corresponding to the key character gesture data in the fragment data for the fragment data of the second cartoon video to generate a recommended frame image set;
step 306, mapping the original frame image set to the recommended frame image set, and recommending the original frame image set and the recommended frame image set having the mapping relation to the user together. An end-to-end sketch of steps 301 to 306 is given below.
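For orientation only, the following sketch strings the helper functions sketched in the embodiment above into the flow of steps 301 to 306; extract_objects_and_actions is a hypothetical stand-in for the character and pose extraction stage, and frame images are represented here simply by the time nodes of the corresponding key poses.

```python
# End-to-end sketch of steps 301-306 using the illustrative helpers above.
def process_cartoon_videos(first_video, second_video, d_thr, t_thr, sim_thr):
    actions_1 = extract_objects_and_actions(first_video)      # step 301 (assumed helper)
    actions_2 = extract_objects_and_actions(second_video)

    key_1, bp_1 = extract_key_poses(actions_1, d_thr, t_thr)  # step 302
    key_2, bp_2 = extract_key_poses(actions_2, d_thr, t_thr)

    frags_1 = generate_fragments(key_1, bp_1)                 # step 303
    frags_2 = generate_fragments(key_2, bp_2)

    recommendations = []
    for f1 in frags_1:                                        # step 304: match fragments
        sims = [first_similarity(f1, f2) for f2 in frags_2]
        for idx in screen_fragments(sims, sim_thr):
            original_set = [p.time_node for p in f1]          # step 305: original frame set
            recommended_set = [p.time_node for p in frags_2[idx]]  # recommended frame set
            recommendations.append((original_set, recommended_set))  # step 306: mapping
    return recommendations
```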
The embodiments have been described above with reference to examples; however, they are not limited to the specific implementations described, which are merely illustrative and not restrictive. Those of ordinary skill in the art, given the benefit of this disclosure, may derive many other forms that remain within the scope of the embodiments.

Claims (9)

1. The cartoon video processing method is characterized by comprising the following steps of:
step 301, extracting a moving first cartoon object and action data of the first cartoon object from a first cartoon video; and extracting a moving second cartoon object and action data of the second cartoon object from a second cartoon video;
step 302, extracting key figure gesture data based on the action data of the first cartoon object and the action data of the second cartoon object respectively;
step 303, generating fragment data based on the key character gesture data;
step 304, matching the fragment data of the second cartoon video with the fragment data of the first cartoon video;
step 305, extracting frame images of the first cartoon video corresponding to the key character gesture data in the fragment data for the fragment data of the first cartoon video to generate an original frame image set;
extracting frame images of the second cartoon video corresponding to the key character gesture data in the fragment data for the fragment data of the second cartoon video to generate a recommended frame image set;
step 306, mapping the original frame image set and the recommended frame image set, and recommending the original frame image set and the recommended frame image set with mapping relation to the user together;
wherein the method for extracting the key character pose data in step 302 comprises:
step 101, selecting the character gesture data with the earliest time node as the original character gesture data;
step 102, starting from the original character pose data, selecting character pose data one by one in time order as marked character pose data, and stopping the selection when the first distance between the currently selected marked character pose data and the original character pose data is larger than a set first distance threshold;
or stopping the selection when the difference between the time node corresponding to the currently selected marked character pose data and the time node of the previous character pose data is larger than a first time threshold, and recording this marked character pose data as breakpoint character pose data;
step 103, recording the marked character pose data when the selection is stopped in step 102 as key character pose data, and updating the marked character pose data when the selection is stopped in step 102 as new original character pose data;
step 104, steps 102 and 103 are iteratively performed until all of the character pose data is selected as the marker character pose data.
2. The cartoon video processing method of claim 1, wherein the method of generating fragment data comprises:
step 201, traversing back from the character gesture data with the earliest time node to find the breakpoint character gesture data, and generating a gesture data set as fragment data from the key character gesture data between the character gesture data with the earliest time node and the breakpoint character gesture data, wherein the gesture data set comprises the character gesture data with the earliest time node and the breakpoint character gesture data;
step 202, traversing backwards from the breakpoint character pose data at which the previous traversal terminated to search for breakpoint character pose data, terminating the traversal once the first breakpoint character pose data is found, and generating, as fragment data, a pose data set from the key character pose data between the time node of the breakpoint character pose data at which the previous traversal terminated and the time node of the newly found breakpoint character pose data, wherein the pose data set includes the breakpoint character pose data at which the previous traversal terminated;
step 203, iteratively executing step 202 until all breakpoint character pose data is traversed.
3. The cartoon video processing method of claim 1, wherein the method of matching the fragment data of the first cartoon video with the fragment data of the second cartoon video comprises:
calculating a first similarity of the fragment data of the first cartoon video and the fragment data of the second cartoon video;
and matching, with the fragment data of the first cartoon video, those fragment data of the second cartoon video whose first similarity with the fragment data of the first cartoon video is larger than the set first similarity threshold.
4. The method for processing a cartoon video of claim 3 wherein the method for calculating the first similarity comprises:
given a weighted bipartite graph G = (X, Y), where X = {x1, x2, …, xn} and Y = {y1, y2, …, ym}; the vertices of the set X respectively represent the key character pose data of the fragment data of the first cartoon video, and the vertices of the set Y respectively represent the key character pose data of the fragment data of the second cartoon video; and the maximum-weight perfect matching of the weighted bipartite graph G is solved by the Kuhn-Munkres algorithm;
and obtaining the weight sum after matching based on the maximum weight perfect matching as the first similarity.
5. The method for processing a cartoon video according to claim 4, wherein the initial top label value when the maximum weights of the weighted bipartite graph G are perfectly matched is determined as follows:
the top label of the vertex of the set Y is assigned as 0, and the top label of the vertex of the set X is assigned as the maximum value of the second similarity of the key figure gesture data mapped by the vertex and the key figure gesture data mapped by the vertex of the set Y;
the second similarity S2 is calculated by the following formula:
[formula image: Figure QLYQS_3]
where Ai represents the angle of the i-th limb of the character skeleton corresponding to the key character pose data mapped by the vertices of the set X, Bi represents the angle of the i-th limb of the character skeleton corresponding to the key character pose data mapped by the vertices of the set Y, and n is the number of limbs of the character skeleton.
6. A cartoon video processing system for performing a cartoon video processing method according to any one of claims 1-5, the cartoon video processing system comprising:
the first cartoon object extraction module is used for extracting a moving first cartoon object and action data of the first cartoon object from the first cartoon video;
the second cartoon object extraction module is used for extracting a second frame image from the second cartoon video; then, a second cartoon object and action data of the second cartoon object are obtained from the second frame image;
the key gesture extraction module is used for respectively extracting key character gesture data based on the action data of the first cartoon object and the action data of the second cartoon object;
a character segment data generation module that generates segment data based on the key character pose data;
the fragment data matching module is used for calculating first similarity of fragment data of the first cartoon video and fragment data of the second cartoon video;
the fragment data screening module is used for screening and matching the fragment data of the first cartoon video and the fragment data of the second cartoon video;
the screening conditions were: only the fragment data of the second cartoon video, the first similarity of which with the fragment data of the first cartoon video is larger than a set first similarity threshold value, are reserved;
the first frame image extraction module is used for extracting frame images of a second cartoon video corresponding to the key character gesture data in the fragment data based on the fragment data reserved by the fragment data screening module, and generating a recommended frame image set by the frame images corresponding to the fragment data;
the second frame image extraction module is used for extracting frame images of the first cartoon video corresponding to the key character gesture data in the fragment data based on the fragment data of the first cartoon video, and generating an original frame image set by the frame images corresponding to the fragment data;
and the recommending module is used for mapping the original frame image set and the recommended frame image set and recommending the original frame image set and the recommended frame image set with the mapping relation to the user together.
7. The system of claim 6, wherein the first cartoon object extraction module directly obtains the first cartoon object and the action data of the first cartoon object through data in the first cartoon video production process.
8. The system of claim 6, wherein the first cartoon object extraction module extracts the first frame image from the first cartoon video and then obtains the first cartoon object and the motion data of the first cartoon object from the first frame image.
9. The system of claim 6, wherein the first animation video is a 3D animation video and the second animation video is a 2D animation video; the first and second cartoon objects refer to character objects in a cartoon video.
CN202310084827.4A 2023-02-09 2023-02-09 Cartoon video processing method and system Active CN115797851B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310084827.4A CN115797851B (en) 2023-02-09 2023-02-09 Cartoon video processing method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310084827.4A CN115797851B (en) 2023-02-09 2023-02-09 Cartoon video processing method and system

Publications (2)

Publication Number Publication Date
CN115797851A CN115797851A (en) 2023-03-14
CN115797851B true CN115797851B (en) 2023-05-05

Family

ID=85430572

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310084827.4A Active CN115797851B (en) 2023-02-09 2023-02-09 Cartoon video processing method and system

Country Status (1)

Country Link
CN (1) CN115797851B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116309998A (en) * 2023-03-15 2023-06-23 杭州若夕企业管理有限公司 Image processing system, method and medium
CN116310015A (en) * 2023-03-15 2023-06-23 杭州若夕企业管理有限公司 Computer system, method and medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113205043A (en) * 2021-04-30 2021-08-03 武汉大学 Video sequence two-dimensional attitude estimation method based on reinforcement learning
US11521341B1 (en) * 2021-06-21 2022-12-06 Lemon Inc. Animation effect attachment based on audio characteristics

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110245638A (en) * 2019-06-20 2019-09-17 北京百度网讯科技有限公司 Video generation method and device
CN110555408B (en) * 2019-09-03 2023-07-28 深圳龙岗智能视听研究院 Single-camera real-time three-dimensional human body posture detection method based on self-adaptive mapping relation
CN110827378B (en) * 2019-10-31 2023-06-09 北京字节跳动网络技术有限公司 Virtual image generation method, device, terminal and storage medium
US11494964B2 (en) * 2021-04-02 2022-11-08 Sony Interactive Entertainment LLC 2D/3D tracking and camera/animation plug-ins
CN115457176A (en) * 2022-09-23 2022-12-09 北京奇艺世纪科技有限公司 Image generation method and device, electronic equipment and storage medium
CN115457448B (en) * 2022-11-09 2023-01-31 安徽米娱科技有限公司 Intelligent extraction system for video key frames

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113205043A (en) * 2021-04-30 2021-08-03 武汉大学 Video sequence two-dimensional attitude estimation method based on reinforcement learning
US11521341B1 (en) * 2021-06-21 2022-12-06 Lemon Inc. Animation effect attachment based on audio characteristics

Also Published As

Publication number Publication date
CN115797851A (en) 2023-03-14

Similar Documents

Publication Publication Date Title
CN115797851B (en) Cartoon video processing method and system
EP3602494B1 (en) Robust mesh tracking and fusion by using part-based key frames and priori model
CN112150638A (en) Virtual object image synthesis method and device, electronic equipment and storage medium
US20200234482A1 (en) Systems and methods for photorealistic real-time portrait animation
KR101794731B1 (en) Method and device for deforming a template model to create animation of 3D character from a 2D character image
CN107657664B (en) Image optimization method and device after face expression synthesis, storage medium and computer equipment
CN104008564A (en) Human face expression cloning method
CN108388882A (en) Based on the gesture identification method that the overall situation-part is multi-modal RGB-D
US11282257B2 (en) Pose selection and animation of characters using video data and training techniques
CN109242950A (en) Multi-angle of view human body dynamic three-dimensional reconstruction method under more close interaction scenarios of people
CN113924600A (en) Real-time body animation based on single image
CN112257657B (en) Face image fusion method and device, storage medium and electronic equipment
WO2021063271A1 (en) Human body model reconstruction method and reconstruction system, and storage medium
JP2002269580A (en) Animation creating system
Reinert et al. Animated 3D creatures from single-view video by skeletal sketching.
Huang et al. Object-occluded human shape and pose estimation with probabilistic latent consistency
Jung et al. Learning free-form deformation for 3D face reconstruction from in-the-wild images
Jain et al. Leveraging the talent of hand animators to create three-dimensional animation
CN115496864B (en) Model construction method, model reconstruction device, electronic equipment and storage medium
WO2023160074A1 (en) Image generation method and apparatus, electronic device, and storage medium
CN115222895B (en) Image generation method, device, equipment and storage medium
CN113763536A (en) Three-dimensional reconstruction method based on RGB image
CN115953516B (en) Interactive animation production platform based on motion capture technology
Li et al. Ecnet: Effective controllable text-to-image diffusion models
Divya Udayan et al. Constrained procedural modeling of real buildings from single facade layout

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Room 1610, Building C, Tuoji City Plaza, No. 683 Changjiang West Road, High tech Zone, Hefei City, Anhui Province, 230000

Patentee after: Anhui Miyu Technology Co.,Ltd.

Country or region after: China

Address before: Room 1210-1211, Tower C, Tuoji City Plaza, No. 683, Changjiang West Road, High tech Zone, Hefei City, Anhui Province, 230000

Patentee before: Anhui Miyu Technology Co.,Ltd.

Country or region before: China