CN116309998A - Image processing system, method and medium - Google Patents

Image processing system, method and medium

Info

Publication number
CN116309998A
Authority
CN
China
Prior art keywords
character data
image
dimensional
data
character
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310246476.2A
Other languages
Chinese (zh)
Inventor
张刘灿
袁若芝
丁佳伟
黄鹏
于秋燕
明小凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Ruoxi Enterprise Management Co ltd
Original Assignee
Hangzhou Ruoxi Enterprise Management Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Ruoxi Enterprise Management Co ltd filed Critical Hangzhou Ruoxi Enterprise Management Co ltd
Priority to CN202310246476.2A priority Critical patent/CN116309998A/en
Publication of CN116309998A publication Critical patent/CN116309998A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/80 2D [Two Dimensional] animation, e.g. using sprites
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/48 Matching video sequences
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/49 Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention relates to the technical field of image processing and discloses an image processing system, an image processing method and a medium. The image processing system comprises: a first image processing module for performing two-dimensional processing on a generated three-dimensional animation image to obtain first character data in a two-dimensional space; a second image processing module for extracting a two-dimensional animation image from a two-dimensional animation and processing it to obtain second character data in the two-dimensional space; and a sequence matching module for calculating a third distance between the two-dimensionalities of the character data sequences corresponding to the first character data and the second character data, extracting the first N items of second character data with the smallest third distance as reference character data, and generating a two-dimensional animation atlas as a recommended atlas. The method can reduce the demands placed on hand-drawing staff when producing two-dimensional animation based on motion capture.

Description

Image processing system, method and medium
Technical Field
The present invention relates to the field of image processing technology, and more particularly, to an image processing system.
Background
With the maturing of 3D animation technology, edge processing and similar techniques can be applied to 3D animation images so that they reflect the style of 2D animation. However, 3D animation is generated by driving character-model motion with motion-captured character motion parameters, so it cannot depart from the actual captured character motion. To make the character motion fully reflect the style of 2D animation, some frame images are replaced in combination with traditional hand-drawn 2D animation production. Although this reduces part of the workload compared with traditional 2D animation production, it still cannot avoid the creative demands placed on hand-drawing staff.
Disclosure of Invention
The invention provides an image processing system, which addresses the technical problem in the related art of producing 2D animation based on motion capture.
The present invention provides an image processing system including:
a motion capture module for motion capture to obtain first motion data; a modeling module for creating a three-dimensional character model; a model driving module that drives the three-dimensional character model based on the first motion data to generate a three-dimensional animated image;
the first image processing module is used for performing two-dimensional processing on the generated three-dimensional animation image to obtain first character data in a two-dimensional space; the second image processing module is used for extracting a two-dimensional animation image from the two-dimensional animation and then processing it to obtain second character data in the two-dimensional space; a third image processing module that generates image nodes based on the first character data and the second character data;
the two-dimensional judging module is used for judging the two-dimensionality of the character data sequence, and defining a two-dimensional random field, wherein a node set of the two-dimensional random field is denoted as V, and a set of edges between the nodes is denoted as E;
the energy function of the two-dimensional random field is:
E(X) = Σ_{p∈V} θ_p(x_p) + Σ_{(p,q)∈E} θ_{pq}(x_p, x_q)
wherein X is the image node set, θ_p(x_p) is the potential function of node p, θ_{pq}(x_p, x_q) is the potential function of the edge between nodes p and q, x_p is the two-dimensional value of the image node mapped by node p, and x_q is the two-dimensional value of the image node mapped by node q; a two-dimensional value of 0 indicates that the character data corresponding to the image node deviates substantially from a real character; a two-dimensional value of 1 indicates that the character data corresponding to the image node is close to a real character;
wherein:
[the edge-potential formula appears only as an image in the source; θ_{pq}(x_p, x_q) is defined in terms of the second distance dist_{p,q}]
dist_{p,q} represents the second distance between the image nodes mapped by x_p and x_q;
wherein:
[the node-potential formula appears only as an image in the source; θ_p(x_p) is defined in terms of the motion entropy S(x_p) and the threshold S_1]
S_1 is a set motion entropy threshold, and S(x_p) is the motion entropy of the image node mapped by x_p, calculated as follows:
S(x_p) = Σ_{c∈K} cos θ_{c1,c2}
wherein K is the set of limbs of the human skeleton model, and θ_{c1,c2} represents the angle between the c-th limb of the image node mapped by x_p and the c-th limb in the standard posture model;
when the energy function is minimized, the two-dimensional value of each node of the two-dimensional random field is used as the two-dimension of the character data sequence;
the sequence matching module is used for calculating a third distance between two dimensionalities of the character data sequences corresponding to the first character data and the second character data, and extracting first N pieces of second character data with the minimum third distance as reference character data;
and the original image extraction module is used for extracting two-dimensional animation images mapped by image nodes of the reference character data, and continuously extracting the two-dimensional animation images among time nodes of the extracted two-dimensional animation images to generate a two-dimensional animation atlas serving as a recommended atlas.
Further, the person and the motion gesture of the person are identified from the three-dimensional animated image by means of image recognition.
Further, the first person data and the second person data include a person ID and a person motion gesture parameter.
Further, the method for generating the image node comprises the following steps:
step 101, generating a character data sequence based on character data;
it should be noted that: the character data is generated from a continuous three-dimensional animated image or two-dimensional animated image, so that the time node of the animated image generating the character data can be marked by the character data to generate the sequence data containing a plurality of character data;
step 102, traversing backwards from the first character data of the character data sequence; the traversal terminates when the first distance between the currently traversed character data and the first character data is larger than a set first distance threshold, and the character data at which the traversal terminates is marked as mutation character data;
step 103, continuing to traverse the character data backwards from the mutation character data at which the last traversal terminated; the traversal terminates when the first distance between the currently traversed character data and that mutation character data is larger than the set first distance threshold, and the character data at which the traversal terminates is marked as mutation character data;
step 104, iteratively executing step 103 to obtain all mutation character data;
step 105, mapping the mutation character data one by one to generate image nodes, with the mutation character data serving as the parameters of the image nodes.
Further, the calculation formula of the first distance is as follows:
[the formula appears only as an image in the source]
wherein k is the number of limbs of the human skeleton model, θ_{i1} is the angle between the i-th limb of one character data item and the i-th limb in the standard posture model, and θ_{i2} is the angle between the i-th limb of the other character data item and the i-th limb in the standard posture model.
Further, the calculation formula of the second distance is as follows:
[the formula appears only as an image in the source]
wherein k is the number of limbs of the human skeleton model, θ_{ip} is the angle between the i-th limb of one character data item and the i-th limb in the standard posture model, and θ_{iq} is the angle between the i-th limb of the other character data item and the i-th limb in the standard posture model.
Further, the third distance dist_3 is calculated as follows:
[the formula appears only as an image in the source]
wherein x_{j1} represents the two-dimensional value of the j-th image node of the character data sequence corresponding to the first character data, x_{j2} represents the two-dimensional value of the j-th image node of the character data sequence corresponding to the second character data, and m is the number of image nodes in the character data sequences corresponding to the first character data and the second character data.
Further, discretizing the character data sequence to generate a plurality of unit sequences, wherein the unit sequences are intercepted from the character data sequence;
the method for intercepting is to divide according to the difference value between the time nodes corresponding to the image nodes, and if the difference value between the time nodes corresponding to the two adjacent image nodes is larger than a set time threshold value, one of the two image nodes is used as a dividing point to divide the character data sequence;
a fourth distance is calculated between unit sequences; for each unit sequence corresponding to the first character data, the first N unit sequences corresponding to the second character data with the smallest fourth distance are matched as reference unit sequences;
and two-dimensional animation images are extracted based on the reference unit sequences to generate a unit recommendation atlas for the unit sequence corresponding to each item of first character data.
The invention provides an image processing method, which is implemented by the image processing system and comprises the following steps:
step 201, performing two-dimensional processing on a three-dimensional animation image to obtain first character data in a two-dimensional space;
step 202, extracting a two-dimensional animation image from a two-dimensional animation, and then processing to obtain second character data in a two-dimensional space;
step 203, generating image nodes based on the first person data and the second person data;
step 204, judging the two-dimension of the character data sequence;
step 205, calculating a third distance between two dimensionalities of the character data sequences corresponding to the first character data and the second character data;
step 206, extracting first N pieces of second person data with the smallest third distance as reference person data;
step 207, extracting two-dimensional animation images mapped by image nodes of the reference character data, and continuously extracting two-dimensional animation images between time nodes of the extracted two-dimensional animation images to generate a two-dimensional animation atlas as a recommended atlas.
The present invention provides a storage medium storing an image processing system as described above.
The invention has the beneficial effects that:
according to the method, the 3D animation can be generated based on motion capture, then the 2D animation image which cannot be directly generated through technologies such as edge processing and the like for 3D animation matching is obtained through image processing and matching, so that direct reference with high association degree is provided for hand-painted staff, and the creative requirements for the hand-painted staff are reduced.
Drawings
FIG. 1 is a block diagram of an image processing system of the present invention;
FIG. 2 is a flow chart of a method of generating an image node of the present invention;
fig. 3 is a flowchart of an image processing method of the present invention.
In the figure: the system comprises a motion capture module 101, a modeling module 102, a model driving module 103, a first image processing module 104, a second image processing module 105, a third image processing module 106, a two-dimensional judging module 107, a sequence matching module 108 and an original image extracting module 109.
Detailed Description
The subject matter described herein will now be discussed with reference to example embodiments. It is to be understood that these embodiments are merely discussed so that those skilled in the art may better understand and implement the subject matter described herein and that changes may be made in the function and arrangement of the elements discussed without departing from the scope of the disclosure herein. Various examples may omit, replace, or add various procedures or components as desired. In addition, features described with respect to some examples may be combined in other examples as well.
Example 1
As shown in fig. 1-2, an image processing system includes:
a motion capture module 101 for obtaining first motion data through motion capture;
a modeling module 102 for creating a three-dimensional character model;
a model driving module 103 that drives the three-dimensional character model based on the first motion data to generate a three-dimensional animated image;
a first image processing module 104 for performing two-dimensional processing on the generated three-dimensional animation image to obtain first character data in a two-dimensional space;
specifically, the character and the motion gesture of the character can be identified from the three-dimensional animation image in an image identification mode;
the character motion gesture is defined by parameters based on a human skeleton model;
a second image processing module 105 for extracting a two-dimensional animation image from the two-dimensional animation, and then processing to obtain second character data in a two-dimensional space;
the first character data and the second character data include a character ID and a character motion gesture parameter;
a third image processing module 106 that generates an image node based on the first person data and the second person data;
the method for generating the image node comprises the following steps:
step 101, generating a character data sequence based on character data;
it should be noted that: the character data is generated from a continuous three-dimensional animated image or two-dimensional animated image, so that the time node of the animated image generating the character data can be marked by the character data to generate the sequence data containing a plurality of character data;
step 102, traversing backwards from the first character data of the character data sequence; the traversal terminates when the first distance between the currently traversed character data and the first character data is larger than a set first distance threshold, and the character data at which the traversal terminates is marked as mutation character data;
step 103, continuing to traverse the character data backwards from the mutation character data at which the last traversal terminated; the traversal terminates when the first distance between the currently traversed character data and that mutation character data is larger than the set first distance threshold, and the character data at which the traversal terminates is marked as mutation character data;
step 104, iteratively executing step 103 to obtain all mutation character data;
step 105, mapping the mutation character data one by one to generate image nodes, with the mutation character data serving as the parameters of the image nodes.
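Steps 101 to 105 above amount to threshold-based keyframe selection over the character data sequence. A minimal Python sketch of the traversal, assuming character data items are compared by a caller-supplied first-distance function (all names here are illustrative, not taken from the patent):

```python
def extract_mutation_data(sequence, first_distance, threshold):
    """Walk the character data sequence from its first entry; whenever the
    first distance between the current entry and the last reference exceeds
    the threshold, mark the entry as mutation character data and make it
    the new reference (steps 102-104 of the patent, as we read them)."""
    if not sequence:
        return []
    mutations = []
    reference = sequence[0]
    for data in sequence[1:]:
        if first_distance(reference, data) > threshold:
            mutations.append(data)
            reference = data
    return mutations
```

With scalar stand-ins for character data and absolute difference as the distance, `extract_mutation_data([0, 1, 2, 5, 6, 9], lambda a, b: abs(a - b), 2)` keeps only the entries that jump by more than the threshold from the last reference.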
The calculation formula of the first distance is as follows:
[the formula appears only as an image in the source]
wherein k is the number of limbs of the human skeleton model, θ_{i1} is the angle between the i-th limb of one character data item and the i-th limb in the standard posture model, and θ_{i2} is the angle between the i-th limb of the other character data item and the i-th limb in the standard posture model;
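The first-distance formula itself is reproduced only as an image in the source. The sketch below therefore assumes a Euclidean distance over the per-limb angle differences against the standard posture model; this matches the quantities named in the text (k limbs, θ_{i1}, θ_{i2}) but not necessarily the exact published formula:

```python
import math

def first_distance(angles_a, angles_b):
    """Assumed first distance between two character data items.

    Each argument is the list of k limb angles (radians) of a pose,
    each measured against the corresponding limb of the standard
    posture model. The Euclidean form is an assumption."""
    if len(angles_a) != len(angles_b):
        raise ValueError("both poses must describe the same k limbs")
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(angles_a, angles_b)))
```

The second distance is described with the same quantities (θ_{ip}, θ_{iq}) and could be sketched identically.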
a two-dimensional judging module 107 for judging the two-dimensional nature of the character data sequence, defining a two-dimensional random field, wherein a node set of the two-dimensional random field is denoted as V, and a set of edges between the nodes is denoted as E;
the energy function of the two-dimensional random field is:
E(X) = Σ_{p∈V} θ_p(x_p) + Σ_{(p,q)∈E} θ_{pq}(x_p, x_q)
wherein X is the image node set, θ_p(x_p) is the potential function of node p, θ_{pq}(x_p, x_q) is the potential function of the edge between nodes p and q, x_p is the two-dimensional value of the image node mapped by node p, and x_q is the two-dimensional value of the image node mapped by node q; a two-dimensional value of 0 indicates that the character data corresponding to the image node deviates substantially from a real character; a two-dimensional value of 1 indicates that the character data corresponding to the image node is close to a real character;
wherein:
[the edge-potential formula appears only as an image in the source; θ_{pq}(x_p, x_q) is defined in terms of the second distance dist_{p,q}]
dist_{p,q} represents the second distance between the image nodes mapped by x_p and x_q;
the calculation formula of the second distance is as follows:
[the formula appears only as an image in the source]
wherein k is the number of limbs of the human skeleton model, θ_{ip} is the angle between the i-th limb of one character data item and the i-th limb in the standard posture model, and θ_{iq} is the angle between the i-th limb of the other character data item and the i-th limb in the standard posture model;
wherein:
[the node-potential formula appears only as an image in the source; θ_p(x_p) is defined in terms of the motion entropy S(x_p) and the threshold S_1]
S_1 is a set motion entropy threshold, and S(x_p) is the motion entropy of the image node mapped by x_p, calculated as follows:
S(x_p) = Σ_{c∈K} cos θ_{c1,c2}
wherein K is the set of limbs of the human skeleton model, and θ_{c1,c2} represents the angle between the c-th limb of the image node mapped by x_p and the c-th limb in the standard posture model;
the standard posture model is a limb angle parameter of the human body skeleton model in a natural standing state that the human body is naturally hung by both hands.
When the energy function is minimized, the two-dimensional value of each node of the two-dimensional random field is used as the two-dimension of the character data sequence;
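The patent does not specify how the energy function is minimized. For small node sets, an exhaustive search over binary labelings illustrates the idea; the `unary` and `pairwise` callables below are stand-ins for θ_p and θ_pq, whose exact forms the source renders only as images:

```python
from itertools import product

def minimize_energy(n_nodes, edges, unary, pairwise):
    """Exhaustively minimize E(x) = sum_p unary(p, x_p)
    + sum_{(p,q) in edges} pairwise(x_p, x_q) over binary labelings.

    Returns (labels, energy). Brute force is only feasible for small
    node sets; it is an illustrative stand-in for a real optimizer."""
    best, best_e = None, float("inf")
    for labels in product((0, 1), repeat=n_nodes):
        e = sum(unary(p, labels[p]) for p in range(n_nodes))
        e += sum(pairwise(labels[p], labels[q]) for p, q in edges)
        if e < best_e:
            best, best_e = list(labels), e
    return best, best_e
```

The minimizing labels play the role of the two-dimensional values assigned to the character data sequence.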
a sequence matching module 108, configured to calculate a third distance between two dimensions of the character data sequence corresponding to the first character data and the second character data, and extract first N second character data with the minimum third distance as reference character data;
third distance dist 3 The calculation formula of (2) is as follows:
Figure BDA0004126107110000092
wherein x is j1 Two-dimensional value of ith image node representing character data sequence corresponding to first character data, x j2 Two-dimensional value of the ith image node representing the character data sequence corresponding to the second character data, m being the first character numberThe number of image nodes of the character data sequence corresponding to the second character data;
the calculation of the third distance requires aligning the two character data sequences;
as one alignment method, image nodes are truncated from the end of the longer character data sequence until the two sequences contain the same number of image nodes;
of course, the third distance between the two character data sequences can also be calculated directly with a DTW algorithm or the like.
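The DTW alternative mentioned above can be sketched with the textbook dynamic program; here the per-element cost is the absolute difference of the two-dimensional values, which for 0/1 labels simply counts mismatches along the warping path:

```python
def dtw_distance(seq_a, seq_b):
    """Classic dynamic-time-warping distance between two sequences of
    two-dimensional values, avoiding the truncation-based alignment."""
    n, m = len(seq_a), len(seq_b)
    inf = float("inf")
    dp = [[inf] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(seq_a[i - 1] - seq_b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # insertion
                                  dp[i][j - 1],      # deletion
                                  dp[i - 1][j - 1])  # match
    return dp[n][m]
```

Unlike truncation, DTW compares sequences of unequal length without discarding image nodes.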
The original image extraction module 109 extracts two-dimensional animated images mapped by image nodes of the reference character data, and successively extracts two-dimensional animated images between time nodes of the extracted two-dimensional animated images to generate a two-dimensional animated image set as a recommended image set.
The three-dimensional animation may comprise a plurality of independent actions, so the character data sequence is further discretized to generate a plurality of unit sequences, the unit sequences being intercepted from the character data sequence;
the intercepting mode can be to divide according to the difference value between the time nodes corresponding to the image nodes, and if the difference value between the time nodes corresponding to the two adjacent image nodes is larger than a set time threshold value, one of the two image nodes is used as a dividing point to divide the character data sequence;
a fourth distance between unit sequences is calculated with reference to the formula for the third distance; for each unit sequence corresponding to the first character data, the first N unit sequences corresponding to the second character data with the smallest fourth distance are matched as reference unit sequences;
and two-dimensional animation images are extracted based on the reference unit sequences to generate a unit recommendation atlas for the unit sequence corresponding to each item of first character data.
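The time-gap splitting rule described above can be sketched as follows; representing each image node as a (time, data) pair is our assumption, not the patent's notation:

```python
def split_unit_sequences(nodes, time_threshold):
    """Discretize a character data sequence into unit sequences.

    `nodes` is a list of (time, data) pairs sorted by time. A new unit
    sequence starts whenever the gap between two adjacent image nodes'
    time stamps exceeds `time_threshold` (the split rule of the patent)."""
    if not nodes:
        return []
    units = [[nodes[0]]]
    for prev, cur in zip(nodes, nodes[1:]):
        if cur[0] - prev[0] > time_threshold:
            units.append([cur])   # gap too large: start a new unit sequence
        else:
            units[-1].append(cur)
    return units
```

Each resulting unit sequence then corresponds to one independent action and is matched separately by the fourth distance.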
As shown in fig. 3, the present embodiment provides an image processing method, which uses an image processing system as described above to execute the following steps:
step 201, performing two-dimensional processing on a three-dimensional animation image to obtain first character data in a two-dimensional space;
step 202, extracting a two-dimensional animation image from a two-dimensional animation, and then processing to obtain second character data in a two-dimensional space;
step 203, generating image nodes based on the first person data and the second person data;
step 204, judging the two-dimension of the character data sequence;
step 205, calculating a third distance between two dimensionalities of the character data sequences corresponding to the first character data and the second character data;
step 206, extracting first N pieces of second person data with the smallest third distance as reference person data;
step 207, extracting two-dimensional animation images mapped by image nodes of the reference character data, and continuously extracting two-dimensional animation images between time nodes of the extracted two-dimensional animation images to generate a two-dimensional animation atlas as a recommended atlas.
The embodiment has been described above, but it is not limited to the specific implementation described, which is merely illustrative rather than restrictive; those of ordinary skill in the art, enlightened by this disclosure, may derive many further forms without departing from the scope of this embodiment, all of which fall within its protection.

Claims (10)

1. An image processing system, comprising:
a motion capture module for motion capture to obtain first motion data; a modeling module for creating a three-dimensional character model; a model driving module that drives the three-dimensional character model based on the first motion data to generate a three-dimensional animated image;
the first image processing module is used for performing two-dimensional processing on the generated three-dimensional animation image to obtain first character data in a two-dimensional space; the second image processing module is used for extracting a two-dimensional animation image from the two-dimensional animation and then processing it to obtain second character data in the two-dimensional space; a third image processing module that generates image nodes based on the first character data and the second character data;
the two-dimensional judging module is used for judging the two-dimensionality of the character data sequence, and defining a two-dimensional random field, wherein a node set of the two-dimensional random field is denoted as V, and a set of edges between the nodes is denoted as E;
the energy function of the two-dimensional random field is:
E(X) = Σ_{p∈V} θ_p(x_p) + Σ_{(p,q)∈E} θ_{pq}(x_p, x_q)
wherein X is the image node set, θ_p(x_p) is the potential function of node p, θ_{pq}(x_p, x_q) is the potential function of the edge between nodes p and q, x_p is the two-dimensional value of the image node mapped by node p, and x_q is the two-dimensional value of the image node mapped by node q; a two-dimensional value of 0 indicates that the character data corresponding to the image node deviates substantially from a real character; a two-dimensional value of 1 indicates that the character data corresponding to the image node is close to a real character;
wherein:
[the edge-potential formula appears only as an image in the source; θ_{pq}(x_p, x_q) is defined in terms of the second distance dist_{p,q}]
dist_{p,q} represents the second distance between the image nodes mapped by x_p and x_q;
wherein:
[the node-potential formula appears only as an image in the source; θ_p(x_p) is defined in terms of the motion entropy S(x_p) and the threshold S_1]
S_1 is a set motion entropy threshold, and S(x_p) is the motion entropy of the image node mapped by x_p, calculated as follows:
S(x_p) = Σ_{c∈K} cos θ_{c1,c2}, wherein K is the set of limbs of the human skeleton model, and θ_{c1,c2} represents the angle between the c-th limb of the image node mapped by x_p and the c-th limb in the standard posture model;
when the energy function is minimized, the two-dimensional value of each node of the two-dimensional random field is used as the two-dimension of the character data sequence;
the sequence matching module is used for calculating a third distance between two dimensionalities of the character data sequences corresponding to the first character data and the second character data, and extracting first N pieces of second character data with the minimum third distance as reference character data;
and the original image extraction module is used for extracting two-dimensional animation images mapped by image nodes of the reference character data, and continuously extracting the two-dimensional animation images among time nodes of the extracted two-dimensional animation images to generate a two-dimensional animation atlas serving as a recommended atlas.
2. The image processing system of claim 1, wherein the character and the character motion gesture are identified from the three-dimensional animated image by means of image recognition.
3. The image processing system of claim 1, wherein the first character data and the second character data include a character ID and character motion gesture parameters.
4. An image processing system according to claim 1, wherein the method of generating image nodes comprises:
step 101, generating a character data sequence based on character data;
it should be noted that: the character data is generated from a continuous three-dimensional animated image or two-dimensional animated image, so that the time node of the animated image generating the character data can be marked by the character data to generate the sequence data containing a plurality of character data;
step 102, traversing backwards from the first character data of the character data sequence; the traversal terminates when the first distance between the currently traversed character data and the first character data is larger than a set first distance threshold, and the character data at which the traversal terminates is marked as mutation character data;
step 103, continuing to traverse the character data backwards from the mutation character data at which the last traversal terminated; the traversal terminates when the first distance between the currently traversed character data and that mutation character data is larger than the set first distance threshold, and the character data at which the traversal terminates is marked as mutation character data;
step 104, iteratively executing step 103 to obtain all mutation character data;
step 105, mapping the mutation character data one by one to generate image nodes, with the mutation character data serving as the parameters of the image nodes.
5. The image processing system of claim 4, wherein the first distance is calculated as:
[the formula appears only as an image in the source]
wherein k is the number of limbs of the human skeleton model, θ_{i1} is the angle between the i-th limb of one character data item and the i-th limb in the standard posture model, and θ_{i2} is the angle between the i-th limb of the other character data item and the i-th limb in the standard posture model.
6. An image processing system according to claim 1, wherein the second distance is calculated as:
[the formula appears only as an image in the source]
wherein k is the number of limbs of the human skeleton model, θ_{ip} is the angle between the i-th limb of one character data item and the i-th limb in the standard posture model, and θ_{iq} is the angle between the i-th limb of the other character data item and the i-th limb in the standard posture model.
7. An image processing system according to claim 1, characterized in that the third distance $dist_3$ is calculated as:
$dist_3 = \sqrt{\sum_{j=1}^{m}\|x_{j1}-x_{j2}\|^2}$
wherein $x_{j1}$ is the two-dimensional value of the jth image node of the character data sequence corresponding to the first character data; $x_{j2}$ is the two-dimensional value of the jth image node of the character data sequence corresponding to the second character data; and m is the number of image nodes in the character data sequences corresponding to the first character data and the second character data.
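The third-distance formula is likewise an image placeholder in the published text; treating each image-node value as a 2-D point, a Euclidean form consistent with the variable definitions would be (assumed sketch, illustrative names):

```python
import math

def third_distance(nodes_a, nodes_b):
    """Assumed Euclidean distance between two image-node sequences of equal
    length m, where each node value is a 2-D point (x, y)."""
    return math.sqrt(sum((x1 - x2) ** 2 + (y1 - y2) ** 2
                         for (x1, y1), (x2, y2) in zip(nodes_a, nodes_b)))
```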
8. The image processing system of claim 1, wherein the character data sequence is discretized to produce a plurality of unit sequences, each unit sequence being intercepted from the character data sequence;
the interception divides the sequence according to the differences between the time nodes corresponding to the image nodes: if the difference between the time nodes corresponding to two adjacent image nodes is greater than a set time threshold, one of the two image nodes is used as a division point at which the character data sequence is divided;
a fourth distance is calculated for the unit sequences, and each unit sequence corresponding to the first character data is then matched with the first N unit sequences corresponding to the second character data having the smallest fourth distance, which serve as reference unit sequences;
and for the unit sequence corresponding to each piece of first character data, two-dimensional animation images are extracted based on the reference unit sequences to generate a unit recommendation atlas.
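The time-gap splitting of claim 8 can be sketched as follows (illustrative names; the claim leaves open which of the two adjacent nodes becomes the division point, so here the later node simply starts a new unit):

```python
def split_into_units(nodes, time_nodes, time_threshold):
    """Split an image-node sequence into unit sequences wherever the gap
    between adjacent time nodes exceeds the threshold."""
    units, current = [], [nodes[0]]
    for prev_t, t, node in zip(time_nodes, time_nodes[1:], nodes[1:]):
        if t - prev_t > time_threshold:
            units.append(current)  # close the current unit at the gap
            current = []
        current.append(node)
    units.append(current)
    return units
```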
9. An image processing method, characterized by applying the image processing system according to any one of claims 1-8 to perform the steps of:
step 201, performing two-dimensional processing on a three-dimensional animation image to obtain first character data in a two-dimensional space;
step 202, extracting a two-dimensional animation image from a two-dimensional animation and processing it to obtain second character data in a two-dimensional space;
step 203, generating image nodes based on the first character data and the second character data;
step 204, determining the two-dimensional representation of the character data sequence;
step 205, calculating a third distance between the two-dimensional representations of the character data sequences corresponding to the first character data and the second character data;
step 206, extracting the first N pieces of second character data with the smallest third distance as reference character data;
and step 207, extracting the two-dimensional animation images mapped by the image nodes of the reference character data, further extracting the two-dimensional animation images between the time nodes of the extracted images, and generating a two-dimensional animation atlas as the recommended atlas.
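Steps 205-206 amount to a nearest-neighbour selection under the third distance; a minimal sketch with illustrative names:

```python
def select_reference_data(first_item, second_items, distance, n):
    """Return the n pieces of second character data closest to first_item
    under the given distance function (sketch of steps 205-206)."""
    return sorted(second_items, key=lambda item: distance(first_item, item))[:n]
```

Step 207 would then extract the two-dimensional animation images mapped by the image nodes of each returned item, together with the frames lying between their time nodes.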
10. A storage medium storing an image processing system according to any one of claims 1-8.
CN202310246476.2A 2023-03-15 2023-03-15 Image processing system, method and medium Pending CN116309998A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310246476.2A CN116309998A (en) 2023-03-15 2023-03-15 Image processing system, method and medium


Publications (1)

Publication Number Publication Date
CN116309998A true CN116309998A (en) 2023-06-23

Family

ID=86781039

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310246476.2A Pending CN116309998A (en) 2023-03-15 2023-03-15 Image processing system, method and medium

Country Status (1)

Country Link
CN (1) CN116309998A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110050864A1 (en) * 2009-09-01 2011-03-03 Prime Focus Vfx Services Ii Inc. System and process for transforming two-dimensional images into three-dimensional images
CN102376100A (en) * 2010-08-20 2012-03-14 北京盛开互动科技有限公司 Single-photo-based human face animating method
US20150101411A1 (en) * 2013-10-11 2015-04-16 Seno Medical Instruments, Inc. Systems and methods for component separation in medical imaging
CN110807364A (en) * 2019-09-27 2020-02-18 中国科学院计算技术研究所 Modeling and capturing method and system for three-dimensional face and eyeball motion
CN111063021A (en) * 2019-11-21 2020-04-24 西北工业大学 Method and device for establishing three-dimensional reconstruction model of space moving target
CN112102422A (en) * 2020-11-19 2020-12-18 蚂蚁智信(杭州)信息技术有限公司 Image processing method and device
WO2021209042A1 (en) * 2020-04-16 2021-10-21 广州虎牙科技有限公司 Three-dimensional model driving method and apparatus, electronic device, and storage medium
CN114495274A (en) * 2022-01-25 2022-05-13 上海大学 System and method for realizing human motion capture by using RGB camera
CN115205332A (en) * 2022-06-29 2022-10-18 上海爱之邦教育科技有限公司 Moving object identification and motion track calculation method
CN115797851A (en) * 2023-02-09 2023-03-14 安徽米娱科技有限公司 Animation video processing method and system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG XIAOXIANG et al.: "Multi-class video segmentation based on incremental inference with dynamic dense conditional random fields", Application Research of Computers, vol. 37, no. 12, 31 December 2020 (2020-12-31), pages 3781-3787 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination