CN116310015A - Computer system, method and medium - Google Patents
- Publication number
- CN116310015A (application CN202310246436.8A)
- Authority
- CN
- China
- Prior art keywords
- key frame
- sequence
- person
- distance
- character
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/80—2D [Two Dimensional] animation, e.g. using sprites
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/48—Matching video sequences
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention relates to the technical field of animation production, and discloses a computer system comprising: a frame image extraction module for extracting frame images from a three-dimensional animation video; a person recognition module for recognizing a person from the frame images and acquiring person gesture data; a key frame extraction module for screening a key frame sequence out of the frame images; a gesture sequence correction module that first extracts the key frame sequence corresponding to a first person and then marks its key frames; a reference sequence correction module for marking the key frames of the key frame sequence of a third person; and a key frame sequence synthesis module for extracting reference key frame sequences and synthesizing each reference key frame sequence with the key frame sequence of the first person. Based on image processing and data mining, the invention can map a reference image to each key frame that needs to be hand-drawn in the 2D animation production process, providing hand-drawing artists with a reference of higher matching degree.
Description
Technical Field
The invention relates to the technical field of animation production, and in particular to a computer system.
Background
In 2D animation production, edge processing and color processing of images can be performed by deep learning. However, if the style of 2D animation is to be preserved, some frame images must be replaced with frame images generated from hand-drawn key frames. This requires hand-drawing artists to draw super-real character actions in between real actions, which places high demands on their artistic creation ability.
Disclosure of Invention
The invention provides a computer system which solves the technical problem of XXXXX in the related art.
The present invention provides a computer system comprising:
the frame image extraction module is used for extracting a first frame image from the three-dimensional animation video and extracting a second frame image from the two-dimensional animation video; a person recognition module for recognizing a person from the frame image and acquiring person pose data; the information of the person includes the sex of the person, and the person posture data includes person posture parameters; a character matching module for selecting a first character extracted from the first frame image and then matching a second character extracted from the second frame image, the first character having the same information as the second character, the matched second character being marked as a third character; the key frame extraction module is used for screening and obtaining a key frame sequence from the frame image;
the gesture sequence correction module firstly extracts a key frame sequence corresponding to a first person, marks the key frame, wherein the mark value of the key frame to be redrawn is 0, and the mark values of the other key frames are 1;
the reference sequence correction module is used for extracting a key frame sequence corresponding to the third person and then marking key frames of the key frame sequence of the third person;
the key frame sequence synthesis module is used for calculating the reference distances of the key frame sequences of the first person and the second person, and extracting the key frame sequences of the first N second persons with the smallest reference distances of the key frame sequences of the first person as reference key frame sequences; each reference key frame sequence is synthesized with the key frame sequence of the first person.
Further, the method for screening key frames from the frame images comprises the following steps:
Step 101, extracting the character gesture parameters of a selected character from each frame image to generate a character gesture sequence, wherein each unit of the character gesture sequence comprises the character gesture parameters of the selected character in one frame image;
Step 102, traversing backwards from a sequence unit of the character gesture sequence until the first similarity between the traversed sequence unit and the sequence unit from which the traversal started is smaller than a set first similarity threshold, and marking the sequence unit at which the traversal terminates as a first sequence unit;
Step 103, iteratively executing step 102 until all sequence units of the character gesture sequence have been traversed, wherein the first execution starts from the first sequence unit of the character gesture sequence, and each subsequent execution traverses backwards from the first sequence unit generated in the previous execution;
Step 104, extracting all the generated first sequence units to generate a first gesture sequence;
Step 105, extracting the frame images from which the character gesture parameters of the first sequence units originate as key frames, and ordering the key frames by time point to generate a key frame sequence.
Further, the calculation formula of the first similarity is as follows:
wherein k is the number of limbs of the human skeleton model; θ_i1 is the included angle between the i-th limb of the character gesture parameters of a sequence unit of one character gesture sequence and the i-th limb of the standard gesture model; θ_i2 is the included angle between the i-th limb of the character gesture parameters of the sequence unit of the other character gesture sequence and the i-th limb of the standard gesture model.
Further, the method of keyframe tagging of the keyframe sequence of the third person comprises:
defining a two-dimensional random field, wherein a node set of the two-dimensional random field is denoted as V, and a set of edges between the nodes is denoted as E;
the energy function of the two-dimensional random field is:
wherein X is the node set; θ_p(x_p) is the potential function of node p; θ_pq(x_p, x_q) is the potential function of the edge between nodes p and q; x_p is the marker value of the image node mapped by node p; x_q is the marker value of the key frame sequence mapped by node q; a marker value of 0 indicates that the character gesture parameters corresponding to the key frame deviate substantially from a real character; a marker value of 1 indicates that the character gesture parameters corresponding to the key frame are close to a real character;
simila_{p,q} represents the second similarity between the character gesture parameters mapped by the key frames mapped by x_p and x_q;
S_1 is a set motion entropy threshold, and S(x_p) is the motion entropy of the key frame mapped by x_p, calculated as follows:
S(x_p) = Σ_{c∈K} cos θ_{c1,c2}, wherein K is the set of limbs of the human skeleton model, and θ_{c1,c2} is the included angle between the c-th limb of the character gesture parameters of the key frame mapped by x_p and the c-th limb of the standard gesture model;
the marker value of each node of the two-dimensional random field is used as the marker value of the key frame sequence when the energy function is minimized.
Further, the calculation formula of the second similarity is as follows:
wherein k is the number of limbs of the human skeleton model; θ_ip is the included angle between the i-th limb of one person's gesture parameters and the i-th limb of the standard gesture model; θ_iq is the included angle between the i-th limb of the other person's gesture parameters and the i-th limb of the standard gesture model.
Further, the key frame sequence synthesis module includes:
a distance matrix generation module for establishing an m×n distance matrix, wherein the element in the i-th row and j-th column of the distance matrix is denoted d(i, j);
the value of d(i, j) is d_ij, where d_ij represents the second distance or the third distance between the i-th key frame of key frame sequence Q and the j-th key frame of key frame sequence C;
a first distance calculation module for filling the distance matrix with second distances, generating paths K on the distance matrix from the element in row 1, column 1 to the element in row m, column n, summing the values of the distance matrix elements on each path K as its path distance value, selecting the path K with the smallest path distance value as the shortest path, and taking the path distance value of the shortest path as the first distance dist_1;
a fourth distance calculation module for filling the distance matrix with third distances, generating paths K on the distance matrix from the element in row 1, column 1 to the element in row m, column n, summing the values of the distance matrix elements on each path K as its path distance value, selecting the path K with the smallest path distance value as the shortest path, and taking the path distance value of the shortest path as the fourth distance dist_4;
a reference distance calculation module that calculates the reference distance from the first distance and the fourth distance, the reference distance dist_c being calculated as follows:
wherein t is the maximum value of the number of key frames of the two key frame sequences, and k is the number of limbs of the human skeleton model;
and the synthesis module is used for mapping the key frame sequence of the first person and the key frame marked as 0 of the reference key frame sequence, and the third distance between the two key frames establishing the mapping relation is smaller than a set third distance threshold value.
Further, the second distance dist_2 is calculated as follows:
dist_2 = |Q_i − C_j|, wherein Q_i is the marker value of the i-th key frame of key frame sequence Q, and C_j is the marker value of the j-th key frame of key frame sequence C.
Further, the calculation formula of the third distance is as follows:
wherein k is the number of limbs of the human skeleton model; θ_ip is the included angle between the i-th limb of one person's gesture parameters and the i-th limb of the standard gesture model; θ_iq is the included angle between the i-th limb of the other person's gesture parameters and the i-th limb of the standard gesture model.
The invention provides a method for processing images by a computer, which uses the above computer system to execute the following steps:
Step 201, extracting a first frame image from a three-dimensional animation video and a second frame image from a two-dimensional animation video;
Step 202, recognizing persons from the frame images and acquiring person gesture data;
Step 203, screening a key frame sequence out of the frame images;
Step 204, extracting the key frame sequences corresponding to the first person and the third person, and then marking their key frames;
Step 205, calculating the reference distances between the key frame sequence of the first person and the key frame sequences of the second persons, and extracting the key frame sequences of the first N second persons with the smallest reference distance as reference key frame sequences;
Step 206, synthesizing each reference key frame sequence with the key frame sequence of the first person.
The invention provides a computer storage medium for storing the above computer system.
The invention has the beneficial effects that:
Based on image processing and data mining, the invention can map a reference image to each key frame that needs to be hand-drawn in the 2D animation production process, providing hand-drawing artists with a closely matching reference that is convenient to imitate, thereby reducing the demands on their artistic creation ability.
Drawings
FIG. 1 is a block diagram of a computer system of the present invention;
FIG. 2 is a block diagram of a key frame sequence synthesis module of the present invention;
FIG. 3 is a flow chart of a method of screening a frame image for key frames in accordance with the present invention;
fig. 4 is a flow chart of a method of processing an image by a computer of the present invention.
In the figure: the system comprises a frame image extraction module 101, a person identification module 102, a person matching module 103, a key frame extraction module 104, a gesture sequence correction module 105, a reference sequence correction module 106, a key frame sequence synthesis module 107, a distance matrix generation module 1071, a first distance calculation module 1072, a fourth distance calculation module 1073, a reference distance calculation module 1074 and a synthesis module 1075.
Detailed Description
The subject matter described herein will now be discussed with reference to example embodiments. It is to be understood that these embodiments are merely discussed so that those skilled in the art may better understand and implement the subject matter described herein and that changes may be made in the function and arrangement of the elements discussed without departing from the scope of the disclosure herein. Various examples may omit, replace, or add various procedures or components as desired. In addition, features described with respect to some examples may be combined in other examples as well.
Example 1
As shown in fig. 1-3, a computer system, comprising:
a frame image extraction module 101 for extracting a first frame image from a three-dimensional animated video and extracting a second frame image from a two-dimensional animated video;
a person recognition module 102 for recognizing a person from the frame image and acquiring person pose data;
the information of the person includes the sex of the person, and the person posture data includes person posture parameters;
The person information may also contain more attributes, such as age and character type, which enables more accurate matching. However, this requires a sufficiently large pool of character data: each additional attribute is a further filtering criterion, and with too little data, too few characters remain available for matching.
A person matching module 103, configured to select a first person extracted from the first frame image, and then match a second person extracted from the second frame image, where the first person is the same as the second person in information, and the matched second person is marked as a third person;
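As a hedged illustration of this matching step (the attribute names and data layout below are assumptions, not taken from the patent), matching first persons against second persons with identical information can be sketched as a simple attribute filter; matched persons then play the role of third persons:

```python
def match_persons(first_person, second_persons):
    # Keep only second persons whose information matches the first person's.
    # Per the text, the information is the person's gender; more attributes
    # could be added to the comparison at the cost of needing more data.
    return [p for p in second_persons
            if p["gender"] == first_person["gender"]]

third_persons = match_persons({"id": 1, "gender": "f"},
                              [{"id": 7, "gender": "f"},
                               {"id": 8, "gender": "m"}])
print([p["id"] for p in third_persons])  # → [7]
```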
a key frame extraction module 104, configured to screen and obtain a key frame sequence from a frame image;
The method for screening key frames from the frame images comprises the following steps:
Step 101, extracting the character gesture parameters of a selected character from each frame image to generate a character gesture sequence, wherein each unit of the character gesture sequence comprises the character gesture parameters of the selected character in one frame image;
Step 102, traversing backwards from a sequence unit of the character gesture sequence until the first similarity between the traversed sequence unit and the sequence unit from which the traversal started is smaller than a set first similarity threshold, and marking the sequence unit at which the traversal terminates as a first sequence unit;
Step 103, iteratively executing step 102 until all sequence units of the character gesture sequence have been traversed, wherein the first execution starts from the first sequence unit of the character gesture sequence, and each subsequent execution traverses backwards from the first sequence unit generated in the previous execution;
Step 104, extracting all the generated first sequence units to generate a first gesture sequence;
Step 105, extracting the frame images from which the character gesture parameters of the first sequence units originate as key frames, and ordering the key frames by time point to generate a key frame sequence.
In this step, a key frame sequence may span several widely separated actions. The sequence is therefore truncated by applying a time threshold to the time differences between adjacent key frames, and each truncated key frame sequence is passed separately to the gesture sequence correction module 105, the reference sequence correction module 106 and the key frame sequence synthesis module 107 for processing.
The time threshold can be set to 3 minutes; increasing the threshold reduces the number of truncated key frame sequences.
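The truncation described above can be sketched as follows. The 3-minute default follows the text; the function name, second-based timestamps, and list-of-indices layout are illustrative assumptions:

```python
def split_by_time_gap(timestamps, gap_threshold=180.0):
    """Split a key-frame sequence wherever the time difference between
    adjacent key frames exceeds the threshold (seconds). Returns one list
    of key-frame indices per truncated sub-sequence."""
    if not timestamps:
        return []
    segments = [[0]]
    for i in range(1, len(timestamps)):
        if timestamps[i] - timestamps[i - 1] > gap_threshold:
            segments.append([i])      # gap too large: start a new sub-sequence
        else:
            segments[-1].append(i)
    return segments

print(split_by_time_gap([0, 10, 20, 400, 410, 900]))  # → [[0, 1, 2], [3, 4], [5]]
```

A larger `gap_threshold` merges more key frames into each segment, which matches the remark that increasing the threshold reduces the number of truncated sequences.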
The calculation formula of the first similarity is as follows:
wherein k is the number of limbs of the human skeleton model; θ_i1 is the included angle between the i-th limb of the character gesture parameters of a sequence unit of one character gesture sequence and the i-th limb of the standard gesture model; θ_i2 is the included angle between the i-th limb of the character gesture parameters of the sequence unit of the other character gesture sequence and the i-th limb of the standard gesture model;
In one embodiment of the invention, the person gesture parameters are generated from the image by a person pose estimation algorithm such as the OpenPose algorithm.
The first similarity threshold may be adjusted according to the number of limbs of the skeletal model, and is generally inversely proportional to the number of limbs of the skeletal model.
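The first-similarity formula itself appears only as an image in the source. Given that it compares per-limb angles θ_i1 and θ_i2 over k limbs, one plausible reading is an average cosine of the per-limb angle differences; the sketch below is an assumption on that basis, not the patent's exact formula:

```python
import math

def first_similarity(angles_a, angles_b):
    # angles_a[i], angles_b[i]: included angle of limb i relative to the
    # standard gesture model, for the two sequence units being compared.
    # Identical poses give 1.0; the value falls as the poses diverge.
    k = len(angles_a)
    return sum(math.cos(a - b) for a, b in zip(angles_a, angles_b)) / k
```

The backwards traversal of step 102 would then stop as soon as this value drops below the set first similarity threshold.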
The gesture sequence correction module 105 firstly extracts a key frame sequence corresponding to a first person, marks the key frame, wherein the mark value of the key frame to be redrawn is 0, and the mark values of the rest key frames are 1;
a reference sequence correction module 106, configured to extract a key frame sequence corresponding to the third person, and then mark key frames of the key frame sequence of the third person;
the method for marking the key frames of the key frame sequence of the third person comprises the following steps:
defining a two-dimensional random field, wherein a node set of the two-dimensional random field is denoted as V, and a set of edges between the nodes is denoted as E;
the energy function of the two-dimensional random field is:
wherein X is the node set; θ_p(x_p) is the potential function of node p; θ_pq(x_p, x_q) is the potential function of the edge between nodes p and q; x_p is the marker value of the image node mapped by node p; x_q is the marker value of the key frame sequence mapped by node q; a marker value of 0 indicates that the character gesture parameters corresponding to the key frame deviate substantially from a real character; a marker value of 1 indicates that the character gesture parameters corresponding to the key frame are close to a real character;
simila_{p,q} represents the second similarity between the character gesture parameters mapped by the key frames mapped by x_p and x_q;
the calculation formula of the second similarity is as follows:
wherein k is the number of limbs of the human skeleton model; θ_ip is the included angle between the i-th limb of one person's gesture parameters and the i-th limb of the standard gesture model; θ_iq is the included angle between the i-th limb of the other person's gesture parameters and the i-th limb of the standard gesture model;
S_1 is a set motion entropy threshold, and S(x_p) is the motion entropy of the key frame mapped by x_p, calculated as follows:
S(x_p) = Σ_{c∈K} cos θ_{c1,c2}, wherein K is the set of limbs of the human skeleton model, and θ_{c1,c2} is the included angle between the c-th limb of the character gesture parameters of the key frame mapped by x_p and the c-th limb of the standard gesture model;
The standard gesture model is the set of limb angle parameters of the human skeleton model in a natural standing state, with both hands hanging naturally.
When the energy function is minimized, the marker value of each node of the two-dimensional random field is used as the marker value of the key frame sequence.
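The motion entropy S(x_p) defined above can be computed directly from the per-limb angles; a minimal sketch, with the angle layout assumed:

```python
import math

def motion_entropy(limb_angles):
    # S(x_p) = sum over limbs c of cos(theta_c), where theta_c is the
    # included angle between limb c of the key frame's gesture parameters
    # and limb c of the standard (natural standing) gesture model.
    # A near-standing pose scores close to the number of limbs; large
    # deviations from standing lower the score.
    return sum(math.cos(t) for t in limb_angles)

print(motion_entropy([0.0, 0.0, 0.0, 0.0]))  # → 4.0 (exactly the standing pose)
```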
A keyframe sequence synthesizing module 107, configured to calculate reference distances of keyframe sequences of the first person and the second person, and extract keyframe sequences of the first N second persons having the smallest reference distances from the keyframe sequences of the first person as reference keyframe sequences;
each reference key frame sequence is synthesized with the key frame sequence of the first person.
The key frame sequence synthesis module 107 includes:
a distance matrix generation module 1071 that creates an m×n distance matrix whose i-th row and j-th column elements are denoted as d (i, j);
the value of d(i, j) is d_ij, where d_ij represents the second distance or the third distance between the i-th key frame of key frame sequence Q and the j-th key frame of key frame sequence C;
A first distance calculation module 1072 for filling the distance matrix with second distances, generating paths K on the distance matrix from the element in row 1, column 1 to the element in row m, column n, summing the values of the distance matrix elements on each path K as its path distance value, selecting the path K with the smallest path distance value as the shortest path, and taking the path distance value of the shortest path as the first distance dist_1.
A fourth distance calculation module 1073 for filling the distance matrix with third distances, generating paths K on the distance matrix from the element in row 1, column 1 to the element in row m, column n, summing the values of the distance matrix elements on each path K as its path distance value, selecting the path K with the smallest path distance value as the shortest path, and taking the path distance value of the shortest path as the fourth distance dist_4.
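The shortest-path value over the filled distance matrix is the classic dynamic-time-warping accumulation. A sketch, assuming the usual right/down/diagonal step pattern (the patent does not state which steps a path K may take):

```python
def dtw_path_distance(dist):
    """Minimal path-distance value from d(1,1) to d(m,n) over an m x n
    distance matrix, where a path may move right, down, or diagonally."""
    m, n = len(dist), len(dist[0])
    acc = [[0.0] * n for _ in range(m)]          # accumulated cost matrix
    acc[0][0] = dist[0][0]
    for i in range(1, m):                        # first column: only down-moves
        acc[i][0] = acc[i - 1][0] + dist[i][0]
    for j in range(1, n):                        # first row: only right-moves
        acc[0][j] = acc[0][j - 1] + dist[0][j]
    for i in range(1, m):
        for j in range(1, n):
            acc[i][j] = dist[i][j] + min(acc[i - 1][j],
                                         acc[i][j - 1],
                                         acc[i - 1][j - 1])
    return acc[-1][-1]
```

Running this on a matrix filled with second distances yields dist_1, and on one filled with third distances yields dist_4.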
The second distance dist_2 is calculated as follows:
dist_2 = |Q_i − C_j|
wherein Q_i is the marker value of the i-th key frame of key frame sequence Q, and C_j is the marker value of the j-th key frame of key frame sequence C.
The calculation formula of the third distance is as follows:
wherein k is the number of limbs of the human skeleton model; θ_ip is the included angle between the i-th limb of one person's gesture parameters and the i-th limb of the standard gesture model; θ_iq is the included angle between the i-th limb of the other person's gesture parameters and the i-th limb of the standard gesture model.
A reference distance calculation module 1074 that calculates the reference distance from the first distance and the fourth distance, the reference distance dist_c being calculated as follows:
wherein t is the maximum value of the number of key frames of the two key frame sequences, and k is the number of limbs of the human skeleton model.
The synthesizing module 1075 is configured to map a key frame sequence of the first person and a key frame labeled 0 of the reference key frame sequence, where a third distance between two key frames establishing the mapping relationship is smaller than a set third distance threshold.
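One plausible reading of this synthesis step, sketched below: each key frame of the first person that must be redrawn (marker 0) is paired with a reference key frame marked 0 whose third distance stays below the set threshold. The data layout, function names, and pairing order are illustrative assumptions:

```python
def synthesize(first_seq, ref_seq, third_dist, threshold):
    # first_seq, ref_seq: lists of (frame_id, marker_value) pairs.
    # third_dist(a, b): third distance between two key frames' poses.
    mapping = {}
    for fid, fmark in first_seq:
        if fmark != 0:                 # only frames that need redrawing
            continue
        for rid, rmark in ref_seq:
            if rmark == 0 and third_dist(fid, rid) < threshold:
                mapping[fid] = rid     # reference image for this key frame
                break
    return mapping

print(synthesize([(1, 1), (2, 0)], [(4, 0), (5, 1)],
                 lambda a, b: abs(a - b), 3))  # → {2: 4}
```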
As shown in fig. 4, the present embodiment provides a method for processing images using the computer system described above, which performs the following steps:
Step 201, extracting a first frame image from a three-dimensional animation video and a second frame image from a two-dimensional animation video;
Step 202, recognizing persons from the frame images and acquiring person gesture data;
Step 203, screening a key frame sequence out of the frame images;
Step 204, extracting the key frame sequences corresponding to the first person and the third person, and then marking their key frames;
Step 205, calculating the reference distances between the key frame sequence of the first person and the key frame sequences of the second persons, and extracting the key frame sequences of the first N second persons with the smallest reference distance as reference key frame sequences;
Step 206, synthesizing each reference key frame sequence with the key frame sequence of the first person.
The embodiment has been described above, but it is not limited to the specific implementation described, which is illustrative rather than restrictive. Many variants within the scope of this embodiment can be made by those of ordinary skill in the art in light of this disclosure.
Claims (10)
1. A computer system, comprising:
the frame image extraction module is used for extracting a first frame image from the three-dimensional animation video and extracting a second frame image from the two-dimensional animation video; a person recognition module for recognizing a person from the frame image and acquiring person pose data; the information of the person includes the sex of the person, and the person posture data includes person posture parameters; a character matching module for selecting a first character extracted from the first frame image and then matching a second character extracted from the second frame image, the first character having the same information as the second character, the matched second character being marked as a third character; the key frame extraction module is used for screening and obtaining a key frame sequence from the frame image;
the gesture sequence correction module firstly extracts a key frame sequence corresponding to a first person, marks the key frame, wherein the mark value of the key frame to be redrawn is 0, and the mark values of the other key frames are 1;
the reference sequence correction module is used for extracting a key frame sequence corresponding to the third person and then marking key frames of the key frame sequence of the third person;
the key frame sequence synthesis module is used for calculating the reference distances of the key frame sequences of the first person and the second person, and extracting the key frame sequences of the first N second persons with the smallest reference distances of the key frame sequences of the first person as reference key frame sequences; each reference key frame sequence is synthesized with the key frame sequence of the first person.
2. The computer system of claim 1, wherein the means for screening the key frames from the frame images comprises:
step 101, extracting character gesture parameters of a selected character from each frame image to generate a character gesture sequence, wherein each unit of the character gesture sequence comprises the character gesture parameters of the selected character in one frame image;
step 102, traversing backwards from the sequence unit of the character gesture sequence until the first similarity between the traversed sequence unit and the sequence unit from which the traversing starts is smaller than a set first similarity threshold; marking the sequence unit of traversal termination as a first sequence unit;
step 103, iteratively executing step 102 until all sequence units of the character gesture sequence are traversed, wherein the first sequence unit of the character gesture sequence is traversed in the first execution, and the first sequence unit generated in the previous execution is traversed backwards in each subsequent execution;
104, extracting all the generated first sequence units to generate a first human gesture sequence;
step 105, extracting a frame image of the character pose parameter source of the first sequence unit as a key frame, and ordering the key frame according to the time point to generate a key frame sequence.
3. The computer system of claim 2, wherein the first similarity is calculated as:
wherein k is the number of limbs of the human skeleton model; θ_i1 is the included angle between the i-th limb of the character gesture parameters of a sequence unit of one character gesture sequence and the i-th limb of the standard gesture model; θ_i2 is the included angle between the i-th limb of the character gesture parameters of the sequence unit of the other character gesture sequence and the i-th limb of the standard gesture model.
4. The computer system of claim 1, wherein the method of keyframe tagging of the keyframe sequence of the third person comprises:
defining a two-dimensional random field, wherein a node set of the two-dimensional random field is denoted as V, and a set of edges between the nodes is denoted as E;
the energy function of the two-dimensional random field is:
wherein X is the node set; θ_p(x_p) is the potential function of node p; θ_pq(x_p, x_q) is the potential function of the edge between nodes p and q; x_p is the marker value of the image node mapped by node p; x_q is the marker value of the key frame sequence mapped by node q; a marker value of 0 indicates that the character gesture parameters corresponding to the key frame deviate substantially from a real character; a marker value of 1 indicates that the character gesture parameters corresponding to the key frame are close to a real character;
simila_{p,q} represents the second similarity between the character gesture parameters mapped by the key frames mapped by x_p and x_q;
S_1 is a set motion entropy threshold, and S(x_p) is the motion entropy of the key frame mapped by x_p, calculated as follows:
S(x_p) = Σ_{c∈K} cos θ_{c1,c2}, wherein K is the set of limbs of the human skeleton model, and θ_{c1,c2} is the included angle between the c-th limb of the character gesture parameters of the key frame mapped by x_p and the c-th limb of the standard gesture model;
the marker value of each node of the two-dimensional random field is used as the marker value of the key frame sequence when the energy function is minimized.
5. The computer system of claim 4, wherein the second similarity is calculated as:
wherein k is the number of limbs of the human skeleton model; θ_ip is the included angle between the i-th limb of one person's gesture parameters and the i-th limb of the standard gesture model; θ_iq is the included angle between the i-th limb of the other person's gesture parameters and the i-th limb of the standard gesture model.
6. The computer system of claim 1, wherein the key frame sequence synthesis module comprises:
a distance matrix generation module for establishing an m×n distance matrix, wherein the element in the i-th row and j-th column of the distance matrix is denoted d(i, j);
the value of d(i, j) is d_ij, where d_ij represents the second distance or the third distance between the i-th key frame of key frame sequence Q and the j-th key frame of key frame sequence C;
a first distance calculation module for filling the distance matrix with second distances, generating paths K on the distance matrix from the element in row 1, column 1 to the element in row m, column n, summing the values of the distance matrix elements on each path K as its path distance value, selecting the path K with the smallest path distance value as the shortest path, and taking the path distance value of the shortest path as the first distance dist_1;
a fourth distance calculation module for filling the distance matrix with third distances, generating paths K on the distance matrix from the element in row 1, column 1 to the element in row m, column n, summing the values of the distance matrix elements on each path K as its path distance value, selecting the path K with the smallest path distance value as the shortest path, and taking the path distance value of the shortest path as the fourth distance dist_4;
a reference distance calculation module that calculates the reference distance from the first distance and the fourth distance, the reference distance dist_c being calculated as follows:
wherein t is the maximum value of the number of key frames of the two key frame sequences, and k is the number of limbs of the human skeleton model;
and the synthesis module is used for mapping the key frame sequence of the first person and the key frame marked as 0 of the reference key frame sequence, and the third distance between the two key frames establishing the mapping relation is smaller than a set third distance threshold value.
7. The computer system of claim 6, wherein the second distance dist2 is calculated as:
dist2 = Qi - Cj, wherein Qi is the marker value of the ith key frame of the key frame sequence Q and Cj is the marker value of the jth key frame of the key frame sequence C.
8. The computer system of claim 6, wherein the third distance is calculated as:
wherein k is the number of limbs of the human skeleton model, θip is the included angle between the ith limb of one person's posture parameters and the ith limb of the standard posture model, and θiq is the included angle between the ith limb of another person's posture parameters and the ith limb of the standard posture model.
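The third-distance formula itself is not reproduced in the source text; one plausible reading of the wherein-clause is an accumulated difference of per-limb angles measured against the standard posture model. A sketch under that assumption (the exact formula in the patent may differ, e.g. a root-sum-of-squares form):

```python
def third_distance(theta_p, theta_q):
    """Sum of absolute differences between the limb angles of two
    posture parameter sets, each angle taken against the corresponding
    limb of the standard posture model. Assumed form only; the
    patent's actual formula is not shown in the source text."""
    assert len(theta_p) == len(theta_q)  # both have k limb angles
    return sum(abs(p - q) for p, q in zip(theta_p, theta_q))
```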
9. A method of computer processing of images, characterized in that the computer system according to any one of claims 1-8 is applied to perform the steps of:
step 201, extracting a first frame image from a three-dimensional animation video and extracting a second frame image from a two-dimensional animation video;
step 202, identifying a person from a frame image and acquiring person posture data;
step 203, screening and obtaining a key frame sequence from the frame image;
step 204, extracting the key frame sequences corresponding to the first person and the third person, and then marking the key frames;
step 205, calculating the reference distances between the key frame sequence of the first person and the key frame sequences of the second persons, and extracting the key frame sequences of the N second persons with the smallest reference distances to the first person's key frame sequence as reference key frame sequences;
step 206, synthesizing each reference key frame sequence with the key frame sequence of the first person.
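The step order of claim 9 can be sketched as a small pipeline. Every helper below is a hypothetical placeholder standing in for the corresponding module (the patent names no functions), with deliberately trivial bodies:

```python
def screen_key_frames(poses):
    # step 203 stand-in: a real screen would drop near-duplicate
    # frames; here every pose is kept
    return list(poses)

def reference_distance(seq_a, seq_b):
    # step 205 stand-in for the claimed dist_c: difference in
    # sequence length is used purely as a placeholder metric
    return abs(len(seq_a) - len(seq_b))

def synthesize(first_seq, ref_seq):
    # step 206 stand-in: pair key frames index by index
    return list(zip(first_seq, ref_seq))

def process(first_person_seq, second_person_seqs, top_n=1):
    """Steps 203-206: screen key frames, rank the second persons'
    sequences by reference distance, keep the top N as reference
    sequences, and synthesize each with the first person's sequence."""
    keyframes = screen_key_frames(first_person_seq)           # 203
    ranked = sorted(second_person_seqs,
                    key=lambda s: reference_distance(keyframes, s))
    refs = ranked[:top_n]                                     # 205
    return [synthesize(keyframes, r) for r in refs]           # 206
```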
10. A computer storage medium storing a computer system according to any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310246436.8A CN116310015A (en) | 2023-03-15 | 2023-03-15 | Computer system, method and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116310015A true CN116310015A (en) | 2023-06-23 |
Family
ID=86816305
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310246436.8A Pending CN116310015A (en) | 2023-03-15 | 2023-03-15 | Computer system, method and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116310015A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101958007A (en) * | 2010-09-20 | 2011-01-26 | 南京大学 | Three-dimensional animation posture modeling method by adopting sketch |
CN102682302A (en) * | 2012-03-12 | 2012-09-19 | 浙江工业大学 | Human body posture identification method based on multi-characteristic fusion of key frame |
CN110047096A (en) * | 2019-04-28 | 2019-07-23 | 中南民族大学 | A kind of multi-object tracking method and system based on depth conditions random field models |
CN110602527A (en) * | 2019-09-12 | 2019-12-20 | 北京小米移动软件有限公司 | Video processing method, device and storage medium |
CN115797851A (en) * | 2023-02-09 | 2023-03-14 | 安徽米娱科技有限公司 | Animation video processing method and system |
Non-Patent Citations (1)
Title |
---|
Zhang Xiaoxiang et al.: "Multi-category video segmentation based on incremental inference of dynamic dense conditional random fields", Application Research of Computers, vol. 37, no. 12, 31 December 2020 (2020-12-31), pages 3781-3787 * |
Legal Events
Date | Code | Title | Description
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||