WO2022134505A1 - Method for implementing action transitions of multiple dynamic character materials in an animation video
- Publication number: WO2022134505A1 (application PCT/CN2021/101685)
- Authority: WIPO (PCT)
- Prior art keywords: action, node information, node, information, coordinate
- Prior art date: 2020-12-21
Classifications
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings (G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL; G06T13/00—Animation; G06T13/20—3D animation)
- G06T1/0021—Image watermarking (G06T1/00—General purpose image data processing)
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments (G06T7/00—Image analysis; G06T7/20—Analysis of motion)
- G06T2207/10016—Video; image sequence (G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/10—Image acquisition modality)
Definitions
- The invention belongs to the technical field of hand-drawn videos, and in particular relates to a method, a device, an electronic device and a storage medium for realizing action transitions of multiple dynamic character materials in an animation video.
- In the process of animation video production, dynamic character footage is often used. These dynamic characters usually perform a specific action, such as greeting, walking or typing. In many cases, multiple actions need to be spliced together to express the animation, such as sitting down to type after walking, or walking away after saying hello.
- The current implementation usually uses multiple animated character materials directly and plays them one after another in chronological order. Because there is no transition between the actions of the animated characters, the animation looks abrupt and the actions do not connect smoothly, which degrades the overall effect.
- To this end, the present invention provides a method for realizing action transitions of multiple dynamic character materials in an animation video, comprising the following steps: reading the action node information of the character material in the animation video; matching the action node information to the corresponding body parts of the character material; guiding the movement of the body parts according to the start and end position coordinates of the action node information and the time difference between the start and end positions; and combining the body parts of the character material to generate a transition animation.
- The present invention also provides a device for realizing action transitions of multiple dynamic character materials in an animation video, comprising:
- a character material reading module, which reads the action node information of the character material in the animation video;
- a body part matching module, which matches the action node information to the corresponding body parts of the character material;
- a body part movement guidance module, which guides the movement of the body parts of the character material according to the start and end position coordinates of the action node information and the time difference between the start and end positions; and
- an animation storage module, which combines the body parts of the character material to generate the transition animation.
- By reading the action node information of the character material in the animation video, the present invention matches the body parts of the characters with the corresponding action node information and uses the action nodes to guide the movement of the body parts.
- Comparing the character's preceding and following actions yields the difference for each specific body part, from which the coordinate difference of the corresponding action nodes and the time difference between the two actions are obtained.
- Each body part then moves at a uniform speed from its start position to its end position, and the movement time equals the time difference between the two actions.
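- This uniform-speed movement amounts to linearly interpolating each node coordinate over the time difference between the start and end positions. The following is a minimal sketch of the idea; the function name and data layout are illustrative assumptions, not taken from the patent.

```python
def interpolate_node(start, end, t_start, t_end, t):
    """Move a node coordinate at uniform speed from `start` to `end`
    between times t_start and t_end (linear interpolation)."""
    if t <= t_start:
        return start
    if t >= t_end:
        return end
    alpha = (t - t_start) / (t_end - t_start)  # fraction of the time elapsed
    return (start[0] + alpha * (end[0] - start[0]),
            start[1] + alpha * (end[1] - start[1]))

# A wrist node moving from (10, 40) to (30, 20) over a 0.5 s transition:
print(interpolate_node((10, 40), (30, 20), 0.0, 0.5, 0.25))  # (20.0, 30.0)
```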
- Further, the step of matching the action node information to the corresponding body parts of the character material includes reading the coordinate values of the action node information, and grouping the coordinate values according to the character's body parts.
- Correspondingly, the body part matching module includes:
- a coordinate reading unit, which reads the coordinate values of the action node information; and
- a coordinate grouping unit, which groups the coordinate values according to the body parts of the character material.
- The coordinate values of the action node information locate the position of the character material in the animation. The coordinate values are grouped by body part, and the different coordinate values of the same body part are kept relatively static, which ensures that a body part does not deform while it moves.
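- A minimal sketch of this grouping and rigid movement follows; `group_by_part` and `translate_part` are hypothetical helper names, and the (part, coordinate) input format is an assumption made for illustration.

```python
from collections import defaultdict

def group_by_part(nodes):
    """Group node coordinates by the body part they belong to.
    `nodes` is a list of (part_name, (x, y)) pairs."""
    groups = defaultdict(list)
    for part, coord in nodes:
        groups[part].append(coord)
    return groups

def translate_part(coords, dx, dy):
    """Move every coordinate of one body part by the same offset, so the
    points of the part stay relatively static and the part is not deformed."""
    return [(x + dx, y + dy) for x, y in coords]

nodes = [("arm", (10, 40)), ("arm", (14, 48)), ("torso", (0, 0))]
groups = group_by_part(nodes)
print(translate_part(groups["arm"], 5, -3))  # [(15, 37), (19, 45)]
```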
- The step of guiding the movement of the body parts of the character material according to the start and end position coordinates of the action node information and the time difference between the start and end positions includes: calculating the coordinate difference and the time difference between the start and end positions, and locating the position of the body part of the character material at a given time point accordingly.
- Correspondingly, the body part motion guidance module includes:
- a calculation unit, which calculates the coordinate difference and time difference between the start and end positions of the action node information; and
- a coordinate positioning unit, which locates the position of the body part of the character material at a given time point according to that coordinate difference and time difference.
- From these values, the movement trajectory and movement speed of the body part are calculated to guide its movement.
- The body parts of the character material are defined such that different coordinates on the same body part remain relatively static.
- For example, the swing of a person's arm while walking can be understood as the rotation of a line segment around one of its endpoints. If the two ends of the segment are not constrained to remain relatively static, the trajectory of the moving endpoint becomes a straight line instead of an arc, which appears in the animation as the arm shortening mid-swing. Grouping the coordinate values by body part and constraining them avoids such deformation.
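- The numbers below make the arm example concrete: linearly interpolating the wrist shortens the arm halfway through a 90-degree swing, while rotating the segment about the shoulder preserves its length. This is a worked illustration, not code from the patent.

```python
import math

def length(a, b):
    """Euclidean distance between two points."""
    return math.hypot(b[0] - a[0], b[1] - a[1])

shoulder = (0.0, 0.0)
start = (10.0, 0.0)    # arm pointing right
end = (0.0, -10.0)     # arm pointing down after a 90-degree swing

# Naive straight-line interpolation of the wrist, halfway through the swing:
naive_mid = ((start[0] + end[0]) / 2, (start[1] + end[1]) / 2)
print(length(shoulder, naive_mid))   # ~7.07 -- the arm has visibly shortened

# Constrained motion: rotate the segment about the shoulder instead:
angle = -math.pi / 4                 # halfway through the 90-degree swing
rot_mid = (10 * math.cos(angle), 10 * math.sin(angle))
print(length(shoulder, rot_mid))     # 10.0 -- arm length is preserved
```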
- Further, matching the action node information to the corresponding body parts of the character material includes acquiring two images of the character material at adjacent moments and comparing them.
- Comparing the two images to obtain at least one change action type of the action node information includes the following: the two images include a first image and a second image, and the change action type is obtained based on the action node information between them.
- If the first coordinate set is inconsistent with the second coordinate set, it is determined that the first node is inconsistent with its corresponding second node.
- Alternatively, comparing the two images includes: when the similarity between the action node information and preset standard action information is greater than a preset value, the change action type corresponding to that standard action information is obtained.
- Correspondingly, the body part matching module includes:
- an acquisition unit, used to acquire two images of the character material at adjacent moments;
- a comparison unit, configured to compare the two images to obtain at least one change action type of the action node information.
- The comparison unit is also used to perform steps in which the two images include a first image and a second image, and the change action type is obtained based on the action node information.
- A judging unit is configured to determine, when the first coordinate set is inconsistent with the second coordinate set, that the first node is inconsistent with its corresponding second node.
- The comparison unit is further used to obtain, when the similarity with preset standard action information is greater than a preset value, the change action type corresponding to that standard action information.
- The present invention also provides an electronic device comprising a memory and a processor, wherein the memory stores a computer program that can be executed by the processor to implement any of the above methods; the electronic device can be a mobile terminal or a web terminal.
- The present invention also provides a storage medium storing a computer program that implements any of the above methods when executed by a processor.
- The invention further provides a processing method for adding a dynamic watermark to a video.
- The video watermark can be effectively prevented from being covered by other watermarks or removed by an algorithm.
- The copyright protection effect of the video watermark can thus be effectively improved.
- FIG. 1 is a flowchart of a method for implementing action transitions for multiple dynamic character materials in an animation video, provided by an embodiment;
- FIG. 2 is an architecture diagram of the apparatus for the method in FIG. 1, provided by an embodiment;
- FIG. 3 is a flowchart of an improved version of the method in FIG. 1, provided by an embodiment;
- FIG. 4 is an architecture diagram of the body part matching module in FIG. 2;
- FIG. 5 is a flowchart of another improved version of the method in FIG. 1, provided by an embodiment;
- FIG. 6 is a structural diagram of the body movement guidance module in FIG. 2.
- The term "storage medium" may be any medium that can store computer programs, such as a ROM, a RAM, a magnetic disk or an optical disk.
- The term "processor" may be a CPLD (Complex Programmable Logic Device), an FPGA (Field-Programmable Gate Array), an MCU (Microcontroller Unit), a PLC (Programmable Logic Controller), a CPU (Central Processing Unit), or another chip or circuit with data processing functions.
- An electronic device may be any device with data processing and storage functions, and generally includes both fixed terminals (such as desktop computers) and mobile terminals (such as mobile phones, tablets and mobile robots). In addition, the technical features of the different embodiments of the present invention described below can be combined with each other as long as they do not conflict.
- This embodiment provides a method for implementing action transitions for multiple dynamic character materials in an animation video, comprising the following steps: reading the action node information of the character material; matching the action node information to the corresponding body parts; guiding the movement of the body parts according to the start and end position coordinates of the action node information and the time difference between them; and generating a transition animation.
- Correspondingly, the present invention provides a device for realizing action transitions of multiple dynamic character materials in an animation video (see the sketch after this list), including:
- the character material reading module 1, which reads the action node information of the character material in the animation video;
- the body part matching module 2, which matches the action node information to the corresponding body parts of the character material;
- the body part movement guidance module 3, which guides the movement of the body parts of the character material according to the start and end position coordinates of the action node information and the time difference between the start and end positions; and
- the animation storage module 4, which combines the body parts of the character material to generate a transition animation.
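- The class below sketches how the four modules could fit together as one pipeline. Class and method names are illustrative assumptions; the bodies are placeholders, and only the data flow follows the description above.

```python
class TransitionDevice:
    """Sketch of the four-module device; method bodies are placeholders."""

    def read_action_nodes(self, material):           # module 1
        """Read the action node information of the character material."""
        ...

    def match_body_parts(self, node_info):           # module 2
        """Match node information to the corresponding body parts."""
        ...

    def guide_movement(self, parts, time_diff):      # module 3
        """Move each part between start/end coordinates over time_diff."""
        ...

    def store_transition(self, moved_parts):         # module 4
        """Assemble the moved parts into the transition animation."""
        ...

    def generate(self, material, time_diff):
        """Run the modules in order to produce a transition animation."""
        nodes = self.read_action_nodes(material)
        parts = self.match_body_parts(nodes)
        moved = self.guide_movement(parts, time_diff)
        return self.store_transition(moved)
```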
- By reading the action node information of the character material in the animation video, the present invention matches the body parts of the characters with the corresponding action node information and uses the action nodes to guide the movement of the body parts.
- Comparing the character's preceding and following actions yields the difference for each specific body part, from which the coordinate difference of the corresponding action nodes and the time difference between the two actions are obtained.
- Each body part then moves at a uniform speed from its start position to its end position, and the movement time equals the time difference between the two actions.
- Further, the step of matching the action node information to the corresponding body parts of the character material includes reading the coordinate values of the action node information and grouping them by body part.
- Correspondingly, the body part matching module includes:
- the coordinate reading unit 21, which reads the coordinate values of the action node information; and
- the coordinate grouping unit 22, which groups the coordinate values according to the body parts of the character material.
- The coordinate values of the action node information locate the position of the character material in the animation. The coordinate values are grouped by body part, and the different coordinate values of the same body part are kept relatively static, which ensures that a body part does not deform while it moves.
- The step of guiding the movement of the body parts of the character material according to the start and end position coordinates of the action node information and the time difference between the start and end positions includes calculating that coordinate difference and time difference and locating the body part at a given time point.
- Correspondingly, the body part motion guidance module includes:
- the calculation unit 31, which calculates the coordinate difference and time difference between the start and end positions of the action node information; and
- the coordinate positioning unit 32, which locates the position of the body part of the character material at a given time point according to that coordinate difference and time difference.
- From these values, the movement trajectory and movement speed of the body part are calculated to guide its movement.
- The body parts of the character material are defined such that different coordinates on the same body part remain relatively static.
- For example, the swing of a person's arm while walking can be understood as the rotation of a line segment around one of its endpoints. If the two ends of the segment are not constrained to remain relatively static, the trajectory of the moving endpoint becomes a straight line instead of an arc, which appears in the animation as the arm shortening mid-swing. Grouping the coordinate values by body part and constraining them avoids such deformation.
- Animation videos are composed of images of different character materials.
- The action transition method provided by the present invention implements the transition between two adjacent images, so images at two adjacent moments need to be acquired.
- The action node information of the present invention is obtained by comparing these two images.
- Comparing the two images to obtain at least one change action type of the action node information includes the following.
- The two images include a first image and a second image.
- The present invention distinguishes the two adjacent images; the time of the first image may be earlier than that of the second image.
- Each image has multiple nodes, which can be various parts of a human body in the image, various parts of machinery and equipment, and so on.
- A plurality of first nodes in the first image and a plurality of second nodes in the second image are acquired, each second node corresponding to a unique first node.
- A node in the first image is named a first node, and the corresponding node in the second image is named a second node.
- For example, a person's arm corresponds to a first node in the first image and to a second node in the second image; the first node and the second node for the arm are located in the first image and the second image respectively.
- The action node information is the change information between the first node in the first image and the second node in the second image.
- The change action type is obtained based on the action node information.
- Change action types may include elongation, contraction, and so on. For example, if the action node information indicates that the arm has become shorter, the arm is likely contracting, and the change action type at this time may be contraction.
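- As a hedged sketch, the change action type could be derived from the change in a node's segment length between the two frames; the function name, the endpoint-pair representation and the tolerance are assumptions made for illustration.

```python
import math

def change_action_type(first_node, second_node, tol=1e-6):
    """Classify the change between a node's segment in two adjacent frames
    as 'elongation', 'contraction' or 'unchanged' from its length change.
    Each node is an ((x1, y1), (x2, y2)) endpoint pair."""
    def seg_len(node):
        (x1, y1), (x2, y2) = node
        return math.hypot(x2 - x1, y2 - y1)

    diff = seg_len(second_node) - seg_len(first_node)
    if diff > tol:
        return "elongation"
    if diff < -tol:
        return "contraction"
    return "unchanged"

arm_frame1 = ((0, 0), (10, 0))
arm_frame2 = ((0, 0), (7, 0))   # the arm appears shorter in the second image
print(change_action_type(arm_frame1, arm_frame2))  # contraction
```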
- The present invention also includes acquiring a first coordinate set and a second coordinate set.
- The first coordinate set may include the coordinates of all pixels of the first node.
- The second coordinate set may include the coordinates of all pixels of the second node.
- If the first coordinate set is inconsistent with the second coordinate set, it is determined that the first node is inconsistent with its corresponding second node.
- An inconsistency between the two coordinate sets proves that the second node has changed relative to the first node.
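- A minimal sketch of this comparison, treating each node as a set of pixel coordinates (the function name is hypothetical):

```python
def node_changed(first_coords, second_coords):
    """Compare the pixel-coordinate sets of a first node and its
    corresponding second node; any mismatch means the node has changed."""
    return set(first_coords) != set(second_coords)

first = [(3, 4), (3, 5), (4, 5)]
second = [(3, 4), (3, 5), (5, 6)]
print(node_changed(first, second))  # True -- the nodes are inconsistent
```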
- Alternatively, comparing the two images to obtain at least one change action type of the action node information includes presetting standard action information and the corresponding change action types.
- The present invention presets a plurality of standard action information entries, each corresponding to at least one change action type; the standard action information may be a standard arm extension, a standard arm retraction, and so on.
- If the similarity between the action node information and a standard action information entry is greater than a preset value, the change action type corresponding to that standard action information is obtained.
- The preset value can be 70%, 80%, and so on.
- A similarity above the preset value proves that the action node information is similar to the standard action information, and the corresponding change action type is then obtained.
- The present invention therefore introduces a quantified similarity value in order to judge and obtain the change action type.
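- The sketch below illustrates such a thresholded match against preset standard actions. The similarity metric (coordinate-set overlap), the data layout and all names are assumptions; the patent does not specify how similarity is computed.

```python
def best_standard_action(node_info, standards, threshold=0.7):
    """Return the change action type of the standard action whose
    similarity to `node_info` is highest and above `threshold`.
    `standards` is a list of (coordinate_list, change_type) pairs."""
    def similarity(a, b):
        # Placeholder metric: overlap ratio of the two coordinate sets.
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 1.0

    best_type, best_score = None, threshold
    for standard_coords, change_type in standards:
        score = similarity(node_info, standard_coords)
        if score > best_score:
            best_type, best_score = change_type, score
    return best_type

standards = [([(0, 0), (5, 0), (10, 0)], "arm extension"),
             ([(0, 0), (4, 0), (7, 0)], "arm retraction")]
observed = [(0, 0), (5, 0), (10, 0)]
print(best_standard_action(observed, standards))  # arm extension
```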
- The body part matching module of the present invention includes:
- the acquisition unit, which is used to acquire two images of the character material at adjacent moments.
- Animation videos are composed of images of different character materials, and the action transition method provided by the present invention implements the transition between two adjacent images, so images at two adjacent moments need to be acquired.
- The module also includes a comparison unit, configured to compare the two images to obtain at least one change action type of the action node information.
- The action node information of the present invention is obtained by comparing these two images.
- The comparison unit of the present invention is also used to perform the following steps.
- The two images include a first image and a second image.
- The present invention distinguishes the two adjacent images; the time of the first image may be earlier than that of the second image.
- Each image has multiple nodes, which can be various parts of a human body in the image, various parts of machinery and equipment, and so on.
- A plurality of first nodes in the first image and a plurality of second nodes in the second image are acquired, each second node corresponding to a unique first node.
- A node in the first image is named a first node, and the corresponding node in the second image is named a second node.
- For example, a person's arm becomes the second node in the second image, and the first and second nodes of the arm are located in the first image and the second image respectively.
- The corresponding action node information is then acquired: the action node information is the change information between the first node in the first image and the second node in the second image.
- The change action type is obtained based on the action node information.
- Change action types may include elongation, contraction, and so on. For example, if the action node information indicates that the arm has become shorter, the arm is likely contracting, and the change action type at this time may be contraction.
- The present invention also includes a judging unit, configured to perform the following steps.
- The first coordinate set may include the coordinates of all pixels of the first node, and the second coordinate set may include the coordinates of all pixels of the second node.
- If the first coordinate set is inconsistent with the second coordinate set, it is determined that the first node is inconsistent with its corresponding second node; the inconsistency proves that the second node has changed relative to the first node.
- The comparison unit of the present invention is also used to perform the following steps.
- Standard action information and the corresponding change action types are preset.
- The present invention presets a plurality of standard action information entries, each corresponding to at least one change action type; the standard action information may be a standard arm extension, a standard arm retraction, and so on.
- If the similarity between the action node information and a standard action information entry is greater than a preset value (for example 70% or 80%), the action node information is judged similar to the standard action information, and the corresponding change action type is obtained.
- The present invention therefore introduces a quantified similarity value in order to judge and obtain the change action type.
- The present invention also provides an electronic device including a memory and a processor, where the memory stores a computer program that is executed by the processor to implement any of the above methods; the electronic device can be a mobile terminal or a web terminal.
- The present invention also provides a storage medium storing a computer program that implements any of the above methods when executed by a processor.
Abstract
A method for implementing action transitions of multiple dynamic character materials in an animation video. By reading the preceding and following actions of a character in the animation video and automatically filling in a transition animation between the two actions, the workload of the animation producer is reduced and the overall fluency of the animation is improved.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202011523414.4A | 2020-12-21 | 2020-12-21 | Method for implementing action transitions of multiple dynamic character materials in an animation video (published as CN112509101A)
CN202011523414.4 | 2020-12-21 | |
Publications (1)
Publication Number | Publication Date
---|---
WO2022134505A1 | 2022-06-30
Family ID: 74923006
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
PCT/CN2021/101685 (WO2022134505A1) | Method for implementing action transitions of multiple dynamic character materials in an animation video | 2020-12-21 | 2021-06-23
Country Status (2)
Country | Link
---|---
CN | CN112509101A
WO | WO2022134505A1
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN112509101A * | 2020-12-21 | 2021-03-16 | 深圳市前海手绘科技文化有限公司 | Method for implementing action transitions of multiple dynamic character materials in an animation video
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
WO2001033508A1 * | 1999-11-04 | 2001-05-10 | California Institute Of Technology | Automatic generation of animation of synthetic characters
CN104616336A * | 2015-02-26 | 2015-05-13 | 苏州大学 | Animation construction method and device
CN104867171A * | 2015-05-05 | 2015-08-26 | 中国科学院自动化研究所 | Method for generating transition animation for three-dimensional characters
CN110874859A * | 2018-08-30 | 2020-03-10 | 三星电子(中国)研发中心 | Method and device for generating animation
CN112509101A * | 2020-12-21 | 2021-03-16 | 深圳市前海手绘科技文化有限公司 | Method for implementing action transitions of multiple dynamic character materials in an animation video
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN110634174B * | 2018-06-05 | 2023-10-10 | 深圳市优必选科技有限公司 | Expression animation transition method and system, and intelligent terminal
CN110415321B * | 2019-07-06 | 2023-07-25 | 深圳市山水原创动漫文化有限公司 | Animation action processing method and system
- 2020-12-21: CN application CN202011523414.4A filed; published as CN112509101A (status: pending)
- 2021-06-23: PCT application PCT/CN2021/101685 filed; published as WO2022134505A1 (application filing)
Also Published As
Publication number | Publication date
---|---
CN112509101A | 2021-03-16
Legal Events
Code | Title | Description
---|---|---
121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 21908513; Country of ref document: EP; Kind code of ref document: A1
NENP | Non-entry into the national phase | Ref country code: DE
32PN | EP: public notification in the EP bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 26.10.2023)
122 | EP: PCT application non-entry in European phase | Ref document number: 21908513; Country of ref document: EP; Kind code of ref document: A1