WO2014116056A1 - Method, system, and computer-readable recording medium for generating a motion sequence of an animation - Google Patents
Method, system, and computer-readable recording medium for generating a motion sequence of an animation
- Publication number
- WO2014116056A1 (PCT/KR2014/000707, KR2014000707W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- motion
- character
- line
- point
- generating
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/80—2D [Two Dimensional] animation, e.g. using sprites
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
Definitions
- The present invention relates to a method, a system, and a computer-readable recording medium for generating a motion sequence of an animation. More specifically, the present invention generates a moving line representing the path along which a character moves, with reference to a user manipulation input with respect to a reference plane, and assigns at least one motion to the moving line, to at least one section included in the moving line, or to at least one point on the moving line.
- A method, system, and computer-readable recording medium are provided for generating a motion sequence that causes the character to perform the motion when the character is located on the moving line, section, or point to which the motion is assigned.
- 3D animation is frequently used in movies and on TV, and 3D animation editing tools are used for authoring such animations.
- However, conventional 3D animation editing tools are complicated and difficult to use, so they are commonly used only by trained professionals.
- An object of the present invention is to solve all of the problems described above.
- Another object of the present invention is to generate a moving line indicating the path along which a character moves, with reference to a user manipulation input with respect to a reference plane, and to assign a motion of the character to the moving line, to at least one section included in the moving line, or to at least one point on the moving line,
- so as to generate a motion sequence that causes the character to perform the motion when the character is located on the moving line, section, or point to which the motion is assigned.
- Another object of the present invention is to provide a user interface that allows a user to freely and easily control the motions a character performs while moving along its moving line, that is, its movement path.
- According to one aspect of the present invention, a method for generating a motion sequence of an animation comprises: (a) generating, with reference to a first user manipulation input with respect to a reference plane, a moving line representing a path along which a character moves; (b) specifying, with reference to a second user manipulation input with respect to the reference plane, the moving line, at least one section included in the moving line, and at least one point on the moving line; and (c) assigning, with reference to a third user manipulation input with respect to the reference plane, at least one motion to at least one of the moving line, the at least one section, and the at least one point,
- thereby generating a motion sequence that causes the character to perform the assigned at least one motion when the character is located on the moving line, section, or point to which the motion is assigned.
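The claimed steps (a) to (c) can be sketched as a minimal data model. This is an illustrative sketch only; the class and field names (`MovingLine`, `Section`, `assign_motion`, etc.) are assumptions, not terms from the patent:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# Illustrative sketch of steps (a)-(c); all names are assumptions.

@dataclass
class Section:
    start_t: float                 # fraction along the moving line (0.0-1.0)
    end_t: float
    motions: List[str] = field(default_factory=list)

@dataclass
class MovingLine:
    points: List[Tuple[float, float]]                        # step (a): drawn path
    sections: List[Section] = field(default_factory=list)    # step (b): sections
    point_motions: Dict[float, List[str]] = field(default_factory=dict)  # points
    line_motions: List[str] = field(default_factory=list)    # whole-line motions

def assign_motion(line, motion, section=None, point_t=None):
    """Step (c): attach a motion to a section, a point, or the whole line."""
    if section is not None:
        section.motions.append(motion)
    elif point_t is not None:
        line.point_motions.setdefault(point_t, []).append(motion)
    else:
        line.line_motions.append(motion)

# Build a minimal motion sequence.
line = MovingLine(points=[(0, 0), (5, 0), (5, 5)])
line.sections = [Section(0.0, 0.5), Section(0.5, 1.0)]
assign_motion(line, "walk", section=line.sections[0])
assign_motion(line, "run", section=line.sections[1])
assign_motion(line, "jump", point_t=0.5)
```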
- According to another aspect of the present invention, a system for generating a motion sequence of an animation comprises:
- a moving line management unit which generates, with reference to a first user manipulation input with respect to a reference plane, a moving line representing a path along which a character moves, and which specifies, with reference to a second user manipulation input with respect to the reference plane, the moving line, at least one section included in the moving line, and at least one point on the moving line; and
- a motion sequence management unit which assigns, with reference to a third user manipulation input with respect to the reference plane, at least one motion to at least one of the moving line, the at least one section, and the at least one point,
- thereby generating a motion sequence that causes the character to perform the assigned at least one motion when the character is located on the element to which the motion is assigned.
- According to yet another aspect of the present invention, a method for generating a motion sequence of an animation comprises: (a) setting, with reference to a first user manipulation input with respect to a reference plane, at least one motion to be performed by a character; and (b) generating, with reference to a second user manipulation input with respect to the reference plane while the at least one motion is set, a moving line representing a path along which the character moves, and generating a motion sequence that causes the character to perform the set at least one motion when the character is located on the generated moving line.
- According to still another aspect of the present invention, a system for generating a motion sequence of an animation comprises: a motion sequence management unit which sets, with reference to a first user manipulation input with respect to a reference plane, at least one motion to be performed by a character; and
- a moving line management unit which generates, with reference to a second user manipulation input with respect to the reference plane while the at least one motion is set, a moving line representing a path along which the character moves,
- wherein the motion sequence management unit generates a motion sequence that causes the character to perform the set at least one motion when the character is located on the generated moving line.
- According to the present invention, an animation authoring tool is provided that enables even a non-expert to easily author a three-dimensional animation and to accurately input the path and motion of a moving object (i.e., a character).
- According to the present invention, the user can set the movement and motion of the characters within an animation simply by performing simple touch manipulations (for example, tap, hold, drag, pinch, etc.), mouse manipulations, motion manipulations, and the like, so that the user can accurately and skillfully adjust the motion sequence of an animation even on a portable terminal device such as a smartphone or a tablet PC.
- FIG. 1 is a diagram illustrating an internal configuration of a motion sequence generation system according to an embodiment of the present invention.
- FIG. 2 is a diagram illustrating a process of generating a motion sequence according to an embodiment of the present invention.
- FIG. 3 is a diagram exemplarily illustrating a process in which motion is applied to each section of a moving line according to an embodiment of the present invention.
- FIG. 4 is a diagram illustrating a process of generating a motion sequence according to another embodiment of the present invention.
- FIGS. 5 to 14 are diagrams exemplarily illustrating configurations of a user interface that supports generating a motion sequence according to an embodiment of the present invention.
- The motion sequence generation system according to an embodiment of the present invention may be any digital device equipped with memory means and a microprocessor, such as a personal computer (for example, a desktop computer or a notebook computer), a server, a workstation, a PDA, a web pad, a mobile phone, a smartphone, a tablet PC, and the like.
- Any device equipped with a processor having a computing power may be adopted as the motion sequence generation system of the present invention.
- the motion sequence generation system may receive a
- FIG. 1 is a diagram illustrating an internal configuration of a motion sequence generation system according to an embodiment of the present invention.
- As shown in FIG. 1, a motion sequence generation system 100 according to an embodiment of the present invention may include a moving line management unit 110, a motion sequence management unit 120, an animation generation unit 130, a communication unit 140, and a control unit 150.
- At least some of the moving line management unit 110, the motion sequence management unit 120, the animation generation unit 130, the communication unit 140, and the control unit 150 may be program modules that communicate with an external system (not shown). Such program modules may be included in the motion sequence generation system 100 in the form of operating systems, application modules, and other program modules, and may be physically stored in a variety of known storage devices. They may also be stored in a remote storage device capable of communicating with the motion sequence generation system 100.
- Meanwhile, such program modules include, but are not limited to, routines, subroutines, programs, objects, components, and data structures that perform particular tasks or implement particular abstract data types as described below in accordance with the present invention.
- First, the moving line management unit 110 may perform a function of generating, with reference to a first user manipulation input with respect to a reference plane (a virtual plane defined by the display screen on which the character is displayed), a moving line indicating a path along which the character moves.
- Here, the first user manipulation is a manipulation capable of specifying a virtual straight line or curve extending from a first point to a second point on the reference plane, and may include, for example, a touch manipulation extending from a first point to a second point on the display screen, or a motion manipulation that moves a terminal device including a predetermined motion input means from a first position to a second position, or that changes the posture of the terminal device from a first posture to a second posture.
- In addition, the moving line management unit 110 may perform a function of specifying, with reference to a second user manipulation input with respect to the reference plane, at least one section included in the previously generated moving line, or at least one point on the previously generated moving line.
- Here, the second user manipulation is a manipulation capable of specifying the boundary of at least one section included in the moving line, or the position of at least one point on the moving line, and may include, for example, a touch manipulation that draws a virtual line crossing the moving line or that directly selects at least one point on the moving line, and a manipulation that changes the position or posture of a terminal device including a predetermined motion input means.
- Further, the length of at least one section included in the moving line, or the position of at least one point on the moving line, may be changed by an additional user manipulation.
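The section-splitting manipulation described above (drawing a virtual line that crosses the moving line) amounts to standard 2D segment intersection. The following is a generic sketch under that reading, not the patent's actual implementation; for simplicity it assumes at most one cut per path segment:

```python
def seg_intersect(p1, p2, q1, q2):
    """Return the intersection point of segments p1-p2 and q1-q2, or None."""
    (x1, y1), (x2, y2) = p1, p2
    (x3, y3), (x4, y4) = q1, q2
    den = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if den == 0:
        return None  # parallel or collinear: no single crossing point
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / den
    u = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / den
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None

def split_moving_line(path, stroke):
    """Split a polyline path wherever the user-drawn stroke crosses it."""
    cuts = {}  # path-segment index -> cut point (one cut per segment assumed)
    for i in range(len(path) - 1):
        for j in range(len(stroke) - 1):
            hit = seg_intersect(path[i], path[i + 1], stroke[j], stroke[j + 1])
            if hit:
                cuts[i] = hit
    # Build sections between consecutive cut points.
    sections, current = [], [path[0]]
    for i in range(len(path) - 1):
        if i in cuts:
            current.append(cuts[i])
            sections.append(current)
            current = [cuts[i]]
        current.append(path[i + 1])
    sections.append(current)
    return sections
```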
- Next, the motion sequence management unit 120 may assign, with reference to a third user manipulation input with respect to the reference plane, a motion of the character (or an attribute value related to the motion) to the entire moving line, to at least one section included in the moving line, or to at least one point on the moving line,
- and may generate a motion sequence that causes the character to perform an action according to the motion when the character is located on the moving line, section, or point to which the motion is assigned.
- Here, the motions that can be assigned to a moving line, section, or point may include various actions such as walking, running, jumping, rolling, crawling, punching, shrugging, shaking, talking, singing, and motions expressing the emotional state of the character.
- However, the types of motion in the present invention are not necessarily limited to those listed above, and it will be appreciated that they can be added or changed without limit within a scope that can achieve the object of the present invention.
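A motion together with the attribute values mentioned later in this description (sound, performance time, dialogue, direction) might be represented as follows. This is an illustrative sketch; all field names are assumptions, not terms from the patent:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch: a motion plus the attributes the patent mentions
# (sound, performance time, spoken dialogue, direction). Names are assumed.

@dataclass
class Motion:
    name: str                       # e.g. "walk", "jump", "talk"
    sound: Optional[str] = None     # sound played while performing the motion
    duration: float = 1.0           # time (seconds) the character performs it
    dialogue: Optional[str] = None  # line spoken while performing the motion
    direction_deg: float = 0.0      # direction in which the motion is performed

jump = Motion("jump", sound="boing.wav", duration=0.8)
talk = Motion("talk", dialogue="Hello!", duration=2.5)
```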
- In addition, according to an embodiment of the present invention, a graphical user interface may be provided that supports the user in inputting the first user manipulation, the second user manipulation, the third user manipulation, and so on, in order to set a moving line, section, or point and to assign a motion to it.
- Here, the graphical user interface may include graphic elements indicating the various motions of the character that can be assigned to a moving line, section, or point, and a pod controller displayed so as to visually contain those graphic elements; the pod controller may be displayed in correspondence with a moving line, section, or point.
- Next, the animation generation unit 130 may perform a function of generating, with reference to the previously generated moving line, sections, or points and the information about the motions assigned to them (that is, the motion sequence), an animation in which the character moves along the moving line while performing the at least one motion assigned to the entire line, section, or point.
- Further, the animation generation unit 130 may generate an Augmented Reality (AR) video by combining the animation generated according to the motion sequence of the character with a real-world image input from a camera (not shown).
- Specifically, the animation generation unit 130 may track the coordinates of the real-world image and extract 3D mapping coordinates using SLAM (Simultaneous Localization And Mapping) technology.
- With reference to the tracked coordinates and the extracted 3D mapping coordinates, the coordinates at which the animation according to the motion sequence of the character is to be displayed within the real-world image can be determined, and an augmented reality video can be generated by combining the animation with the real-world image at the determined coordinates.
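The final compositing step, placing the animation at the coordinates determined from the tracked real-world image, can be illustrated with a generic pinhole-camera projection. The intrinsic parameters and the `frame.draw` call below are assumptions for illustration, not details from the patent:

```python
# Generic sketch: once SLAM-style tracking yields the 3D coordinate where the
# animation should appear, a pinhole projection gives the pixel at which to
# draw the character. Intrinsics (fx, fy, cx, cy) are illustrative values.

def project_anchor(x, y, z, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Project a camera-frame 3D point (x, y, z) to pixel coordinates (u, v)."""
    if z <= 0:
        raise ValueError("point is behind the camera")
    return (fx * x / z + cx, fy * y / z + cy)

def composite(frame, animation_sprite, anchor_xyz):
    """Draw the animation at the projected anchor within the real-world frame.

    `frame.draw` is a hypothetical rendering API, used only for illustration.
    """
    u, v = project_anchor(*anchor_xyz)
    frame.draw(animation_sprite, at=(int(u), int(v)))
    return frame
```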
- the communication unit 140 performs a function for allowing the motion sequence generation system 100 to communicate with an external device.
- Finally, the control unit 150 performs a function of controlling the flow of data among the moving line management unit 110, the motion sequence management unit 120, the animation generation unit 130, and the communication unit 140. That is, the control unit 150 controls the flow of data from the outside or between the components of the motion sequence generation system, thereby controlling the moving line management unit 110, the motion sequence management unit 120, the animation generation unit 130, and the communication unit 140 so that each performs its own function.
- FIG. 2 is a diagram illustrating a process of generating a motion sequence according to an embodiment of the present invention.
- Referring to FIG. 2, a user may generate a path along which a character or object moves, that is, a moving line 220, through various manipulation methods such as touch manipulation, mouse manipulation, and motion manipulation with respect to the reference plane 210.
- Then, using similar manipulation methods with respect to the reference plane 210, the user may segment the moving line 220 into a plurality of sections 221 to 223, or may specify at least one point 224 and 225 on the moving line 220.
- For example, the user may draw a line crossing the moving line 220 so that the moving line 220 is segmented, or a point on it is specified, based on the intersection point; alternatively, the user may specify a point on the moving line 220 by directly selecting it.
- Then, the user may assign various motions 231 to 233 to the respective sections 221 to 223 of the segmented moving line, thereby generating a motion sequence in which the sections 221 to 223 of the moving line are combined with the respective motions 231 to 233.
- Similarly, the user may assign motions to the entire moving line, or to at least one point specified on the moving line, so that a motion sequence is generated that causes the character to perform the assigned motions when the character is located on the moving line, section, or point to which the motion is applied.
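During playback, the character's position along the moving line determines which section's motion it performs. A minimal lookup, with illustrative (start, end, motion) triples, could look like:

```python
# Sketch of playback lookup: given how far the character has traveled along
# the moving line (as a 0-1 fraction), find the section whose motion applies.
# The section boundaries and motion names below are illustrative.

def active_motion(sections, t):
    """Return the motion of the section containing parameter t, else None."""
    for start, end, motion in sections:
        if start <= t < end:
            return motion
    return None

sections = [(0.0, 0.4, "walk"), (0.4, 0.7, "run"), (0.7, 1.0, "crawl")]
```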
- FIG. 3 is a diagram exemplarily illustrating a process in which motion is applied to each section of a moving line according to an embodiment of the present invention.
- Graphical user interfaces 341 to 343 that support the assignment of motion may be displayed in correspondence with each section of the moving line, so that the user can easily select, through the graphical user interfaces 341 to 343 displayed for each section, the motion to be assigned to the corresponding section.
- Referring to FIG. 3B, the graphical user interfaces 341 to 343 displayed in correspondence with each section of the moving line may be adaptively arranged according to the direction of each section, making it easier for the user to select the motion to be assigned to each section of the moving line.
- FIG. 4 is a diagram illustrating a process of generating a motion sequence according to another embodiment of the present invention.
- Referring to FIG. 4, a user may generate a path along which a character or object moves, that is, a moving line 420, by inputting a first user manipulation such as a touch manipulation, a mouse manipulation, or a motion manipulation with respect to the reference plane 410.
- Then, when a second user manipulation (for example, a multi-touch manipulation, a shaking manipulation, etc.) is input with respect to the reference plane 410, a graphical user interface 440 for supporting the user in setting a motion may be provided. If the user selects a desired motion through the graphical user interface, the selected motion can be assigned to the moving line generated before the second user manipulation was input, or to the moving line to be generated after the second user manipulation is input.
- After the motion is set, a user manipulation for generating the moving line may be input again.
- That is, the order in which the moving line is generated and the order in which the motions are set may be interchanged, and a motion may be assigned to the entire moving line without dividing the moving line into a plurality of sections or specifying points on the moving line.
- For example, when the second user manipulation is input first and one of the various motions that can be applied to the character is selected, and the first user manipulation is input afterward to generate the moving line of the character, a motion sequence in which the selected motion is assigned to that moving line may be generated.
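The motion-first ordering described above can be sketched as a small builder that holds the selected motion until a moving line is drawn. This is an illustrative sketch; the class and method names are assumptions:

```python
# Sketch of the alternative ordering: the motion is selected first (second
# user manipulation), then the moving line is drawn (first user manipulation),
# and the stored motion is applied to the newly created line.

class SequenceBuilder:
    def __init__(self):
        self.pending_motion = None
        self.sequence = []          # list of (path, motion) pairs

    def select_motion(self, motion):
        """Second user manipulation, input first: remember the chosen motion."""
        self.pending_motion = motion

    def draw_line(self, path):
        """First user manipulation, input later: bind the motion to the line."""
        self.sequence.append((path, self.pending_motion))
        self.pending_motion = None

b = SequenceBuilder()
b.select_motion("run")
b.draw_line([(0, 0), (3, 4)])
```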
- FIGS. 5 to 14 are diagrams illustrating various configurations of a user interface for supporting the generation of a motion sequence according to an embodiment of the present invention.
- a moving line 530 indicating a movement path of the character 520 may be generated.
- Two sections 631 and 632 and one point 633 included in the moving line 630 may be specified.
- A pod controller 740 may be provided to assist the user in assigning a motion to the moving line 730, the sections 731 and 732, or the point 733.
- The pod controller 740 may visually include graphic elements 761 to 764 corresponding to the motions that may be assigned to the corresponding point 733, and a selection menu 760 listing the graphic elements 761 to 764 may be displayed together with the pod controller 740.
- Referring to FIG. 8, a point 833 on a moving line 830, which is the movement path of the character 820, is specified, and a pod controller 840 and a motion selection menu 860 corresponding to the point 833 are displayed.
- When a motion is selected by a user manipulation (e.g., hold-and-drag, drag-and-drop, etc.), the graphic element 862 indicating the selected motion may move inward into the pod controller 840 and be displayed as included in the pod controller 840.
- The pod controller 940 may include a plurality of graphic elements 962 and 963 representing a plurality of motions, and the order of the plurality of graphic elements 962 and 963 included in the pod controller 940 may be controlled according to a user input.
- The graphic element 1062 representing a motion can be highlighted, and the corresponding graphic element 1062 can be deleted according to an additional user input 1070; further, according to additional user inputs, various attributes may be set (1080), such as the sound generated when the character 1020 performs the motion, the time for which the character 1020 performs the motion, and the dialogue that the character 1020 speaks while performing the motion.
- In the circular pod controller 1140, while a specific graphic element 1162 is selected, the time for the character 1120 to perform the motion corresponding to the specific graphic element 1162 may be adjusted (1182) with reference to a user manipulation input along the circumference of the controller.
- Likewise, while a specific graphic element representing the character's motion is selected in the circular pod controller, the direction in which the motion corresponding to the element is performed can be specified with reference to a user manipulation input along the circumference of the circular pod controller.
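The circumference manipulation on the circular pod controller can be read as mapping the angle swept by a drag to a duration (or a heading). A generic sketch follows, where the 10-seconds-per-full-turn scale is an assumption for illustration:

```python
import math

# Sketch of the circular pod-controller interaction: the angle swept by a
# drag along the controller's circumference is mapped to a motion duration.

def drag_angle(center, start, end):
    """Angle (radians) swept from start to end around the controller center."""
    a0 = math.atan2(start[1] - center[1], start[0] - center[0])
    a1 = math.atan2(end[1] - center[1], end[0] - center[0])
    return (a1 - a0) % (2 * math.pi)

def angle_to_duration(angle, seconds_per_turn=10.0):
    """Map a swept angle to a motion duration (scale is an assumed example)."""
    return angle / (2 * math.pi) * seconds_per_turn

angle = drag_angle((0, 0), (1, 0), (0, 1))   # quarter turn counter-clockwise
duration = angle_to_duration(angle)          # 2.5 s at 10 s per full turn
```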
- In addition, a predetermined character string 1283 may be input according to a user manipulation, so that the dialogue spoken by the character while performing the motion may be set.
- The pod controller 1340 may visually include graphic elements 1361 to 1364 corresponding to the motions that may be assigned to the corresponding section 1332, and may be displayed along with a selection menu 1360 listing the graphic elements 1361 to 1364 corresponding to the motions that may be assigned to the point 1333 via the pod controller 1340.
- Referring to FIG. 14, the section 1432 included in the moving line 1430, which is the movement path of the character 1420, is specified, and the pod controller 1440 and the motion selection menu 1460 corresponding to the section 1432 are displayed.
- When a motion is selected, the graphic element 1462 indicating the selected motion may move into the pod controller 1440 and be displayed as included in the pod controller 1440.
- The kinds of motion that can be assigned to the point 733 (that is, motions that can be performed in place, for example, doing nothing (761), shaking (762), crushing (763), punching (764), etc.) and the kinds of motion that can be assigned to the section 1332 or to the entire moving line (that is, motions performed while moving, for example, walking (1361), running (1362), sneaking (1363), crawling (1364), etc.) may be set differently.
- Meanwhile, the user may load a motion sequence already generated and stored by himself or another user and apply it to the current character 520, 620, and may also save a newly generated motion sequence so that he or another user can utilize it later.
- the user may load a pre-stored motion sequence or store a newly generated motion sequence by selecting the preset icons 510 and 610 displayed on the screen.
- Embodiments according to the present invention described above may be implemented in the form of program instructions that may be executed by various computer components, and may be recorded in a computer-readable recording medium.
- the computer-readable recording medium may include program instructions, data files, data structures, etc. alone or in combination.
- Program instructions recorded on the computer-readable recording medium may be those specially designed and configured for the present invention, or may be known and available to those skilled in the computer software arts.
- Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, and flash memory.
- Examples of program instructions include not only machine code generated by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like.
- the hardware device may be configured to operate as one or more software modules to perform the process according to the invention, and vice versa.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
Claims (17)
- A method for generating a motion sequence of an animation, comprising: (a) generating, with reference to a first user manipulation input with respect to a reference plane, a moving line representing a path along which a character moves; (b) specifying, with reference to a second user manipulation input with respect to the reference plane, the moving line, at least one section included in the moving line, and at least one point on the moving line; and (c) assigning, with reference to a third user manipulation input with respect to the reference plane, at least one motion to at least one of the moving line, the at least one section, and the at least one point, thereby generating a motion sequence that causes the character to perform the assigned at least one motion when the character is located on at least one of the moving line, the at least one section, and the at least one point to which the at least one motion is assigned.
- The method of claim 1, wherein, in step (a), the first user manipulation is a manipulation of drawing a virtual line extending from a first point to a second point on the reference plane.
- The method of claim 1, wherein, in step (b), the at least one section and the at least one point are specified with reference to at least one of a manipulation of drawing a virtual line crossing the moving line and a manipulation of selecting the moving line.
- The method of claim 1, wherein at least one of addition, deletion, and ordering of the at least one motion can be controlled.
- The method of claim 1, wherein the at least one motion is given attributes for at least some of: a sound generated when the character performs the at least one motion, a time during which the character performs the at least one motion, a dialogue spoken by the character while performing the at least one motion, and a direction in which the character performs the at least one motion.
- The method of claim 1, wherein a graphical user interface (GUI) is provided that supports input of at least one of the first user manipulation, the second user manipulation, and the third user manipulation.
- The method of claim 6, wherein the graphical user interface includes at least one graphic element representing each of the at least one motion, and a pod controller containing the at least one graphic element.
- The method of claim 7, wherein at least one of addition, deletion, and ordering of the at least one motion is controlled with reference to a user manipulation input with respect to the graphic element and the pod controller.
- The method of claim 7, wherein, with reference to a user manipulation input with respect to the graphic element and the pod controller, the at least one motion is given attributes for at least some of: a sound generated when the character performs the at least one motion, a time during which the character performs the at least one motion, a dialogue spoken by the character while performing the at least one motion, and a direction in which the character performs the at least one motion.
- The method of claim 7, wherein the graphic element and the pod controller are displayed in correspondence with at least one of the path, the at least one section, and the at least one point.
- The method of claim 1, further comprising: (d) generating, with reference to the generated motion sequence, an animation including a scene in which the character, while moving along the moving line, performs the at least one motion assigned to at least one of the moving line, the at least one section, and the at least one point.
- The method of claim 1, further comprising: (e) generating an Augmented Reality (AR) video by combining an animation including the motion sequence of the character with a real-world image input from a camera.
- The method of claim 12, wherein step (e) comprises: (e1) tracking the coordinates of the real-world image and extracting 3D mapping coordinates of the real-world image using SLAM (Simultaneous Localization And Mapping) technology; (e2) determining, with reference to information about the tracked coordinates and the extracted 3D mapping coordinates, the coordinates at which the animation according to the motion sequence of the character is to be displayed within the real-world image; and (e3) generating the augmented reality video by combining the animation, for which the display coordinates have been determined, with the real-world image.
- A method for generating a motion sequence of an animation, comprising: (a) setting, with reference to a first user manipulation input with respect to a reference plane, at least one motion to be performed by a character; and (b) generating, with reference to a second user manipulation input with respect to the reference plane while the at least one motion is set, a moving line representing a path along which the character moves, and generating a motion sequence that causes the character to perform the set at least one motion when the character is located on the generated moving line.
- A computer-readable recording medium storing a computer program for executing the method according to any one of claims 1 to 14.
- A system for generating a motion sequence of an animation, comprising: a moving line management unit which generates, with reference to a first user manipulation input with respect to a reference plane, a moving line representing a path along which a character moves, and which specifies, with reference to a second user manipulation input with respect to the reference plane, the moving line, at least one section included in the moving line, and at least one point on the moving line; and a motion sequence management unit which assigns, with reference to a third user manipulation input with respect to the reference plane, at least one motion to at least one of the moving line, the at least one section, and the at least one point, thereby generating a motion sequence that causes the character to perform the assigned at least one motion when the character is located on at least one of the moving line, the at least one section, and the at least one point to which the at least one motion is assigned.
- A system for generating a motion sequence of an animation, comprising: a motion sequence management unit which sets, with reference to a first user manipulation input with respect to a reference plane, at least one motion to be performed by a character; and a moving line management unit which generates, with reference to a second user manipulation input with respect to the reference plane while the at least one motion is set, a moving line representing a path along which the character moves, wherein the motion sequence management unit generates a motion sequence that causes the character to perform the set at least one motion when the character is located on the generated moving line.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201480005736.6A CN104969263B (zh) | 2013-01-24 | 2014-01-24 | 用于生成动画的运动序列的方法、系统及计算机可读记录介质 |
EP14743842.8A EP2950274B1 (en) | 2013-01-24 | 2014-01-24 | Method and system for generating motion sequence of animation, and computer-readable recording medium |
JP2015555106A JP6647867B2 (ja) | 2013-01-24 | 2014-01-24 | アニメーションのモーションシーケンスを生成するための方法、システム及びコンピュータ読み取り可能な記録媒体 |
US14/763,499 US10037619B2 (en) | 2013-01-24 | 2014-01-24 | Method and system for generating motion sequence of animation, and computer-readable recording medium |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2013-0008034 | 2013-01-24 | ||
KR20130008034 | 2013-01-24 | ||
KR20130102747 | 2013-08-28 | ||
KR10-2013-0102747 | 2013-08-28 | ||
KR1020130135624A KR20140095414A (ko) | 2013-01-24 | 2013-11-08 | 애니메이션의 모션 시퀀스를 생성하기 위한 방법, 시스템 및 컴퓨터 판독 가능한 기록 매체 |
KR10-2013-0135624 | 2013-11-08 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014116056A1 true WO2014116056A1 (ko) | 2014-07-31 |
Family
ID=51743868
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2014/000707 WO2014116056A1 (ko) | 2013-01-24 | 2014-01-24 | 애니메이션의 모션 시퀀스를 생성하기 위한 방법, 시스템 및 컴퓨터 판독 가능한 기록 매체 |
Country Status (6)
Country | Link |
---|---|
US (1) | US10037619B2 (ko) |
EP (1) | EP2950274B1 (ko) |
JP (2) | JP6647867B2 (ko) |
KR (2) | KR20140095414A (ko) |
CN (1) | CN104969263B (ko) |
WO (1) | WO2014116056A1 (ko) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9734618B2 (en) * | 2013-11-25 | 2017-08-15 | Autodesk, Inc. | Animating sketches via kinetic textures |
USD906348S1 (en) * | 2014-11-26 | 2020-12-29 | Intergraph Corporation | Computer display screen or portion thereof with graphic |
WO2017164511A1 (ko) * | 2016-03-25 | 2017-09-28 | (주) 애니펜 | 애니메이션을 저작하기 위한 방법, 시스템 및 컴퓨터 판독 가능한 기록 매체 |
US10551993B1 (en) * | 2016-05-15 | 2020-02-04 | Google Llc | Virtual reality content development environment |
KR101947160B1 (ko) * | 2018-06-20 | 2019-02-12 | (주)코딩앤플레이 | 증강현실을 이용한 코딩교육 방법 |
WO2020130692A1 (ko) * | 2018-12-19 | 2020-06-25 | (주) 애니펜 | 애니메이션 시퀀스를 생성하는 방법, 시스템 및 비일시성의 컴퓨터 판독 가능 기록 매체 |
KR20210101518A (ko) * | 2020-02-10 | 2021-08-19 | 삼성전자주식회사 | Ar 객체를 배치하는 방법 및 전자 장치 |
US11462016B2 (en) * | 2020-10-14 | 2022-10-04 | Meta Platforms Technologies, Llc | Optimal assistance for object-rearrangement tasks in augmented reality |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20030056294A (ko) * | 2001-12-28 | 2003-07-04 | Electronics and Telecommunications Research Institute | Method and system for producing 3D character animation |
JP2003334386A (ja) * | 2002-05-21 | 2003-11-25 | Sega Corp | Game control method, program, recording medium, and video game device |
US20050134598A1 (en) * | 2003-12-19 | 2005-06-23 | Baxter Brent S. | Method and apparatus for producing animation |
KR100623173B1 (ko) * | 2005-08-02 | 2006-09-12 | NHN Corp. | System, implementation method, and production method for game character animation |
KR20080052272A (ko) * | 2006-12-05 | 2008-06-11 | Electronics and Telecommunications Research Institute | Method and system for producing cartoon animation using character animation and mesh deformation |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060274070A1 (en) * | 2005-04-19 | 2006-12-07 | Herman Daniel L | Techniques and workflows for computer graphics animation system |
JP5044503B2 (ja) * | 2008-08-06 | 2012-10-10 | Kawai Musical Instruments Mfg. Co., Ltd. | Performance image reproduction device, performance image reproduction method, performance image reproduction program, and recording medium |
JP2010108319A (ja) * | 2008-10-30 | 2010-05-13 | Kyushu Institute Of Technology | Drawing control device, drawing control method, and drawing control program |
US20110012903A1 (en) * | 2009-07-16 | 2011-01-20 | Michael Girard | System and method for real-time character animation |
JP4627802B1 (ja) * | 2009-08-04 | 2011-02-09 | Smile Lab Co., Ltd. | Virtual model display system and virtual model display method |
KR20110045719A (ko) * | 2009-10-27 | 2011-05-04 | 주식회사 숀픽쳐스 | Animation production method, computer-readable recording medium storing a program for executing the production method, and online animation production system using the same |
JP2011159163A (ja) * | 2010-02-02 | 2011-08-18 | Sony Corp | Image processing apparatus, image processing method, and program |
CN101853503A (zh) * | 2010-04-26 | 2010-10-06 | Huazhong University of Science and Technology | Multi-scale optimization-based segmentation method for spectral line inflection points and its application |
JP5689953B2 (ja) * | 2010-05-25 | 2015-03-25 | JEON, Jae Woong | Animation authoring system and animation authoring method |
US20120122570A1 (en) * | 2010-11-16 | 2012-05-17 | David Michael Baronoff | Augmented reality gaming experience |
US9367770B2 (en) * | 2011-08-30 | 2016-06-14 | Digimarc Corporation | Methods and arrangements for identifying objects |
WO2013074926A1 (en) * | 2011-11-18 | 2013-05-23 | Lucasfilm Entertainment Company Ltd. | Path and speed based character control |
CN102831401B (zh) * | 2012-08-03 | 2016-01-13 | Fan Xiaodong | Method and system for tracking, three-dimensional overlay, and interaction with target objects without specific markers |
2013
- 2013-11-08 KR KR1020130135624A patent/KR20140095414A/ko active Application Filing

2014
- 2014-01-24 CN CN201480005736.6A patent/CN104969263B/zh active Active
- 2014-01-24 EP EP14743842.8A patent/EP2950274B1/en active Active
- 2014-01-24 JP JP2015555106A patent/JP6647867B2/ja active Active
- 2014-01-24 US US14/763,499 patent/US10037619B2/en active Active
- 2014-01-24 WO PCT/KR2014/000707 patent/WO2014116056A1/ko active Application Filing

2015
- 2015-05-18 KR KR1020150069060A patent/KR101575092B1/ko active IP Right Grant

2018
- 2018-08-17 JP JP2018153483A patent/JP2018198083A/ja active Pending
Also Published As
Publication number | Publication date |
---|---|
JP2018198083A (ja) | 2018-12-13 |
CN104969263B (zh) | 2019-01-15 |
CN104969263A (zh) | 2015-10-07 |
US20160005207A1 (en) | 2016-01-07 |
KR20150067096A (ko) | 2015-06-17 |
EP2950274B1 (en) | 2021-04-14 |
US10037619B2 (en) | 2018-07-31 |
EP2950274A4 (en) | 2016-07-27 |
KR20140095414A (ko) | 2014-08-01 |
EP2950274A1 (en) | 2015-12-02 |
JP6647867B2 (ja) | 2020-02-14 |
KR101575092B1 (ko) | 2015-12-08 |
JP2016509722A (ja) | 2016-03-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2014116056A1 (ko) | Method, system, and computer-readable recording medium for generating a motion sequence of an animation | |
US10249095B2 (en) | Context-based discovery of applications | |
US10185468B2 (en) | Animation editor | |
WO2012067369A2 (en) | Method and apparatus for displaying user interface capable of intuitively editing and browsing folder | |
WO2019017582A1 (ko) | Method and system for automatically generating AR content by collecting crowdsourced AR content templates | |
WO2014062001A1 (ko) | Method, system, and computer-readable recording medium for controlling a virtual camera in a three-dimensional virtual space | |
WO2014062003A1 (ko) | Method, system, and computer-readable recording medium for generating crowd animation | |
US8427502B2 (en) | Context-aware non-linear graphic editing | |
Medeiros et al. | A tablet-based 3d interaction tool for virtual engineering environments | |
US20170039037A1 (en) | Live mobile application visual editor demo | |
TWI624782B (zh) | Method and system for editing hyperlinks in a stereoscopic scene | |
WO2016053029A1 (ko) | Method, system, and computer-readable recording medium for generating a message including a virtual space and a virtual object | |
JP2019532385A (ja) | System for composing or modifying a virtual reality sequence, composition method, and system for reading such a sequence | |
WO2019124850A1 (ko) | Method and system for personifying and interacting with objects | |
Massó et al. | Direct manipulation of user interfaces for migration | |
Stefanidi et al. | BricklAyeR: a platform for building rules for AmI environments in AR | |
WO2017164511A1 (ko) | Method, system, and computer-readable recording medium for authoring animation | |
Abidin et al. | A framework of adaptive multimodal input for location-based augmented reality application | |
Lu et al. | Design of immersive and interactive application based on augmented reality and machine learning | |
CN111522439B (zh) | Virtual prototype revision method, apparatus, device, and computer storage medium | |
WO2016032303A1 (ko) | Method, system, and non-transitory computer-readable recording medium for controlling scrolling based on context information | |
Krekhov et al. | MorphableUI: a hypergraph-based approach to distributed multimodal interaction for rapid prototyping and changing environments | |
KR101979754B1 (ko) | Method, system, and computer-readable recording medium for authoring animation | |
KR20130049673A (ko) | Method and system for providing object information based on video spot images | |
WO2019198844A1 (ko) | Method and system for controlling a media player | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14743842 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2015555106 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14763499 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2014743842 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: IDP00201505178 Country of ref document: ID |