US20220301249A1 - Animation production system for objects in a virtual space - Google Patents
- Publication number
- US20220301249A1 (U.S. application Ser. No. 17/834,405)
- Authority
- US
- United States
- Prior art keywords
- character
- user
- controller
- track
- action
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/0304—Detection arrangements using opto-electronic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0338—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of limited linear or angular displacement of an operating part of the device from a neutral position, e.g. isotonic or isometric joysticks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0346—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/55—Controlling game characters or game objects based on the game progress
- A63F13/56—Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
- G02B27/0172—Head mounted characterised by optical features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/024—Multi-user, collaborative environment
Definitions
- the present invention relates to an animation production system.
- Virtual cameras are arranged in a virtual space (see Patent Document 1).
- the present invention has been made in view of such a background, and is intended to provide a technology capable of capturing animations in a virtual space.
- the principal invention for solving the above-described problem is an animation production method comprising: placing first and second objects and a virtual camera in a virtual space; controlling an action of the first object in response to an operation from a first user; controlling an action of the second object in response to an operation from a second user; and controlling the camera in response to an operation from the first or second user to shoot the first and second objects.
- animations can be captured in a virtual space.
- FIG. 1 is a diagram illustrating an example of a virtual space displayed on a head mount display (HMD) mounted by a user in an animation production system of the present embodiment;
- FIG. 2 is a diagram illustrating an example of the overall configuration of an animation production system 300 according to an embodiment of the present invention.
- FIG. 3 shows a schematic view of the appearance of a head mount display (hereinafter referred to as an HMD) 110 according to the present embodiment.
- FIG. 4 shows a schematic view of the outside of the controller 210 according to the present embodiment.
- FIG. 5 shows a functional configuration diagram of the HMD 110 according to the present embodiment.
- FIG. 6 shows a functional configuration diagram of the controller 210 according to the present embodiment.
- FIG. 7 is a diagram illustrating a functional configuration of a control device 350 according to the present embodiment.
- FIG. 8 shows a functional configuration diagram of an image producing device 310 according to the present embodiment.
- FIG. 9 is a flowchart illustrating an example of a track generation process according to an embodiment of the present invention.
- FIG. 10 is a flowchart illustrating an example of a track editing process according to an embodiment of the present invention.
- FIG. 11A is a diagram illustrating an example of a user interface for editing a track according to an embodiment of the present invention.
- FIG. 11B is a diagram illustrating an example of a user interface for editing a track according to an embodiment of the present invention.
- An animation production method has the following configuration.
- An animation production method comprising:
- the animation production method according to item 1, the method further comprising:
- the second object is the same object as the first object, the method further comprising:
- FIG. 1 is a diagram illustrating an example of a virtual space displayed on a head mount display (HMD) mounted by a user in an animation production system of the present embodiment.
- a character 4 and a camera 3 are disposed in the virtual space 1, and the character 4 is shot using the camera 3.
- a photographer 2 is disposed in the virtual space 1, and the camera 3 is virtually operated by the photographer 2.
- in the animation production system of the present embodiment, as shown in FIG. 1, the user makes an animation by placing the character 4 and the camera 3 while viewing the virtual space 1 from a bird's-eye view in TPV (third-person view), shooting the character 4 in FPV (first-person view) as the photographer 2, and performing the character 4 in FPV.
- a plurality of characters, such as a character 4 and a character 5, can be disposed, and the user can give a performance while possessing the character 4 or the character 5. That is, in the animation production system of the present embodiment, one user can play a number of roles.
- since the camera 3 can be virtually operated as the photographer 2, natural camera work can be realized and the representation of the movie to be shot can be enriched.
- a plurality of users can collaborate to produce an animation. For example, one user may operate the character 4 to give the performance, and another user may operate the cameraman 2 to photograph the performance of the character 4 with the camera 3.
- tasks such as lighting (performed as a lighting engineer) and voice mixing (performed as a mixer) can be taken on by third and fourth users, so that the work required for one animation is shared.
- the animation production system of the present embodiment can perform these tasks asynchronously.
- after the first user operates the character 4 and gives the performance, the second user operates the cameraman 2 to shoot one or more cuts while changing the angle of the camera 3, and the lighting engineer then performs the lighting while adjusting its color, position, and timing, so that the shooting work can proceed asynchronously.
- FIG. 2 is a diagram illustrating an example of the overall configuration of an animation production system 300 according to an embodiment of the present invention.
- the animation production system 300 may comprise a plurality of character manipulation systems 360 and an image generator 310 serving as a host computer.
- the character manipulation system 360 may also include, for example, an HMD 110 , a controller 210 , and a control device 350 for controlling them.
- An infrared camera (not shown) for detecting the position, orientation and slope of the HMD 110 or the controller 210 can also be added to the character manipulation system 360 .
- These devices may be connected to each other by wired or wireless means.
- each device may be equipped with a USB port to establish communication by cable connection, or communication may be established by wired or wireless means such as HDMI, wired LAN, infrared, Bluetooth™, or WiFi™.
- the control device 350 may be a PC, a game machine, a portable communication terminal, or any other device having a calculation processing function.
- the control device 350 is communicatively connected to the image generating device 310 and can transmit input information from the user by the HMD 110 or the controller 210 to the image generating device 310 .
- FIG. 3 shows a schematic view of the appearance of a head mount display (hereinafter referred to as HMD) 110 according to the present embodiment.
- FIG. 5 shows a functional configuration diagram of the HMD 110 according to the present embodiment.
- the HMD 110 is mounted on the user's head and includes a display panel 120 for placement in front of the user's left and right eyes.
- a display panel 120 displays a left-eye image and a right-eye image, which can provide the user with a three-dimensional image by utilizing the parallax between both eyes. As long as left-eye and right-eye images can be displayed, a left-eye display and a right-eye display may be provided separately, or an integrated display for both eyes may be provided.
- the housing portion 130 of the HMD 110 includes a sensor 140 .
- the sensor 140 may comprise, for example, a magnetic sensor, an acceleration sensor, or a gyro sensor, or a combination thereof, to detect actions such as the orientation or tilt of the user's head.
- when the vertical direction of the user's head is taken as the Y-axis, the axis connecting the center of the display panel 120 with the user (corresponding to the user's anteroposterior direction) as the Z-axis, and the axis corresponding to the user's left-right direction as the X-axis,
- the sensor 140 can detect the rotation angle around the X-axis (the so-called pitch angle), the rotation angle around the Y-axis (the so-called yaw angle), and the rotation angle around the Z-axis (the so-called roll angle).
- the housing portion 130 of the HMD 110 may also include a plurality of light sources 150 (e.g., infrared light LEDs, visible light LEDs).
- a camera (e.g., an infrared camera or a visible-light camera) installed outside the HMD 110 can detect the position, orientation, and tilt of the HMD 110 by detecting these light sources.
- the HMD 110 may be provided with a camera for detecting a light source installed in the housing portion 130 of the HMD 110 .
- the housing portion 130 of the HMD 110 may also include an eye tracking sensor.
- the eye tracking sensor is used to detect the gaze direction and gaze point of the user's left and right eyes.
- FIG. 4 shows a schematic view of the appearance of the controller 210 according to the present embodiment.
- FIG. 6 shows a functional configuration diagram of the controller 210 according to the present embodiment.
- the controller 210 can support the user to make predetermined inputs in the virtual space.
- the controller 210 may be configured as a set of left-hand 220 and right-hand 230 controllers.
- the left hand controller 220 and the right hand controller 230 may each have an operational trigger button 240 , an infrared LED 250 , a sensor 260 , a joystick 270 , and a menu button 280 .
- the operation trigger button 240 is positioned as 240a and 240b at locations where the trigger can be pulled with the middle finger and the index finger when the grip 235 of the controller 210 is gripped.
- the frame 245 formed in a ring-like fashion downward from both sides of the controller 210 is provided with a plurality of infrared LEDs 250 , and a camera (not shown) provided outside the controller can detect the position, orientation and slope of the controller 210 in a particular space by detecting the position of these infrared LEDs.
- the controller 210 may also incorporate a sensor 260 to detect operations such as the orientation or tilt of the controller 210 .
- the sensor 260 may comprise, for example, a magnetic sensor, an acceleration sensor, or a gyro sensor, or a combination thereof.
- the top surface of the controller 210 may include a joystick 270 and a menu button 280 . It is envisioned that the joystick 270 may be moved in a 360 degree direction centered on the reference point and operated with a thumb when gripping the grip 235 of the controller 210 . Menu buttons 280 are also assumed to be operated with the thumb.
- the controller 210 may include a vibrator (not shown) for providing vibration to the hand of the user operating the controller 210 .
- the controller 210 includes an input/output unit and a communication unit for outputting information such as the position, orientation, and slope of the controller 210 via a button or a joystick, and for receiving information from the host computer.
- based on whether or not the user grips the controller 210 and manipulates the various buttons and joysticks, and on the information detected by the infrared LEDs and sensors, the system can determine the motion and posture of the user's hand and display and operate a pseudo hand of the user in the virtual space.
- FIG. 7 shows a functional configuration diagram of a control device 350 according to the present embodiment.
- the control device 350 may be a device such as a PC, a game machine, or a portable communication terminal that has functions for storing the user input information and the sensor data concerning the movement of the user's head or the operation of the controller transmitted from the HMD 110 or the controller 210, for performing predetermined computational processing, and for generating an image.
- the control device 350 may include, for example, an input/output unit 351 for establishing a wired connection with a peripheral device such as an HMD 110 or controller 210 , and a communication unit 352 for establishing a wireless connection such as infrared, Bluetooth, or WiFi.
- Information received from the HMD 110 and/or the controller 210 through the I/O unit 351 and/or the communication unit 352 regarding the movement of the user's head or the operation of the controller is detected by the control unit 353 as input content including the user's position, line of sight, posture, and the like, as well as the user's speech and utterances and the controller operations.
- the control unit 353 may be composed of a CPU. However, by further providing a GPU specialized for image processing, information processing and image processing can be distributed and overall processing efficiency can be improved.
- the control unit 353 includes a user input detecting unit 354 which detects the information (input information) received from the HMD 110 and/or the controller 210 regarding the movement of the user's head, the user's speech and utterances, and the operation of the controller.
- the input information is passed to the communication unit 330 and transmitted to the image producing device 310.
- the communication unit 330 also receives information for reproducing the virtual space 1 from the image producing device 310 (which may be rendered video data or model information of an object disposed in a virtual space) and can display the virtual space 1 on the HMD 110 by the virtual space display unit 355 based on this information.
- FIG. 8 shows a functional configuration diagram of an image producing device 310 according to this embodiment.
- the image generating device 310 may be a general purpose computer, such as a workstation or personal computer, or may be logically implemented by cloud computing.
- the image producing device 310 may include an input/output unit 320 for establishing a wired connection with a peripheral device such as the HMD 110 or the controller 210, and a communication unit 330 for establishing a wireless connection such as infrared, Bluetooth (registered trademark), or WiFi (registered trademark), or a connection over a public telephone line.
- Information (input information) relating to the movement of the user's head or the operation of the controller, input from the HMD 110 and/or the controller 210 in the character manipulation system 360, is transmitted to the control unit 340, where it is detected as input content including the user's position, line of sight, posture, and the like, as well as the user's speech and utterances and the controller operations, and a control program stored in the storage unit 350 is executed in accordance with the user's input content to perform processes such as controlling the character and generating an image.
- the control unit 340 may be composed of a CPU. However, by further providing a GPU specialized for image processing, information processing and image processing can be distributed and overall processing efficiency can be improved.
- the image generating device 310 may also communicate with other computing processing devices to allow other computing processing devices to share information processing and image processing.
- the control unit 340 includes a user input receiving unit 410 for acquiring input information from a user, a character control unit 420 for executing a control program stored in the control program storage unit 520 for a character stored in the character data storage unit 510 of the storage unit 350 , and an image producing unit 430 for generating an image based on a character control.
- the control of the operation of a character can be realized by converting input information, such as the direction or tilt of the user head detected through the HMD 110 or the controller 210 , or the manual operation, into the operation of each portion of the bone structure created in accordance with the movement or restriction of the joints of the human body, and applying the operation of the bone structure to the previously stored character data by relating the bone structure.
- the character control unit 420 can operate various objects in the virtual space 1, such as the character 4 or the camera 3, according to operations by a plurality of users received from the plurality of character manipulation systems 360.
- the action of the character 4 can be controlled based on input information received from the first user via the first character manipulation system 360,
- and the cameraman 2 can be operated based on input information received from the second user via the second character manipulation system 360 to control the camera 3.
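- As a purely illustrative sketch of such routing (the system identifiers, routing table, and dispatch function below are assumptions, not part of the disclosure), input arriving from each character manipulation system could be forwarded to the object it is assigned to drive:

```python
# Hypothetical routing table: each character manipulation system drives one object.
assignments = {
    "system_1": "character 4",   # first user performs the character
    "system_2": "camera 3",      # second user operates the camera via the cameraman 2
}

def dispatch(input_events, apply) -> None:
    """Forward each (system_id, payload) event to the object assigned to that system."""
    for system_id, payload in input_events:
        target = assignments.get(system_id)
        if target is not None:
            apply(target, payload)

log = []
dispatch(
    [("system_1", {"head_pitch": 10}), ("system_2", {"pan": -15})],
    apply=lambda target, payload: log.append((target, payload)),
)
print(log)   # [('character 4', {'head_pitch': 10}), ('camera 3', {'pan': -15})]
```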
- the control unit 340 includes a recording and playback executing unit 440 for recording and playing back an image-generated character on a track, and an editing executing unit 450 for editing each track and generating the final content.
- the storage unit 350 includes a character data storage unit 510 for storing not only image data of a character but also information related to a character such as attributes of a character.
- the control program storage unit 520 stores a program for controlling the operation of a character or an expression in the virtual space.
- the storage unit 350 includes a track storage unit 530 for storing action data composed of parameters for controlling the movement of a character in a dynamic image generated by the image producing unit 430 .
- FIG. 9 is a flowchart illustrating an example of a track generation process according to an embodiment of the present invention.
- the recording and reproduction executing unit 440 of the control unit 340 of the image producing device 310 starts recording for storing action data of the moving image related to operation by the first character in the virtual space in the first track of the track storage unit 530 (S 101 ).
- here, the position of the camera with which the character is to be shot and the viewpoint of the camera (e.g., FPV, TPV) can be set.
- the position where the cameraman 2 is disposed and the angle of the camera 3 can be set with respect to the character 4 corresponding to the first character.
- the recording start operation may be indicated by a remote controller, such as controller 210 , or may be indicated by other terminals.
- the operation may be performed by the user who wears the HMD 110 and manipulates the controller 210 to play the character, or by a user other than the one performing the character.
- the recording process may be automatically started based on detecting an operation by a user who performs the character described below.
- the user input detecting unit 410 of the control unit 340 detects information received from the HMD 110 and/or the controller 210 relating to the movement of the user's head, the user's speech and utterances, and the movement and operation of the controller (S102). For example, when the user wearing the HMD 110 tilts the head, the sensor 140 provided in the HMD 110 detects the tilt and transmits information about the tilt to the image generating device 310. The image generating device 310 receives the information about the user's movement through the communication unit 330, and the user input detecting unit 410 detects the movement of the user's head based on the received information.
- when the user moves or operates the controller 210, the sensor 260 provided in the controller detects the movement and/or operation and transmits information about the movement and/or operation to the image generating device 310.
- the image producing device 310 receives the information related to the user's controller movement and operation through the communication unit 330, and the user input detecting unit 410 detects the user's controller movement and operation based on the received information.
- the character control unit 420 of the control unit 340 controls the action of the first character in the virtual space based on the detected user movement (S103). For example, based on detecting that the user tilts the head, the character control unit 420 tilts the head of the first character. Also, based on detecting that the user lifts the controller and presses a predetermined button on the controller, the character control unit 420 controls the first character to grasp at something while extending its arm upward. In this manner, the character control unit 420 controls the first character to perform the corresponding action each time the user input detecting unit 410 detects an operation by the user transmitted from the HMD 110 or the controller 210.
- parameters related to the movement and/or operation detected by the user input detecting unit 410 are stored in the first track of the track storage unit 530.
- alternatively, the character may be controlled to perform a predetermined performance action without user input; action data relating to the predetermined performance action may be stored in the first track, or both the user's action data and the action data relating to the predetermined action may be stored.
- the recording and reproduction executing unit 440 confirms whether or not the user receives the instruction to end the recording (S 104 ), and when receiving the instruction to end the recording, completes the recording of the first track related to the first character (S 105 ).
- the recording and reproduction executing unit 440 continues the recording process unless the user receives an instruction to end the recording.
- the recording and reproduction executing unit 440 may perform the process of automatically completing the recording when the operation by the user acting as a character is no longer detected. It is also possible to execute the recording termination process at a predetermined time by activating a timer rather than accepting instructions from the user.
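- The recording flow of S101-S105 could be sketched roughly as follows; the frame source, the 30 fps rate, and the inactivity threshold are illustrative assumptions and not taken from the specification:

```python
def record_first_track(input_frames, store, idle_limit: int = 90) -> None:
    """Store detected action parameters frame by frame until a stop request,
    or automatically once no user operation has been detected for idle_limit frames
    (about three seconds at an assumed 30 fps); a timer-based stop could work similarly."""
    idle = 0
    for frame in input_frames:                 # S102: detected HMD/controller input
        if frame.get("stop_requested"):        # S104: explicit end-of-recording instruction
            break
        if frame.get("action") is None:        # no operation detected this frame
            idle += 1
            if idle >= idle_limit:             # automatic completion (S105)
                break
            continue
        idle = 0
        store.append(frame["action"])          # S103: parameters written to the first track

track_data = []
frames = [{"action": {"head": (5, 0, 0)}}, {"action": None},
          {"action": {"head": (8, 0, 0)}}, {"stop_requested": True}]
record_first_track(frames, track_data)
print(len(track_data))   # 2 samples stored before the stop request
```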
- the recording and playback executing unit 440 starts recording for storing action data of the moving image associated with operation by the second character in the virtual space in the second track of the track storage unit 530 (S 106 ).
- here, the position of the camera with which the character is to be shot and the viewpoint of the camera (e.g., FPV, TPV) can be set.
- the position where the cameraman 2 is disposed and the angle of the camera 3 can be set with respect to the character 5 corresponding to the second character.
- for example, the user may set the camera viewpoint to the FPV of the character 5, play back the first track, and perform the desired action as the character 5 while checking the action of the character 4.
- the recording start operation may be indicated by a remote controller, such as controller 210 , or may be indicated by other terminals.
- the operation may be performed by the user who wears the HMD 110 and manipulates the controller 210 to play the character, or by a user other than the one performing the character.
- the recording process may also be automatically initiated based on the detection of an action and/or operation by the user performing the characters described below.
- the user input detecting unit 410 of the control unit 340 detects information received from the HMD 110 and/or the controller 210 relating to the movement of the user's head, the user's speech and utterances, and the movement and operation of the controller (S107).
- the user may be the same user as the user performing the first character, or may be a different user.
- the sensor 140 provided in the HMD 110 detects the tilt and transmits information about the tilt to the image generating device 310 .
- the image generating device 310 receives information about the operation of the user through the communication unit 330 , and the user input detecting unit 410 detects the operation of the user's head based on the received information.
- when the user moves or operates the controller 210, the sensor 260 provided in the controller detects the movement and/or operation and transmits information about the movement and/or operation to the image generating device 310.
- the image producing device 310 receives the information related to the user's controller movement and operation through the communication unit 330, and the user input detecting unit 410 detects the user's controller movement and operation based on the received information.
- the character control unit 420 of the control unit 340 controls the operation of the second character in the virtual space based on the operation of the detected user (S 108 ). For example, based on the user detecting an operation to tilt the head, the character control unit 420 controls to tilt the head of the second character.
- the character control unit 420 also controls the second character to wave its arm with its fingers open (for example, a "bye-bye" gesture) based on the user swinging the controller left and right or pressing a predetermined button on the controller. In this manner, the character control unit 420 controls the second character to perform the corresponding action each time the user input detecting unit 410 detects an operation by the user transmitted from the HMD 110 or the controller 210.
- the parameters related to the movement and/or operation detected by the user input detecting unit 410 are stored in the second track of the track storage unit 530.
- alternatively, the character may be controlled to perform a predetermined performance action without user input; action data relating to the predetermined performance action may be stored in the second track, or both the user's action data and the action data relating to the predetermined action may be stored.
- the recording and reproduction executing unit 440 confirms whether or not the user receives the instruction to end the recording (S 109 ), and when the instruction to terminate the recording is received, the recording of the second track related to the second character is completed (S 110 ).
- the recording and reproduction executing unit 440 continues the recording process unless the user receives an instruction to end the recording.
- the recording and reproduction executing unit 440 may perform the process of automatically completing the recording when the operation by the user acting as a character is no longer detected.
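- The following hedged sketch illustrates the idea behind S106-S110: the first track is played back while the second character's detected actions are stored against the same timeline. The fixed frame rate and the dictionary layout are assumptions made only for this example:

```python
def record_second_track(first_track: list, second_inputs, fps: float = 30.0) -> list:
    """Play back the first character's recorded samples while storing the second
    character's detected actions with matching timestamps (assumed fixed frame rate)."""
    second_track = []
    for frame_index, action in enumerate(second_inputs):
        t = frame_index / fps
        # Playback side: the first character's pose shown for this frame (if any).
        reference = first_track[frame_index] if frame_index < len(first_track) else None
        # Recording side: store the second character's action against the same time.
        if action is not None:
            second_track.append({"time": t, "action": action, "seen_reference": reference})
    return second_track

first = [{"head": (0, 0, 0)}, {"head": (10, 0, 0)}]
second = [{"arm": (0, 0, 0)}, None, {"arm": (45, 0, 0)}]
print(len(record_second_track(first, second)))   # 2 samples recorded against the playback
```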
- FIG. 10 is a flowchart illustrating an example of a track editing process according to an embodiment of the present invention.
- the editing execution unit 450 of the control unit 340 of the image generating device 310 performs a process of editing the first track stored in the track storage unit 530 (S 201 ).
- the user edits a first track (T1) associated with the first character via a user interface for track editing, as shown in FIG. 11A.
- the user interface displays the area in which the first track is stored along a time series. The user selects a desired bar so that the 3D model of the character is rendered based on the parameters and character data of the stored moving image, and the moving image of the character (e.g., the character 4 ) disposed in the virtual space as shown in FIG. 1 is reproduced.
- as a user interface for editing tracks, it is also possible to display, for example, a track name and title (e.g., "first character") in a list format, in addition to the display described above.
- the user interface may allow editing to be performed while the moving image of the character is played directly, or a keyframe of the moving image may be displayed in which the user selects the keyframe to be edited, without showing the tracks or lists as shown in FIG. 11 .
- a process such as changing the color of the character selected as the editing target during the selected time period may also be performed.
- as editing operations, the placement position of the character 4 corresponding to the first character can be changed in the virtual space 1, the shooting position and angle for the character 4 can be changed, and the shooting viewpoint (FPV, TPV) can be changed. The time scale can also be changed or deleted.
- a portion of the operation of the character 4 may be changed as an editing process.
- a user may specify a part of the body to be edited and operate only that specified part using the controller 210 or the HMD 110.
- for example, a user may specify the arm of the first character and manipulate only that arm with the controller 210 while the recorded action of the first character is played back, thereby modifying the movement of the arm.
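- One way to picture this partial edit (entirely illustrative; the bone names and frame layout are assumptions) is to overwrite only the bones of the specified body part while the rest of the recorded pose is played back unchanged:

```python
def edit_body_part(recorded_frames: list, new_frames: list, part_prefix: str = "right_arm") -> list:
    """Replace only the bones whose names start with part_prefix; keep everything else."""
    edited = []
    for old, new in zip(recorded_frames, new_frames):
        frame = dict(old)                                      # start from the recorded pose
        frame.update({bone: rot for bone, rot in new.items()   # overwrite the specified part
                      if bone.startswith(part_prefix)})
        edited.append(frame)
    return edited

recorded = [{"head": (0, 0, 0), "right_arm": (10, 0, 0)}, {"head": (5, 0, 0), "right_arm": (20, 0, 0)}]
redone   = [{"right_arm": (60, 0, 0), "head": (90, 0, 0)}, {"right_arm": (70, 0, 0)}]
print(edit_body_part(recorded, redone))   # head untouched, right_arm replaced
```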
- the editing execution unit 450 performs the process of editing the second track stored in the track storage unit 530 (S 202 ).
- the user selects an option (not shown) to edit a second track (T2) associated with the second character via the user interface for track editing, and the user interface displays a bar, as shown in FIG. 11B, indicating the area in which the second track is stored along the time series.
- the user can adjust the second track relatively, such as synchronizing the playback timing of each track while editing the second track independently of the first track.
- the user may edit other tracks (e.g., a third track (T 3 )).
- as editing operations, the placement position of the character 5 corresponding to the second character can be changed in the virtual space 1, the shooting position and angle for the character 5 can be changed, and the shooting viewpoint (FPV, TPV) can be changed. The time scale can also be changed or deleted.
- the user may change the placement position of the second character 5 in the virtual space 1 opposite the character 4 corresponding to the first character.
- by playing back, in the virtual space, the moving images containing the actions of the characters corresponding to the respective tracks, an animated moving image realized in the virtual space as a whole can be created.
- in addition to reproducing the movement of the character, it is also possible to edit the animated moving image by changing the background set around the character, adding or removing objects around the character, changing the lighting settings, changing the clothing of the character or attaching accessories to it, changing the amount or direction of wind applied to the character, and replacing the character itself while keeping the action data as it is.
- the editing execution unit 450 stores the edited contents in response to a user's request or automatically (S 203 ).
- the editing process of S 201 and S 202 can be performed in any order, and can be moved back and forth.
- the storage processing of S 203 can also be performed each time the editing of S 201 and S 202 is performed. Editing can also be terminated by editing S 201 only.
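- As a rough sketch of the S201-S203 flow (the function names and edit operations below are hypothetical), the editing steps can run in any order and the storage processing can happen after each step:

```python
def edit_session(tracks: dict, edits: list, save) -> None:
    """Apply edit operations to named tracks in whatever order the user chooses,
    saving after each operation (or only on request, per S203)."""
    operations = {
        "shift": lambda track, seconds: [{**f, "time": f["time"] + seconds} for f in track],
        "trim":  lambda track, end: [f for f in track if f["time"] <= end],
    }
    for track_name, op_name, arg in edits:            # S201/S202 in any order
        tracks[track_name] = operations[op_name](tracks[track_name], arg)
        save(tracks)                                   # S203: store the edited contents

saved_versions = []
tracks = {"T1": [{"time": 0.0}, {"time": 1.0}], "T2": [{"time": 0.0}, {"time": 2.0}]}
edit_session(tracks, [("T2", "shift", 0.5), ("T1", "trim", 0.5)],
             save=lambda t: saved_versions.append(len(t)))
print(tracks["T2"][0]["time"], len(tracks["T1"]), len(saved_versions))   # 0.5 1 2
```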
- the track generation process described above can be executed independently for each user. Accordingly, after a track generation process is performed by the first user, a track generation process can be performed by the second user. For example, after the first user operates the character 4 and stores the performance in the track storage unit 530 as a first track, the second user can control the camera 3 by operating the cameraman 2 while playing back the first track, shoot the performance of the character 4 afterwards, and store the operation of the camera 3 (or of the cameraman 2) in a second track. By then playing back the first and second tracks, a video in which the character 4 performed by the first user is shot by the second user can be created.
- the track editing process described above can also be performed independently for each user. Accordingly, after a track is created by the first user, the track can be edited by the second user, so that the action of one object can be defined by combining the parts that the first user and the second user each perform best. For example, after the first user operates the character 4 to record a rap performance on the track, the second user may operate the character 4 to add a dance performed during the rap. In this way, even for the same character 4, a user who is skilled in each type of movement can perform that movement, enabling a high-quality performance overall.
- as described above, the character actions linked to each user's operations can be stored in separate tracks, and animation production can be carried out easily and efficiently by performing editing work within and between the tracks.
- the method disclosed in this embodiment may be applied not only to a character but also to any object having an action (a vehicle, a structure, an article, etc.).
- the HMD 110 may include all or part of the configuration and functions provided by the image producing device 310 .
Abstract
To enable animation to be shot in a virtual space, an animation production method comprises: placing first and second objects and a virtual camera in a virtual space; controlling an action of the first object in response to an operation from a first user; controlling an action of the second object in response to an operation from a second user; and controlling the camera in response to an operation from the first or second user to shoot the first and second objects.
Description
- This application is a continuation of U.S. patent application Ser. No. 17/008,258, filed Aug. 31, 2020, which claims priority to Japanese Patent Application No. 2020-128304, filed Jul. 29, 2020, which are incorporated by reference in their entirety.
- The present invention relates to an animation production system.
- Virtual cameras are arranged in a virtual space (see Patent Document 1).
-
- [PTL 1] Patent Application Publication No. 2017-146651
- No attempt was made to capture animations in the virtual space.
- The present invention has been made in view of such a background, and is intended to provide a technology capable of capturing animations in a virtual space.
- The principal invention for solving the above-described problem is an animation production method comprising: placing first and second objects and a virtual camera in a virtual space; controlling an action of the first object in response to an operation from a first user; controlling an action of the second object in response to an operation from a second user; and controlling the camera in response to an operation from the first or second user to shoot the first and second objects.
- The other problems disclosed in the present application and the method for solving them are clarified in the sections and drawings of the embodiments of the invention.
- According to the present invention, animations can be captured in a virtual space.
- FIG. 1 is a diagram illustrating an example of a virtual space displayed on a head mount display (HMD) worn by a user in an animation production system of the present embodiment;
- FIG. 2 is a diagram illustrating an example of the overall configuration of an animation production system 300 according to an embodiment of the present invention;
- FIG. 3 shows a schematic view of the appearance of a head mount display (hereinafter referred to as an HMD) 110 according to the present embodiment;
- FIG. 4 shows a schematic view of the appearance of the controller 210 according to the present embodiment;
- FIG. 5 shows a functional configuration diagram of the HMD 110 according to the present embodiment;
- FIG. 6 shows a functional configuration diagram of the controller 210 according to the present embodiment;
- FIG. 7 is a diagram illustrating a functional configuration of a control device 350 according to the present embodiment;
- FIG. 8 shows a functional configuration diagram of an image producing device 310 according to the present embodiment;
- FIG. 9 is a flowchart illustrating an example of a track generation process according to an embodiment of the present invention;
- FIG. 10 is a flowchart illustrating an example of a track editing process according to an embodiment of the present invention;
- FIG. 11A is a diagram illustrating an example of a user interface for editing a track according to an embodiment of the present invention;
- FIG. 11B is a diagram illustrating an example of a user interface for editing a track according to an embodiment of the present invention.
- The contents of embodiments of the present invention will be described. An animation production method according to an embodiment of the present invention has the following configuration.
- An animation production method comprising:
- placing first and second objects and a virtual camera in a virtual space;
- controlling an action of the first object in response to an operation from a first user;
- controlling an action of the second object in response to an operation from a second user; and
- controlling the camera in response to an operation from the first or second user to shoot the first and second objects.
- The animation production method according to item 1, the method further comprising:
- recording the action of the first object in a first track; and
- controlling the action of the second object while playing back the action of the first object recorded on the first track.
- The animation production method according to item 1, wherein the second object is the same object as the first object, the method further comprising:
- recording the action of the first object in a first track; and
- changing the action of the first object in response to the operation of the second user while playing back the action of the first object recorded in the first track.
- A specific example of an animation production system according to an embodiment of the present invention will be described below with reference to the drawings. It should be noted that the present invention is not limited to these examples, and is intended to include all modifications within the meaning and scope of equivalence with the appended claims, as indicated by the appended claims. In the following description, the same elements are denoted by the same reference numerals in the description of the drawings and overlapping descriptions are omitted.
- FIG. 1 is a diagram illustrating an example of a virtual space displayed on a head mount display (HMD) worn by a user in an animation production system of the present embodiment. In the animation production system of the present embodiment, a character 4 and a camera 3 are disposed in the virtual space 1, and the character 4 is shot using the camera 3. A photographer 2 is also disposed in the virtual space 1, and the camera 3 is virtually operated by the photographer 2. In the animation production system of the present embodiment, as shown in FIG. 1, the user makes an animation by placing the character 4 and the camera 3 while viewing the virtual space 1 from a bird's-eye view in TPV (third-person view), by shooting the character 4 in FPV (first-person view) as the photographer 2, and by performing the character 4 in FPV. A plurality of characters (in the example shown in FIG. 1, the character 4 and a character 5) can be disposed in the virtual space 1, and the user can give a performance while possessing the character 4 or the character 5. That is, in the animation production system of the present embodiment, one user can play a number of roles. In addition, since the camera 3 can be virtually operated as the photographer 2, natural camera work can be realized and the representation of the movie to be shot can be enriched. Further, in the animation production system of the present embodiment, a plurality of users can collaborate to produce an animation. For example, one user may operate the character 4 to give the performance, and another user may operate the cameraman 2 to photograph the performance of the character 4 with the camera 3. In addition, tasks such as lighting (performed as a lighting engineer) and voice mixing (performed as a mixer) can be taken on by third and fourth users, so that the work required for one animation is shared. Moreover, the animation production system of the present embodiment allows these tasks to be performed asynchronously. That is, after the first user operates the character 4 and gives the performance, the second user can operate the cameraman 2 to shoot one or more cuts while changing the angle of the camera 3, and the lighting engineer can then perform the lighting while adjusting its color, position, and timing, so that the shooting work proceeds asynchronously.
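- Purely as an illustrative sketch of the arrangement described above (every class, method, and coordinate below is a hypothetical assumption, not the disclosed implementation), the scene could be modeled as objects placed in a virtual space, with the user switching between a bird's-eye TPV and an FPV bound to a possessed object:

```python
from dataclasses import dataclass, field

@dataclass
class Placeable:
    name: str
    position: tuple = (0.0, 0.0, 0.0)

@dataclass
class VirtualSpace:
    objects: list = field(default_factory=list)

    def place(self, obj: Placeable, position: tuple) -> Placeable:
        obj.position = position
        self.objects.append(obj)
        return obj

space = VirtualSpace()                       # virtual space 1
character4 = space.place(Placeable("character 4"), (0.0, 0.0, 2.0))
character5 = space.place(Placeable("character 5"), (1.0, 0.0, 2.0))
camera3 = space.place(Placeable("camera 3"), (0.0, 1.6, 0.0))

# The user lays out the scene in TPV, then possesses an object
# (the photographer behind camera 3, or a character) and continues in FPV.
view_mode, possessed = "TPV", None
possessed = camera3
view_mode = "FPV"
print(view_mode, possessed.name, [o.name for o in space.objects])
```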
- FIG. 2 is a diagram illustrating an example of the overall configuration of an animation production system 300 according to an embodiment of the present invention. The animation production system 300 may comprise a plurality of character manipulation systems 360 and an image generator 310 serving as a host computer. The character manipulation system 360 may also include, for example, an HMD 110, a controller 210, and a control device 350 for controlling them. An infrared camera (not shown) for detecting the position, orientation and slope of the HMD 110 or the controller 210 can also be added to the character manipulation system 360. These devices may be connected to each other by wired or wireless means. For example, each device may be equipped with a USB port to establish communication by cable connection, or communication may be established by wired or wireless means such as HDMI, wired LAN, infrared, Bluetooth™, or WiFi™. The control device 350 may be a PC, a game machine, a portable communication terminal, or any other device having a calculation processing function. The control device 350 is communicatively connected to the image generating device 310 and can transmit input information from the user by the HMD 110 or the controller 210 to the image generating device 310.
- FIG. 3 shows a schematic view of the appearance of a head mount display (hereinafter referred to as an HMD) 110 according to the present embodiment. FIG. 5 shows a functional configuration diagram of the HMD 110 according to the present embodiment. The HMD 110 is mounted on the user's head and includes a display panel 120 placed in front of the user's left and right eyes. Although both optically transmissive and non-transmissive displays are contemplated as the display panel, this embodiment illustrates a non-transmissive display panel, which can provide a greater sense of immersion. The display panel 120 displays a left-eye image and a right-eye image, which can provide the user with a three-dimensional image by utilizing the parallax between both eyes. As long as left-eye and right-eye images can be displayed, a left-eye display and a right-eye display may be provided separately, or an integrated display for both eyes may be provided.
- The housing portion 130 of the HMD 110 includes a sensor 140. The sensor 140 may comprise, for example, a magnetic sensor, an acceleration sensor, or a gyro sensor, or a combination thereof, to detect movements such as the orientation or tilt of the user's head. When the vertical direction of the user's head is taken as the Y-axis, the axis connecting the center of the display panel 120 with the user (corresponding to the user's anteroposterior direction) as the Z-axis, and the axis corresponding to the user's left-right direction as the X-axis, the sensor 140 can detect the rotation angle around the X-axis (the so-called pitch angle), the rotation angle around the Y-axis (the so-called yaw angle), and the rotation angle around the Z-axis (the so-called roll angle).
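- For illustration only, the following minimal sketch shows one conventional way a head orientation could be reduced to the pitch, yaw, and roll angles mentioned above; the quaternion input and the axis convention are assumptions made for the example, not part of the disclosed sensor 140:

```python
import math

def head_angles_from_quaternion(w: float, x: float, y: float, z: float):
    """Convert a unit orientation quaternion into pitch (X), yaw (Y), roll (Z) in degrees.

    Assumed axis convention, matching the description above:
    X = left/right axis, Y = vertical axis, Z = anteroposterior axis.
    """
    # pitch: rotation about the X-axis
    pitch = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    # yaw: rotation about the Y-axis
    yaw = math.asin(max(-1.0, min(1.0, 2.0 * (w * y - z * x))))
    # roll: rotation about the Z-axis
    roll = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    return tuple(math.degrees(a) for a in (pitch, yaw, roll))

# Example: a head tilted about 10 degrees downward around the X-axis
print(head_angles_from_quaternion(0.9962, 0.0872, 0.0, 0.0))  # ~ (10.0, 0.0, 0.0)
```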
- In place of or in addition to the sensor 140, the housing portion 130 of the HMD 110 may also include a plurality of light sources 150 (e.g., infrared LEDs or visible-light LEDs). A camera (e.g., an infrared camera or a visible-light camera) installed outside the HMD 110 (e.g., in the room) can detect the position, orientation, and tilt of the HMD 110 in a particular space by detecting these light sources. Alternatively, for the same purpose, the HMD 110 may be provided with a camera for detecting a light source installed in the housing portion 130 of the HMD 110.
- The housing portion 130 of the HMD 110 may also include an eye tracking sensor. The eye tracking sensor is used to detect the gaze direction and gaze point of the user's left and right eyes. Various types of eye tracking sensor are conceivable. For example, the position of reflected light on the cornea, produced by irradiating the left eye and the right eye with weak infrared light, is used as a reference point, the direction of the line of sight is detected from the position of the pupil relative to the position of the reflected light, and the intersection of the lines of sight of the left eye and the right eye is used as the focus point.
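- As a hedged illustration of the focus-point idea described above, the sketch below estimates the intersection of the two detected gaze rays by finding the closest point between them; the vector inputs and numbers are hypothetical and are not taken from the specification:

```python
def gaze_focus_point(left_origin, left_dir, right_origin, right_dir):
    """Approximate the focus point as the midpoint of the shortest segment
    between the left-eye and right-eye gaze rays (origin + t * direction)."""
    def sub(a, b): return [a[i] - b[i] for i in range(3)]
    def add(a, b): return [a[i] + b[i] for i in range(3)]
    def mul(a, s): return [a[i] * s for i in range(3)]
    def dot(a, b): return sum(a[i] * b[i] for i in range(3))

    w0 = sub(left_origin, right_origin)
    a, b, c = dot(left_dir, left_dir), dot(left_dir, right_dir), dot(right_dir, right_dir)
    d, e = dot(left_dir, w0), dot(right_dir, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-9:                      # gaze rays are (nearly) parallel
        t, s = 0.0, (e / c if c else 0.0)
    else:
        t = (b * e - c * d) / denom
        s = (a * e - b * d) / denom
    p_left = add(left_origin, mul(left_dir, t))
    p_right = add(right_origin, mul(right_dir, s))
    return [(p_left[i] + p_right[i]) / 2.0 for i in range(3)]

# Hypothetical example: eyes 6 cm apart, both looking at a point 1 m ahead
print(gaze_focus_point([-0.03, 0, 0], [0.03, 0, 1.0], [0.03, 0, 0], [-0.03, 0, 1.0]))
# -> [0.0, 0.0, 1.0]
```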
- FIG. 4 shows a schematic view of the appearance of the controller 210 according to the present embodiment. FIG. 6 shows a functional configuration diagram of the controller 210 according to the present embodiment. The controller 210 allows the user to make predetermined inputs in the virtual space. The controller 210 may be configured as a set of a left-hand controller 220 and a right-hand controller 230. The left hand controller 220 and the right hand controller 230 may each have an operation trigger button 240, infrared LEDs 250, a sensor 260, a joystick 270, and a menu button 280.
- The operation trigger button 240 is positioned as 240a and 240b at locations where the trigger can be pulled with the middle finger and the index finger when the grip 235 of the controller 210 is gripped. A frame 245 formed in a ring shape downward from both sides of the controller 210 is provided with a plurality of infrared LEDs 250, and a camera (not shown) provided outside the controller can detect the position, orientation and slope of the controller 210 in a particular space by detecting the position of these infrared LEDs.
- The controller 210 may also incorporate a sensor 260 to detect movements such as the orientation or tilt of the controller 210. The sensor 260 may comprise, for example, a magnetic sensor, an acceleration sensor, or a gyro sensor, or a combination thereof. Additionally, the top surface of the controller 210 may include a joystick 270 and a menu button 280. The joystick 270 is assumed to be moved in any direction through 360 degrees around a reference point and to be operated with the thumb when the grip 235 of the controller 210 is gripped. The menu button 280 is likewise assumed to be operated with the thumb. In addition, the controller 210 may include a vibrator (not shown) for providing vibration to the hand of the user operating the controller 210. The controller 210 includes an input/output unit and a communication unit for outputting information such as the position, orientation, and slope of the controller 210 and the inputs made via the buttons or the joystick, and for receiving information from the host computer.
- Based on whether or not the user grips the controller 210 and manipulates the various buttons and joysticks, and on the information detected by the infrared LEDs and the sensors, the system can determine the motion and posture of the user's hand and display and operate a pseudo hand of the user in the virtual space.
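- The following is a speculative sketch of how trigger and grip values might be mapped to such a displayed pseudo hand; the field names, thresholds, and the curl model are illustrative assumptions only:

```python
from dataclasses import dataclass

@dataclass
class ControllerState:
    trigger: float        # 0.0 (released) .. 1.0 (fully pulled), index finger
    grip: float           # 0.0 .. 1.0, middle/ring/little fingers
    thumb_on_stick: bool  # whether the thumb is resting on the joystick

def pseudo_hand_pose(state: ControllerState) -> dict:
    """Map raw controller input to per-finger curl values for the virtual hand."""
    return {
        "index_curl": state.trigger,
        "middle_curl": state.grip,
        "ring_curl": state.grip,
        "little_curl": state.grip,
        # Rest the thumb when it is on the joystick, otherwise extend it.
        "thumb_curl": 0.5 if state.thumb_on_stick else 0.0,
    }

print(pseudo_hand_pose(ControllerState(trigger=1.0, grip=0.2, thumb_on_stick=True)))
```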
- FIG. 7 shows a functional configuration diagram of the control device 350 according to the present embodiment. The control device 350 may be a device such as a PC, a game machine, or a portable communication terminal that has functions for storing the user input information and the sensor data concerning the movement of the user's head or the operation of the controller transmitted from the HMD 110 or the controller 210, for performing predetermined computational processing, and for generating an image. The control device 350 may include, for example, an input/output unit 351 for establishing a wired connection with a peripheral device such as the HMD 110 or the controller 210, and a communication unit 352 for establishing a wireless connection such as infrared, Bluetooth, or WiFi. The information received from the HMD 110 and/or the controller 210 through the input/output unit 351 and/or the communication unit 352 regarding the movement of the user's head or the operation of the controller is detected by the control unit 353 as input content including the user's position, line of sight, posture, and the like, as well as the user's speech and utterances and the controller operations. The control unit 353 may be composed of a CPU; however, by further providing a GPU specialized for image processing, information processing and image processing can be distributed and overall processing efficiency can be improved. The control unit 353 includes a user input detecting unit 354 which detects the information (input information) received from the HMD 110 and/or the controller 210 regarding the movement of the user's head, the user's speech and utterances, and the operation of the controller. The input information is passed to the communication unit 330 and transmitted to the image producing device 310. The communication unit 330 also receives information for reproducing the virtual space 1 from the image producing device 310 (which may be rendered video data or model information of objects disposed in the virtual space) and can display the virtual space 1 on the HMD 110 by the virtual space display unit 355 based on this information.
- FIG. 8 shows a functional configuration diagram of an image producing device 310 according to this embodiment. The image generating device 310 may be a general purpose computer, such as a workstation or personal computer, or may be logically implemented by cloud computing. The image producing device 310 may include an input/output unit 320 for establishing a wired connection with a peripheral device such as the HMD 110 or the controller 210, and a communication unit 330 for establishing a wireless connection such as infrared, Bluetooth (registered trademark), or WiFi (registered trademark), or a connection over a public telephone line. Information (input information) relating to the movement of the user's head or the operation of the controller, input from the HMD 110 and/or the controller 210 in the character manipulation system 360, is transmitted to the control unit 340, where it is detected as input content including the user's position, line of sight, posture, and the like, as well as the user's speech and utterances and the controller operations, and a control program stored in the storage unit 350 is executed in accordance with the user's input content to perform processes such as controlling the character and generating an image. The control unit 340 may be composed of a CPU; however, by further providing a GPU specialized for image processing, information processing and image processing can be distributed and overall processing efficiency can be improved. The image generating device 310 may also communicate with other computing devices so that they share the information processing and image processing.
control unit 340 includes a user input receiving unit 410 for acquiring input information from a user, a character control unit 420 for executing a control program stored in the control program storage unit 520 on a character stored in the character data storage unit 510 of the storage unit 350, and an image producing unit 430 for generating an image based on the character control. Here, the movement of a character can be controlled by converting input information, such as the direction or tilt of the user's head detected through the HMD 110 or the controller 210, or manual operation of the controller, into movements of the individual parts of a bone structure created in accordance with the movement and constraints of the joints of the human body, and then applying the movement of the bone structure to the previously stored character data by associating the bone structure with that data. The character control unit 420 can operate various objects in the virtual space 1, such as the character 4 or the camera 3, according to operations by a plurality of users received from the plurality of character manipulation systems 360. For example, the movement of the character 4 can be controlled based on the input information received from the first user via the first character manipulation system 360, and the cameraman 2 can be operated based on the input information received from the second user via the second character manipulation system 360 to control the camera 3. This allows a division of labor in which the first user performs as the character 4 and the second user films that performance. Further, the control unit 340 includes a recording and playback executing unit 440 for recording and playing back the image-generated character on a track, and an editing executing unit 450 for editing each track and generating the final content.
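As a rough illustration of the head-tracking-to-bone mapping described above, the sketch below applies an HMD orientation to the neck joint of a simple bone structure and clamps it to an assumed joint limit. The data structures and limit values are illustrative assumptions, not the implementation described in the specification.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Bone:
    name: str
    rotation: tuple = (0.0, 0.0, 0.0)            # Euler angles in radians
    children: list = field(default_factory=list)

NECK_PITCH_LIMIT = math.radians(45)              # assumed joint limit for the neck

def apply_head_tracking(skeleton, hmd_pitch, hmd_yaw, hmd_roll):
    """Map the HMD orientation onto the character's neck bone, clamped to the joint limit."""
    neck = skeleton["neck"]
    pitch = max(-NECK_PITCH_LIMIT, min(NECK_PITCH_LIMIT, hmd_pitch))
    neck.rotation = (pitch, hmd_yaw, hmd_roll)
    return neck.rotation

# Usage: tilting the head 60 degrees forward is clamped to the 45-degree neck limit.
skeleton = {"neck": Bone("neck")}
print(apply_head_tracking(skeleton, math.radians(60), 0.0, 0.0))
```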
- The storage unit 350 includes a character data storage unit 510 for storing not only the image data of a character but also character-related information such as the character's attributes. The control program storage unit 520 stores a program for controlling the movement and expression of a character in the virtual space. The storage unit 350 also includes a track storage unit 530 for storing action data composed of parameters for controlling the movement of a character in the moving image generated by the image producing unit 430.
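The track storage unit 530 can be pictured as holding time-stamped pose parameters for each character. A minimal sketch of such a track, with assumed names (Keyframe, Track, record), might look like this:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Keyframe:
    time: float                       # seconds from the start of the take
    bone_rotations: Dict[str, Tuple]  # e.g. {"neck": (pitch, yaw, roll), "r_elbow": ...}

@dataclass
class Track:
    name: str                                             # e.g. "first character"
    keyframes: List[Keyframe] = field(default_factory=list)
    offset: float = 0.0                                   # playback offset used when aligning tracks

    def record(self, time: float, pose: Dict[str, Tuple]) -> None:
        """Append the pose detected for this frame as action data."""
        self.keyframes.append(Keyframe(time, dict(pose)))

# The track storage unit 530 can then be modeled as a mapping of track names to tracks.
track_storage: Dict[str, Track] = {"T1": Track("first character")}
track_storage["T1"].record(0.0, {"neck": (0.1, 0.0, 0.0)})
```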
- FIG. 9 is a flowchart illustrating an example of a track generation process according to an embodiment of the present invention. - First, the recording and
reproduction executing unit 440 of the control unit 340 of the image producing device 310 starts recording, in the first track of the track storage unit 530, action data of the moving image related to the movement of the first character in the virtual space (S101). Here, the position of the camera that shoots the character and the viewpoint of the camera (e.g., FPV, TPV, etc.) can be set. For example, in the virtual space 1 illustrated in FIG. 1, the position where the cameraman 2 is disposed and the angle of the camera 3 can be set with respect to the character 4 corresponding to the first character. The recording start operation may be instructed by a remote controller, such as the controller 210, or may be instructed from another terminal. The instruction may be given by the user who wears the HMD 110 and manipulates the controller 210 to play the character, or by a user other than the user who performs the character. In addition, the recording process may be started automatically upon detecting an operation by the user who performs the character, as described below.
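Before recording starts, the camera placement and shooting viewpoint (FPV or TPV) are chosen. The helper below is a hedged sketch of that setup step; CameraRig and aim_at are invented names, not elements of the disclosed system.

```python
from dataclasses import dataclass

@dataclass
class CameraRig:
    position: tuple = (0.0, 1.6, 3.0)  # where the cameraman 2 stands in the virtual space
    target: tuple = (0.0, 1.6, 0.0)    # the point the camera 3 is aimed at
    viewpoint: str = "TPV"             # "FPV" = first-person view, "TPV" = third-person view

    def aim_at(self, character_position, viewpoint="TPV"):
        """Point the camera at the character and choose the shooting viewpoint."""
        if viewpoint not in ("FPV", "TPV"):
            raise ValueError("viewpoint must be 'FPV' or 'TPV'")
        self.target = character_position
        self.viewpoint = viewpoint

# Before recording the first track, frame character 4 from a third-person viewpoint.
camera = CameraRig()
camera.aim_at((0.0, 1.6, 0.0), "TPV")
```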
- Subsequently, the user input detecting unit 410 of the control unit 340 detects information received from the HMD 110 and/or the controller 210 relating to the movement of the user's head, the user's speech or utterances, and the movement or operation of the controller (S102). For example, when the user wearing the HMD 110 tilts his or her head, the sensor 140 provided in the HMD 110 detects the tilt and transmits information about the tilt to the image producing device 310. The image producing device 310 receives this information about the user's movement through the communication unit 330, and the user input detecting unit 410 detects the movement of the user's head based on the received information. Also, when the user performs a predetermined movement or operation with the controller 210, such as lifting it or pressing a button, the sensor 260 provided in the controller detects the movement and/or operation and transmits information about it to the image producing device 310. The image producing device 310 receives the information related to the user's controller movement and operation through the communication unit 330, and the user input detecting unit 410 detects the controller movement and operation based on the received information. - Subsequently, the
character control unit 420 of the control unit 340 controls the movement of the first character in the virtual space based on the detected user movement (S103). For example, when an operation of tilting the head is detected, the character control unit 420 tilts the head of the first character accordingly. Also, when it is detected that the user has lifted the controller and pressed a predetermined button on it, the character control unit 420 controls the first character to perform an action while extending its arm upward. In this manner, the character control unit 420 controls the first character to perform the corresponding movement each time the user input detecting unit 410 detects a movement or operation by the user transmitted from the HMD 110 or the controller 210. Parameters related to the movement and/or operation detected by the user input detecting unit 410 are stored in the first track of the track storage unit 530. Alternatively, the character may be controlled to perform a predetermined performance action without user input, and the action data relating to that predetermined performance action may be stored in the first track, or both the user-driven action data and the action data relating to the predetermined performance may be stored. - Subsequently, the recording and
reproduction executing unit 440 confirms whether or not an instruction to end the recording has been received from the user (S104) and, upon receiving such an instruction, completes the recording of the first track related to the first character (S105). The recording and reproduction executing unit 440 continues the recording process as long as no instruction to end the recording is received from the user. Here, the recording and reproduction executing unit 440 may automatically complete the recording when movement by the user performing the character is no longer detected. It is also possible to end the recording at a predetermined time by using a timer rather than accepting an instruction from the user.
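Steps S101 through S105 amount to a simple record loop: start a take, convert each detected input into character movement, append the resulting parameters to the track, and stop on an end instruction (or a timer, or when input goes quiet). The sketch below strings the earlier pieces together using the Track object sketched above; detect_input, drive_character, and should_stop are assumed callbacks rather than names from the specification.

```python
import time

def record_track(track, detect_input, drive_character, should_stop, frame_dt=1.0 / 60):
    """Record one take: each frame, turn user input into a character pose and store it."""
    t = 0.0
    while not should_stop(t):               # S104: stop on an end instruction or timer
        user_input = detect_input()         # S102: head / controller state for this frame
        pose = drive_character(user_input)  # S103: convert the input into bone parameters
        track.record(t, pose)               # S103: store the action data in the track
        t += frame_dt
        time.sleep(frame_dt)                # pace the loop to the frame rate
    return track                            # S105: recording of this track is complete
```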
- Subsequently, the recording and playback executing unit 440 starts recording, in the second track of the track storage unit 530, action data of the moving image associated with the movement of the second character in the virtual space (S106). Here, the position of the camera that shoots the character and the viewpoint of the camera (e.g., FPV, TPV, etc.) can be set. For example, in the virtual space 1 illustrated in FIG. 1, the position where the cameraman 2 is disposed and the angle of the camera 3 can be set with respect to the character 5 corresponding to the second character. For example, the camera viewpoint may be set to the FPV of the character 5, the first track may be played back, and the user performing the character 5 may then perform the desired movements while checking the movement of the character 4. The recording start operation may be instructed by a remote controller, such as the controller 210, or may be instructed from another terminal. The instruction may be given by the user who wears the HMD 110 and manipulates the controller 210 to play the character, or by a user other than the user who performs the character. The recording process may also be started automatically upon detecting a movement and/or operation by the user performing the character, as described below. - Subsequently, the user
input detecting unit 410 of the control unit 340 detects information received from the HMD 110 and/or the controller 210 relating to the movement of the user's head, the user's speech or utterances, and the movement or operation of the controller (S107). Here, the user may be the same user who performed the first character or a different user. For example, when the user wearing the HMD 110 tilts his or her head, the sensor 140 provided in the HMD 110 detects the tilt and transmits information about the tilt to the image producing device 310. The image producing device 310 receives this information about the user's movement through the communication unit 330, and the user input detecting unit 410 detects the movement of the user's head based on the received information. Also, when the user performs a predetermined movement or operation with the controller 210, such as lifting it, shaking it left or right, or pressing a button, the sensor 260 provided in the controller detects the movement and/or operation and transmits information about it to the image producing device 310. The image producing device 310 receives the information related to the user's controller movement and operation through the communication unit 330, and the user input detecting unit 410 detects the controller movement and operation based on the received information. - Subsequently, the
character control unit 420 of the control unit 340 controls the movement of the second character in the virtual space based on the detected user movement (S108). For example, when an operation of tilting the head is detected, the character control unit 420 tilts the head of the second character accordingly. The character control unit 420 may also control the second character to wave its arm with its fingers spread (i.e., to wave goodbye) when it is detected that the user has shaken the controller left and right or pressed a predetermined button on the controller. In this manner, the character control unit 420 controls the second character to perform the corresponding movement each time the user input detecting unit 410 detects a movement or operation by the user transmitted from the HMD 110 or the controller 210. Parameters related to the movement and/or operation detected by the user input detecting unit 410 are stored in the second track of the track storage unit 530. Alternatively, the character may be controlled to perform a predetermined performance action without user input, and the action data relating to that predetermined performance action may be stored in the second track, or both the user-driven action data and the action data relating to the predetermined performance may be stored. - Subsequently, the recording and
reproduction executing unit 440 confirms whether or not an instruction to end the recording has been received from the user (S109) and, when such an instruction is received, completes the recording of the second track related to the second character (S110). The recording and reproduction executing unit 440 continues the recording process as long as no instruction to end the recording is received from the user. Here, the recording and reproduction executing unit 440 may automatically complete the recording when movement by the user performing the character is no longer detected.
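Recording the second character while the first track plays back, so that the second performer can react to the first, is essentially the same record loop with a playback stream mixed in. A hedged sketch, reusing the Track structure from the earlier example:

```python
def pose_at(track, t):
    """Return the last pose recorded at or before time t (simple step playback)."""
    pose = {}
    for kf in track.keyframes:              # keyframes are assumed to be in time order
        if kf.time <= t:
            pose = kf.bone_rotations
        else:
            break
    return pose

def record_second_track(track1, track2, detect_input, drive_character, show_reference,
                        should_stop, frame_dt=1.0 / 60):
    """While playing back track 1 (character 4), record the second performer into track 2."""
    t = 0.0
    while not should_stop(t):
        show_reference(pose_at(track1, t))      # play back the first character as a reference
        pose = drive_character(detect_input())  # S107/S108 for the second character
        track2.record(t, pose)                  # store the action data in the second track
        t += frame_dt
    return track2
```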
- FIG. 10 is a flowchart illustrating an example of a track editing process according to an embodiment of the present invention. - First, the
editing execution unit 450 of the control unit 340 of the image producing device 310 performs a process of editing the first track stored in the track storage unit 530 (S201). For example, the user edits the first track (T1) associated with the first character via a user interface for track editing, as shown in FIG. 11(a). The user interface displays, along a time series, a bar indicating the region in which the first track is stored. When the user selects the desired bar, the 3D model of the character is rendered based on the stored parameters and character data, and the moving image of the character (e.g., the character 4) disposed in the virtual space as shown in FIG. 1 is reproduced. It should be noted that, as a user interface for editing tracks, it is also possible to display, for example, track names and titles (e.g., "first character") in a list format in addition to the display described above. Alternatively, the user interface may allow editing to be performed while the moving image of the character is played back directly, or keyframes of the moving image may be displayed so that the user selects the keyframe to be edited, without showing the tracks or lists of FIG. 11. In addition, among the 3D characters displayed in the moving image, it is possible to perform a process such as changing the color of the character selected as the editing target for the selected time.
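The track-editing interface described here can be thought of as a timeline whose bars point back into the track storage unit, and selecting a bar re-renders the character from the stored parameters. The sketch below models only that selection step, with invented names (TimelineEntry, preview):

```python
from dataclasses import dataclass

@dataclass
class TimelineEntry:
    track_id: str   # e.g. "T1"
    title: str      # e.g. "first character"
    start: float    # where the bar begins on the time axis, in seconds
    end: float      # where the bar ends

def preview(entry, track_storage, render):
    """Re-render the character's moving image from the parameters stored for the selected bar."""
    track = track_storage[entry.track_id]
    for kf in track.keyframes:
        if entry.start <= kf.time <= entry.end:
            render(kf.bone_rotations)  # draw the 3D model posed by the stored action data

# Usage with the track_storage mapping from the earlier sketch:
# preview(TimelineEntry("T1", "first character", 0.0, 2.0), track_storage, print)
```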
- As an editing process, for example, in FIG. 1, it is contemplated that the placement position of the character 4 corresponding to the first character is changed in the virtual space 1, that the shooting position and angle for the character 4 are changed, and that the shooting viewpoint (FPV, TPV) is changed. It is also possible to change or delete the time scale. - Alternatively, a portion of the movement of the
character 4 may be changed as an editing process. For example, for the first character played back from the track (T1), the user may specify a part of the body to be edited and then operate only that specified part using the controller 210 or the HMD 110. For example, the user may specify an arm of the first character and, while the recorded movement of the first character is played back, manipulate only that arm with the controller 210 to modify the arm's movement.
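Overriding just one body part while the rest of the take plays back can be sketched as a merge of two pose sources: the stored keyframes supply every bone except the selected ones, which are driven live from the controller. The bone names and helper below are illustrative assumptions.

```python
ARM_BONES = {"r_shoulder", "r_elbow", "r_wrist"}  # assumed names for the edited arm's bones

def merge_poses(recorded_pose, live_pose, edited_bones=ARM_BONES):
    """Keep the recorded take for every bone except the ones being re-performed live."""
    merged = dict(recorded_pose)
    for bone, rotation in live_pose.items():
        if bone in edited_bones:
            merged[bone] = rotation
    return merged

# Usage: the recorded neck motion is kept, the elbow comes from the live controller input.
recorded = {"neck": (0.1, 0.0, 0.0), "r_elbow": (0.0, 0.0, 0.0)}
live = {"neck": (0.5, 0.0, 0.0), "r_elbow": (0.9, 0.0, 0.0)}
print(merge_poses(recorded, live))  # {'neck': (0.1, 0.0, 0.0), 'r_elbow': (0.9, 0.0, 0.0)}
```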
- Next, the editing execution unit 450 performs the process of editing the second track stored in the track storage unit 530 (S202). For example, the user selects an option (not shown) to edit the second track (T2) associated with the second character via the user interface for track editing, and a bar indicating the region in which the second track is stored is placed along the time series, as shown in FIG. 11(b). As shown in FIG. 11(b), the user can edit the second track independently of the first track while also adjusting it relative to the first track, for example by synchronizing the playback timing of the tracks. Similarly, the user may edit other tracks (e.g., a third track (T3)). - As an editing process, for example, in
FIG. 1, it is contemplated that the placement position of the character 5 corresponding to the second character is changed in the virtual space 1, that the shooting position and angle for the character 5 are changed, and that the shooting viewpoint (FPV, TPV) is changed. It is also possible to change or delete the time scale. Here, as an editing process, the user may change the placement position of the second character 5 in the virtual space 1 opposite the character 4 corresponding to the first character. - Further, as an editing process, by playing back each track, an animated moving image realized in the virtual space as a whole can be created, in which the movements of the characters corresponding to the respective tracks are played in the virtual space. In addition, by playing back each track, it is possible to edit the animated moving image realized in the virtual space not only by reproducing the movement of the characters but also by changing the background set around a character, adding or removing objects surrounding a character, changing the lighting settings, changing a character's clothing or attaching accessories, changing the amount or direction of wind applied to a character, and even replacing the character itself while keeping the action data.
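Playing the tracks back together, after shifting one track's offset so the two performances line up, can be sketched as follows. It reuses the Track and pose_at helpers from the earlier sketches and is only an illustration of the relative-timing adjustment described above.

```python
def play_scene(tracks, render, duration, frame_dt=1.0 / 60):
    """Play every track at once: each character is posed from its own (offset) track."""
    t = 0.0
    while t < duration:
        for name, track in tracks.items():
            local_t = t - track.offset          # relative timing adjustment between tracks
            if local_t >= 0:
                render(name, pose_at(track, local_t))
        t += frame_dt

# Usage: delay the second character's track by half a second so it reacts to the first.
# tracks = {"T1": track_storage["T1"], "T2": Track("second character", offset=0.5)}
# play_scene(tracks, lambda name, pose: print(name, pose), duration=2.0)
```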
- After the editing process is completed, the
editing execution unit 450 stores the edited contents, either in response to a user request or automatically (S203). - The editing processes of S201 and S202 can be performed in either order, and the user can move back and forth between them. The storage processing of S203 can also be performed each time editing in S201 or S202 is carried out. The editing may also end after only S201 has been performed.
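Storing the edited contents (S203) can be as simple as serializing each track's action data. A minimal sketch, assuming the Track and Keyframe structures from the earlier examples and JSON as the on-disk format (the actual storage format is not specified in the document):

```python
import json

def save_tracks(track_storage, path):
    """Write every track's action data to a JSON file so the edit can be reloaded later."""
    payload = {
        name: {
            "offset": track.offset,
            "keyframes": [{"time": kf.time, "bones": kf.bone_rotations}
                          for kf in track.keyframes],
        }
        for name, track in track_storage.items()
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(payload, f, indent=2)

# save_tracks(track_storage, "scene_edit.json")  # e.g. run automatically after each edit step
```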
- In addition, the track generation process described above can be executed independently for each user. Accordingly, after a track generation process is performed by the first user, another track generation process can be performed by the second user. For example, after the first user operates the
character 4 and stores the performance in the track storage unit 530 as the first track, the second user can, while the first track is played back, control the camera 3 by operating the cameraman 2, shoot the performance of the character 4 after the fact, and store the operation of the camera 3 (or of the cameraman 2) in the second track. By playing back the first and second tracks together, an image of the character 4 performed by the first user and shot by the second user can then be created. - The track editing process described above can also be performed independently for each user. Accordingly, after the track creation process is performed by the first user, the second user can edit the track created by the first user. In this way, the movement of one object can be put together from the parts that the first user and the second user each perform best. For example, after the first user operates the
character 4 to record a rap performance on a track, the second user may manipulate the character 4 to add a dance performance in between the rap. In this way, even when the same character 4 performs different kinds of movement, a user who is skilled in each kind of movement can perform that part, enabling a high-quality performance overall. - As described above, by applying the method of multitrack recording (MTR) to the animation production according to the present embodiment, the character movement linked to each user's operation can be stored in its own track, and animation production can be carried out easily and efficiently by performing editing work within each track and between tracks.
- Although the present embodiment has been described above, the above-described embodiment is intended to facilitate understanding of the present invention and is not intended to limit its interpretation. The present invention may be modified and improved without departing from the spirit thereof, and the present invention also includes its equivalents.
- For example, although this embodiment has described the track generation method and the editing method using a character as an example, the disclosed methods may be applied not only to a character but also to any object that involves movement (a vehicle, a structure, an article, etc.).
- For example, although the
image producing device 310 has been described in this embodiment as separate from the HMD 110, the HMD 110 may include all or part of the configuration and functions provided by the image producing device 310.
-
- 1 virtual space
- 2 cameraman
- 3 camera
- 4 character
- 5 character
- 110 HMD
- 210 controller
- 310 image producing device
Claims (6)
1. An animation production method comprising:
placing a first character, a second character, and a virtual camera in a virtual space;
controlling a first action of the first character in response to an operation from a first user; and
controlling a second action of the second character to control the virtual camera in response to an operation from a second user, wherein the virtual camera shoots the first character.
2. The animation production method according to claim 1, the method further comprising:
placing a third character in the virtual space; and
controlling a third action of the third character in response to an operation from a third user to control a light or an audio mixer.
3. The animation production method according to claim 1, the method further comprising recording the first action of the first character, wherein controlling the second action of the second character while playing back the first action of the first character.
4. The animation production method according to claim 2, the method further comprising recording the first action of the first character, wherein controlling the third action of the third character while playing back the first action of the first character.
5. The animation production method according to claim 1, wherein:
controlling the second action of the second character after controlling the first action of the first character is finished.
6. The animation production method according to claim 5, the method further comprising:
placing a third character in the virtual space;
recording the first action of the first character; and
controlling a third action of the third character in response to an operation from a third user to control a light, after controlling the second action of the second character is finished, while playing back the first action of the first character.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/834,405 US20220301249A1 (en) | 2020-07-29 | 2022-06-07 | Animation production system for objects in a virtual space |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020-128304 | 2020-07-29 | ||
JP2020128304A JP2022025471A (en) | 2020-07-29 | 2020-07-29 | Animation creation system |
US17/008,258 US11380038B2 (en) | 2020-07-29 | 2020-08-31 | Animation production system for objects in a virtual space |
US17/834,405 US20220301249A1 (en) | 2020-07-29 | 2022-06-07 | Animation production system for objects in a virtual space |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/008,258 Continuation US11380038B2 (en) | 2020-07-29 | 2020-08-31 | Animation production system for objects in a virtual space |
Publications (1)
Publication Number | Publication Date |
---|---|
US20220301249A1 true US20220301249A1 (en) | 2022-09-22 |
Family
ID=80004485
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/008,258 Active US11380038B2 (en) | 2020-07-29 | 2020-08-31 | Animation production system for objects in a virtual space |
US17/834,405 Abandoned US20220301249A1 (en) | 2020-07-29 | 2022-06-07 | Animation production system for objects in a virtual space |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/008,258 Active US11380038B2 (en) | 2020-07-29 | 2020-08-31 | Animation production system for objects in a virtual space |
Country Status (2)
Country | Link |
---|---|
US (2) | US11380038B2 (en) |
JP (1) | JP2022025471A (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2022025471A (en) | 2020-07-29 | 2022-02-10 | AniCast RM Inc. | Animation creation system |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180239420A1 (en) * | 2016-12-20 | 2018-08-23 | Colopl, Inc. | Method executed on computer for providing virtual space to head mount device, program for executing the method on the computer, and computer apparatus |
WO2020095088A1 (en) * | 2018-11-05 | 2020-05-14 | Ecole Polytechnique Federale De Lausanne (Epfl) | Method and system for creating an out-of-body experience |
US11380038B2 (en) * | 2020-07-29 | 2022-07-05 | AniCast RM Inc. | Animation production system for objects in a virtual space |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5199053B2 (en) * | 2008-12-16 | 2013-05-15 | 株式会社スクウェア・エニックス | GAME DEVICE, GAME REPLAY DISPLAY METHOD, GAME PROGRAM, AND RECORDING MEDIUM |
WO2012007735A2 (en) * | 2010-07-14 | 2012-01-19 | University Court Of The University Of Abertay Dundee | Improvements relating to viewing of real-time, computer-generated environments |
US8854298B2 (en) * | 2010-10-12 | 2014-10-07 | Sony Computer Entertainment Inc. | System for enabling a handheld device to capture video of an interactive application |
US8941666B1 (en) * | 2011-03-24 | 2015-01-27 | Lucasfilm Entertainment Company Ltd. | Character animation recorder |
JP2017146651A (en) | 2016-02-15 | 2017-08-24 | 株式会社コロプラ | Image processing method and image processing program |
US10365784B2 (en) * | 2017-03-08 | 2019-07-30 | Colopl, Inc. | Information processing method and apparatus for executing the information processing method |
CN108905212B (en) * | 2017-03-27 | 2019-12-31 | 网易(杭州)网络有限公司 | Game screen display control method and device, storage medium and electronic equipment |
JP6276882B1 (en) * | 2017-05-19 | 2018-02-07 | 株式会社コロプラ | Information processing method, apparatus, and program for causing computer to execute information processing method |
JP7267753B2 (en) * | 2019-01-21 | 2023-05-02 | キヤノン株式会社 | Control device, control method, and program |
US11049072B1 (en) * | 2019-04-26 | 2021-06-29 | State Farm Mutual Automobile Insurance Company | Asynchronous virtual collaboration environments |
-
2020
- 2020-07-29 JP JP2020128304A patent/JP2022025471A/en active Pending
- 2020-08-31 US US17/008,258 patent/US11380038B2/en active Active
-
2022
- 2022-06-07 US US17/834,405 patent/US20220301249A1/en not_active Abandoned
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180239420A1 (en) * | 2016-12-20 | 2018-08-23 | Colopl, Inc. | Method executed on computer for providing virtual space to head mount device, program for executing the method on the computer, and computer apparatus |
WO2020095088A1 (en) * | 2018-11-05 | 2020-05-14 | Ecole Polytechnique Federale De Lausanne (Epfl) | Method and system for creating an out-of-body experience |
US11380038B2 (en) * | 2020-07-29 | 2022-07-05 | AniCast RM Inc. | Animation production system for objects in a virtual space |
Non-Patent Citations (1)
Title |
---|
Salamin, Patrick, Daniel Thalmann, and Frédéric Vexo. "The benefits of third-person perspective in virtual and augmented reality?." Proceedings of the ACM symposium on Virtual reality software and technology. 2006. (Year: 2006) * |
Also Published As
Publication number | Publication date |
---|---|
JP2022025471A (en) | 2022-02-10 |
US11380038B2 (en) | 2022-07-05 |
US20220036615A1 (en) | 2022-02-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20220301249A1 (en) | Animation production system for objects in a virtual space | |
US20230121976A1 (en) | Animation production system | |
JP7470345B2 (en) | Animation Production System | |
US11321898B2 (en) | Animation production system | |
JP7470344B2 (en) | Animation Production System | |
JP2022153479A (en) | Animation creation system | |
US11586278B2 (en) | Animation production system | |
US20220351445A1 (en) | Animation production system | |
US20220351437A1 (en) | Animation production system | |
US20220351438A1 (en) | Animation production system | |
US11443471B2 (en) | Animation production method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |