WO2020130112A1 - Method for providing virtual space having given content - Google Patents

Method for providing virtual space having given content Download PDF

Info

Publication number
WO2020130112A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
character
virtual space
performer
input
Prior art date
Application number
PCT/JP2019/049962
Other languages
French (fr)
Japanese (ja)
Inventor
義仁 近藤
雅人 室橋
Original Assignee
株式会社エクシヴィ
株式会社XR iPLAB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社エクシヴィ, 株式会社XR iPLAB
Publication of WO2020130112A1

Links

Images

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/214 Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads
    • A63F13/2145 Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads the surface being also a display device, e.g. touch screens
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/426 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving on-screen location information, e.g. screen coordinates of an area at which the player is aiming with a light gun
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/428 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving motion or position input signals, e.g. signals representing the rotation of an input controller or a player's arm motions sensed by accelerometers or gyroscopes
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/85 Providing additional services to players
    • A63F13/86 Watching games played by other players
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings

Definitions

  • The present invention relates to a method of providing a virtual space to multiple users, and more particularly to a method of providing a virtual space having predetermined content that includes a character played by a performer.
  • Motion capture is a technology that digitally captures the movement of a performer user (hereinafter, “performer”) in the physical space; the captured movement is used for computer animation such as video and for expressing character movement in games and the like.
  • In recent years, methods have also been devised for controlling a character in a virtual space through the movement of a head mounted display (hereinafter, “HMD”) worn by the performer and through input from a controller held by the performer.
  • Content sharing services that deliver content such as live video to users are also provided; in such services, a mechanism has been disclosed for encouraging the acquisition or purchase of items that express a positive evaluation of a content poster or performer (for example, Patent Document 1).
  • The technology disclosed in the above document determines the display position of a user avatar in the virtual space according to the amount of items the user has purchased, in order to stimulate the user's desire for recognition or a sense of competition among users.
  • Although this technique can satisfy the user's desire for recognition by displaying the user avatar closer to the performer, it involves no interaction or communication with the performer.
  • An object of the present invention is therefore to provide a method that can more effectively promote interaction between a performer and a user in a virtual space.
  • A method provided by one embodiment of the present invention provides, to a plurality of users including a performer user and a viewing user, a virtual space having predetermined content that includes a character played by the performer user. The method includes detecting input from the performer user via a control device, controlling either the motion or the facial expression of the character based on the performer user's input, and making a change to a part of the character's image based on the performer user's input.
  • FIG. 6 shows a functional configuration diagram of the image generation device 310 according to the first embodiment.
  • FIG. 7 shows a flowchart illustrating a method for providing a virtual space according to the first embodiment.
  • FIG. 8 shows a flowchart of the makeup process according to the first embodiment.
  • FIG. 9 shows an example of the virtual space provided to users according to the first embodiment.
  • FIG. 10 shows another example of the virtual space provided to users according to the first embodiment.
  • FIG. 11 shows an example of character drawing by the performer user according to the first embodiment.
  • FIG. 12 shows another example of character drawing by the performer user according to the first embodiment.
  • FIG. 13 shows a flowchart of a method for providing a virtual space to users according to the second embodiment. FIG. 14 shows an example of the virtual space provided to users according to the second embodiment, FIG. 15 shows another example, and FIG. 16 shows yet another example of the virtual space displayed to users in the second embodiment.
  • FIG. 1 is a schematic view showing the outer appearance of a head mounted display (HMD) 110 according to this embodiment.
  • The HMD 110 is mounted on the performer's head and includes a display panel 120 placed in front of the performer's left and right eyes.
  • As the display panel, both optically transmissive and non-transmissive displays are conceivable.
  • In the present embodiment, a non-transmissive display panel, which can provide a more immersive experience, is used as an example.
  • A left-eye image and a right-eye image are displayed on the display panel 120, and an image with a stereoscopic effect can be provided to the performer by utilizing the parallax of both eyes. As long as a left-eye image and a right-eye image can be displayed, separate displays for the left and right eyes or a single integrated display for both eyes can be provided.
  • The housing 130 of the HMD 110 includes a sensor 140.
  • Although not illustrated, the sensor may include, for example, a magnetic sensor, an acceleration sensor, a gyro sensor, or a combination thereof, in order to detect movements such as the orientation and tilt of the performer's head.
  • Let the vertical direction of the performer's head be the Y axis; among the axes orthogonal to the Y axis, let the axis that connects the center of the display panel 120 with the performer and corresponds to the performer's front-back direction be the Z axis; and let the axis orthogonal to the Y and Z axes and corresponding to the performer's left-right direction be the X axis.
  • The sensor 140 can then detect the rotation angle around the X axis (the so-called pitch angle), the rotation angle around the Y axis (the so-called yaw angle), and the rotation angle around the Z axis (the so-called roll angle).
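  • For illustration only, the sketch below shows one conventional way such pitch, roll, and yaw angles could be derived from hypothetical accelerometer and gyroscope readings on these axes; the publication does not specify any algorithm or API, so all function and variable names here are assumptions.
```python
import math

def pitch_roll_from_gravity(ax, ay, az):
    """Estimate pitch (rotation about X) and roll (rotation about Z) from the
    gravity vector reported by an accelerometer, with Y vertical, Z front-back
    and X left-right as defined above. Sign conventions vary by device."""
    pitch = math.atan2(az, ay)   # head tilted forward/backward
    roll = math.atan2(ax, ay)    # head tilted toward a shoulder
    return pitch, roll

def integrate_yaw(prev_yaw, gyro_y_rad_per_s, dt):
    """Yaw (rotation about the vertical Y axis) cannot be obtained from gravity
    alone, so integrate the gyroscope's Y-axis rate over time; in practice a
    magnetic sensor would be fused in to limit drift."""
    return prev_yaw + gyro_y_rad_per_s * dt
```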
  • In addition to, or instead of, the sensor 140, the housing 130 of the HMD 110 may include a plurality of light sources 150 (for example, infrared LEDs or visible-light LEDs) that are detected by a camera installed outside the HMD 110 (for example, in the room).
  • The camera (for example, an infrared or visible-light camera) detects these light sources, so that the position, orientation, and tilt of the HMD 110 in a specific space can be detected.
  • Alternatively, for the same purpose, the HMD 110 may be provided with a camera for detecting light sources installed in the housing 130 of the HMD 110.
  • The housing 130 of the HMD 110 can also include eye tracking sensors.
  • The eye tracking sensors are used to detect the gaze directions and the point of gaze of the performer's left and right eyes.
  • Various types of eye tracking sensors are conceivable; for example, the position of the light reflected on the cornea when the left and right eyes are irradiated with weak infrared light can be used as a reference point, the gaze direction can be detected from the position of the pupil relative to the reflected light, and the intersection of the gaze directions of the left and right eyes can be detected as the point of gaze.
  • FIG. 2 shows a schematic diagram of the appearance of the controller 210 according to this embodiment.
  • The controller 210 supports the performer in making predetermined inputs in the virtual space.
  • the controller 210 can be configured as a set of left-hand 220 and right-hand 230 controllers.
  • the left-hand controller 220 and the right-hand controller 230 can each include an operation trigger button 240, an infrared LED 250, a sensor 260, a joystick 270, and a menu button 280.
  • The operation trigger buttons 240 are arranged at positions 240a and 240b, where it is assumed that the trigger is pulled with the middle finger and the index finger when the grip 235 of the controller 210 is held.
  • A plurality of infrared LEDs 250 are provided on a frame 245 formed in a ring shape extending downward from both side surfaces of the controller 210; by detecting the positions of these infrared LEDs with a camera (not shown) provided outside the controller, the position, orientation, and tilt of the controller 210 in a specific space can be detected.
  • the controller 210 may include a sensor 260 in order to detect a movement such as the orientation or the tilt of the controller 210.
  • the sensor 260 may include, for example, a magnetic sensor, an acceleration sensor, a gyro sensor, or a combination thereof.
  • A joystick 270 and a menu button 280 may be provided on the upper surface of the controller 210. The joystick 270 can be moved 360 degrees around a reference point and is assumed to be operated with the thumb when the grip 235 of the controller 210 is held; the menu button 280 is likewise assumed to be operated with the thumb.
  • the controller 210 may include a vibrator (not shown) for giving a vibration to a hand of a performer operating the controller 210.
  • In order to output information such as the performer's inputs via the buttons and joystick and the position, orientation, and tilt of the controller 210 obtained via the sensors, and to receive information from the host computer, the controller 210 has an input/output unit and a communication unit.
  • Based on whether the performer grips the controller 210 and operates the various buttons and joysticks, and on the information detected by the infrared LEDs and sensors, the system determines the movement and posture of the performer's hands, and a simulated hand of the performer can be displayed and operated in the virtual space.
  • FIG. 3 is a block diagram of the HMD system 300 according to this embodiment.
  • The HMD system 300 can be composed of, for example, an HMD 110, a controller 210, and an image generation device 310 that functions as a host computer. Furthermore, an infrared camera (not shown) or the like for detecting the position, orientation, and tilt of the HMD 110 and the controller 210 can be added. These devices can be connected to each other by wired or wireless means. For example, each device may be provided with a USB port and communication may be established by connecting them with cables; alternatively, communication can be established by wire or wirelessly using HDMI (registered trademark), wired LAN, infrared, Bluetooth (registered trademark), WiFi (registered trademark), or the like.
  • the image generation device 310 may be a device having a calculation processing function such as a PC, a game machine, or a mobile communication terminal. Further, the image generation apparatus 310 can be connected to a plurality of user terminals 401A, 401B, 401C and the like via a network such as the Internet and can transmit the generated image in the form of streaming or download. Each of the user terminals 401A and the like has an internet browser or an appropriate viewer so that the transmitted image can be reproduced. Here, the image generation device 310 can directly transmit an image to a plurality of user terminals, or can transmit an image via another content distribution server. Further, by causing the HMD 110 to execute the processing executed by the image generating apparatus 310, the HMD 110 can be configured to function as a stand-alone device without depending on the network.
  • FIG. 4 shows a functional configuration diagram of the HMD 110 according to the present embodiment.
  • the HMD 110 may include the sensor 140.
  • Although not illustrated, the sensor may include, for example, a magnetic sensor, an acceleration sensor, a gyro sensor, or a combination thereof, in order to detect movements such as the direction and tilt of the performer's head.
  • Eye tracking sensors can also be included. They are used to detect the gaze directions and point of gaze of the performer's left and right eyes.
  • LEDs 150 emitting infrared or ultraviolet light may also be provided in order to detect movements such as the direction and tilt of the performer's head with higher accuracy, or to detect the position of the performer's head.
  • a camera 160 for taking an external view of the HMD can be provided.
  • a microphone 170 for collecting the utterance of the performer and a headphone 180 for outputting a voice can be provided.
  • the microphone and headphones may be provided as a device independent of the HMD 110.
  • The HMD 110 can include an input/output unit 190 for establishing a wired connection with peripheral devices such as the controller 210 and the image generation device 310, and a communication unit 115 for establishing a wireless connection using infrared, Bluetooth (registered trademark), WiFi (registered trademark), or the like.
  • Information about movements such as the direction and tilt of the performer's head acquired by the sensor 140 is transmitted by the control unit 125 to the image generation device 310 via the input/output unit 190 and/or the communication unit 115.
  • As will be described in detail later, the image generated by the image generation device 310 based on the movement of the performer's head is received via the input/output unit 190 and/or the communication unit 115 and is output to the display unit 120 by the control unit 125.
  • FIG. 5 shows a functional block diagram of the controller 210 according to the present embodiment.
  • The controller 210 can be configured as a set of a left-hand controller 220 and a right-hand controller 230; either controller can be provided with an operation unit 245 including the operation trigger buttons 240, the joystick 270, and the menu button 280.
  • the controller 210 may include a sensor 260 in order to detect a movement such as the orientation or the tilt of the controller 210.
  • the sensor 260 may include, for example, a magnetic sensor, an acceleration sensor, a gyro sensor, or a combination thereof.
  • The controller 210 can include an input/output unit 255 for establishing a wired connection with peripheral devices such as the HMD 110 and the image generation device 310, and a communication unit 265 for establishing a wireless connection using infrared, Bluetooth (registered trademark), WiFi (registered trademark), or the like.
  • Information input by the performer via the operation unit 245 and information such as the orientation and tilt of the controller 210 acquired by the sensor 260 are transmitted to the image generation device 310 via the input/output unit 255 and/or the communication unit 265.
  • FIG. 6 shows a functional configuration diagram of the image generation apparatus 310 according to the present embodiment.
  • As the image generation device 310, a device such as a PC, a game machine, or a mobile communication terminal can be used that has functions for storing the input information transmitted from the HMD 110 and the controller 210 and the information about the movement of the performer's head and the movement and operation of the controller acquired by the sensors, performing predetermined calculation processing, and generating images.
  • The image generation device 310 can include an input/output unit 320 for establishing a wired connection with peripheral devices such as the HMD 110 and the controller 210, and a communication unit 330 for establishing a wireless connection using infrared, Bluetooth (registered trademark), WiFi (registered trademark), or the like.
  • The information about the movement of the performer's head, the movement of the controller, and the operations received from the HMD 110 and/or the controller 210 via the input/output unit 320 and/or the communication unit 330 is detected by the control unit 340 as input content including the performer's position, gaze, posture and other movements, utterances, and operations, and the character is controlled and an image is generated by executing the control program stored in the storage unit 350 according to the performer's input content.
  • the control unit 340 may be configured by a CPU, but by further providing a GPU specialized for image processing, it is possible to decentralize information processing and image processing and improve the efficiency of the entire processing.
  • the image generation device 310 can also communicate with another calculation processing device to allow the other calculation processing device to share information processing and image processing.
  • The control unit 340 of the image generation device 310 includes a user input detection unit 610 that detects information about the movement of the performer's head, the performer's utterances, and the movement and operation of the controller received from the HMD 110 and/or the controller 210; a character control unit 620 that executes the control program stored in the control program storage unit on a character stored in advance in the character data storage unit 660 of the storage unit 350; and an image generation unit 630 that generates an image based on the character control.
  • The control unit 340 also includes an item reception unit 640 that receives, from other user terminals, selections of items to be placed in the virtual space, and a character drawing unit 650 that corrects and updates the character's texture data in response to operations by the performer user.
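  • Purely as an illustrative sketch of how the units named above could cooperate each frame (the class and method names below are assumptions, not the publication's API, and the retargeting and rendering steps are only stubs passed in from outside):
```python
class ControlUnit340:
    """Illustrative per-frame pipeline: detect performer input, accept viewer
    items, drive the character, then render an image of the virtual space."""

    def __init__(self, input_detector, character_controller,
                 image_generator, item_receiver, character_drawer):
        self.input_detector = input_detector               # user input detection unit 610
        self.character_controller = character_controller   # character control unit 620
        self.image_generator = image_generator             # image generation unit 630
        self.item_receiver = item_receiver                 # item reception unit 640
        self.character_drawer = character_drawer           # character drawing unit 650

    def process_frame(self, hmd_packet, controller_packet, viewer_messages, dt):
        # Detect the performer's head movement, utterances and controller
        # movement/operations received from the HMD 110 / controller 210.
        performer_input = self.input_detector.detect(hmd_packet, controller_packet)

        # Accept items posted from viewing users' terminals.
        for message in viewer_messages:
            self.item_receiver.accept(message)

        # Apply the input to the stored character (e.g. retarget head and hand
        # movement onto a bone structure) and apply any pending texture edits.
        pose = self.character_controller.update(performer_input, dt)
        self.character_drawer.apply_pending_edits(performer_input)

        # Generate the virtual-space image (character area plus audience area).
        return self.image_generator.render(pose)
```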
  • The image generation unit 630 generates a virtual space including a character display area and a user display area as shown in FIG. 9.
  • The virtual-space screen can be displayed on the display unit of each user terminal via the network, or on the display unit 120 of the HMD 110 worn by the performer.
  • In particular, the screen displayed on the display unit 120 of the HMD 110 worn by the performer can include information for the performer's operation that is displayed only to the performer.
  • In addition to the virtual-space screen, a screen for the performer's operations can be displayed on the display unit 120.
  • The storage unit 350 stores, in the character data storage unit 660 described above, data related to 3D rendering of the character, such as the character's 3D object model, texture data, and animation data, as well as information related to the character, such as the character's attributes. Further, the control program storage unit 670 stores a program for controlling the motion and facial expression of the character in the virtual space and a program for generating the virtual space containing content such as the character and the user avatars.
  • the streaming data storage unit 680 stores the image generated by the image generation unit 630. The image of the virtual space stored as stream data can be simultaneously delivered together with the live image in response to a user request.
  • the storage unit 350 has an item data storage unit 685 that stores data regarding items.
  • the storage unit 350 has a user data storage unit 690 that stores information related to the user and a user avatar.
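  • For illustration only, the storage units listed above could be modelled roughly as follows; the field names and types are assumptions, since the publication does not define a data format.
```python
from dataclasses import dataclass, field

@dataclass
class CharacterData:            # character data storage unit 660
    object_model_3d: bytes      # the character's 3D object model
    texture_data: dict          # texture images keyed by part (face, nails, ...)
    animation_data: dict        # animation clips
    attributes: dict            # other character-related information

@dataclass
class Storage350:
    character_data: CharacterData
    control_programs: dict = field(default_factory=dict)  # control program storage unit 670
    stream_data: list = field(default_factory=list)       # streaming data storage unit 680
    item_data: dict = field(default_factory=dict)         # item data storage unit 685
    user_data: dict = field(default_factory=dict)         # user data storage unit 690 (users and avatars)
```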
  • A feature of the image generation device in the present embodiment is that it not only transmits images of the virtual space to a plurality of user terminals, but can also receive items and comments from those user terminals. It is also possible to specialize all or part of the functions of the image generation device 310 for image generation only and to provide a separate content service server, with this content service server transmitting the images to the user terminals and accepting items and comments from them.
  • FIG. 7 shows a flowchart showing a method for providing a virtual space according to the first embodiment of the present invention. This method is executed, for example, by processing the control program stored in the control program storage unit 670 by each unit of the control unit 340 of the image generation apparatus 310.
  • First, the input/output detection unit 610 of the control unit 340 detects a user input for correcting a part of the character played by the performer user (S101).
  • Several forms of user input are conceivable: for example, designating the character part to be corrected by pointing directly at the character in the virtual space with the controller, designating the character and the part on the performer operation screen, or specifying the part to be corrected on the screen of the pen tablet.
  • FIG. 9 is a diagram showing an example of the virtual space 1100 displayed on the user terminal according to the present embodiment.
  • The user terminal can display an image of the virtual space including the character 1120 in an image display unit 1110, such as a viewer for displaying images embedded in a built-in web browser.
  • The character 1120 placed in the virtual space can be operated based on the performer's input via the HMD 110 worn by the performer and/or the controller 210, such as the tilt and direction of the performer's head, the content of the performer's utterances, movements such as the tilt and direction of the controller 210, and the performer's operations via the controller 210.
  • An area 1130 for displaying a plurality of user avatars serving as the audience is also provided.
  • The screen further has a comment input unit 1140 with which the user enters and posts comments, and a gift item selection unit 1150 with which the user selects and posts gift items.
  • A comment entered and posted by a user is displayed in a predetermined area, for example in a balloon displayed near the position of the corresponding user avatar as in FIG. 9.
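  • For illustration, a viewer-side post made through these UI elements could be represented by a simple message like the one below before being sent to the image generation device; the message format and field names are assumptions, as the publication does not define one.
```python
import json
import time

def make_post(user_id, kind, payload):
    """Build a viewer-side post: kind is 'comment' (comment input unit 1140)
    or 'gift_item' (gift item selection unit 1150)."""
    assert kind in ("comment", "gift_item")
    return json.dumps({
        "user_id": user_id,
        "kind": kind,
        "payload": payload,      # comment text, or the selected item
        "timestamp": time.time(),
    })

# A comment shown in a balloon near the user's avatar, and a gift item
# (e.g. a makeup tool) sent toward the character.
comment_msg = make_post("user42", "comment", "Looking great!")
gift_msg = make_post("user42", "gift_item", {"item_id": "makeup_brush"})
```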
  • For example, the performer user performs a predetermined operation with the controller, such as pointing at a part of the character displayed in the virtual space 710 on the performer operation screen 700 shown on the display unit 120 of the HMD 110.
  • Next, the character drawing unit 650 determines the character part to be the target of corrective drawing (S102). For example, when the user specifies that the part of the character to be corrected is the eyelashes, the character drawing unit 650 refers to the character's texture data stored in the character data storage unit 660 of the storage unit 350 and displays the texture data containing the designated part in the makeup area of the display unit 120.
  • The makeup display area is shown as a drawing work area (for example, area 720) on the screen 700 displayed on the display unit 120 of the HMD 110; it can also be displayed on a monitor, or on the display unit of the pen tablet 730 as shown in the drawings.
  • An arrangement that makes the makeup operation easy for the performer user can be adopted, for example using an area 720 that is separated from, or overlaid on, the area 710 in which the virtual space is displayed as the makeup display area.
  • Next, the character drawing unit 650 executes the makeup process (S103). Details of the makeup process will be described with reference to FIG. 8.
  • FIG. 8 is a flowchart showing the makeup process according to the first embodiment.
  • In the makeup process, the texture data containing the character parts that the user wants to correct is displayed in the area 720, and the user can freely modify and color the displayed texture data by providing input with a controller such as the pen tablet 730.
  • FIG. 11 shows an example of the screen displayed in the area 720 when the user instructs makeup of the eyelashes, and FIG. 12 shows an example of the screen displayed in the area 720 when the user instructs decoration of the nails.
  • First, the input/output detection unit 610 detects a user input (S201). For example, in FIG. 11, when the user wants to add eyelashes around the eyes of the character's face displayed in the drawing area 720, the user makes an input with a pen in the input area of the pen tablet 730 used as the input device.
  • the input/output detection unit 610 detects that the user has input a drawing with a pen.
  • Next, the character drawing unit 650 detects the coordinates of the pen input in the input area of the pen tablet 730 (S202).
  • The coordinates in the input area of the pen tablet 730 correspond to the coordinates in the area 720 of the performer operation screen 700, and the user's input on the pen tablet 730 is reflected as input to the texture data in the area 720.
  • Next, the character drawing unit 650 performs drawing processing on the character's texture data based on the content of the user's input (S203). For example, a drawing process is executed that adds the eyelash illustration input by the user to the character's texture data.
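  • As a concrete illustration of S201 to S203, the sketch below maps pen coordinates on the tablet into texture coordinates and stamps a brush stroke onto the texture; the coordinate mapping, brush model, and all names are assumptions, since the publication describes these steps only abstractly.
```python
import numpy as np

def tablet_to_texture(px, py, tablet_size, texture_size):
    """Map pen coordinates on the pen tablet 730 to coordinates on the
    character texture shown in area 720 (simple proportional mapping)."""
    tw, th = tablet_size
    w, h = texture_size
    return int(px * w / tw), int(py * h / th)

def draw_stroke(texture, points, color, radius=2):
    """S203 sketch: stamp a round brush of the chosen color at each mapped
    point of the stroke; `texture` is an H x W x 3 RGB array for the part."""
    h, w, _ = texture.shape
    yy, xx = np.mgrid[0:h, 0:w]
    for (x, y) in points:
        mask = (xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2
        texture[mask] = color
    return texture

# Example: add a dark eyelash stroke to a 512x512 face texture.
face_tex = np.full((512, 512, 3), 255, dtype=np.uint8)
stroke = [tablet_to_texture(x, 300, (2048, 1536), (512, 512)) for x in range(800, 900, 4)]
face_tex = draw_stroke(face_tex, stroke, color=(30, 20, 20))
```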
  • an indicator for correcting the color of the texture may be displayed, and the color may be corrected in response to a user request.
  • the color correction includes correction of brightness, saturation, color tone, sharpness, contrast and the like.
  • the corrected texture data of the character is appropriately stored in the character data storage unit 660 of the storage unit 350.
  • the image generation unit 630 performs an image generation process of arranging the corrected character in the virtual space (S204).
  • The image generation unit 630 generates a 3D object of the character by placing the 3D shape data in a predetermined space and mapping the texture data onto the 3D shape, based on the character data stored in the storage unit 350.
  • The virtual space is then generated by superimposing the character's 3D object on the background image of the virtual space, and the virtual space is displayed on the display units of the user terminals 401 and the HMD 110. For example, as shown in FIG. 10, an image of the character to which eyelashes have been added is displayed in the virtual space 710 of the HMD 110.
  • The same virtual space is also transmitted to each user terminal 401 connected to the image generation device 310 via the network.
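  • The following is a minimal sketch of only the final compositing step, assuming the character has already been rendered to an RGBA image; it is a stand-in illustration, not the publication's rendering pipeline, and uses NumPy purely for the alpha blend.
```python
import numpy as np

def composite_character(background, character_rgba, top_left):
    """Alpha-blend a rendered character image (H x W x 4, RGBA) onto the
    virtual-space background (RGB) at the given (row, column) position; the
    resulting frame would be shown on the HMD 110 and sent to the terminals."""
    y0, x0 = top_left
    h, w, _ = character_rgba.shape
    region = background[y0:y0 + h, x0:x0 + w].astype(float)
    rgb = character_rgba[..., :3].astype(float)
    alpha = character_rgba[..., 3:4].astype(float) / 255.0
    blended = alpha * rgb + (1.0 - alpha) * region
    background[y0:y0 + h, x0:x0 + w] = blended.astype(np.uint8)
    return background
```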
  • FIG. 13 shows a flowchart showing a method of providing a virtual space to a user according to the second embodiment. This method is executed, for example, by processing the control program stored in the control program storage unit 670 by each unit of the control unit 340 of the image generation apparatus 310.
  • First, the item reception unit 640 of the control unit 340 receives an item from one of the plurality of user terminals 401 (S301). For example, as illustrated in FIGS. 14 and 15, the user selects a gift item from the gift item selection unit 1150 of the virtual space 1100: a nail tool in FIG. 14, or a makeup tool in FIG. 15. By consuming paid points, the user can send the selected item to the character as a gift.
  • Next, the image generation unit 630 arranges the item selected by the user in S301 in the virtual space 1100 (S302). For example, in FIG. 15, when the user selects the makeup palette 1160 and the makeup brush 1170, the image generation unit 630 generates an image in which the palette 1160 and the brush 1170 are arranged in the virtual space 1110.
  • Next, the character control unit 620 detects the performer user's action and controls the character in the virtual space (S303). For example, the performer user operates the character so that it comes into contact with the palette 1160 and the brush 1170 arranged in the virtual space 1110. More specifically, while looking at the performer operation screen 700, the performer user moves the controller 210 and presses a predetermined operation button; the input/output detection unit 610 of the image generation device 310 detects this operation of the controller 210, and as the corresponding action the character control unit 620 controls the character 1120 so that it grips the item in the virtual space 1110.
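  • A hedged sketch of S301 to S303 is given below; the data structures, distance threshold, and function names are assumptions used only to illustrate the flow of receiving an item, placing it, and letting the character grip it when the performer presses a button.
```python
import math

def receive_gift(item_store, user_id, item_id):
    """S301: record a gift item sent from a user terminal."""
    item_store.append({"owner": user_id, "item": item_id, "position": None})

def place_items(item_store, spawn_positions):
    """S302: place received items (e.g. palette 1160, brush 1170) in the space."""
    for item, pos in zip(item_store, spawn_positions):
        item["position"] = pos

def grip_nearest_item(item_store, hand_position, grip_pressed, reach=0.3):
    """S303: when the performer presses the grip button, attach the closest
    placed item within reach to the character's hand."""
    if not grip_pressed:
        return None
    placed = [i for i in item_store if i["position"] is not None]
    if not placed:
        return None
    nearest = min(placed, key=lambda i: math.dist(i["position"], hand_position))
    if math.dist(nearest["position"], hand_position) <= reach:
        nearest["position"] = hand_position   # item follows the hand while gripped
        return nearest
    return None

# Example flow: a viewer gifts a brush, it is placed, and the character grips it.
items = []
receive_gift(items, "user42", "makeup_brush_1170")
place_items(items, [(0.2, 1.0, 0.5)])
held = grip_nearest_item(items, hand_position=(0.25, 1.05, 0.45), grip_pressed=True)
```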
  • the character drawing unit 650 executes makeup processing (S304).
  • the details of the makeup process are as described in the first embodiment.
  • At this time, the character drawing unit 650 can automatically select the character part to be drawn according to the item held by the character 1120 and display the corresponding texture data on the performer operation screen 700.
  • Alternatively, the character drawing unit 650 may recognize the part that the character 1120 played by the performer user designates with the item as the drawing target part and display its texture data.
  • For example, when the character 1120 is operated so as to bring the makeup brush 1170 close to the character's face, the character drawing unit 650 can recognize the character's face as the drawing target part.
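  • One way this behaviour could be inferred from the held item's position is sketched below; the anchor coordinates, threshold, and names are assumptions for illustration only.
```python
import math

# Assumed anchor positions of drawable character parts in the virtual space.
PART_ANCHORS = {
    "face": (0.0, 1.6, 0.1),
    "nails": (0.3, 1.1, 0.2),
}

def drawing_target_from_item(item_position, anchors=PART_ANCHORS, threshold=0.25):
    """Return the character part whose anchor is closest to the held item
    (e.g. the makeup brush 1170 brought near the face), or None if no part is
    close enough; the corresponding texture would then be shown on screen 700."""
    part, pos = min(anchors.items(), key=lambda kv: math.dist(kv[1], item_position))
    return part if math.dist(pos, item_position) <= threshold else None

# Example: the brush held near the face selects the face texture.
target = drawing_target_from_item((0.05, 1.55, 0.12))   # -> "face"
```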
  • the image generation unit 630 performs an image generation process of arranging the corrected character in the virtual space (S305).
  • This image generation process is also as described in the first embodiment. As shown in the example of FIG. 16, an image of the character 1120 to which eyelash makeup has been applied is delivered in real time in the virtual space.
  • As described above, the user can provide a desired item to the character in the virtual space, and the character can decorate a part of itself with the provided item.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

[Problem] To provide a method with which it is possible to effectively promote interaction between a performer and a user in a virtual space. [Solution] This method is for providing, to a plurality of users including a performer user and a viewer user, a virtual space containing given content that includes a character performed by said performer user, the method comprising: a step of detecting an input from the performer user via a control device; a step of controlling the character's bodily movement or expression on the basis of the input from the performer user; and a step of making a change to a part of an image of the character on the basis of the input from the performer user.

Description

Method for providing a virtual space having predetermined content
The present invention relates to a method of providing a virtual space to multiple users, and more particularly to a method of providing a virtual space having predetermined content that includes a character played by a performer.
Motion capture is a technology that digitally captures the movement of a performer user (hereinafter, "performer") in the physical space; the captured movement is used for computer animation such as video and for expressing character movement in games and the like.
In recent years, methods have also been devised for controlling a character in a virtual space through the movement of a head mounted display (hereinafter, "HMD") worn by the performer and through input from a controller held by the performer.
Content sharing services that deliver content such as live video to users are also provided; in such services, a mechanism has been disclosed for encouraging the acquisition or purchase of items that express a positive evaluation of a content poster or performer (for example, Patent Document 1).
Japanese Unexamined Patent Application Publication No. 2015-090526 (JP 2015-090526 A)
The technology disclosed in the above document determines the display position of a user avatar in the virtual space according to the amount of items the user has purchased, in order to stimulate the user's desire for recognition or a sense of competition among users. However, although displaying the user avatar closer to the performer may satisfy the user's desire for recognition, there is no interaction or communication with the performer.
An object of the present invention is therefore to provide a method that can more effectively promote interaction between a performer and a user in a virtual space.
A method provided by one embodiment of the present invention provides, to a plurality of users including a performer user and a viewing user, a virtual space having predetermined content that includes a character played by the performer user. The method includes a step of detecting input from the performer user via a control device, a step of controlling either the motion or the facial expression of the character based on the performer user's input, and a step of making a change to a part of the character's image based on the performer user's input.
According to the present invention, interaction and communication between a performer and users in a virtual space can be promoted more effectively.
FIG. 1 shows a schematic view of the external appearance of the head mounted display 110 according to the first embodiment.
FIG. 2 shows a schematic view of the external appearance of the controller 210 according to the first embodiment.
FIG. 3 shows a configuration diagram of the HMD system 300 according to the first embodiment.
FIG. 4 shows a functional configuration diagram of the HMD 110 according to the first embodiment.
FIG. 5 shows a functional configuration diagram of the controller 210 according to the first embodiment.
FIG. 6 shows a functional configuration diagram of the image generation device 310 according to the first embodiment.
FIG. 7 shows a flowchart of a method for providing a virtual space according to the first embodiment.
FIG. 8 shows a flowchart of the makeup process according to the first embodiment.
FIG. 9 shows an example of the virtual space provided to users according to the first embodiment.
FIG. 10 shows another example of the virtual space provided to users according to the first embodiment.
FIG. 11 shows an example of character drawing by the performer user according to the first embodiment.
FIG. 12 shows another example of character drawing by the performer user according to the first embodiment.
FIG. 13 shows a flowchart of a method for providing a virtual space to users according to the second embodiment.
FIG. 14 shows an example of the virtual space provided to users according to the second embodiment.
FIG. 15 shows another example of the virtual space provided to users according to the second embodiment.
FIG. 16 shows yet another example of the virtual space displayed to users in the second embodiment.
<Embodiment 1>
A specific example of a program for controlling a head mounted display system according to an embodiment of the present invention will be described below with reference to the drawings. Note that the present invention is not limited to these examples; it is defined by the scope of the claims and is intended to include all modifications within the meaning and scope equivalent to the claims. In the following description, the same elements are denoted by the same reference numerals in the description of the drawings, and redundant description is omitted.
FIG. 1 is a schematic view showing the external appearance of the head mounted display (HMD) 110 according to this embodiment. The HMD 110 is mounted on the performer's head and includes a display panel 120 placed in front of the performer's left and right eyes. As the display panel, both optically transmissive and non-transmissive displays are conceivable; in this embodiment, a non-transmissive display panel, which can provide a more immersive experience, is used as an example. A left-eye image and a right-eye image are displayed on the display panel 120, and an image with a stereoscopic effect can be provided to the performer by utilizing the parallax of both eyes. As long as a left-eye image and a right-eye image can be displayed, it is possible to provide separate displays for the left and right eyes or a single integrated display for both eyes.
The housing 130 of the HMD 110 further includes a sensor 140. Although not illustrated, the sensor may include, for example, a magnetic sensor, an acceleration sensor, a gyro sensor, or a combination thereof, in order to detect movements such as the orientation and tilt of the performer's head. Let the vertical direction of the performer's head be the Y axis; among the axes orthogonal to the Y axis, let the axis that connects the center of the display panel 120 with the performer and corresponds to the performer's front-back direction be the Z axis; and let the axis orthogonal to the Y and Z axes and corresponding to the performer's left-right direction be the X axis. The sensor 140 can then detect the rotation angle around the X axis (the so-called pitch angle), the rotation angle around the Y axis (the so-called yaw angle), and the rotation angle around the Z axis (the so-called roll angle).
In addition to, or instead of, the sensor 140, the housing 130 of the HMD 110 may include a plurality of light sources 150 (for example, infrared LEDs or visible-light LEDs). A camera (for example, an infrared or visible-light camera) installed outside the HMD 110 (for example, in the room) detects these light sources, whereby the position, orientation, and tilt of the HMD 110 in a specific space can be detected. Alternatively, for the same purpose, the HMD 110 may be provided with a camera for detecting light sources installed in the housing 130 of the HMD 110.
The housing 130 of the HMD 110 can also include eye tracking sensors. The eye tracking sensors are used to detect the gaze directions and the point of gaze of the performer's left and right eyes. Various types of eye tracking sensors are conceivable; for example, the position of the light reflected on the cornea when the left and right eyes are irradiated with weak infrared light can be used as a reference point, the gaze direction can be detected from the position of the pupil relative to the reflected light, and the intersection of the gaze directions of the left and right eyes can be detected as the point of gaze.
FIG. 2 is a schematic view showing the external appearance of the controller 210 according to this embodiment. The controller 210 supports the performer in making predetermined inputs in the virtual space. The controller 210 can be configured as a set of a left-hand controller 220 and a right-hand controller 230. The left-hand controller 220 and the right-hand controller 230 can each have operation trigger buttons 240, infrared LEDs 250, a sensor 260, a joystick 270, and a menu button 280.
The operation trigger buttons 240 are arranged at positions 240a and 240b, where it is assumed that the trigger is pulled with the middle finger and the index finger when the grip 235 of the controller 210 is held. A plurality of infrared LEDs 250 are provided on a frame 245 formed in a ring shape extending downward from both side surfaces of the controller 210; by detecting the positions of these infrared LEDs with a camera (not shown) provided outside the controller, the position, orientation, and tilt of the controller 210 in a specific space can be detected.
The controller 210 may also incorporate a sensor 260 in order to detect movements such as the orientation and tilt of the controller 210. Although not shown, the sensor 260 may include, for example, a magnetic sensor, an acceleration sensor, a gyro sensor, or a combination thereof. Furthermore, a joystick 270 and a menu button 280 can be provided on the upper surface of the controller 210. The joystick 270 can be moved 360 degrees around a reference point and is assumed to be operated with the thumb when the grip 235 of the controller 210 is held; the menu button 280 is likewise assumed to be operated with the thumb. The controller 210 may also incorporate a vibrator (not shown) for applying vibration to the hand of the performer operating the controller 210. In order to output information such as the performer's inputs via the buttons and joystick and the position, orientation, and tilt of the controller 210 obtained via the sensors, and to receive information from the host computer, the controller 210 has an input/output unit and a communication unit.
Based on whether the performer grips the controller 210 and operates the various buttons and joysticks, and on the information detected by the infrared LEDs and sensors, the system determines the movement and posture of the performer's hands, and a simulated hand of the performer can be displayed and operated in the virtual space.
FIG. 3 is a configuration diagram of the HMD system 300 according to this embodiment. The HMD system 300 can be composed of, for example, the HMD 110, the controller 210, and an image generation device 310 that functions as a host computer. An infrared camera (not shown) or the like for detecting the position, orientation, and tilt of the HMD 110 and the controller 210 can also be added. These devices can be connected to each other by wired or wireless means. For example, each device may be provided with a USB port and communication may be established by connecting them with cables; alternatively, communication can be established by wire or wirelessly using HDMI (registered trademark), wired LAN, infrared, Bluetooth (registered trademark), WiFi (registered trademark), or the like. The image generation device 310 may be any device having calculation processing functions, such as a PC, a game machine, or a mobile communication terminal. The image generation device 310 can also be connected to a plurality of user terminals 401A, 401B, 401C, and the like via a network such as the Internet and can transmit the generated image in streaming or download form. Each of the user terminals 401A and the like can reproduce the transmitted image by means of an internet browser or an appropriate viewer. Here, the image generation device 310 can transmit images to the plurality of user terminals directly or via another content distribution server. Furthermore, by having the HMD 110 execute the processing performed by the image generation device 310, the HMD 110 can be configured to function stand-alone without depending on the network.
FIG. 4 shows a functional configuration diagram of the HMD 110 according to this embodiment. As described with reference to FIG. 1, the HMD 110 can include the sensor 140. Although not shown, the sensor may include, for example, a magnetic sensor, an acceleration sensor, a gyro sensor, or a combination thereof, in order to detect movements such as the direction and tilt of the performer's head. Eye tracking sensors can also be included; they are used to detect the gaze directions and point of gaze of the performer's left and right eyes. LEDs 150 emitting infrared or ultraviolet light may also be provided in order to detect movements such as the direction and tilt of the performer's head with higher accuracy, or to detect the position of the performer's head. A camera 160 for capturing the scenery outside the HMD can also be provided, as well as a microphone 170 for collecting the performer's utterances and headphones 180 for outputting audio. The microphone and headphones may also be provided as devices separate from the HMD 110.
The HMD 110 can further include an input/output unit 190 for establishing a wired connection with peripheral devices such as the controller 210 and the image generation device 310, and a communication unit 115 for establishing a wireless connection using infrared, Bluetooth (registered trademark), WiFi (registered trademark), or the like. Information about movements such as the direction and tilt of the performer's head acquired by the sensor 140 is transmitted by the control unit 125 to the image generation device 310 via the input/output unit 190 and/or the communication unit 115. As will be described in detail later, the image generated by the image generation device 310 based on the movement of the performer's head is received via the input/output unit 190 and/or the communication unit 115 and is output to the display unit 120 by the control unit 125.
FIG. 5 shows a functional configuration diagram of the controller 210 according to this embodiment. As described with reference to FIG. 2, the controller 210 can be configured as a set of a left-hand controller 220 and a right-hand controller 230; either controller can be provided with an operation unit 245 including the operation trigger buttons 240, the joystick 270, and the menu button 280. The controller 210 may also incorporate a sensor 260 in order to detect movements such as the orientation and tilt of the controller 210; although not shown, the sensor 260 may include, for example, a magnetic sensor, an acceleration sensor, a gyro sensor, or a combination thereof. Furthermore, a plurality of infrared LEDs 250 are provided, and by detecting the positions of these infrared LEDs with a camera (not shown) provided outside the controller, the position, orientation, and tilt of the controller 210 in a specific space can be detected. The controller 210 can include an input/output unit 255 for establishing a wired connection with peripheral devices such as the HMD 110 and the image generation device 310, and a communication unit 265 for establishing a wireless connection using infrared, Bluetooth (registered trademark), WiFi (registered trademark), or the like. Information input by the performer via the operation unit 245 and information such as the orientation and tilt of the controller 210 acquired by the sensor 260 are transmitted to the image generation device 310 via the input/output unit 255 and/or the communication unit 265.
FIG. 6 shows a functional configuration diagram of the image generation device 310 according to the present embodiment. As the image generation device 310, a device such as a PC, a game console, or a mobile communication terminal can be used, which stores the input information transmitted from the HMD 110 and the controller 210 as well as information on the performer's head movement and the controller's movement and operations acquired by sensors and the like, performs predetermined calculation processing, and generates images. The image generation device 310 can include an input/output unit 320 for establishing a wired connection with peripheral devices such as the HMD 110 and the controller 210, and a communication unit 330 for establishing a wireless connection using infrared, Bluetooth (registered trademark), WiFi (registered trademark), or the like. The information on the performer's head movement and the controller's movement and operations received from the HMD 110 and/or the controller 210 via the input/output unit 320 and/or the communication unit 330 is detected by the control unit 340 as input content including the performer's position, gaze, motions such as posture, speech, and operations; in accordance with the performer's input content, the character is controlled and an image is generated by executing a control program stored in the storage unit 350. The control unit 340 can be configured by a CPU, but by further providing a GPU specialized for image processing, information processing and image processing can be distributed and the overall processing can be made more efficient. The image generation device 310 can also communicate with other computing devices and have them share the information processing and image processing.
Further, the control unit 340 of the image generation device 310 has a user input detection unit 610 that detects the information on the performer's head movement and speech and on the controller's movement and operations received from the HMD 110 and/or the controller 210, a character control unit 620 that executes a control program stored in the control program storage unit on a character stored in advance in the character data storage unit 660 of the storage unit 350, and an image generation unit 630 that generates an image based on the character control. Here, control of the character's movement is realized by converting information such as the orientation and tilt of the performer's head and the movement of the hands, detected via the HMD 110 and the controller 210, into movements of the respective parts of a bone structure created in accordance with the movements and constraints of the joints of the human body, and by associating the bone structure with the character data stored in advance so that the movements of the bone structure are applied to the character. In addition, the control unit 340 has an item reception unit 640 that accepts, from other user terminals, selections of items to be placed in the virtual space, and a character drawing unit 650 that corrects and updates the character's texture data in response to operations by the performer user. The image generation unit 630 generates a virtual space including a character display area and a user display area as shown in FIG. 9. The screen of the virtual space can be displayed on the display unit of each user terminal via the network, or on the display unit 120 of the HMD 110 worn by the performer. In particular, the screen displayed on the display unit 120 of the HMD 110 worn by the performer can include information for performer operations that is shown only to the performer. In addition to the virtual space screen, a screen for performer operations can also be displayed on the display unit 120.
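By way of a purely illustrative, non-limiting sketch, the following Python fragment shows one way tracked HMD and controller poses could be converted into rotations of a character's bone structure; the names (Pose, Bone, apply_tracking) and the joint limits are assumptions introduced for explanation and are not part of the disclosure, and a full implementation would additionally solve the intermediate joints with inverse kinematics.

```python
# Illustrative sketch (not the disclosed implementation): applying tracked HMD
# and controller poses to a character's bone structure within joint limits.
from dataclasses import dataclass, field

@dataclass
class Pose:
    position: tuple      # (x, y, z) in tracking space
    rotation: tuple      # (pitch, yaw, roll) in degrees

@dataclass
class Bone:
    name: str
    rotation: tuple = (0.0, 0.0, 0.0)
    children: list = field(default_factory=list)

def clamp(angles, lo, hi):
    """Respect simple joint constraints so the rig stays anatomically plausible."""
    return tuple(max(l, min(h, a)) for a, l, h in zip(angles, lo, hi))

def apply_tracking(skeleton: dict, hmd: Pose, left: Pose, right: Pose) -> None:
    # The head bone follows the HMD orientation within assumed joint limits.
    skeleton["head"].rotation = clamp(hmd.rotation, (-60, -80, -40), (60, 80, 40))
    # Hand bones follow each controller; an IK solver would normally fill in
    # the elbow/shoulder chain, which is omitted here for brevity.
    skeleton["left_hand"].rotation = left.rotation
    skeleton["right_hand"].rotation = right.rotation
```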
The storage unit 350 stores, in the above-described character data storage unit 660, data related to 3D rendering of the character, such as the character's 3D object model, texture data, and animation data, as well as information related to the character such as its attributes. The control program storage unit 670 stores a program for controlling the character's motion and facial expressions in the virtual space and a program for generating the virtual space containing content such as the character and user avatars. The streaming data storage unit 680 stores the images generated by the image generation unit 630. The images of the virtual space stored as stream data can be delivered simultaneously with the live images in response to user requests. The storage unit 350 has an item data storage unit 685 that stores data on items, and further has a user data storage unit 690 that stores information related to users and user avatars. A feature of the image generation device in the present embodiment is that it not only transmits images of the virtual space to a plurality of user terminals but can also accept items and comments from the user terminals. Alternatively, all or part of the functions of the image generation device 310 can be specialized solely for generating images, and a separate content service server can be provided; this content service server then transmits images to the user terminals and provides functions such as accepting items and comments from the user terminals.
FIG. 7 shows a flowchart of a method for providing a virtual space according to the first embodiment of the present invention. This method is executed, for example, by the respective units of the control unit 340 of the image generation device 310 processing the control program stored in the control program storage unit 670.
First, the input/output detection unit 610 of the control unit 340 detects a user input for modifying a part of the character played by the performer user (S101). Several forms of user input are conceivable here: for example, a method in which the performer directly points at the character in the virtual space with the controller to designate the character part to be modified; a method in which a character-part modification menu is displayed on the performer's operation screen and the user designates the part to be modified using the controller; or a method in which the performer user uses a coordinate input device as a second screen (for example, the pen tablet 730 in FIG. 11, hereinafter the "pen tablet") and, on the pen tablet screen, issues an instruction to modify a character part and designates the part to be modified.
Here, FIG. 9 is a diagram showing an example of the virtual space 1100 displayed on a user terminal according to the present embodiment. As shown in FIG. 9, the user terminal can display an image of the virtual space including the character 1120 in an image display unit 1110, such as a viewer for displaying an image embedded in its built-in web browser. The character 1120 placed in the virtual space can act on the basis of user input obtained via the HMD 110 and/or the controller 210 worn by the performer acting as the presenter, such as movements like the tilt and orientation of the performer's head, the content of the performer's speech, movements like the tilt and orientation of the controller 210, and the content of the performer's operations via the controller 210. Further, the virtual space 1110 is provided with an area 1130 for displaying a plurality of user avatars forming the audience, and the virtual space 1100 further has a comment input unit 1140 with which a user enters and posts a comment, and a gift item selection unit 1150 with which a user selects and posts a gift item. A comment entered and posted by a user is displayed in a predetermined area, for example in a speech balloon displayed near the position of the corresponding user avatar in FIG. 9.
In S101, for example as shown in FIG. 11, the performer user can designate the character part to be modified by performing a predetermined operation, such as pointing with the controller at a part of the character displayed in the virtual space 710 on the performer operation screen 700 shown on the display unit 120 of the HMD 110.
Returning to FIG. 7, based on the user input, the character drawing unit 650 then determines the character part to be modified and drawn (S102). For example, when the user indicates that the character part to be modified is the eyelashes, the character drawing unit 650 refers to the character's texture data stored in the character data storage unit 660 of the storage unit 350 and displays the texture data containing the designated part in the makeup area of the display unit 120. The makeup display area can be presented as a drawing work area (for example, area 720) on the screen 700 displayed on the display unit 120 of the HMD 110 as shown in FIG. 11, or it can be displayed on an attached monitor (not shown) or on the display unit of the pen tablet 730 as shown in FIG. 11. When it is displayed on the performer operation screen 700, a layout that makes the makeup operation easy for the performer user can be adopted, such as using an area 720 that is separate from, or overlaid on, the area 710 in which the virtual space is displayed as the makeup display area.
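As a purely illustrative sketch under assumed data structures, the following fragment shows how a part designated in S101 could be resolved to the texture region shown in the drawing work area in S102; the part-to-UV table, texture size, and function name are hypothetical and not taken from the disclosure.

```python
# Hypothetical sketch: resolving a designated character part (S101) to the
# texture region displayed in the makeup/drawing work area (S102).
PART_TO_UV_RECT = {
    # part name: (u_min, v_min, u_max, v_max) in normalized texture coordinates
    "eyelashes": (0.25, 0.60, 0.55, 0.75),
    "nails":     (0.70, 0.10, 0.95, 0.30),
}

def texture_region_for_part(part: str, tex_width: int, tex_height: int):
    """Return the pixel rectangle of the character texture that contains the part."""
    u0, v0, u1, v1 = PART_TO_UV_RECT[part]
    return (int(u0 * tex_width), int(v0 * tex_height),
            int(u1 * tex_width), int(v1 * tex_height))

# e.g. the region handed to the drawing work area (area 720) for a 2048x2048 texture
region = texture_region_for_part("eyelashes", 2048, 2048)
```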
Subsequently, the character drawing unit 650 executes the makeup process (S103). Details of the makeup process will be described with reference to FIG. 8.
FIG. 8 shows a flowchart of the makeup process according to the first embodiment. As an example, as shown in FIG. 11, texture data containing the character part that the user wants to modify is displayed in the area 720, and the user can freely modify and color the displayed texture data by providing input with a controller such as the pen tablet 730.
In this example, a process is described in which the user performs pen input with the pen tablet 730 on the character's texture data displayed in the predetermined area 720 of FIG. 11, thereby modifying the character's parts. FIG. 11 is an example of the screen displayed in the area 720 when the user instructs makeup of the eyelashes, and FIG. 12 is an example of the screen displayed in the area 720 when the user instructs decoration of the nails.
In FIG. 8, the input/output detection unit 610 first detects a user input (S201). For example, in FIG. 11, when the user wants to add eyelashes around the eyes of the character's face displayed in the drawing area 720, the user draws with a pen in the input area of the pen tablet 730 serving as the input device. The input/output detection unit 610 detects that the user has made a drawing input with the pen.
Next, the character drawing unit 650 detects the coordinates at which the pen input was made in the input area of the pen tablet 730 (S202). The coordinates in the input area of the pen tablet 730 correspond to the coordinates in the area 720 of the performer operation screen 700, and the user's input on the pen tablet 730 is reflected as input to the texture data in the area 720.
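A minimal, non-limiting sketch of such a coordinate correspondence is given below, assuming both the tablet input area and the area 720 are axis-aligned rectangles related by a linear mapping; the rectangle sizes and sample values are invented for illustration.

```python
# Hypothetical sketch: mapping a pen-tablet coordinate (S202) onto the
# drawing work area 720 of the performer operation screen 700.
def map_point(x, y, src_rect, dst_rect):
    """Linearly map a point from one axis-aligned rectangle to another."""
    sx0, sy0, sx1, sy1 = src_rect
    dx0, dy0, dx1, dy1 = dst_rect
    u = (x - sx0) / (sx1 - sx0)          # normalized position on the tablet
    v = (y - sy0) / (sy1 - sy0)
    return dx0 + u * (dx1 - dx0), dy0 + v * (dy1 - dy0)

tablet_rect = (0, 0, 21600, 13500)       # pen tablet input area (device units, assumed)
area720_rect = (0, 0, 1280, 800)         # drawing work area on screen 700 (pixels, assumed)

pen_x, pen_y = 10800, 6750               # sample pen event from the tablet driver
area_x, area_y = map_point(pen_x, pen_y, tablet_rect, area720_rect)
```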
Next, the character drawing unit 650 performs drawing processing on the character's texture data based on the content of the user's input (S203). For example, it executes drawing processing that adds the eyelash illustration input by the user to the character's texture data. Here, an indicator for correcting the color of the texture can be displayed, and the color can be corrected in response to a user request. Color correction here includes correction of brightness, saturation, hue, sharpness, contrast, and the like. The modified texture data of the character is stored as appropriate in the character data storage unit 660 of the storage unit 350.
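As an informal illustration of S203 only, the following sketch uses the Pillow imaging library to stamp pen strokes into the texture and to apply the kinds of color corrections mentioned above; the file names, stroke format, and choice of library are assumptions, not the disclosed drawing routine.

```python
# Hypothetical sketch of S203 using Pillow: draw the user's pen strokes onto
# the character texture and optionally adjust brightness/saturation/contrast.
from PIL import Image, ImageDraw, ImageEnhance

def apply_pen_strokes(texture_path, strokes, color=(40, 30, 30), width=3):
    """strokes: list of [(x0, y0), (x1, y1), ...] polylines in texture pixels."""
    tex = Image.open(texture_path).convert("RGB")
    draw = ImageDraw.Draw(tex)
    for polyline in strokes:
        draw.line(polyline, fill=color, width=width)
    return tex

def correct_colors(tex, brightness=1.0, saturation=1.0, contrast=1.0, sharpness=1.0):
    tex = ImageEnhance.Brightness(tex).enhance(brightness)
    tex = ImageEnhance.Color(tex).enhance(saturation)
    tex = ImageEnhance.Contrast(tex).enhance(contrast)
    return ImageEnhance.Sharpness(tex).enhance(sharpness)

# e.g. add an eyelash stroke, darken slightly, then persist the updated texture
tex = apply_pen_strokes("character_face.png", [[(612, 1280), (640, 1265), (668, 1278)]])
tex = correct_colors(tex, brightness=0.95, contrast=1.05)
tex.save("character_face_updated.png")
```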
Subsequently, the image generation unit 630 performs image generation processing that places the modified character in the virtual space (S204). Based on the character data stored in the storage unit 350, the image generation unit 630 generates the character's 3D object by mapping the 3D shape data into a predetermined space and mapping the texture data onto the 3D shape. The virtual space is generated by superimposing the character's 3D object on the background image of the virtual space, and the virtual space is displayed on the display units of the user terminal 401 and the HMD 110. For example, as shown in FIG. 10, an image of the character with the added eyelashes is displayed in the virtual space 710 of the HMD 110. The same virtual space is also transmitted to each user terminal 401 connected to the image generation device 310 via the network.
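The regeneration step can be pictured, very roughly, as in the sketch below; the renderer interface (load_mesh, set_texture, composite) is invented as a stand-in for whatever 3D engine actually performs the texture update and compositing, so this is an assumption-laden outline rather than the disclosed implementation.

```python
# Hypothetical sketch of S204: rebuild the character's 3D object with the
# updated texture and composite it over the virtual-space background.
# Renderer, load_mesh, set_texture, and composite are invented stand-ins
# for an actual 3D engine, not APIs from the disclosure.
class Renderer:
    def load_mesh(self, path): ...
    def set_texture(self, mesh, texture_path): ...
    def composite(self, background_path, mesh, camera): ...

def regenerate_virtual_space(renderer, character, camera):
    mesh = renderer.load_mesh(character["model_path"])        # 3D shape data
    renderer.set_texture(mesh, character["texture_path"])     # updated texture from S203
    # Superimpose the character object on the background to form the frame
    return renderer.composite(character["background_path"], mesh, camera)

# The resulting frame would then be shown on the HMD display unit 120 and
# streamed to each connected user terminal 401.
```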
As described above, according to the present embodiment, by applying processing that changes the character image in real time in the virtual space containing the character image provided to the user terminals, a sense of unity with the character can be given to the user, and a more interactive space can be provided.
<Second Embodiment>
FIG. 13 shows a flowchart of a method for providing a virtual space to users according to the second embodiment. This method is executed, for example, by the respective units of the control unit 340 of the image generation device 310 processing the control program stored in the control program storage unit 670.
First, the item reception unit 640 of the control unit 340 receives an item from any one of the plurality of user terminals 401 (S301). For example, as shown in FIG. 14 and FIG. 15, from the gift item selection unit 1150 of the virtual space 1100, with which a user selects and posts gift items, the user can select, for example, a nail tool in FIG. 14 or a makeup tool in FIG. 15, and by consuming billing points can send the selected item to the character as a gift.
Subsequently, the image generation unit 630 places the item selected by the user in S301 in the virtual space 1100 (S302). For example, in FIG. 15, when the user selects the makeup palette 1160 and the makeup brush 1170, the image generation unit 630 generates the image so that the palette 1160 and the brush 1170 are placed in the virtual space 1110.
Subsequently, the character control unit 620 detects the performer user's actions and controls the character in the virtual space (S303). For example, the performer user moves so that the character touches the palette 1160 and the brush 1170 placed in the virtual space 1110. More specifically, while looking at the performer operation screen 700, the performer user moves the controller 210 and presses a predetermined operation button; the input/output detection unit 610 of the image generation device 310 detects the movement of the controller 210, and the character control unit 620, as the corresponding action, controls the character 1120 so that it grasps the item in the virtual space 1110.
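One possible, purely illustrative realization of this grasp control is sketched below, assuming positions are expressed in virtual-space units and a simple distance threshold decides whether an item can be picked up; none of these names or values come from the disclosure.

```python
# Hypothetical sketch of S303: when the trigger is pressed while the
# controller (i.e. the character's hand) is close enough to an item,
# attach the item to the character's hand.
import math

GRAB_RADIUS = 0.15  # assumed pick-up distance in virtual-space units

def update_grab(controller, hand_position, items, character):
    """items: e.g. the palette 1160 and brush 1170 placed in the virtual space."""
    if not controller["trigger_pressed"]:
        return None
    for item in items:
        if math.dist(hand_position, item["position"]) <= GRAB_RADIUS:
            character["held_item"] = item    # stand-in for attaching to the hand bone
            item["position"] = hand_position
            return item
    return None
```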
Subsequently, the character drawing unit 650 executes the makeup process (S304). The details of the makeup process are as described in the first embodiment. Here, the character drawing unit 650 can automatically select the character part to be drawn according to the item held by the character 1120 and display the corresponding texture data on the performer operation screen 700. Alternatively, the character drawing unit 650 can recognize the part indicated with the item by the character 1120 played by the performer user as the part to be drawn, and display its texture data. In the example of FIG. 16, the character 1120 moves so as to bring the makeup brush 1170 close to the character's face, so the character drawing unit 650 can recognize the character's face as the part to be drawn.
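One plausible, non-limiting realization of this automatic selection is a nearest-part lookup from the held item's tip position, sketched below with invented part anchor points and an assumed selection radius.

```python
# Hypothetical sketch of S304's automatic part selection: pick the character
# part whose anchor point is nearest to the tip of the held item (e.g. the
# makeup brush 1170), within a threshold. Anchors and threshold are assumptions.
import math

PART_ANCHORS = {
    "face":  (0.00, 1.55, 0.10),
    "nails": (0.25, 0.95, 0.20),
}
SELECT_RADIUS = 0.35

def select_draw_target(item_tip_position):
    best_part, best_dist = None, SELECT_RADIUS
    for part, anchor in PART_ANCHORS.items():
        d = math.dist(item_tip_position, anchor)
        if d < best_dist:
            best_part, best_dist = part, d
    return best_part  # None if the item is not near any drawable part

# e.g. a brush tip held near the face selects "face" as the drawing target
target = select_draw_target((0.05, 1.50, 0.12))
```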
Subsequently, the image generation unit 630 performs image generation processing that places the modified character in the virtual space (S305). This image generation processing is also as described in the first embodiment. As shown in the example of FIG. 16, the image of the character 1120 with the eyelash makeup applied is transmitted to the virtual space in real time.
As described above, according to the present embodiment, a user can provide a desired item to the character in the virtual space, and the character can decorate its own parts with the provided item; in addition to ordinary communication through chat and conversation, a new form of communication can thus be provided.
The embodiments described above are merely examples for facilitating the understanding of the present invention and are not intended to limit its interpretation. The present invention can be modified and improved without departing from its spirit, and it goes without saying that the present invention includes equivalents thereof.
110 Head mounted display
115 Communication unit
120 Display panel
125 Control unit
140 Sensor
150 Light source
160 Camera
170 Microphone
180 Headphones
190 Input/output unit
210 Controller
220 Left-hand controller
230 Right-hand controller
240a, 240b Trigger buttons
245 Operation unit
250 Infrared LED
255 Input/output unit
260 Sensor
270 Joystick
280 Menu button
290 Frame
310 Image generation device
320 Input/output unit
330 Communication unit
340 Control unit
350 Storage unit
401A, 401B, 401C User terminal
 

Claims (5)

  1.  A method for providing, to a plurality of users including a performer user and a viewing user, a virtual space having predetermined content including a character played by the performer user, the method comprising:
    detecting an input of the performer user via a control device;
    controlling either a motion or a facial expression of the character based on the input of the performer user; and
    modifying a part of an image of the character based on the input of the performer user.
  2.  The method for providing a virtual space according to claim 1, further comprising transmitting the image of the character to which the modification has been made.
  3.  The method for providing a virtual space according to claim 1, wherein the input of the performer user includes inputting coordinates with a coordinate input device.
  4.  The method for providing a virtual space according to claim 1, wherein the input of the performer user includes drawing on an image of the character displayed on a display unit of a coordinate input device.
  5.  The method for providing a virtual space according to claim 1, further comprising:
    providing a plurality of objects;
    accepting, from a first user among the plurality of users, a selection of one object among the plurality of objects; and
    placing the one object in the virtual space,
    wherein the input of the performer user includes causing the character and the one object to interact with each other.
PCT/JP2019/049962 2018-12-20 2019-12-19 Method for providing virtual space having given content WO2020130112A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018238360A JP2020101902A (en) 2018-12-20 2018-12-20 Method for providing virtual space having prescribed content
JP2018-238360 2018-12-20

Publications (1)

Publication Number Publication Date
WO2020130112A1 true WO2020130112A1 (en) 2020-06-25

Family

ID=71101406

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/049962 WO2020130112A1 (en) 2018-12-20 2019-12-19 Method for providing virtual space having given content

Country Status (2)

Country Link
JP (1) JP2020101902A (en)
WO (1) WO2020130112A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7062248B1 (en) 2021-12-17 2022-05-06 17Live株式会社 Computer programs, terminals and methods

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018045663A (en) * 2016-09-16 2018-03-22 株式会社リコー Display control device, display control program, display system and display control method
JP6382468B1 (en) * 2018-05-08 2018-08-29 グリー株式会社 Movie distribution system, movie distribution method, and movie distribution program for distributing movie including animation of character object generated based on movement of actor

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018045663A (en) * 2016-09-16 2018-03-22 株式会社リコー Display control device, display control program, display system and display control method
JP6382468B1 (en) * 2018-05-08 2018-08-29 グリー株式会社 Movie distribution system, movie distribution method, and movie distribution program for distributing movie including animation of character object generated based on movement of actor

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CG WORLD, VR ANIME PRODUCTION TOOL TO SUPPORT LIVESTREAMING, vol. 241, 1 September 2018 (2018-09-01), pages 52 - 57 *

Also Published As

Publication number Publication date
JP2020101902A (en) 2020-07-02

Similar Documents

Publication Publication Date Title
US20180373413A1 (en) Information processing method and apparatus, and program for executing the information processing method on computer
US20190240573A1 (en) Method for controlling characters in virtual space
US10223064B2 (en) Method for providing virtual space, program and apparatus therefor
WO2019216249A1 (en) Method for providing virtual space having prescribed content
US20180374275A1 (en) Information processing method and apparatus, and program for executing the information processing method on computer
JP2023015061A (en) program
JP2022184958A (en) animation production system
JP6684746B2 (en) Information processing method, computer and program
WO2020130112A1 (en) Method for providing virtual space having given content
JP2023116432A (en) animation production system
JP2022153479A (en) Animation creation system
JP2022153478A (en) Animation creation system
JP2022153477A (en) Animation creation system
JP2022153476A (en) Animation creation system
JP6964302B2 (en) Animation production method
JP7218874B2 (en) animation production system
JP7218873B2 (en) animation production system
JP2019219702A (en) Method for controlling virtual camera in virtual space
JP7218875B2 (en) animation production system
JP7390542B2 (en) Animation production system
JP6955725B2 (en) Animation production system
JP2022025473A (en) Video distribution method
JP2020149397A (en) Method of controlling communication between characters in virtual space
JP2022025472A (en) Animation creation system
JP2020067810A (en) Character control method in virtual space

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19897645

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19897645

Country of ref document: EP

Kind code of ref document: A1