WO2019216249A1 - Method for providing a virtual space having prescribed content - Google Patents

Method for providing a virtual space having prescribed content

Info

Publication number
WO2019216249A1
WO2019216249A1 (PCT/JP2019/017727, JP2019017727W)
Authority
WO
WIPO (PCT)
Prior art keywords
user
virtual space
providing
performer
information
Prior art date
Application number
PCT/JP2019/017727
Other languages
English (en)
Japanese (ja)
Inventor
義仁 近藤
雅人 室橋
Original Assignee
株式会社エクシヴィ
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社エクシヴィ filed Critical 株式会社エクシヴィ
Priority to JP2020518268A priority Critical patent/JPWO2019216249A1/ja
Publication of WO2019216249A1 publication Critical patent/WO2019216249A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering

Definitions

  • the present invention relates to a method for providing a virtual space for a plurality of users. More specifically, the present invention relates to a method for providing a virtual space having predetermined content including a character played by a performer.
  • Motion capture is a technology that digitally captures the performance of a performer user (hereinafter referred to as “performer”) in real space, and the captured motion is used to represent character motion in computer animation such as video and games.
  • HMD (head mounted display)
  • Patent Document 1 discloses a mechanism for encouraging purchases.
  • The technology disclosed in the above document provides a mechanism for determining the display position of a user avatar in the virtual space according to the amount of items purchased by the user, in order to stimulate the user's desire for approval or a sense of competition between users.
  • Although displaying the user avatar at a position closer to the performer according to this technology can satisfy the user's desire for approval, there is no interaction or communication with the performer.
  • an object of the present invention is to provide a method capable of more effectively promoting the interaction between a performer and a user in a virtual space.
  • According to one aspect of the present invention, there is provided a method for providing a virtual space having predetermined content, including a character played by a performer user, to a plurality of users including the performer user and a viewing user. A predetermined action is received from a user, the received action is stored as history information for each user, and a virtual space is generated that includes the predetermined content, information identifying the user, and information generated based on the history information, which is displayed. A virtual space characterized by this is thereby provided.
  • FIG. 1 is a schematic external view of the head mounted display 110 according to the first embodiment.
  • FIG. 2 is a schematic external view of the controller 210 according to the first embodiment.
  • FIG. 3 is a configuration diagram of the HMD system 300 according to the first embodiment.
  • FIG. 4 is a functional configuration diagram of the HMD 110 according to the first embodiment.
  • FIG. 5 is a functional configuration diagram of the controller 210 according to the first embodiment.
  • FIG. 6 is a functional configuration diagram of the image generation apparatus 310 according to the first embodiment.
  • FIG. 7 shows an example of the virtual space displayed on a user apparatus according to the first embodiment.
  • FIG. 8 shows another example of the virtual space displayed on a user apparatus according to the first embodiment.
  • FIG. 9 shows yet another example of the virtual space displayed on a user apparatus according to the first embodiment.
  • FIG. 10 shows an example of the user management table according to the first embodiment.
  • FIG. 11 shows an example of the user gift history management table according to the first embodiment.
  • FIG. 12 shows an example of the user comment history management table according to the first embodiment.
  • FIG. 13 is a flowchart of an example of processing for providing a virtual space to a user in the first embodiment.
  • FIG. 14 is a flowchart of another example of processing for providing a virtual space to a user in the first embodiment.
  • FIG. 15 is a flowchart of an example of processing for providing a virtual space to a user in the second embodiment.
  • FIG. 16 shows an example of the virtual space displayed to a user in the second embodiment.
  • FIG. 17 is a flowchart of an example of processing for providing a virtual space to a user in the third embodiment.
  • FIG. 18 shows an example of the virtual space displayed to a user in the third embodiment.
  • FIG. 19 shows an example of the virtual space displayed to a user in the fourth embodiment.
  • FIG. 20 shows an example of the virtual space displayed to the performer in the fifth embodiment.
  • FIG. 21 shows an example of the virtual space displayed to a user in the sixth embodiment.
  • FIG. 1 is a schematic view of the appearance of a head mounted display (hereinafter referred to as HMD) 110 according to the present embodiment.
  • the HMD 110 is mounted on the performer's head and includes a display panel 120 so as to be placed in front of the left and right eyes of the performer.
  • As the display panel, an optically transmissive type and a non-transmissive type are conceivable.
  • In the present embodiment, a non-transmissive display panel, which can provide a more immersive feeling, is exemplified.
  • The display panel 120 displays an image for the left eye and an image for the right eye; by using the parallax between the two eyes, an image with a stereoscopic effect can be presented to the performer. As long as an image for the left eye and an image for the right eye can be displayed, separate displays may be provided for the left and right eyes, or a single integrated display covering both eyes may be provided.
  • the housing part 130 of the HMD 110 includes a sensor 140.
  • the sensor can include, for example, a magnetic sensor, an acceleration sensor, a gyro sensor, or a combination thereof, in order to detect movements such as the head direction and inclination of the performer.
  • The vertical direction of the performer's head is defined as the Y axis; among the axes orthogonal to the Y axis, the axis corresponding to the performer's front-rear direction, connecting the center of the display panel 120 and the performer, is defined as the Z axis; and the axis corresponding to the performer's left-right direction is defined as the X axis.
  • The sensor 140 can thus detect the rotation angle around the X axis (the so-called pitch angle), the rotation angle around the Y axis (the so-called yaw angle), and the rotation angle around the Z axis (the so-called roll angle).
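As a rough illustration of how such sensor readings map onto the three angles, the following Python sketch (hypothetical names; the patent does not specify any implementation) naively integrates angular rates from a gyro sensor into a pitch/yaw/roll pose:

```python
from dataclasses import dataclass

@dataclass
class HeadPose:
    pitch: float  # rotation around the X axis, degrees
    yaw: float    # rotation around the Y axis, degrees
    roll: float   # rotation around the Z axis, degrees

def integrate_gyro(pose: HeadPose, rates: tuple, dt: float) -> HeadPose:
    """Integrate angular rates (deg/s) over a time step dt (s) into a new pose."""
    return HeadPose(
        pitch=pose.pitch + rates[0] * dt,
        yaw=pose.yaw + rates[1] * dt,
        roll=pose.roll + rates[2] * dt,
    )
```

Real HMD trackers fuse gyro, accelerometer, and magnetic sensor data to correct drift, as the combination of sensors in the passage above suggests; this sketch only fixes the naming of the three angles.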
  • The housing unit 130 of the HMD 110 can also include a plurality of light sources 150 (for example, infrared LEDs or visible light LEDs).
  • A detection camera (for example, an infrared or visible light camera) installed outside the HMD 110 (for example, indoors) detects these light sources, so that the position, orientation, and inclination of the HMD 110 in a specific space can be detected.
  • Conversely, the HMD 110 may be provided with a camera in its housing part 130 for detecting light sources installed outside the HMD 110.
  • the housing part 130 of the HMD 110 can include an eye tracking sensor.
  • the eye tracking sensor is used to detect the gaze direction and the gazing point of the left and right eyes of the performer.
  • Various types of eye tracking sensors are conceivable.
  • For example, a method is conceivable in which the position of the reflected light on the cornea, formed by irradiating the left and right eyes with weak infrared light, is used as a reference point, the line-of-sight direction is detected from the position of the pupil relative to the position of the reflected light, and the intersection of the left-eye and right-eye line-of-sight directions is detected as the gazing point.
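Since two measured gaze rays rarely intersect exactly, a common geometric stand-in for the "intersection" is the midpoint of the shortest segment between the two rays. The sketch below (an illustration under that assumption, not the patent's method) computes it in plain Python:

```python
def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def _add_scaled(o, t, d):
    return tuple(oi + t * di for oi, di in zip(o, d))

def gaze_point(o_l, d_l, o_r, d_r):
    """Midpoint of the shortest segment between the left-eye ray o_l + t*d_l
    and the right-eye ray o_r + t*d_r; None if the rays are parallel."""
    a, b, c = _dot(d_l, d_l), _dot(d_l, d_r), _dot(d_r, d_r)
    w = _sub(o_l, o_r)
    d, e = _dot(d_l, w), _dot(d_r, w)
    denom = a * c - b * b
    if abs(denom) < 1e-12:  # parallel gaze directions: no well-defined point
        return None
    t_l = (b * e - c * d) / denom
    t_r = (a * e - b * d) / denom
    p_l = _add_scaled(o_l, t_l, d_l)
    p_r = _add_scaled(o_r, t_r, d_r)
    return tuple((x + y) / 2 for x, y in zip(p_l, p_r))
```

For eyes 6 cm apart both aimed at a point one meter ahead, the returned point is that target, so the function behaves as the gazing point described above.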
  • FIG. 2 is a schematic diagram of the appearance of the controller 210 according to the present embodiment.
  • the controller 210 can support the performer to perform a predetermined input in the virtual space.
  • the controller 210 may be configured as a set of left hand 220 and right hand 230 controllers.
  • the left hand controller 220 and the right hand controller 230 may each include an operation trigger button 240, an infrared LED 250, a sensor 260, a joystick 270, and a menu button 280.
  • The operation trigger buttons 240 are arranged as 240a and 240b at positions where the middle finger and the index finger are assumed to perform a trigger-pulling operation when the grip 235 of the controller 210 is gripped.
  • A plurality of infrared LEDs 250 are provided on a frame 245 formed in a ring shape extending downward from both side surfaces of the controller 210, and the positions of these infrared LEDs are detected by a camera (not shown) provided outside the controller.
  • the controller 210 can incorporate a sensor 260 in order to detect movements such as the orientation and inclination of the controller 210.
  • the sensor 260 can include, for example, a magnetic sensor, an acceleration sensor, a gyro sensor, or a combination thereof.
  • a joystick 270 and a menu button 280 can be provided on the upper surface of the controller 210.
  • the joystick 270 can be moved 360 degrees around the reference point, and is assumed to be operated with the thumb when the grip 235 of the controller 210 is gripped.
  • the menu button 280 is operated with the thumb.
  • the controller 210 may incorporate a vibrator (not shown) for applying vibration to the hands of the performer who operates the controller 210.
  • The controller 210 includes an input/output unit and a communication unit in order to output information such as the position, orientation, and tilt of the controller 210 detected by its sensors, together with the performer's input via the buttons and joystick, to the host computer, and to receive information from the host computer.
  • Based on whether the performer grips the controller 210, how the various buttons and joysticks are operated, and the information detected by the infrared LEDs and sensors, the system determines the movement and posture of the performer's hand, so that a hand can be displayed and operated in the virtual space.
  • FIG. 3 is a configuration diagram of the HMD system 300 according to the present embodiment.
  • the HMD system 300 can be configured by, for example, an HMD 110, a controller 210, and an image generation apparatus 310 that functions as a host computer. Furthermore, an infrared camera (not shown) for detecting the position, orientation, inclination, and the like of the HMD 110 and the controller 210 can be added.
  • These devices can be connected to each other by wired or wireless means.
  • For example, each device is equipped with a USB port, and communication can be established by connecting them with a cable; HDMI (registered trademark), wired LAN, infrared, Bluetooth (registered trademark), WiFi (registered trademark), and the like can also be used.
  • the image generation apparatus 310 may be an apparatus having a calculation processing function such as a PC, a game machine, or a mobile communication terminal. Further, the image generation apparatus 310 can connect to a plurality of user apparatuses such as 401A, 401B, and 401C via a network such as the Internet, and transmit the generated image in a streaming or download form. Each of the user devices 401A and the like can reproduce an image transmitted by being provided with an Internet browser or an appropriate viewer. Here, the image generation device 310 can directly transmit an image to a plurality of user devices, or can transmit an image via another content distribution server.
  • FIG. 4 is a functional configuration diagram of the HMD 110 according to the present embodiment.
  • the HMD 110 may include a sensor 140.
  • As the sensor, for example, a magnetic sensor, an acceleration sensor, a gyro sensor, or a combination thereof can be provided in order to detect movements such as the orientation and inclination of the performer's head.
  • an eye tracking sensor can be provided. The eye tracking sensor is used to detect the gaze direction and the gazing point of the left and right eyes of the performer.
  • An LED 150 emitting, for example, infrared or ultraviolet light may be provided.
  • A camera for photographing the scene outside the HMD can also be provided.
  • a microphone 170 for collecting the performer's utterance and a headphone 180 for outputting sound can be provided. Note that the microphone and the headphone can be provided as an independent device from the HMD 110.
  • The HMD 110 can include, for example, an input/output unit 190 for establishing a wired connection with peripheral devices such as the controller 210 and the image generation device 310, and a communication unit 115 for establishing a wireless connection such as infrared, Bluetooth (registered trademark), or WiFi (registered trademark).
  • Information relating to the movement of the performer's head, such as its orientation and tilt, acquired by the sensor 140 is transmitted to the image generation device 310 by the control unit 125 via the input/output unit 190 and/or the communication unit 115.
  • The image generated by the image generation device 310 based on the movement of the performer's head is received via the input/output unit 190 and/or the communication unit 115, and is output to the display unit 120 by the control unit 125.
  • FIG. 5 is a functional configuration diagram of the controller 210 according to the present embodiment.
  • The controller 210 can be configured as a set of controllers for the left hand 220 and the right hand 230; each controller can be provided with an operation unit 245 including the operation trigger buttons 240, the joystick 270, and the menu button 280.
  • the controller 210 can incorporate a sensor 260 in order to detect movements such as the orientation and inclination of the controller 210.
  • the sensor 260 can include, for example, a magnetic sensor, an acceleration sensor, a gyro sensor, or a combination thereof.
  • The controller 210 can include, for example, an input/output unit 255 for establishing a wired connection with peripheral devices such as the HMD 110 or the image generation device 310, and a communication unit 265 for establishing a wireless connection such as infrared, Bluetooth (registered trademark), or WiFi (registered trademark).
  • Information input by the performer via the operation unit 245 and information such as the orientation and inclination of the controller 210 acquired by the sensor 260 are transmitted to the image generation device 310 via the input/output unit 255 and/or the communication unit 265.
  • FIG. 6 is a functional configuration diagram of the image generation apparatus 310 according to the present embodiment.
  • The image generation device 310 stores the input information transmitted from the HMD 110 and the controller 210 and the sensor-acquired information relating to the performer's head movement and the movement and operation of the controller, performs predetermined calculation processing, and generates images. A device having such functions, such as a PC, a game machine, or a mobile communication terminal, can be used.
  • The image generation device 310 can include an input/output unit 320 for establishing a wired connection with peripheral devices such as the HMD 110 and the controller 210, and a communication unit 330 for establishing a wireless connection such as infrared, Bluetooth (registered trademark), or WiFi (registered trademark).
  • Information on the performer's head movement and the movement and operation of the controller, received from the HMD 110 and/or the controller 210 via the input/output unit 320 and/or the communication unit 330, is passed to the control unit 340, which detects it as input content including actions such as the performer's position, line of sight, posture, speech, and operations, and controls the character by executing a control program stored in the storage unit 350 in accordance with the performer's input content, thereby generating an image.
  • the control unit 340 can be configured by a CPU, but by further providing a GPU specialized for image processing, information processing and image processing can be distributed, and overall processing efficiency can be improved.
  • the image generation apparatus 310 can also communicate with other calculation processing apparatuses and share information processing and image processing with other calculation processing apparatuses.
  • The control unit 340 of the image generation apparatus 310 includes a user input detection unit 610 that detects information about the performer's head movement, the performer's speech, and the movement and operation of the controller received from the HMD 110 and/or the controller 210; a character control unit 620 that executes a control program stored in the control program storage unit for a character stored in advance in the character data storage unit 660 of the storage unit 350; and an image generation unit 630 that generates an image based on the character control.
  • Information such as the orientation and tilt of the performer's head and the movement of the hands detected by the HMD 110 or the controller 210 is applied to the character in accordance with the movements and restrictions of the joints of the human body.
  • control unit 340 includes an item receiving unit 640 that receives selection of items to be arranged in the virtual space from other user devices, and includes a comment receiving unit 650 that receives comments.
  • the additional information generation unit 655 generates additional information based on the history information storing the received items and comments.
  • The image generation unit 630 generates a virtual space including a character display area, which contains the additional information and the background image, and a user display area, as shown in FIG. The screen of the virtual space can be displayed on the display unit of each user apparatus via the network, or on the display unit 120 of the HMD 110 worn by the performer.
  • The screen displayed on the display unit 120 of the HMD 110 worn by the performer can include, in addition to the additional information described above, information that is displayed only to the performer.
  • For example, a screen that assists the performer in performing can be displayed on the display unit 120.
  • Note that the frame rate of the character image and the frame rate of the background image can be set to different speeds.
  • For example, the frame rate of the character image can be set relatively low, such as 8 fps for the character image versus 30 fps for the background image. In this way, by giving the character a frame rate close to that of hand-drawn animation rather than conventional 3DCG, the motion of the character image can be shown without a sense of incongruity.
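Running the two layers at different rates amounts to choosing, at each display time, which frame of each layer is current. A minimal sketch of that selection (the function name and rates are illustrative, taken from the 8 fps / 30 fps example above):

```python
def frames_at(t: float, char_fps: int = 8, bg_fps: int = 30) -> tuple:
    """Return (character_frame_index, background_frame_index) shown at
    time t seconds when the two layers run at different frame rates."""
    return int(t * char_fps), int(t * bg_fps)
```

At t = 0.5 s the background has advanced to frame 15 while the character still shows frame 4, which is the stepped, animation-like motion the passage describes.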
  • the storage unit 350 stores information related to the character, such as character attributes, in addition to the character image data, in the character data storage unit 660 described above.
  • the control program storage unit 670 stores a program for controlling the movement and facial expression of a character in the virtual space and a program for generating a virtual space including content such as a character and a user avatar.
  • the streaming data storage unit 680 stores the image generated by the image generation unit 630.
  • the virtual space image stored as the stream data can be simultaneously distributed together with the live image in response to a user request.
  • the storage unit 350 includes an item data storage unit 685 that stores data related to items.
  • the storage unit 350 includes a user data storage unit 690 that stores information related to the user and a user avatar.
  • As described above, the image generation apparatus according to the present embodiment is characterized by not only transmitting an image of the virtual space to a plurality of user apparatuses, but also receiving items and comments from the user apparatuses. Note that it is also possible to specialize all or part of the functions of the image generation apparatus 310 for image generation only, and to provide a separate content service server that transmits images to the user apparatuses and receives items and comments from them.
  • FIG. 7 is a diagram illustrating an example of the virtual space 1100 displayed on the user device according to the present embodiment.
  • The user device can display an image of the virtual space including the character 1120 in an image display unit 1110, such as a viewer embedded in a built-in web browser.
  • The character 1120 arranged in the virtual space can be operated based on performer input via the HMD 110 and/or the controller 210 worn by the performer, such as the tilt and orientation of the performer's head, the performer's utterances, movements such as the tilt and orientation of the controller 210, and the performer's operations via the controller 210.
  • an area 1130 for displaying a plurality of user avatars serving as audiences is provided.
  • The virtual space 1100 also includes a comment input unit 1140 for a user to input and post a comment, and a gift item selection unit 1150 for a user to post a gift item.
  • A posted comment is displayed in a predetermined area 1160, for example a balloon displayed near the position of the corresponding user avatar in FIG. 7.
  • the item receiving unit 640 or the comment receiving unit 650 of the control unit 340 receives a predetermined action (posting a gift item or a comment) from the user device (S101).
  • The item reception unit 640 or the comment reception unit 650 of the control unit 340 stores the received gift item or comment in the user data storage unit 690 (S102). More specifically, the received action is stored as history information in the user management table shown in FIG. 10, and if the received action includes a gift item post, the points (pt) consumed to purchase the item are also stored.
  • As history information, the user gift history management table shown in FIG. 11 can be updated. In FIG. 11, for example, the user gift history management table is updated by storing information such as the event date and time, the posted gift (stuffed animal), and the consumed points (50 pt).
  • Likewise, the user comment history table shown in FIG. 12 can be updated as history information. In FIG. 12, for example, when the user A posts the comment “I will come again”, the table is updated by storing the event date and time and the content of the posted comment in the user comment history management table.
  • Next, the additional information generation unit 655 of the control unit 340 refers to the user management table, the user gift history management table, and/or the user comment history table stored in the user data storage unit 690 (S103).
  • The additional information generation unit 655 then generates additional information related to the predetermined user (S104). For example, when a predetermined action is received from the user A, the additional information generation unit 655 generates information associated with the user avatar corresponding to the user A, as illustrated in FIG. 8. As an example of the content of the information, information such as “user name: Yoshi”, “latest history: box (post)”, “consumption pt: 1500”, and “comment: I will come back” can be generated.
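The additional information of S104 is essentially a formatted summary of a user's history record. A small sketch of that formatting step, using a hypothetical record dictionary with the field names from the example above:

```python
def make_additional_info(record: dict) -> str:
    """Format the balloon text of FIG. 8 from a user's history record.
    The field names are illustrative, not from the patent."""
    return "\n".join([
        f"user name: {record['name']}",
        f"latest history: {record['latest']}",
        f"consumption pt: {record['consumed_pt']}",
        f"comment: {record['comment']}",
    ])
```

Feeding it the example record yields the four lines shown in the balloon, which the image generation unit can then render near the user avatar.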
  • the image generation unit 630 generates an image provided to the user from the generated additional information together with elements (for example, a character and a user avatar) constituting another virtual space (S105).
  • As shown in FIG. 8, the additional information is displayed in the virtual space provided to the user, in the vicinity of the user avatar (in FIG. 8, inside the balloon 1170).
  • the generated additional information may be all or a part of the information illustrated in FIG. 8.
  • Thereby, the performer can immediately grasp information related to the user and can express words of gratitude and encouragement to the user.
  • For example, the performer can refer to the information and address special words to the user.
  • As additional information, a history of comments and/or gift item posts made by the user over a predetermined past period (for example, the past week) can be displayed as a list. Thereby, the performer can trace each past action performed by the user and can widen the range of communication.
  • The range in which the additional information is displayed can cover unspecified users rather than being limited to a specific user; alternatively, from the viewpoint of privacy, it can be limited to the performer and the target user, or displayed only to the performer.
  • Thereby, the performer can refer to the user's information without the user knowing, and can provide a special effect for the user.
  • The image generation unit 630 of the control unit 340 generates an image corresponding to the gift item posted by the user (S201), and the generated gift item image 1180 is displayed near the character 1120 in the virtual space 1100 of FIG. 9.
  • The additional information generation unit 655 of the control unit 340 determines whether the character 1120 in FIG. 9 has touched the displayed gift item 1180 (S202). More specifically, the positional relationship in the virtual space is confirmed by comparing the coordinates of the hand of the character 1120 with the coordinates corresponding to the area where the gift item 1180 is placed.
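The coordinate comparison of S202 can be sketched as a simple bounding-box test: the touch is registered when the palm position falls inside the region occupied by the gift item. The function below is an illustration under that assumption (the patent does not specify the comparison):

```python
def hand_touches_item(hand_pos, item_center, half_extent: float) -> bool:
    """Axis-aligned box test for S202: is the character's palm within
    half_extent of the gift item's center on every axis?"""
    return all(abs(h - c) <= half_extent
               for h, c in zip(hand_pos, item_center))
```

A palm at (0.1, 0, 0) is inside a box of half-extent 0.2 centered at the origin, while a palm at (0.5, 0, 0) is not, matching the "approximately coincident" condition described below.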
  • For example, when the performer moves the controller 210 of FIG. 2 up and down, the input detection unit 610 of the control unit 340 detects the change in the position of the controller 210 via the acceleration sensor of the controller 210, and the character control unit 620 moves the arm of the character 1120 up and down according to the detected change in position.
  • When the performer presses a predetermined button (for example, the trigger button 240b) while the palm at the tip of the arm of the character 1120 approximately coincides with the position of the gift item 1180, it is determined that the character has touched the gift item.
  • In that case, the additional information generation unit 655 first identifies the user who posted the gift item 1180 and refers to that user's data (S203). If it is determined in S202 that the character 1120 has not touched the gift item 1180 (“No” in S202), the process returns to the original processing and monitoring continues.
  • the additional information generation unit 655 refers to the user management table, the user gift history management table, and / or the user comment history table stored in the user data storage unit 690, Additional information related to the user is generated (S204). For example, when the user ID is “user A”, as shown in FIG. 9, information associated with the user avatar corresponding to the user A is generated. As an example of the content of the information, information such as “user name: Yoshi”, “latest history: box (post)”, “consumption pt: 1500”, and “comment: I will come back” can be generated.
  • the image generation unit 630 generates an image to be provided to the user from the generated additional information together with elements (for example, characters and user avatars) constituting other virtual spaces (S205). As shown in FIG. 9, the additional information is displayed in the vicinity of the user avatar provided to the user and displayed in the virtual space (in FIG. 9, inside the balloon 1170). Other aspects are the same as the example described with reference to FIGS.
  • In the embodiments above, information related to a user is displayed triggered by the user taking a predetermined action, or by the character taking a predetermined action on a gift item posted by the user.
  • Alternatively, an aspect is also conceivable in which information is displayed for the user corresponding to a user avatar located in the performer's viewing direction in the virtual space, detected through the various sensors of the HMD 110 worn by the performer, or in the performer's gaze direction detected through the eye tracking sensor.
  • FIG. 15 shows a flowchart as an example of processing for providing a virtual space to a user in the second embodiment.
  • the image generation unit 630 of the control unit 340 generates a user avatar image of a predetermined user stored in the user data storage unit 690, and performs a display process (S301).
  • Next, the additional information generation unit 655 refers to the user management table stored in the user data storage unit 690 and, by checking information such as the user's login count and last login date, confirms whether the user is logging in to the service for the first time or viewing the room provided by the performer (the page on which the performer provides content) for the first time (S302).
  • When it is confirmed that this is the user's first login or first page view (“Yes” in S302), the additional information generation unit 655 instructs the image generating unit to perform processing for adding a special effect to the user avatar (S303).
  • When it is confirmed that this is the user's second or subsequent login or page view (“No” in S302), the process returns to the original processing.
  • as the special effect, for example, a display 1270 notifying that the user has logged in for the first time can be shown.
  • other conceivable special effects include processing that makes the user avatar distinguishable from the other user avatars, such as highlighting the avatar, making it blink, or enlarging its display.
  • This special effect may be visible only to the performer.
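The S302/S303 decision described above can be sketched as a small function. This is an illustration only; the record field names and effect labels are hypothetical, not taken from the patent.

```python
def apply_first_visit_effect(user_record):
    """Decide which special effects to apply to a user avatar.

    user_record mimics a row of the user management table stored in the
    user data storage unit; the field names are illustrative.
    Returns the list of effects to apply.
    """
    effects = []
    # S302: is this the user's first login (or first view of the
    # performer's room)?
    if user_record.get("login_count", 0) <= 1:
        # S303: add special effects, e.g. a "first login" badge plus
        # highlighting (blinking or enlargement would work similarly).
        effects.append("first_login_badge")
        effects.append("highlight")
    # "No" in S302: no special effect; return to the normal flow.
    return effects

print(apply_first_visit_effect({"login_count": 1}))  # first-time user
print(apply_first_visit_effect({"login_count": 7}))  # returning user
```

Whether the resulting effects are rendered only on the performer's display or for all viewers is a separate rendering decision, as noted above.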
  • FIG. 17 shows a flowchart as an example of processing for providing a virtual space to a user in the third embodiment.
  • the image generation unit 630 of the control unit 340 generates an image of a predetermined gift item stored in the item data storage unit 685, and performs display processing (S401).
  • the additional information generation unit 655 instructs the image generation unit 630 to apply a special effect to the displayed gift item, and the image generation unit 630 performs processing for adding the special effect (S402).
  • as the special effect, processing that draws the performer's attention to the gift item, such as highlighting the item, making it blink, or enlarging its display, can be considered.
  • This special effect may be visible only to the performer.
  • for example, a special effect can be applied to a gift item based on information indicating that the user has purchased and posted a gift item for the first time.
  • the additional information generation unit 655 confirms whether the performer has made any reaction to the posted gift item (S403). For example, in FIG. 18, it is confirmed whether the performer has responded with a comment (for example, "Thank you, user A") about the ice cream gift item posted by the user.
  • as confirmation methods, a method of using voice recognition technology to recognize, in the voice uttered by the performer, a word related to the corresponding gift item or the corresponding user name, and a method of detecting a predetermined action performed on the gift item through the character played by the performer (for example, the character touching the gift item), can be considered.
  • when a reaction is confirmed, the additional information generation unit 655 cancels the special effect applied to the gift item (S404).
  • in this way, the performer can intuitively see that some reaction has already been shown to a gift item posted by a user, and can identify the gift items that have not yet received a reaction.
  • when no reaction is confirmed, the special effect on the gift item is maintained.
  • the invention disclosed in the present embodiment can be similarly applied to comments posted by the user.
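The S403/S404 loop above (detect a reaction, then cancel the effect) can be sketched as follows. This is a simplified illustration under assumptions: the data layout and keyword matching are hypothetical, and a real system would use an actual speech recognizer rather than a precomputed word list.

```python
def update_gift_effects(gifts, recognized_words):
    """Cancel the attention-drawing effect on gifts the performer has
    reacted to (S403-S404); keep it for unreacted gifts.

    gifts: dict mapping gift id -> {"keywords": [...], "effect": bool},
           where keywords include words related to the gift item and the
           posting user's name (all names here are illustrative).
    recognized_words: word list produced by speech recognition on the
                      performer's voice.
    """
    words = set(recognized_words)
    for gift in gifts.values():
        # S403: a reaction is detected when the performer's speech
        # mentions the gift item or the posting user's name.
        if words & set(gift["keywords"]):
            gift["effect"] = False  # S404: cancel the special effect
        # Otherwise the special effect is maintained.
    return gifts

gifts = {
    "ice_cream": {"keywords": ["ice", "cream", "userA"], "effect": True},
    "flowers": {"keywords": ["flowers", "userB"], "effect": True},
}
update_gift_effects(gifts, ["thank", "you", "userA"])
print(gifts["ice_cream"]["effect"], gifts["flowers"]["effect"])  # False True
```

The same bookkeeping would apply to posted comments, per the note above; detecting a character action on the item (instead of speech) would simply be another way to set the same cancellation flag.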
  • FIG. 19 shows an example of a virtual space displayed to the user in the fourth embodiment.
  • an item image 1470 related to a predetermined product sold in the real world is displayed in the character display area of the virtual space 1400, and information 1480 related to the product is displayed in the virtual space when a predetermined condition is satisfied.
  • for example, when the distance between the character 1410 displayed in the character display area and the item image 1470 falls within a predetermined range, or when there is an interaction between the character 1410 and the item image 1470 (for example, when the character grasps the item image), the information 1480 related to the product can be displayed.
  • the information 1480 related to the product can include, for example, the product name, the seller, the price, and a description of the product. This information can be acquired from an external server via an application program interface, or stored in advance in the image generation device 310 as product-related information together with the item image 1470 related to the product.
  • FIG. 20 shows an example of a virtual space displayed on the performer in the fifth embodiment.
  • the virtual space 1500 shown in FIG. 20 is displayed on the display unit 120 of the HMD 110 worn by the performer, or on a monitor (not shown).
  • the virtual space 1500 includes a service image display area 1550 for displaying the virtual space shown on the user device 401, and an operation image display area 1560 for operating the character 1510 displayed in the area 1550 and a predetermined object (for example, the comment list 1520).
  • in the operation image display area 1560, an image corresponding to the image arranged in the service image display area is basically arranged.
  • in the operation image display area, the performer moves the fingers of the character 1510 by operating the controller 210 while referring to the reference image 1540 for character hand and finger operation.
  • in the reference image 1540 for character hand and finger operation, an operation hand image 1580 can be displayed at a position shifted by a predetermined distance (in the depth direction of the screen) from the approximate coordinates of the performer's hand instruction image 1570, which is acquired via the position of the controller 210.
  • the performer grasps a part (comment 1590) of the object 1530 (comment list) displayed in the operation image display area 1560 while referring to the operation hand image 1580.
  • because the position at which the character 1510 grips the object (comment) is reflected at the position shifted by the predetermined distance, the action of the character gripping the object with its fingertips can be expressed in a more natural form.
  • a list 1520 of comments posted by the user is displayed near the character, and a corresponding comment list 1530 is also displayed in the operation image display area 1560.
  • when the performer grasps, as the comment 1590, a part of a comment (for example, the comment "Nemu" posted by the user named "hide"), additional information regarding the user "hide" who posted the comment can be displayed. Accordingly, the performer can address the user personally while reading the user's comment and referring to the information about that user.
  • an effect based on the content of the comment can also be used as a special effect; for example, an effect in which fireworks or a cake is displayed on the screen.
  • the comment list 1520 displayed in the service image display area 1550 and the comment list 1530 displayed in the operation image display area 1560 can be highlighted, so that the viewing user can recognize that the content of the comment list has been understood by the performer (or by the character 1510 played by the performer).
  • FIG. 21 shows an example of a virtual space displayed to the user in the sixth embodiment.
  • the gift item 1620 posted by the user to the character 1610 shown in FIG. 21A is changed to another gift item 1630 as shown in FIG. 21B. That is, the user purchases a gift item in advance in exchange for points through a purchasing means 1150 such as that shown in FIG. 7; the gift box 1620 posted as a gift item can be provided (delivered) at a specified date and time, as in FIG. 21A, and presented as the gift item 1630 (for example, tea) shown in FIG. 21B in response to an action of the character (such as contact with the gift item 1620).
  • at the time of opening, additional information 1640 indicating the contents of the gift item 1630 and the user who posted it can be displayed to the user.
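The sixth-embodiment flow (scheduled delivery of a gift box, then transformation into its contents when the character acts on it) can be sketched as a small state function. The field names and state labels are illustrative assumptions, not terms from the patent.

```python
import datetime

def deliver_and_open_gift(gift, now, character_touched):
    """Return the display state of a purchased gift.

    gift: {"contents": ..., "delivery_time": datetime} (illustrative).
    now: current time.
    character_touched: whether the character performed the predetermined
                       action on the gift (e.g. touching the gift box).
    States: "pending" (not yet delivered), "gift_box" (shown as the
    box, cf. 1620), or the contents (revealed item, cf. 1630).
    """
    state = "pending"
    if now >= gift["delivery_time"]:
        state = "gift_box"            # delivered at the specified time
        if character_touched:
            state = gift["contents"]  # revealed, e.g. as tea
    return state

gift = {
    "contents": "tea",
    "delivery_time": datetime.datetime(2019, 4, 25, 20, 0),
}
now = datetime.datetime(2019, 4, 25, 20, 5)
print(deliver_and_open_gift(gift, now, character_touched=True))  # tea
```

Displaying the additional information (contents and poster, cf. 1640) at the moment the state changes to the revealed item would complete the behavior described above.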

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Information Transfer Between Computers (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An object of the present invention is to provide a method by which interaction between a performer and a user can be promoted more effectively in a virtual space. To this end, the invention relates to a method for providing, to a plurality of users including a performing user and a viewing user, a virtual space having prescribed content that includes a character performed by the performing user, the method being characterized in that a prescribed action is received from a user, the received prescribed action is stored as history information for each user, the virtual space including the prescribed content and user identification information is generated, and information generated on the basis of the history information is displayed.
PCT/JP2019/017727 2018-05-07 2019-04-25 Procédé de fourniture d'un espace virtuel ayant un contenu prescrit WO2019216249A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2020518268A JPWO2019216249A1 (ja) 2018-05-07 2019-04-25 Method for providing a virtual space having predetermined content

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018089230 2018-05-07
JP2018-089230 2018-05-07

Publications (1)

Publication Number Publication Date
WO2019216249A1 true WO2019216249A1 (fr) 2019-11-14

Family

ID=68468365

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/017727 WO2019216249A1 (fr) 2018-05-07 2019-04-25 Procédé de fourniture d'un espace virtuel ayant un contenu prescrit

Country Status (2)

Country Link
JP (1) JPWO2019216249A1 (fr)
WO (1) WO2019216249A1 (fr)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6823150B1 (ja) * 2019-11-29 2021-01-27 グリー株式会社 情報処理システム、情報処理方法およびコンピュータプログラム
JP2022044619A (ja) * 2019-12-27 2022-03-17 グリー株式会社 情報処理システム、情報処理方法およびコンピュータプログラム
JP2022044620A (ja) * 2019-12-27 2022-03-17 グリー株式会社 情報処理システム、情報処理方法およびコンピュータプログラム
US20220167044A1 (en) * 2020-04-30 2022-05-26 Gree, Inc. Video distribution device, video distribution method, and video distribution process
JP2022091811A (ja) * 2020-05-01 2022-06-21 グリー株式会社 動画配信システム、情報処理方法およびコンピュータプログラム
JP2023012486A (ja) * 2020-01-27 2023-01-25 グリー株式会社 情報処理システム、情報処理方法およびコンピュータプログラム
US20230156295A1 (en) * 2019-11-29 2023-05-18 Gree, Inc. Video distribution system, information processing method, and computer program
JP2023096234A (ja) * 2021-12-27 2023-07-07 株式会社カプコン プログラム、端末装置、システム
WO2024004007A1 (fr) * 2022-06-28 2024-01-04 楽天モバイル株式会社 Distribution vidéo pour commander l'affichage de commentaires postés pendant la distribution
JP7493188B2 (ja) 2022-08-29 2024-05-31 グリー株式会社 動画配信システム、動画配信方法および動画配信プログラム

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012120098A (ja) * 2010-12-03 2012-06-21 Linkt Co Ltd Information providing system
JP2017040971A (ja) * 2015-08-17 2017-02-23 株式会社コロプラ Method and program for controlling a head-mounted display system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012120098A (ja) * 2010-12-03 2012-06-21 Linkt Co Ltd Information providing system
JP2017040971A (ja) * 2015-08-17 2017-02-23 株式会社コロプラ Method and program for controlling a head-mounted display system

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230156295A1 (en) * 2019-11-29 2023-05-18 Gree, Inc. Video distribution system, information processing method, and computer program
JP2021087150A (ja) * 2019-11-29 2021-06-03 グリー株式会社 情報処理システム、情報処理方法およびコンピュータプログラム
JP6823150B1 (ja) * 2019-11-29 2021-01-27 グリー株式会社 情報処理システム、情報処理方法およびコンピュータプログラム
US11843643B2 (en) 2019-12-27 2023-12-12 Gree, Inc. Information processing system, information processing method, and computer program
JP2022044619A (ja) * 2019-12-27 2022-03-17 グリー株式会社 情報処理システム、情報処理方法およびコンピュータプログラム
JP7485308B2 (ja) 2019-12-27 2024-05-16 グリー株式会社 情報処理システム、情報処理方法およびコンピュータプログラム
JP2022044620A (ja) * 2019-12-27 2022-03-17 グリー株式会社 情報処理システム、情報処理方法およびコンピュータプログラム
JP7301298B2 (ja) 2019-12-27 2023-07-03 グリー株式会社 情報処理システム、情報処理方法およびコンピュータプログラム
JP2023012486A (ja) * 2020-01-27 2023-01-25 グリー株式会社 情報処理システム、情報処理方法およびコンピュータプログラム
JP7418708B2 (ja) 2020-01-27 2024-01-22 グリー株式会社 情報処理システム、情報処理方法およびコンピュータプログラム
US11750873B2 (en) * 2020-04-30 2023-09-05 Gree, Inc. Video distribution device, video distribution method, and video distribution process
US20220167044A1 (en) * 2020-04-30 2022-05-26 Gree, Inc. Video distribution device, video distribution method, and video distribution process
JP7455298B2 (ja) 2020-05-01 2024-03-26 グリー株式会社 動画配信システム、情報処理方法およびコンピュータプログラム
JP2022091811A (ja) * 2020-05-01 2022-06-21 グリー株式会社 動画配信システム、情報処理方法およびコンピュータプログラム
JP7354524B2 (ja) 2021-12-27 2023-10-03 株式会社カプコン プログラム、端末装置、システム
JP2023096234A (ja) * 2021-12-27 2023-07-07 株式会社カプコン プログラム、端末装置、システム
WO2024004007A1 (fr) * 2022-06-28 2024-01-04 楽天モバイル株式会社 Distribution vidéo pour commander l'affichage de commentaires postés pendant la distribution
JP7493188B2 (ja) 2022-08-29 2024-05-31 グリー株式会社 動画配信システム、動画配信方法および動画配信プログラム

Also Published As

Publication number Publication date
JPWO2019216249A1 (ja) 2021-06-24

Similar Documents

Publication Publication Date Title
WO2019216249A1 (fr) Method for providing a virtual space having prescribed content
JP6408634B2 (ja) Head-mounted display
CN107636605B (zh) Dynamic gloves conveying the sense of touch and movement of virtual objects in an environment rendered on a head-mounted display
CN111624770B (zh) Pinch-and-hold gesture navigation on a head-mounted display
CN108292040B (zh) Method for optimizing content positioning on a head-mounted display screen
US10341612B2 (en) Method for providing virtual space, and system for executing the method
JP6244593B1 (ja) Information processing method, apparatus, and program for causing a computer to execute the information processing method
US10438394B2 (en) Information processing method, virtual space delivering system and apparatus therefor
US10545339B2 (en) Information processing method and information processing system
JP6481057B1 (ja) Method for controlling a character in a virtual space
US20190026950A1 (en) Program executed on a computer for providing virtual space, method and information processing apparatus for executing the program
US20210165482A1 (en) Application processing system, method of processing application, and storage medium storing program for processing application
US20180348987A1 (en) Method executed on computer for providing virtual space, program and information processing apparatus therefor
US20220233956A1 (en) Program, method, and information terminal device
US20180374275A1 (en) Information processing method and apparatus, and program for executing the information processing method on computer
US20180247454A1 (en) Unknown
US20180348531A1 (en) Method executed on computer for controlling a display of a head mount device, program for executing the method on the computer, and information processing apparatus therefor
US20240013502A1 (en) Storage medium, method, and information processing apparatus
JP2019128721A (ja) Program for reflecting a user's movement in an avatar, information processing apparatus for executing the program, and method for distributing video including the avatar
JP2018124981A (ja) Information processing method, apparatus, and program for causing a computer to execute the information processing method
US20220323862A1 (en) Program, method, and information processing terminal
WO2020130112A1 (fr) Method for providing a virtual space having given content
JP6458179B1 (ja) Program, information processing apparatus, and method
JP7132374B2 (ja) Game program, game method, and information terminal device
JP6453499B1 (ja) Program, information processing apparatus, and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19800163

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020518268

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19800163

Country of ref document: EP

Kind code of ref document: A1