US20210368228A1 - Video distribution system, video distribution method, and storage medium storing video distribution program for distributing video containing animation of character object generated based on motion of actor


Info

Publication number
US20210368228A1
Authority
US
United States
Prior art keywords
video
display
decorative
actor
character
Prior art date
Legal status
Pending
Application number
US17/395,241
Inventor
Masashi Watanabe
Yasunori KURITA
Current Assignee
GREE Inc
Original Assignee
GREE Inc
Priority date
Filing date
Publication date
Priority claimed from JP2018089612A external-priority patent/JP6382468B1/en
Priority claimed from JP2018144681A external-priority patent/JP6420930B1/en
Priority claimed from JP2018144682A external-priority patent/JP2020005238A/en
Priority claimed from JP2018144683A external-priority patent/JP6764442B2/en
Priority claimed from JP2018193258A external-priority patent/JP2019198057A/en
Priority claimed from JP2019009432A external-priority patent/JP6847138B2/en
Application filed by GREE Inc filed Critical GREE Inc
Priority to US17/395,241 priority Critical patent/US20210368228A1/en
Assigned to GREE, INC. reassignment GREE, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KURITA, Yasunori, WATANABE, MASASHI
Publication of US20210368228A1 publication Critical patent/US20210368228A1/en
Pending legal-status Critical Current

Classifications

    • H04N21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312: Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316: Generation of visual interfaces for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H04N21/4722: End-user interface for requesting additional data associated with the content
    • H04N21/4784: Supplemental services receiving rewards
    • H04N21/4788: Supplemental services communicating with other users, e.g. chatting
    • H04N21/8146: Monomedia components involving graphical data, e.g. 3D object, 2D graphics
    • H04N21/816: Monomedia components involving special video data, e.g. 3D video
    • H04N21/854: Content authoring
    • G06T13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T13/80: 2D [Two Dimensional] animation, e.g. using sprites
    • G06T19/006: Mixed reality

Definitions

  • the present disclosure relates to a video distribution system, a video distribution method, and a storage medium storing a video distribution program, for distributing a video containing animation of a character object generated based on motions of an actor.
  • Video distribution systems that generate an animation of a character object based on an actor's motions and distribute a video including the animation of the character object are known. Such a video distribution system is disclosed, for example, in Japanese Patent Application Publication No. 2015-184689 (“the '689 Publication”).
  • a viewing user can purchase a gift item and provide the gift item to a performer (a content distributor) as a gift.
  • the '098 Publication describes that the gift object is preferably displayed in a background region of the distributed video so as to avoid interference with the video.
  • Displaying a gift object to overlap with a video may deteriorate the viewing experience of a viewing user. For example, if a main part of the video is hidden behind the gift object, the viewer may feel that his/her viewing of the video is impeded. In particular, when a large number of gift objects are displayed to overlap with the video, this drawback becomes more severe. Therefore, in the '098 Publication, gift objects are not displayed in the content display region that displays the video, but are displayed in the background region outside the content display region.
  • an object of the present disclosure is to provide a technical improvement which solves or alleviates at least part of the drawbacks of the prior art mentioned above.
  • an object of the present invention is to provide a video distribution system, a video distribution method, and a storage medium storing a video distribution program, capable of displaying a gift object to overlap with a video without deteriorating the viewing experience of a viewing user.
  • a video distribution system is a video distribution system for distributing a video containing animation of a character object generated based on a motion of an actor, the video distribution system comprising: one or more computer processors; and a storage for storing a candidate list including candidates of decorative objects to be displayed in the video in association with the character object.
  • the one or more computer processors execute computer-readable instructions to: in response to reception of a first display request from a viewing user, the first display request being sent for requesting display of a first decorative object among the decorative objects, add the first decorative object to the candidate list, and display the first decorative object in the video upon selection of the first decorative object from the candidate list.
  • the first decorative object is displayed in the video in association with a specific body part of the character object.
  • the first object is displayed in the video so as not to come into contact with the character object.
  • the selection of the first decorative object from the candidate list is performed by someone other than the viewing user.
  • the selection of the first decorative object from the candidate list is performed by a supporter who supports distribution of the video.
  • the selection of the first decorative object from the candidate list is performed by the actor.
  • the one or more computer processors display the first object in the video.
  • a no-display period is provided in a distribution period of the video, and the first object and the decorative objects are displayed in the video at a timing in the distribution period of the video other than the no-display period.
  • the first object is displayed in the video after an end of the no-display period.
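The no-display-period behavior described in the bullets above can be sketched as a small scheduler; all names here are illustrative assumptions, not taken from the patent. Display requests that arrive during a no-display period are held and released after the period ends:

```python
from dataclasses import dataclass, field


@dataclass
class NoDisplayScheduler:
    """Illustrative sketch: defers gift-object display during configured
    no-display periods and releases deferred objects once a period ends."""
    no_display_periods: list[tuple[float, float]]  # (start, end), seconds
    pending: list[str] = field(default_factory=list)

    def in_no_display_period(self, t: float) -> bool:
        return any(start <= t < end for start, end in self.no_display_periods)

    def request_display(self, obj_id: str, t: float) -> list[str]:
        """Return the objects to display at time t; objects requested during
        a no-display period are held until after the period."""
        if self.in_no_display_period(t):
            self.pending.append(obj_id)
            return []
        released, self.pending = self.pending, []
        return released + [obj_id]
```

A request arriving inside a no-display period returns nothing; the deferred object is then shown together with the next request made after the period.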
  • the one or more computer processors are configured to: receive a purchase request from the viewing user, the purchase request being sent for purchasing the first decorative object, perform a payment process in response to the purchase request, and cancel the payment process when the first decorative object is not selected before distribution of the video is ended.
  • the one or more computer processors are configured to: receive a purchase request from the viewing user, the purchase request being sent for purchasing the first decorative object, perform a payment process in response to the purchase request, and provide the viewing user with points when the first decorative object is not selected before distribution of the video is ended.
  • the one or more computer processors are configured to: receive a purchase request from the viewing user, the purchase request being sent for purchasing the first decorative object, add the first decorative object to a possession list in response to the purchase request, the possession list being a list of objects possessed by the viewing user, in response to reception of the first display request from the viewing user, the first display request being sent for requesting display of the first decorative object, add the first decorative object to the candidate list and remove the first decorative object from the possession list, and add the first decorative object to the possession list when the first decorative object is not selected before distribution of the video is ended.
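The purchase, display-request, selection, and end-of-distribution steps described above can be modeled as follows. This is a hedged sketch: the class and attribute names (`possession`, `candidates`, `points`) are assumptions for illustration, not the patent's implementation:

```python
class DecorativeObjectManager:
    """Illustrative model of the decorative-object lifecycle: purchase adds
    to a possession list, a display request moves the object to a candidate
    list, selection displays it, and unselected objects are either returned
    or refunded as points when distribution ends."""

    def __init__(self):
        self.possession = {}   # user_id -> set of object ids owned
        self.candidates = []   # (user_id, object_id) awaiting selection
        self.displayed = []    # objects shown in the video
        self.points = {}       # user_id -> points granted as refunds

    def purchase(self, user, obj):
        # a payment process would run here; on success the object
        # enters the viewing user's possession list
        self.possession.setdefault(user, set()).add(obj)

    def request_display(self, user, obj):
        # move the object from the possession list to the candidate list
        self.possession[user].discard(obj)
        self.candidates.append((user, obj))

    def select(self, user, obj):
        # selection (e.g. by the actor or a supporter) displays the object
        self.candidates.remove((user, obj))
        self.displayed.append(obj)

    def end_distribution(self, refund_as_points=True):
        # unselected objects: grant points or return to the possession list
        for user, obj in self.candidates:
            if refund_as_points:
                self.points[user] = self.points.get(user, 0) + 1
            else:
                self.possession[user].add(obj)
        self.candidates.clear()
```

The `refund_as_points` flag distinguishes the two variants in the claims above: granting points versus restoring the object to the possession list when it was never selected.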
  • a video distribution method performed by one or more computer processors executing computer-readable instructions to distribute a video containing animation of a character object generated based on a motion of an actor.
  • the video distribution method comprises: storing a candidate list including candidates of decorative objects to be displayed in the video in association with the character object, in response to reception of a first display request from a viewing user, the first display request being sent for requesting display of a first decorative object among the decorative objects, adding the first decorative object to the candidate list, and displaying the first decorative object in the video upon selection of the first decorative object from the candidate list.
  • a non-transitory computer-readable storage medium storing a video distribution program for distributing a video containing animation of a character object generated based on a motion of an actor.
  • the video distribution program causes one or more computer processors to: store a candidate list including candidates of decorative objects to be displayed in the video in association with the character object, in response to reception of a first display request from a viewing user, the first display request being sent for requesting display of a first decorative object among the decorative objects, add the first decorative object to the candidate list, and display the first decorative object in the video upon selection of the first decorative object from the candidate list.
  • a gift object can be displayed to overlap with a video without deteriorating the viewing experience of a viewing user.
  • FIG. 1 is a block diagram illustrating a video distribution system according to one embodiment.
  • FIG. 2 schematically illustrates an installation of a studio where a video to be distributed in the video distribution system of FIG. 1 is produced.
  • FIG. 3 illustrates a possession list stored in the video distribution system of FIG. 1 .
  • FIG. 4 illustrates a candidate list stored in the video distribution system of FIG. 1 .
  • FIG. 5 illustrates an example of a video displayed on the client device 10 a in one embodiment.
  • An animation of a character object is included in FIG. 5 .
  • FIG. 6 illustrates an example of a video displayed on the client device 10 a in one embodiment.
  • a normal object is included in FIG. 6 .
  • FIG. 7 illustrates an example of a video displayed on the client device 10 a in one embodiment.
  • a decorative object is included in FIG. 7 .
  • FIG. 8 schematically illustrates an example of a decorative object selection screen for selecting a desired decorative object from among the decorative objects included in the candidate list.
  • FIG. 9 is a flow chart showing a flow of a video distribution process in one embodiment.
  • FIG. 10 is a flowchart of a process for displaying a normal object according to an embodiment.
  • FIG. 11 is a flowchart of a process for displaying a decorative object according to an embodiment.
  • FIG. 12 is a diagram for describing a no-display period set for a video distributed in the video distribution system of FIG. 1 .
  • FIG. 1 is a block diagram illustrating a video distribution system 1 according to one embodiment
  • FIG. 2 schematically illustrates an installation of a studio where a video to be distributed in the video distribution system 1 is produced
  • FIGS. 3 and 4 illustrate information stored in the video distribution system 1 .
  • the video distribution system 1 includes client devices 10 a to 10 c , a server device 20 , a studio unit 30 , and a storage 60 .
  • the client devices 10 a to 10 c , the server device 20 , and the storage 60 are communicably interconnected over a network 50 .
  • the server device 20 is configured to distribute a video including an animation of a character, as described later.
  • the character included in the video may be motion-controlled in a virtual space.
  • the video may be distributed from the server device 20 to each of the client devices 10 a to 10 c .
  • a first viewing user who is a user of the client device 10 a , a second viewing user who is a user of the client device 10 b , and a third viewing user who is a user of the client device 10 c are able to view the distributed video with their respective client devices.
  • the video distribution system 1 may include fewer than three client devices, or may include more than three client devices.
  • the client devices 10 a to 10 c are information processing devices such as smartphones.
  • the client devices 10 a to 10 c each may be a mobile phone, a tablet, a personal computer, an electronic book reader, a wearable computer, a game console, or any other information processing devices that are capable of playing videos.
  • Each of the client devices 10 a to 10 c may include a computer processor, a memory unit, a communication I/F, a display, a sensor unit including various sensors such as a gyro sensor, a sound collecting device such as a microphone, and a storage for storing various information.
  • the server device 20 includes a computer processor 21 , a communication I/F 22 , and a storage 23 .
  • the computer processor 21 is a computing device which loads various programs realizing an operating system and various functions from the storage 23 or other storage into a memory unit and executes instructions included in the loaded programs.
  • the computer processor 21 is, for example, a CPU, an MPU, a DSP, a GPU, any other computing device, or a combination thereof.
  • the computer processor 21 may be realized by means of an integrated circuit such as ASIC, PLD, FPGA, MCU, or the like. Although the computer processor 21 is illustrated as a single component in FIG. 1 , the computer processor 21 may be a collection of a plurality of physically separate computer processors.
  • a program or instructions included in the program that are described as being executed by the computer processor 21 may be executed by a single computer processor or executed by a plurality of computer processors distributively. Further, a program or instructions included in the program executed by the computer processor 21 may be executed by a plurality of virtual computer processors.
  • the communication I/F 22 may be implemented as hardware, firmware, or communication software such as a TCP/IP driver or a PPP driver, or a combination thereof.
  • the server device 20 is able to transmit and receive data to and from other devices via the communication I/F 22 .
  • the storage 23 is a storage device accessed by the computer processor 21 .
  • the storage 23 is, for example, a magnetic disk, an optical disk, a semiconductor memory, or any of various other storage devices capable of storing data.
  • Various programs may be stored in the storage 23 . At least some of the programs and various data that may be stored in the storage 23 may be stored in a storage (for example, a storage 60 ) that is physically separated from the server device 20 .
  • Most of the components of the studio unit 30 are disposed, for example, in a studio room R shown in FIG. 2 .
  • an actor A 1 and an actor A 2 give performances in the studio room R.
  • the studio unit 30 is configured to detect motions and expressions of the actor A 1 and the actor A 2 , and to output the detection result information to the server device 20 .
  • Both the actor A 1 and the actor A 2 are objects whose motions and expressions are captured by a sensor group provided in the studio unit 30 , which will be described later.
  • the actor A 1 and the actor A 2 are, for example, humans, animals, or moving objects that give performances.
  • the actor A 1 and the actor A 2 may be, for example, autonomous robots.
  • the number of actors in the studio room R may be one or three or more.
  • the studio unit 30 includes six motion sensors 31 a to 31 f attached to the actor A 1 , a controller 33 a held by the left hand of the actor A 1 , a controller 33 b held by the right hand of the actor A 1 , and a camera 37 a attached to the head of the actor A 1 via an attachment 37 b .
  • the studio unit 30 also includes six motion sensors 32 a to 32 f attached to the actor A 2 , a controller 34 a held by the left hand of the actor A 2 , a controller 34 b held by the right hand of the actor A 2 , and a camera 38 a attached to the head of the actor A 2 via an attachment 38 b .
  • a microphone for collecting audio data may be provided to each of the attachment 37 b and the attachment 38 b .
  • the microphone can collect speeches of the actor A 1 and the actor A 2 as voice data.
  • the microphones may be wearable microphones attached to the actor A 1 and the actor A 2 via the attachment 37 b and the attachment 38 b .
  • the microphones may be installed on the floor, wall or ceiling of the studio room R.
  • the studio unit 30 includes a base station 35 a , a base station 35 b , a tracking sensor 36 a , a tracking sensor 36 b , and a display 39 .
  • a supporter computer 40 is installed in a room next to the studio room R, and these two rooms are separated from each other by a glass window.
  • the server device 20 may be installed in the same room as the room in which the supporter computer 40 is installed.
  • the motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f cooperate with the base station 35 a and the base station 35 b to detect their position and orientation.
  • the base station 35 a and the base station 35 b are multi-axis laser emitters.
  • the base station 35 a emits flashing light for synchronization and then emits a laser beam for scanning about, for example, a vertical axis.
  • the base station 35 b emits a laser beam for scanning about, for example, a horizontal axis.
  • Each of the motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f may be provided with a plurality of optical sensors for detecting incidence of the flashing lights and the laser beams from the base station 35 a and the base station 35 b , respectively.
  • the motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f each may detect its position and orientation based on a time difference between an incident timing of the flashing light and an incident timing of the laser beam, the time at which each optical sensor receives the light and/or the beam, an incident angle of the laser beam detected by each optical sensor, and any other information as necessary.
  • the motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f may be, for example, Vive Trackers provided by HTC CORPORATION.
  • the base station 35 a and the base station 35 b may be, for example, base stations provided by HTC CORPORATION.
  • Detection result information about the position and the orientation of each of the motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f , estimated by the corresponding motion sensor, is transmitted to the server device 20 .
  • the detection result information may be wirelessly transmitted to the server device 20 from each of the motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f . Since the base station 35 a and the base station 35 b emit flashing light and a laser light for scanning at regular intervals, the detection result information of each motion sensor is updated at each interval.
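The timing-based detection described above can be illustrated with a minimal calculation. Assuming, for illustration only, that the base station's laser plane sweeps one full revolution at a fixed period and that the flashing light marks the sweep start (real trackers use calibrated timings and additional corrections), the sweep angle at which the laser hits an optical sensor follows from the time difference:

```python
import math


def sweep_angle(t_sync: float, t_hit: float, rotation_period: float) -> float:
    """Angle (radians) swept by the rotating laser plane between the sync
    flash at t_sync and the laser reaching the sensor at t_hit."""
    phase = ((t_hit - t_sync) % rotation_period) / rotation_period
    return 2.0 * math.pi * phase
```

Combining the angle from a vertical-axis sweep with the angle from a horizontal-axis sweep yields a bearing from one base station; bearings from the two base stations can then be intersected to estimate a sensor's three-dimensional position.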
  • the six motion sensors 31 a to 31 f are mounted on the actor A.
  • the motion sensors 31 a , 31 b , 31 c , 31 d , 31 e , and 31 f are attached to the left wrist, the right wrist, the left instep, the right instep, the hip, and top of the head of the actor A 1 , respectively.
  • the motion sensors 31 a to 31 f may each be attached to the actor A 1 via an attachment.
  • the six motion sensors 32 a to 32 f are mounted on the actor A 2 .
  • the motion sensors 32 a to 32 f may be attached to the actor A 2 at the same positions as the motion sensors 31 a to 31 f .
  • the motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f shown in FIG. 2 are merely an example.
  • the motion sensors 31 a to 31 f may be attached to various parts of the body of the actor A 1
  • the motion sensors 32 a to 32 f may be attached to various parts of the body of the actor A 2 .
  • the number of motion sensors attached to the actor A 1 and the actor A 2 may be less than or more than six.
  • body motions of the actor A 1 and the actor A 2 are detected by detecting the position and the orientation of the motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f attached to the body parts of the actor A 1 and the actor A 2 .
  • a plurality of infrared LEDs are mounted on each of the motion sensors attached to the actor A 1 and the actor A 2 , and light from the infrared LEDs are sensed by infrared cameras provided on the floor and/or wall of the studio room R to detect the position and the orientation of each of the motion sensors.
  • Visible light LEDs may be used instead of the infrared LEDs, and in this case light from the visible light LEDs may be sensed by visible light cameras to detect the position and the orientation of each of the motion sensors.
  • In this way, the position and the orientation of each motion sensor may be detected by providing a light emitting unit (for example, the infrared LED or the visible light LED) on each motion sensor and providing a light receiving unit (for example, the infrared camera or the visible light camera) in the studio room R.
  • a plurality of reflective markers may be used instead of the motion sensors 31 a - 31 f and the motion sensors 32 a - 32 f .
  • the reflective markers may be attached to the actor A 1 and the actor A 2 using an adhesive tape or the like.
  • the position and orientation of each reflective marker can be estimated by capturing images of the actor A 1 and the actor A 2 to which the reflective markers are attached to generate captured image data and performing image processing on the captured image data.
  • the controller 33 a and the controller 33 b supply, to the server device 20 , control signals that correspond to operation of the actor A 1 .
  • the controller 34 a and the controller 34 b supply, to the server device 20 , control signals that correspond to operation of the actor A 2 .
  • the tracking sensor 36 a and the tracking sensor 36 b generate tracking information for determining configuration information of a virtual camera used for constructing a virtual space included in the video.
  • the tracking information of the tracking sensor 36 a and the tracking sensor 36 b is calculated as the position in its three-dimensional orthogonal coordinate system and the angle around each axis.
  • the position and orientation of the tracking sensor 36 a may be changed according to operation of the operator.
  • the tracking sensor 36 a transmits the tracking information indicating the position and the orientation of the tracking sensor 36 a to the server device 20 .
  • the position and the orientation of the tracking sensor 36 b may be set according to operation of the operator.
  • the tracking sensor 36 b transmits the tracking information indicating the position and the orientation of the tracking sensor 36 b to the server device 20 .
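The tracking information described above, a position in a three-dimensional orthogonal coordinate system plus an angle around each axis, can be modeled as a small payload. The field names and the JSON wire format below are assumptions for illustration, not specified by the patent:

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class TrackingInfo:
    """Position and orientation reported by a tracking sensor."""
    sensor_id: str
    x: float   # position along each axis
    y: float
    z: float
    rx: float  # angle around the x axis, degrees
    ry: float  # angle around the y axis, degrees
    rz: float  # angle around the z axis, degrees

    def to_message(self) -> str:
        # serialize for transmission to the server device
        return json.dumps(asdict(self))
```

The server device can use such a payload to set the position and orientation of the virtual camera that frames the character objects in the virtual space.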
  • the camera 37 a is attached to the head of the actor A 1 as described above.
  • the camera 37 a is disposed so as to capture an image of the face of the actor A 1 .
  • the camera 37 a continuously captures images of the face of the actor A 1 to obtain imaging data of the face of the actor A 1 .
  • the camera 38 a is attached to the head of the actor A 2 .
  • the camera 38 a is disposed so as to capture an image of the face of the actor A 2 and continuously capture images of the face of the actor A 2 to obtain captured image data of the face of the actor A 2 .
  • the camera 37 a transmits the captured image data of the face of the actor A 1 to the server device 20
  • the camera 38 a transmits the captured image data of the face of the actor A 2 to the server device 20
  • the camera 37 a and the camera 38 a may be 3D cameras capable of detecting the depth of a face of a person.
  • the display 39 is configured to display information received from the supporter computer 40 .
  • the information transmitted from the supporter computer 40 to the display 39 may include, for example, text information, image information, and various other information.
  • the display 39 is disposed at a position where the actor A 1 and the actor A 2 are able to see the display 39 .
  • the supporter computer 40 is installed in the room next to the studio room R. Since the room in which the supporter computer 40 is installed and the studio room R are separated by the glass window, an operator of the supporter computer 40 (sometimes referred to as a “supporter” in this specification) is able to see the actor A 1 and the actor A 2 . In the illustrated embodiment, supporters B 1 and B 2 are present in the room as the operators of the supporter computer 40 .
  • the supporter computer 40 may be configured to be capable of changing the setting(s) of the component(s) of the studio unit 30 according to the operation by the supporter B 1 and the supporter B 2 .
  • the supporter computer 40 can change, for example, the setting of the scanning interval performed by the base station 35 a and the base station 35 b , the position or orientation of the tracking sensor 36 a and the tracking sensor 36 b , and various settings of other devices.
  • At least one of the supporter B 1 and the supporter B 2 is able to input a message to the supporter computer 40 , and the input message is displayed on the display 39 .
  • the components and functions of the studio unit 30 shown in FIG. 2 are merely examples.
  • the studio unit 30 applicable to the invention may include various constituent elements that are not shown.
  • the studio unit 30 may include a projector.
  • the projector is able to project a video distributed to the client device 10 a or another client device on the screen S.
  • the storage 23 stores model data 23 a , object data 23 b , a possession list 23 c , a candidate list 23 d , and any other information required for generation and distribution of a video to be distributed.
  • the model data 23 a is model data for generating animation of a character.
  • the model data 23 a may be three-dimensional model data for generating three-dimensional animation, or may be two-dimensional model data for generating two-dimensional animation.
  • the model data 23 a includes, for example, rig data (also referred to as “skeleton data”) indicating a skeleton of a character, and surface data indicating the shape or texture of a surface of the character.
  • the model data 23 a may include two or more different pieces of model data.
  • the pieces of model data may each have different rig data, or may have the same rig data.
  • the pieces of model data may have different surface data or may have the same surface data.
  • in order to generate a character object corresponding to the actor A 1 and a character object corresponding to the actor A 2 , the model data 23 a includes at least two types of model data different from each other.
  • the model data for the character object corresponding to the actor A 1 and the model data for the character object corresponding to the actor A 2 may have, for example, the same rig data but different surface data from each other.
  • the object data 23 b includes asset data used for constructing a virtual space in the video.
  • the object data 23 b includes data for rendering a background of the virtual space in the video, data for rendering various objects displayed in the video, and data for rendering any other objects displayed in the video.
  • the object data 23 b may include object position information indicating the position of an object in the virtual space.
  • the object data 23 b may include a gift object displayed in the video in response to a display request from viewing users of the client devices 10 a to 10 c .
  • the gift object may include an effect object, a normal object, and a decorative object. Viewing users are able to purchase a desired gift object.
  • the effect object is an object that affects the impression of the entire viewing screen of the distributed video, and is, for example, an object representing confetti.
  • the object representing confetti may be displayed on the entire viewing screen, which can change the impression of the entire viewing screen.
  • the effect object may be displayed so as to overlap with the character object, but it is different from the decorative object in that it is not displayed in association with a specific portion of the character object.
  • the normal object is an object functioning as a digital gift from a viewing user to an actor (for example, the actor A 1 or the actor A 2 ), for example, an object resembling a stuffed toy or a bouquet.
  • the normal object is displayed on the display screen of the video such that it does not contact the character object.
  • the normal object is displayed on the display screen of the video such that it does not overlap with the character object.
  • the normal object may be displayed in the virtual space such that it overlaps with an object other than the character object.
  • the normal object may be displayed so as to overlap with the character object, but it is different from the decorative object in that it is not displayed in association with a specific portion of the character object.
  • when the normal object is displayed such that it overlaps with the character object, the normal object may hide portions of the character object other than the head including the face, but does not hide the head of the character object.
  • the decorative object is an object displayed on the display screen in association with a specific part of the character object.
  • the decorative object displayed on the display screen in association with a specific part of the character object is displayed adjacent to the specific part of the character object on the display screen.
  • the decorative object displayed on the display screen in association with a specific part of the character object is displayed such that it partially or entirely covers the specific part of the character object on the display screen.
  • the specific part may be specified by three-dimensional position information that indicates a position in a three-dimensional coordinate space, or the specific part may be associated with position information in the three-dimensional coordinate space.
  • a specific part in the head of a character may be specified in the units of the front left side, the front right side, the rear left side, the rear right side, the middle front side, and the middle rear side of the head, the left eye, the right eye, the left ear, the right ear, and the whole hair.
  • the decorative object is an object that can be attached to a character object, for example, an accessory (such as a headband, a necklace, an earring, etc.), clothes (such as a T-shirt), a costume, and any other object which can be attached to the character object.
  • the object data 23 b corresponding to the decorative object may include attachment position information indicating which part of the character object the decorative object is associated with.
  • the attachment position information of a decorative object may indicate to which part of the character object the decorative object is attached. For example, when the decorative object is a headband, the attachment position information of the decorative object may indicate that the decorative object is attached to the “head” of the character object.
  • the attachment position information may be associated with a plurality of positions in the three-dimensional coordinate space.
  • the attachment position information that indicates the position to which a decorative object representing “a headband” is attached may be associated with two parts of “the rear left side of the head” and “the rear right side of the head” of the character object.
  • the decorative object representing “a headband” may be attached to both “the rear left side of the head” and “the rear right side of the head.”
  • the attachment position information of the decorative object may indicate that the decorative object is attached to the “torso” of the character object.
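The attachment position information described in the bullets above can be pictured as a set of body-part names carried by each decorative object. The following is a minimal sketch; the class name, field names, and object ID are assumptions for illustration and do not appear in the specification.

```python
# Hypothetical model of a decorative object and its attachment position
# information. Part names are taken from the examples in the description.

HEAD_PARTS = {
    "front left side", "front right side", "rear left side", "rear right side",
    "middle front side", "middle rear side", "left eye", "right eye",
    "left ear", "right ear", "whole hair",
}

class DecorativeObject:
    def __init__(self, object_id, name, attachment_positions):
        self.object_id = object_id
        self.name = name
        # The parts of the character object this object is associated with.
        self.attachment_positions = frozenset(attachment_positions)

# As in the example above, a headband is associated with two parts of the head.
headband = DecorativeObject("obj-001", "headband",
                            ["rear left side", "rear right side"])
```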
  • a duration of time of displaying the gift objects may be set for each gift object depending on its type.
  • the duration of displaying the decorative object may be set longer than the duration of displaying the effect object and the duration of displaying the normal object.
  • the duration of displaying the decorative object may be set to 60 seconds, while the duration of displaying the effect object may be set to 5 seconds and the duration of displaying the normal object may be set to 10 seconds.
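The type-dependent display durations above reduce to a simple lookup table. The values below are the example figures from the description (60, 5, and 10 seconds); the table and function names are illustrative.

```python
# Example display durations per gift object type, using the sample values
# given in the description.
DISPLAY_DURATION_SECONDS = {
    "decorative": 60,
    "effect": 5,
    "normal": 10,
}

def display_duration(gift_type):
    """Return how long a gift object of the given type stays on screen."""
    return DISPLAY_DURATION_SECONDS[gift_type]
```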
  • the possession list 23 c is a list showing gift objects possessed by viewing users of a video.
  • An example of the possession list 23 c is shown in FIG. 3 .
  • an object ID for identifying a gift object possessed by a viewing user is stored in association with account information of the viewing user (for example, user ID of the viewing user).
  • the viewing users include, for example, the first to third viewing users of the client devices 10 a to 10 c.
  • the candidate list 23 d is a list of decorative objects for which a display request has been made from a viewing user. As will be described later, a viewing user who possesses a decorative object(s) is able to make a request to display his/her own decorative objects.
  • object IDs for identifying decorative objects are stored in association with the account information of the viewing user who has made a request to display the decorative objects.
  • the candidate list 23 d may be created for each distributor.
  • the candidate list 23 d may be stored, for example, in association with distributor identification information that identify a distributor(s) (the actor A 1 , the actor A 2 , the supporter B 1 , and/or the supporter B 2 ).
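The possession list 23 c and candidate list 23 d described above amount to two keyed collections: gift objects held per viewing user, and pending decorative-object display requests per distributor. The sketch below assumes in-memory dictionaries and illustrative function names; an actual implementation would presumably persist these structures in the storage 23.

```python
from collections import defaultdict

# possession list 23c: user ID -> object IDs possessed by that viewing user
possession_list = defaultdict(list)
# candidate list 23d: distributor ID -> (user ID, object ID) display requests
candidate_list = defaultdict(list)

def add_possession(user_id, object_id):
    possession_list[user_id].append(object_id)

def request_decorative_display(distributor_id, user_id, object_id):
    """Queue a decorative-object display request as a candidate, stored in
    association with the requesting user's account information."""
    if object_id not in possession_list[user_id]:
        raise ValueError("user does not possess this gift object")
    candidate_list[distributor_id].append((user_id, object_id))
```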
  • the computer processor 21 functions as a body motion data generation unit 21 a , a face motion data generation unit 21 b , an animation generation unit 21 c , a video generation unit 21 d , a video distribution unit 21 e , a display request processing unit 21 f , a decorative object selection unit 21 g , and an object purchase processing unit 21 h by executing computer-readable instructions included in a distributed program.
  • at least some of the functions realized by the computer processor 21 may be realized by a computer processor other than the computer processor 21 of the video distribution system 1 .
  • at least some of the functions realized by the computer processor 21 may be realized by a computer processor mounted on the supporter computer 40 .
  • the body motion data generation unit 21 a generates first body motion data, which is a digital representation of the position and the orientation of each part of the body of the actor A 1 , based on detection result information of the corresponding motion sensors 31 a to 31 f , and generates second body motion data, which is a digital representation of the position and the orientation of each part of the body of the actor A 2 , based on detection result information of the corresponding motion sensors 32 a to 32 f .
  • the first body motion data and the second body motion data may be collectively referred to simply as “body motion data.”
  • the body motion data is serially generated with time as needed.
  • the body motion data may be generated at predetermined sampling time intervals.
  • the body motion data can represent body motions of the actor A 1 and the actor A 2 in time series as digital data.
  • the motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f are attached to the left and right limbs, the waist, and the head of the actor A 1 and the actor A 2 , respectively. Based on the detection result information of the motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f , it is possible to digitally represent the position and orientation of the substantially whole body of the actor A 1 and the actor A 2 in time series.
  • the body motion data can define, for example, the position and rotation angle of bones corresponding to the rig data included in the model data 23 a.
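The sampling behavior in the bullets above can be sketched as a loop that records one pose frame per sampling interval. The function and the sensor callback below are hypothetical stand-ins for the motion sensors 31 a to 31 f; only the time-series structure is taken from the description.

```python
# Sketch of serially generating body motion data at fixed sampling steps.
# read_sensor(part, t) is an assumed callback returning (position, rotation)
# for one body part at sample index t.

def sample_body_motion(read_sensor, num_samples, parts):
    """Build a time series of per-part (position, rotation) frames."""
    series = []
    for t in range(num_samples):
        frame = {part: read_sensor(part, t) for part in parts}
        series.append(frame)
    return series

# Example with a dummy sensor that always reports a fixed pose.
dummy = lambda part, t: ((0.0, 1.0, 0.0), (0.0, 0.0, 0.0))
frames = sample_body_motion(dummy, 3, ["head", "waist", "left wrist"])
```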
  • the face motion data generation unit 21 b generates first face motion data, which is a digital representation of motions of the face of the actor A 1 , based on captured image data of the camera 37 a , and generates second face motion data, which is a digital representation of motions of the face of the actor A 2 , based on captured image data of the camera 38 a .
  • the first face motion data and the second face motion data may be collectively referred to simply as “face motion data.”
  • the face motion data is serially generated with time as needed.
  • the face motion data may be generated at predetermined sampling time intervals.
  • the face motion data can digitally represent facial motions (changes in facial expression) of the actor A 1 and the actor A 2 in time series.
  • the animation generation unit 21 c is configured to apply the body motion data generated by the body motion data generation unit 21 a and the face motion data generated by the face motion data generation unit 21 b to predetermined model data included in the model data 23 a in order to generate an animation of a character object that moves in a virtual space and whose facial expression changes. More specifically, the animation generation unit 21 c may generate an animation of a character object moving in synchronization with the motion of the body and facial expression of the actor A 1 based on the first body motion data and the first face motion data related to the actor A 1 , and generate an animation of a character object moving in synchronization with the motion of the body and facial expression of the actor A 2 based on the second body motion data and the second face motion data related to the actor A 2 .
  • a character object generated based on the motion and expression of the actor A 1 may be referred to as a “first character object”, and a character object generated based on the motion and expression of the actor A 2 may be referred to as a “second character object.”
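Conceptually, the animation generation unit 21 c pairs each body motion sample with the matching face motion sample on top of the selected model data. The sketch below is only a schematic of that pairing; the CharacterAnimator class and its fields are invented for illustration and are not the patent's implementation.

```python
# Schematic of applying body motion data (driving the rig) and face motion
# data (driving the expression) to model data to produce animation frames.

class CharacterAnimator:
    def __init__(self, rig_data, surface_data):
        self.rig_data = rig_data          # skeleton from the model data 23a
        self.surface_data = surface_data  # surface shape/texture
        self.frames = []

    def apply(self, body_frame, face_frame):
        # One output frame combines the posed skeleton with the current
        # facial expression, in synchronization with the actor.
        self.frames.append({"pose": body_frame, "expression": face_frame})

first_character = CharacterAnimator(rig_data="rig-A1", surface_data="surface-A1")
first_character.apply(body_frame={"head": (0.0, 1.6, 0.0)},
                      face_frame={"smile": 0.8})
```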
  • the video generation unit 21 d constructs a virtual space using the object data 23 b , and generates a video that includes the virtual space, the animation of the first character object corresponding to the actor A 1 , and the animation of the second character object corresponding to the actor A 2 .
  • the first character object is disposed in the virtual space so as to correspond to the position of the actor A 1 with respect to the tracking sensor 36 a
  • the second character object is disposed in the virtual space so as to correspond to the position of the actor A 2 with respect to the tracking sensor 36 a .
  • the video generation unit 21 d constructs a virtual space based on tracking information of the tracking sensor 36 a .
  • the video generation unit 21 d determines configuration information (the position in the virtual space, a gaze position, a gazing direction, and the angle of view) of the virtual camera based on the tracking information of the tracking sensor 36 a .
  • the video generation unit 21 d determines a rendering area in the entire virtual space based on the configuration information of the virtual camera and generates moving image information for displaying the rendering area in the virtual space.
  • the video generation unit 21 d may be configured to determine the position and the orientation of the first character object and the second character object in the virtual space, and the configuration information of the virtual camera based on tracking information of the tracking sensor 36 b instead of or in addition to the tracking information of the tracking sensor 36 a.
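The virtual-camera handling above can be pictured as two steps: derive the camera's configuration information from a tracking sample, then define the rendering area from that configuration. The field names and the default 60-degree angle of view in this sketch are assumptions, not values from the specification.

```python
# Sketch of mapping tracking information to virtual camera configuration
# (position, gaze direction, angle of view) and a view-frustum rendering area.

def camera_config(tracking):
    """Map one tracking sample to virtual camera configuration information."""
    return {
        "position": tracking["position"],
        "gaze_direction": tracking["orientation"],
        "angle_of_view": tracking.get("angle_of_view", 60.0),  # assumed default
    }

def rendering_area(config, near=0.1, far=100.0):
    # The rendering area is the frustum determined by the camera settings.
    return {"frustum": (config["position"], config["gaze_direction"],
                        config["angle_of_view"], near, far)}

cfg = camera_config({"position": (0.0, 1.5, -3.0), "orientation": (0.0, 0.0, 1.0)})
area = rendering_area(cfg)
```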
  • the video generation unit 21 d is able to include voices of the actor A 1 and the actor A 2 collected by the microphone in the studio unit 30 with the generated moving image.
  • the video generation unit 21 d generates an animation of the first character object moving in synchronization with the motion of the body and facial expression of the actor A 1 , and an animation of the second character moving in synchronization with the motion of the body and facial expression of the actor A 2 .
  • the video generation unit 21 d then includes the voices of the actor A 1 and the actor A 2 with the animations respectively to generate a video for distribution.
  • the video distribution unit 21 e distributes the video generated by the video generation unit 21 d .
  • the video is distributed to the client devices 10 a to 10 c and other client devices over the network 50 .
  • the received video is reproduced in the client devices 10 a to 10 c.
  • the video may be distributed to a client device (not shown) installed in the studio room R, and projected from the client device onto the screen S via a short focus projector.
  • the video may also be distributed to the supporter computer 40 . In this way, the supporter B 1 and the supporter B 2 can check the viewing screen of the distributed video.
  • an example of the screen on which the video distributed from the server device 20 to the client device 10 a and reproduced by the client device 10 a is displayed is illustrated in FIG. 5 .
  • a display image 70 of the video distributed from the server device 20 is displayed on the display of the client device 10 a .
  • the display image 70 displayed on the client device 10 a includes a character object 71 A corresponding to the actor A 1 , a character object 71 B corresponding to the actor A 2 , and a table object 72 a representing a table, in a virtual space.
  • the table object 72 a is not a gift object, but is one of the objects used for constructing the virtual space included in the object data 23 b .
  • the character object 71 A is generated by applying the first body motion data and the first face motion data of the actor A 1 to the model data for the actor A 1 included in the model data 23 a .
  • the character object 71 A is motion-controlled based on the first body motion data and the first face motion data.
  • the character object 71 B is generated by applying the second body motion data and the second face motion data of the actor A 2 to the model data for the actor A 2 included in the model data 23 a .
  • the character object 71 B is motion-controlled based on the second body motion data and the second face motion data.
  • the character object 71 A is controlled to move in the screen in synchronization with the motions of the body and facial expression of the actor A 1
  • the character object 71 B is controlled to move in the screen in synchronization with the motions of the body and facial expression of the actor A 2 .
  • the video from the server device 20 may be distributed to the supporter computer 40 .
  • the video distributed to the supporter computer 40 is displayed on the supporter computer 40 in the same manner as FIG. 5 .
  • the supporter B 1 and the supporter B 2 are able to change the configurations of the components of the studio unit 30 while viewing the video reproduced by the supporter computer 40 .
  • for example, the supporter B 1 and the supporter B 2 can cause an instruction signal for changing the orientation of the tracking sensor 36 a to be sent from the supporter computer 40 to the tracking sensor 36 a .
  • the tracking sensor 36 a is able to change its orientation in accordance with the instruction signal.
  • the tracking sensor 36 a may be rotatably attached to a stand via a pivoting mechanism that includes an actuator disposed around the axis of the stand.
  • the actuator of the pivoting mechanism may be driven based on the instruction signal, and the tracking sensor 36 a may be turned by an angle according to the instruction signal.
  • the supporter B 1 and the supporter B 2 may cause the supporter computer 40 to transmit an instruction for using the tracking information of the tracking sensor 36 b to the tracking sensor 36 a and the tracking sensor 36 b , instead of the tracking information from the tracking sensor 36 a.
  • the supporter B 1 and the supporter B 2 may input a message indicating the instruction(s) into the supporter computer 40 and the message may be output to the display 39 .
  • the supporter B 1 and the supporter B 2 can instruct the actor A 1 or the actor A 2 to change his/her standing position through the message displayed on the display 39 .
  • the display request processing unit 21 f receives a display request to display a gift object from a client device of a viewing user, and performs processing according to the display request.
  • Each viewing user is able to transmit a display request to display a gift object to the server device 20 by operating his/her client device.
  • the first viewing user can transmit a display request to display a gift object to the server device 20 by operating the client device 10 a .
  • the display request to display a gift object may include the user ID of the viewing user and the identification information (object ID) that identifies the object for which the display request is made.
  • the gift object may include the effect object, the normal object, and the decorative object.
  • the effect object and the normal object are examples of the first object.
  • a display request for requesting display of the effect object or the normal object is an example of a second display request.
  • the display request processing unit 21 f may determine what type of gift object the request is requesting to display. For example, the display request processing unit 21 f may determine which of the effect object, the normal object, or the decorative object the display request is requesting to display. The display request processing unit 21 f may determine what type of gift object the request is requesting to display based on the object ID included in the display request.
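The type-based branching performed by the display request processing unit 21 f might look like the following sketch. The object-ID-to-type table, handler signature, and IDs are hypothetical; the key behavior, per the description, is that decorative objects are queued in the candidate list rather than displayed immediately, while effect and normal objects are displayed directly.

```python
# Hypothetical dispatch of a gift display request by object type, with the
# type looked up from the object ID included in the request.

OBJECT_TYPES = {"fx-01": "effect", "nm-01": "normal", "dc-01": "decorative"}

def handle_display_request(request, candidate_list, display_image):
    object_id = request["object_id"]
    gift_type = OBJECT_TYPES[object_id]
    if gift_type == "decorative":
        # Decorative objects wait in the candidate list until selected.
        candidate_list.append((request["user_id"], object_id))
    else:
        # Effect and normal objects are displayed in response to the request.
        display_image.append(object_id)
    return gift_type

candidates, screen = [], []
gift = handle_display_request({"user_id": "u1", "object_id": "dc-01"},
                              candidates, screen)
```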
  • when the display request processing unit 21 f receives a display request to display a specific effect object from a viewing user, it performs a process, in response to the display request, to display the requested effect object in the display image 70 of the video. For example, when a display request to display an effect object simulating confetti is made, the display request processing unit 21 f displays in the display image 70 an effect object 73 simulating confetti based on the display request as shown in FIG. 6 .
  • when the display request processing unit 21 f receives a display request to display a specific normal object from a viewing user, it performs a process, in response to the display request, to display the requested normal object in the display image 70 . For example, when a display request to display a normal object simulating a stuffed bear is made, the display request processing unit 21 f displays a normal object 74 simulating a stuffed bear in the display image 70 based on the display request as shown in FIG. 6 .
  • the display request for the normal object 74 may include a display position specifying parameter for specifying the display position of the normal object 74 in the virtual space.
  • the display request processing unit 21 f displays the normal object 74 at the position in the virtual space specified by the display position specifying parameter.
  • the display position specifying parameter may specify the upper position of the table object 72 a representing a table as the display position of the normal object 74 .
  • a viewing user is able to specify the position where the normal object is to be displayed by using the display position specifying parameter while watching the layouts of the character object 71 A, the character object 71 B, the gift object, and other objects included in the video 70 .
  • the normal object 74 may be displayed such that it moves within the display image 70 of the video.
  • the normal object 74 may be displayed such that it falls from the top to the bottom of the screen.
  • the normal object 74 may be displayed in the display image 70 during the fall, which is from when the object starts to fall and to when the object has fallen to the floor of the virtual space of the video 70 , and may disappear from the display image 70 after it has fallen to the floor.
  • a viewing user can view the falling normal object 74 from the start of the fall to the end of the fall.
  • the moving direction of the normal object 74 in the screen can be specified as desired.
  • the normal object 74 may be displayed in the display image 70 so as to move from the left to the right, the right to the left, the upper left to the lower left, or any other direction in the video 70 .
  • the normal object 74 may move on various paths.
  • the normal object 74 can move on a linear path, a circular path, an elliptical path, a spiral path, or any other paths.
  • the viewing user may include, in the display request to display the normal object, a moving direction parameter that specifies the moving direction of the normal object 74 and/or a path parameter that specifies the path on which the normal object 74 moves, in addition to or in place of the display position specifying parameter.
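In the simplest linear case, a display position specifying parameter together with a moving direction parameter reduces to computing a sequence of positions over time. The sketch below assumes normalized 2D screen coordinates and illustrative parameter names; circular, elliptical, or spiral paths would substitute a different position formula.

```python
# Sketch of moving a normal object from a specified display position along
# a specified direction, e.g. a gift falling from the top of the screen.

def object_positions(start, direction, speed, steps):
    """Positions of a normal object moving linearly from `start`."""
    x, y = start
    dx, dy = direction
    return [(x + dx * speed * t, y + dy * speed * t) for t in range(steps)]

# A gift falling straight down toward the floor of the virtual space.
path = object_positions(start=(0.5, 1.0), direction=(0.0, -1.0),
                        speed=0.25, steps=4)
```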
  • among the normal objects, an object whose size in the virtual space is smaller than a reference size may be displayed such that a part or all of the object overlaps with the character object 71 A and/or the character object 71 B.
  • an object whose size in the virtual space is larger than the reference size may be displayed at a position where it does not overlap with the character object.
  • when a normal object overlaps with the character object 71 A and/or the character object 71 B, the normal object is displayed behind the overlapping character object.
  • when the display request processing unit 21 f receives a display request to display a specific decorative object from a viewing user, it adds the decorative object for which the display request is made to the candidate list 23 d based on the display request.
  • the display request to display the decorative object is an example of a first display request.
  • the display request processing unit 21 f may store, in the candidate list 23 d , identification information (object ID) identifying the specific decorative object for which the display request has been made from the viewing user, in association with the user ID of the viewing user (see FIG. 4 ).
  • the user ID of the viewing user who made the display request and the decorative object ID of the decorative object for which the display request is made by the viewing user are associated with each other and stored in the candidate list 23 d.
  • in response to one or more of the decorative objects included in the candidate list 23 d being selected, the decorative object selection unit 21 g performs a process to display the selected decorative object in the display image 70 of the video.
  • a decorative object selected from the candidate list 23 d may be referred to as a “selected decorative object”.
  • the selection of the decorative object from the candidate list 23 d is made, for example, by the supporter B 1 and/or the supporter B 2 who operate the supporter computer 40 .
  • the supporter computer 40 displays a decorative object selection screen.
  • FIG. 8 shows an example of a decorative object selection screen 80 in one embodiment.
  • the decorative object selection screen 80 is displayed, for example, on the display of the supporter computer 40 .
  • the decorative object selection screen 80 shows, for example, each of the plurality of decorative objects included in the candidate list 23 d in a tabular form.
  • the decorative object selection screen 80 in one embodiment includes a first column 81 showing the type of the decorative object, a second column 82 showing the image of the decorative object, and a third column 83 showing the body part of a character object associated with the decorative object. Further, on the decorative object selection screen 80 , selection buttons 84 a to 84 c for selecting each decorative object are displayed. Thus, the decorative object selection screen 80 displays decorative objects that can be selected as the selected decorative object.
  • the supporters B 1 and B 2 are able to select one or more of the decorative objects shown on the decorative object selection screen 80 .
  • the supporter B 1 and the supporter B 2 are able to select a headband by selecting the selection button 84 a .
  • the display request processing unit 21 f displays the selected decorative object 75 that simulates the selected headband on the display screen 70 of the video, as shown in FIG. 7 .
  • the selected decorative object 75 is displayed on the display image 70 in association with a specific body part of a character object.
  • the selected decorative object 75 may be displayed such that it contacts with the specific body part of the character object.
  • since the selected decorative object 75 simulating the headband is associated with the head of the character object, it is attached to the head of the character object 71 A as shown in FIG. 7 .
  • the decorative object may be displayed on the display screen 70 such that it moves along with the motion of the specific part of the character object. For example, when the head of the character object 71 A with the headband moves, the selected decorative object 75 simulating the headband moves in accordance with the motion of the head of the character object 71 A as if the headband is attached to the head of the character object 71 A.
  • the object data 23 b may include attachment position information indicating which part of the character object the decorative object is associated with.
  • the decorative object selection unit 21 g may prohibit selection of a decorative object included in the candidate list 23 d as the selected decorative object 75 , if the decorative object is to be attached to a body part that overlaps with the body part indicated by the attachment position information of another decorative object already attached to the character object.
  • a headband associated with “the rear left side of the head” and “the rear right side of the head” and a hair accessory associated with “the rear left side of the head” cannot be attached at the same time since these decorative objects overlap with each other in “the rear left side of the head.”
  • a headband associated with “the rear left side of the head” and “the rear right side of the head” and an earring associated with “the left ear (of the head)” and “the right ear (of the head)” can be attached at the same time since these decorative objects do not overlap with each other in any specific body part of a character object.
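The compatibility rule in the two bullets above is a set-intersection test over attachment positions. A minimal sketch, using the part names from the examples in the description:

```python
# Two decorative objects can be attached at the same time only if their
# attachment positions do not share any body part of the character object.

def can_attach_together(positions_a, positions_b):
    """True if the two sets of attachment positions do not overlap."""
    return not (set(positions_a) & set(positions_b))

headband = {"rear left side of the head", "rear right side of the head"}
hair_accessory = {"rear left side of the head"}
earrings = {"left ear", "right ear"}
```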
  • the selected decorative object 75 may be displayed on the display screen 70 in association with the character object 71 B instead of the character object 71 A.
  • the selected decorative object 75 may be displayed on the display screen 70 in association with the character object 71 A and the character object 71 B.
  • the decorative object selection screen 80 may be configured to exclude information identifying a user who holds the decorative object or a user who has made a display request to display the decorative object. By configuring the decorative object selection screen 80 in this manner, it is possible to prevent a selector from giving preference to a particular user when selecting a decorative object.
  • the decorative object selection screen 80 may display, for each decorative object, information regarding a user who holds the decorative object or a user who made a display request for the decorative object.
  • Such information displayed for each decorative object may include, for example: the number of times the user has made display requests for the decorative object so far and the number of times the decorative object has actually been selected (for example, information indicating that the display request to display the decorative object has been made five times and the decorative object has been selected two of those five times); the number of times the user has viewed the video of the character object 71 A and/or the character object 71 B; the number of times the user has viewed videos (regardless of whether the character object 71 A and/or the character object 71 B appears in the videos); the amount of money the user has spent on gift objects; the number of times the user has purchased objects; the points possessed by the user that can be used in the video distribution system 1 ; the level of the user in the video distribution system 1 ; and any other information about the user.
  • a constraint(s) may be imposed on the display of decorative objects to eliminate overlapping.
  • If a decorative object associated with a specific body part of the character object is already selected, selection of other decorative objects associated with that body part may be prohibited.
  • For example, when a decorative object associated with the "head" is already selected, the other decorative objects associated with the "head" (for example, a decorative object simulating a "hat" associated with the head) may not be selectable.
  • In that case, a selection button for selecting the decorative object simulating the hat is disabled on the decorative object selection screen 80 . According to this embodiment, it is possible to prevent decorative objects from being displayed so as to overlap with a specific part of the character object.
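The exclusivity rule described in the preceding bullets lends itself to a simple bookkeeping sketch. The following Python fragment is purely illustrative (the class, method, and field names are not from the disclosure): it tracks which body parts are already occupied and rejects further candidates for those parts, mirroring the disabled selection button on the decorative object selection screen 80 .

```python
# Illustrative sketch of the body-part exclusivity constraint: once a
# decorative object bound to a body part is selected, further candidates
# bound to the same part become unselectable. All names are hypothetical.

class DecorativeObjectSelector:
    def __init__(self):
        self.selected = []           # decorative objects already selected
        self.occupied_parts = set()  # body parts already decorated

    def is_selectable(self, obj):
        """A candidate is disabled if its body part is already occupied."""
        return obj["part"] not in self.occupied_parts

    def select(self, obj):
        if not self.is_selectable(obj):
            return False  # e.g. the selection button stays disabled
        self.selected.append(obj)
        self.occupied_parts.add(obj["part"])
        return True

selector = DecorativeObjectSelector()
selector.select({"name": "headband", "part": "head"})    # succeeds
print(selector.select({"name": "hat", "part": "head"}))  # False: head taken
print(selector.is_selectable({"name": "ring", "part": "hand"}))  # True
```

A real implementation would also need the mapping from each decorative object to its associated body part, which the disclosure stores with the object data.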
  • the decorative object selection screen 80 may be displayed on another device instead of or in addition to the supporter computer 40 .
  • the decorative object selection screen 80 may be displayed on the display 39 and/or the screen S in the studio room R.
  • the actor A 1 and the actor A 2 are able to select a desired decorative object based on the decorative object selection screen 80 displayed on the display 39 or the screen S. Selection of the decorative object by the actor A 1 and the actor A 2 may be made, for example, by operating the controller 33 a , the controller 33 b , the controller 34 a , or the controller 34 b.
  • the object purchase processing unit 21 h transmits, to a client device of the viewing user (for example, the client device 10 a ), purchase information of each of the plurality of gift objects that can be purchased in relation to the video.
  • the purchase information of each gift object may include the type of the gift object (the effect object, the normal object, or the decorative object), the image of the gift object, the price of the gift object, and any other information necessary to purchase the gift object.
  • the viewing user is able to select a gift object to purchase based on the gift object purchase information displayed on the client device 10 a .
  • the selection of the gift objects to be purchased may be performed by operating the client device 10 a .
  • a purchase request for the gift object is transmitted to the server device 20 .
  • the object purchase processing unit 21 h performs a payment process based on the purchase request.
  • the purchased gift object is held by the viewing user.
  • the object ID of the purchased gift object is stored in the possession list 23 c in association with the user ID of the viewing user who purchased the object.
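As a rough illustration of the purchase flow just described (purchase request, payment process, then storage of the object ID in the possession list in association with the user ID), the following Python sketch uses hypothetical names, and the payment step is reduced to a placeholder:

```python
# Hypothetical sketch of the gift-object purchase flow: a purchase request
# triggers a payment process, and on success the purchased object's ID is
# stored in the possession list in association with the buyer's user ID.

class ObjectPurchaseProcessor:
    def __init__(self):
        self.possession_list = {}  # user ID -> list of possessed object IDs

    def process_payment(self, user_id, gift_object):
        # Placeholder for the actual payment/settlement step.
        return True

    def handle_purchase_request(self, user_id, gift_object):
        if not self.process_payment(user_id, gift_object):
            return False
        self.possession_list.setdefault(user_id, []).append(gift_object["id"])
        return True

processor = ObjectPurchaseProcessor()
processor.handle_purchase_request("user-1", {"id": "obj-42", "type": "decorative"})
print(processor.possession_list["user-1"])  # ['obj-42']
```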
  • Gift objects that can be purchased may be different for each video.
  • the gift objects may be made purchasable in two or more different videos. That is, the purchasable gift objects may include a gift object unique to each video and a common gift object that can be purchased in multiple videos.
  • the effect object that simulates confetti may be the common gift object that can be purchased in the two or more different videos.
  • When a user purchases an effect object while viewing a video, the purchased effect object may be displayed automatically in the video that the user is viewing in response to completion of the payment process for purchasing the effect object.
  • Similarly, when a user purchases a normal object while viewing a video, the purchased normal object may be displayed automatically in the video that the user is viewing in response to completion of the payment process for purchasing the normal object.
  • a notification of the completion of the payment process may be sent to the client device 10 a , and a confirmation screen may be displayed to confirm whether the viewing user wants to make a display request to display the purchased effect object on the client device 10 a .
  • the display request to display the purchased effect object may be sent from the client device of the viewing user to the display request processing unit 21 f , and the display request processing unit 21 f may perform the process to display the purchased effect object in the display image 70 of the video.
  • a confirmation screen may be displayed on the client device 10 a to confirm whether the viewing user wants to make a display request to display the purchased normal object, in the same manner as above.
  • FIG. 9 is a flowchart showing a flow of a video distribution process in one embodiment.
  • FIG. 10 is a flowchart of a process for displaying a normal object according to one embodiment.
  • FIG. 11 is a flowchart of a process for displaying a decorative object according to one embodiment.
  • the actor A 1 and the actor A 2 are giving performances in the studio room R.
  • In step S 11 , body motion data, which is a digital representation of the body motions of the actor A 1 and the actor A 2 , and face motion data, which is a digital representation of the facial motions (expressions) of the actor A 1 and the actor A 2 , are generated.
  • Generation of the body motion data is performed, for example, by the body motion data generation unit 21 a described above
  • generation of the face motion data is performed, for example, by the face motion data generation unit 21 b described above.
  • In step S 12 , the body motion data and the face motion data of the actor A 1 are applied to the model data for the actor A 1 to generate animation of the first character object that moves in synchronization with the motions of the body and facial expression of the actor A 1 .
  • the body motion data and the face motion data of the actor A 2 are applied to the model data for the actor A 2 to generate animation of the second character object that moves in synchronization with the motions of the body and facial expression of the actor A 2 .
  • the generation of the animation is performed, for example, by the above-described animation generation unit 21 c.
  • In step S 13 , a video including the animation of the first character object corresponding to the actor A 1 and the animation of the second character object corresponding to the actor A 2 is generated.
  • the voices of the actor A 1 and the actor A 2 may be included in the video.
  • the animation of the first character object and the animation of the second character object may be provided in the virtual space. Generation of the video is performed, for example, by the above-described video generation unit 21 d.
  • In step S 14 , the video generated in step S 13 is distributed.
  • the video is distributed to the client devices 10 a to 10 c and other client devices over the network 50 .
  • the video may be distributed to the supporter computer 40 and/or may be projected on the screen S in the studio room R.
  • the video is distributed continuously over a predetermined distribution period.
  • the distribution period of the video may be set to, for example, 30 seconds, 1 minute, 5 minutes, 10 minutes, 30 minutes, 60 minutes, 120 minutes, or any other length of time.
  • In step S 15 , it is determined whether a termination condition for ending the distribution of the video is satisfied.
  • The termination condition is, for example, that the distribution ending time has come, that the supporter computer 40 has issued an instruction to end the distribution, or any other condition. If the termination condition is not satisfied, steps S 11 to S 14 of the process are repeatedly executed, and distribution of the video including the animation synchronized with the movements of the actor A 1 and the actor A 2 is continued. When it is determined that the termination condition is satisfied, the distribution process of the video is ended.
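The repetition of steps S 11 to S 14 until the termination condition of step S 15 holds can be summarized as a simple loop. The Python fragment below is a hedged illustration only; the callables stand in for the generation units described above and are not part of the disclosure:

```python
# Hedged sketch of the distribution loop of FIG. 9: steps S11-S14 repeat
# until the termination condition of step S15 holds. The callables are
# stand-ins for the generation units described in the text.

def distribute_video(capture_motion, generate_animation, generate_video,
                     distribute, termination_condition):
    while not termination_condition():                            # step S15
        body_motion, face_motion = capture_motion()               # step S11
        animation = generate_animation(body_motion, face_motion)  # step S12
        frame = generate_video(animation)                         # step S13
        distribute(frame)                                         # step S14

# Toy run: the termination condition becomes true after three iterations.
frames = []
state = {"ticks": 0}

def termination_condition():
    state["ticks"] += 1
    return state["ticks"] > 3

distribute_video(
    capture_motion=lambda: ("body-motion", "face-motion"),
    generate_animation=lambda body, face: {"body": body, "face": face},
    generate_video=lambda animation: animation,
    distribute=frames.append,
    termination_condition=termination_condition,
)
print(len(frames))  # 3
```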
  • In step S 21 , it is determined whether a display request for a normal object has been made while a video is distributed.
  • the first viewing user may select one or more specific normal objects from his/her own normal objects and send a display request to display the selected normal objects from the client device 10 a to the server device 20 .
  • a display request for a normal object may be generated in response to the purchase process or the payment process performed for the normal object.
  • Step S 21 may be performed by the display request processing unit 21 f described above.
  • Step S 22 is a process for displaying in the video being distributed the normal object for which the display request is made, based on the display request. For example, when a display request for the normal object 74 is made while a video is distributed, the normal object 74 for which the display request is made is displayed in the display screen 70 of the video, as shown in FIG. 6 .
  • the display process of the normal object is then ended.
  • the display process of the normal object shown in FIG. 10 is performed repeatedly in the distribution period of the video.
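The process of FIG. 10 thus reduces to checking for pending display requests (step S 21 ) and, if any, showing the requested normal object at once (step S 22 ), with no candidate-list indirection. A minimal, purely illustrative Python sketch:

```python
# Purely illustrative sketch of the normal-object display process of
# FIG. 10: pending display requests (step S21) are reflected in the
# display screen immediately (step S22).

def process_normal_object_requests(pending_requests, display_screen):
    while pending_requests:                   # step S21: request made?
        normal_object = pending_requests.pop(0)
        display_screen.append(normal_object)  # step S22: display it

screen = []
process_normal_object_requests(["normal-object-74"], screen)
print(screen)  # ['normal-object-74']
```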
  • the display process of the effect object is performed by the same procedure as described above for the normal object.
  • the effect object 73 for which the display request is made is displayed in the display screen 70 of the video, as shown in FIG. 6 .
  • the effect object 73 shown in FIG. 6 simulates confetti.
  • the effect object 73 that simulates confetti may be displayed so as to overlap (or contact) with the character object 71 A and the character object 71 B, but it is different from the decorative object in that it is not displayed in association with a specific portion of the character object 71 A and the character object 71 B.
  • the display process of the decorative object is performed in parallel with the distribution process of the video shown in FIG. 9 . It is also possible that the display process of the decorative object is performed in parallel with the display process of the normal object shown in FIG. 10 .
  • In step S 31 , it is determined whether a display request for a decorative object has been made while a video is distributed.
  • the first viewing user may select a first decorative object from his/her own decorative objects and send a display request to display the selected first decorative object from the client device 10 a to the server device 20 .
  • Step S 31 may be performed by the display request processing unit 21 f described above.
  • In step S 32 , the first decorative object for which the display request has been made is added to the candidate list based on the display request.
  • the candidate list is a list of candidate objects for a decorative object to be displayed in the video being distributed, and one example of the candidate list is the candidate list 23 d described above.
  • In step S 33 , it is determined whether a specific decorative object has been selected from the decorative objects included in the candidate list.
  • In step S 34 , the specific decorative object that has been selected ("the selected decorative object") is removed from the candidate list, and the selected decorative object is displayed in the display screen of the video being distributed.
  • For example, the decorative object 75 is displayed in the display screen 70 , as shown in FIG. 7 .
  • If the first decorative object for which the display request was made in step S 31 is selected from the candidate list while the video is distributed, the first decorative object is displayed in the display screen 70 ; if it is not selected, it is not displayed in the display screen 70 .
  • In step S 35 , it is determined whether the distribution of the video being distributed is completed. The determination made in step S 35 may be based on the same criterion as in step S 15 , for example. When it is determined in step S 35 that the distribution is not completed, the display process of the decorative object returns to step S 31 and then repeats steps S 31 to S 35 . When it is determined that the distribution is completed, the display process of the decorative object proceeds to step S 36 .
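The key difference from the normal-object process is the indirection through the candidate list: a display request only queues the decorative object, and the object reaches the display screen on a separate selection event. A minimal sketch under assumed names (not from the disclosure):

```python
# Minimal sketch of steps S31-S34 of FIG. 11: a display request merely
# queues the decorative object in a candidate list, and the object is
# displayed only when it is later selected from that list.

class DecorativeObjectProcess:
    def __init__(self):
        self.candidate_list = []  # corresponds to the candidate list 23d
        self.display_screen = []  # decorative objects shown in the video

    def on_display_request(self, decorative_object):
        # steps S31-S32: queue the object instead of displaying it directly
        self.candidate_list.append(decorative_object)

    def on_selection(self, decorative_object):
        # steps S33-S34: remove from the candidate list, then display
        if decorative_object in self.candidate_list:
            self.candidate_list.remove(decorative_object)
            self.display_screen.append(decorative_object)

process = DecorativeObjectProcess()
process.on_display_request("decorative-object-75")
print(process.display_screen)  # []: not displayed until selected
process.on_selection("decorative-object-75")
print(process.display_screen)  # ['decorative-object-75']
```

In the disclosure the selection event comes from someone other than the requesting viewer (a supporter or an actor), which this sketch leaves to the caller of `on_selection`.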
  • Step S 36 is related to the decorative objects that remain in the candidate list when the distribution of the video is completed (these decorative objects may herein be referred to as "non-selected objects").
  • the process performed in step S 36 may be herein referred to as the non-selected object process.
  • A non-selected object is an object which was purchased by a viewing user and for which a display request was made while the video was distributed. Therefore, the non-selected object process performed in step S 36 may be a process to refund the expense for purchasing the non-selected object to the viewing user who made the display request for the non-selected object. In another embodiment, the non-selected object process may be a process to cancel the payment process for purchasing the non-selected object. In another embodiment, the non-selected object process may be a process to provide the viewing user who made the display request for the non-selected object with a decorative object that is different from the non-selected decorative object.
  • the non-selected object process may be a process to provide the user who purchased the non-selected object with points that can be used in the video distribution system 1 , instead of refunding the purchase expense or canceling the payment process.
  • the video distribution system 1 may be configured such that users consume points to view videos.
  • the points provided to the user who possesses the non-selected object in the non-selected object process may be usable for viewing videos in the video distribution system 1 .
  • the non-selected object process may be a process to add, to the possession list, the non-selected object as an object possessed by the first viewing user.
  • the non-selected object can be returned to the first viewing user.
  • the non-selected object process may be a process to retain the candidate list as of the end of the video distribution until the next time the same distributor distributes a video.
  • the distributor can reuse the candidate list used in the previous video distribution.
  • the reused candidate list includes the decorative object for which a display request was made in the previous video distribution and which was not actually displayed in the video (that is, the non-selected object).
  • the next video distribution can be performed using the candidate list including the non-selected object that was not selected in the previous video distribution.
  • the non-selected object may be selected and displayed in a video in the next video distribution.
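The alternatives described above for the non-selected objects (refund, point grant, return to the possession list, or carry-over of the candidate list) can be sketched as one dispatch function. All strategy names below are invented for illustration; the disclosure describes only the behaviors:

```python
# Illustrative dispatch over the step S36 alternatives for non-selected
# objects. The strategy names are hypothetical.

def process_non_selected_objects(candidate_list, strategy,
                                 possession_list=None, user_id=None,
                                 points=None):
    remaining = list(candidate_list)
    if strategy == "refund":                # refund the purchase expense
        return [("refund", obj) for obj in remaining]
    if strategy == "grant_points":          # provide usable points instead
        return [("points", obj, points) for obj in remaining]
    if strategy == "return_to_possession":  # put objects back in the list
        possession_list.setdefault(user_id, []).extend(remaining)
        return possession_list
    if strategy == "carry_over":            # keep for the next distribution
        return remaining
    raise ValueError(f"unknown strategy: {strategy}")

carried = process_non_selected_objects(["obj-75"], "carry_over")
print(carried)  # ['obj-75'] stays available for the next video
```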
  • After the process of step S 36 is completed, the display process of the decorative object is ended.
  • FIG. 12 is a schematic diagram for describing the no-display period.
  • FIG. 12 shows that a video is distributed between the time t 1 and the time t 2 .
  • the time t 1 is the start time of the video distribution
  • the time t 2 is the end time of the video distribution.
  • the time period between the time t 3 and the time t 4 is the no-display period 91 .
  • the effect object or the normal object for which the display request is made is not displayed in the distributed video during the no-display period 91 , and this object is displayed in the video at a time after the end of the no-display period 91 (that is, after the time t 4 ).
  • the selected decorative object is not displayed in the distributed video during the no-display period 91 and is displayed in the video at a time after the end of the no-display period 91 .
  • the display request for the decorative object may be received in the no-display period 91 .
  • the decorative object for which the display request is made may be added to the candidate list during the no-display period 91 .
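The timing rule of FIG. 12 (requests received between the time t 3 and the time t 4 are accepted but held back, and take effect after the time t 4 ) can be expressed as a small helper. This is an illustrative sketch with plain numeric times, not an implementation from the disclosure:

```python
# Illustrative timing helper for the no-display period of FIG. 12: an
# object requested between t3 and t4 is held back and appears only after
# the period ends. Times are plain numbers for the sake of the example.

def display_time(request_time, no_display_start, no_display_end):
    """Return the time at which a requested object may actually appear."""
    if no_display_start <= request_time < no_display_end:
        return no_display_end  # deferred until after t4
    return request_time        # displayed immediately otherwise

t3, t4 = 30, 40
print(display_time(10, t3, t4))  # 10: outside the period, shown at once
print(display_time(35, t3, t4))  # 40: received in the period, deferred
```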
  • the gift objects include three types of objects: the decorative object, the normal object, and the effect object. Among them, only the decorative object is displayed in association with a character object.
  • the animation of the character object is an element that attracts viewing users' attention. For example, in the video shown in FIGS. 5 to 7 , it is presumed that the character object 71 A and the character object 71 B attract attention.
  • the decorative object may be kept from being displayed in the video until it is selected from the candidate list 23 d , so as to prevent decorative objects from being displayed in a disorderly manner around or over the character objects.
  • This prevents the viewing experience of the viewing users from deteriorating.
  • the gift objects include the category of decorative object to be displayed in association with a character object, making it possible to restrain the number (the amount) of decorative objects displayed in association with a character object that constitutes the main part of a video.
  • the normal object 74 is displayed in a video in response to a display request from a viewing user.
  • the normal object 74 is displayed in the display screen 70 of the video so as not to contact or overlap with the character object 71 A and the character object 71 B, and therefore, the visibility of the character object 71 A and the character object 71 B is less affected. With this arrangement, it is possible to prevent the viewing experience of users from being deteriorated due to reduced visibility of the character objects.
  • the effect object 73 and the normal object 74 are displayed in a video in response to a display request from a viewing user.
  • the effect object 73 and the normal object 74 are displayed in the display screen 70 for a shorter duration than the decorative object 75 , and therefore, the visibility of the character object 71 A and the character object 71 B is less affected.
  • With this arrangement, it is possible to prevent the viewing experience of users from being deteriorated due to reduced visibility of the character objects.
  • a decorative object is selected from the candidate list 23 d by someone (for example, the supporter B 1 , the supporter B 2 , the actor A 1 , or the actor A 2 ) other than the viewing user who has made the display request for the decorative object, and therefore, it is possible to restrain the number of displayed decorative objects.
  • a gift object is not displayed in a video during the no-display period 91 .
  • a produced video can be viewed without interruption by the gift object.
  • When the no-display period 91 is set at a time period within the video during which a visual performance is given by the actor A 1 and the actor A 2 , the performance of the actors can be presented to the viewers without interruption by the first object and the decorative object.
  • a user can present a decorative object to a character.
  • This makes it possible to provide a system having higher originality, and to provide a service having higher originality with the system, as compared to systems in which presenting a decorative object is not allowed.
  • capturing and generating the images of the video to be distributed may be performed at a site other than the studio room R.
  • capturing the images for generating the video to be distributed may be performed at an actor's home or a supporter's home.

Abstract

A system according to one aspect causes a viewer user device of a viewer user to play a video during a distribution period. The video may contain animation of a character object. In response to receipt of a first display request sent from the viewer user device for requesting arrangement of a first decorative object in the video, the system may arrange the first decorative object in the video in association with the character object for a first display time. In response to receipt of a second display request sent from the viewer user device for requesting display of a first normal object in the video, the system may display the first normal object in the video for a second display time shorter than the first display time.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a Continuation Application of U.S. Ser. No. 16/406,195 (filed on May 8, 2019), which in turn claims the benefit of priority from Japanese Patent Application Serial No. 2018-089612 (filed on May 8, 2018), Japanese Patent Application Serial No. 2018-144681 (filed on Aug. 1, 2018), Japanese Patent Application Serial No. 2018-144682 (filed on Aug. 1, 2018), Japanese Patent Application Serial No. 2018-144683 (filed on Aug. 1, 2018), Japanese Patent Application Serial No. 2018-193258 (filed on Oct. 12, 2018), and Japanese Patent Application Serial No. 2019-009432 (filed on Jan. 23, 2019), the contents of each of which are hereby incorporated by reference in their entirety.
  • TECHNICAL FIELD
  • The present disclosure relates to a video distribution system, a video distribution method, and a storage medium storing a video distribution program, for distributing a video containing animation of a character object generated based on motions of an actor.
  • BACKGROUND
  • Video distribution systems that generate an animation of a character object based on an actor's motions and distribute a video including the animation of the character object are known. Such a video distribution system is disclosed, for example, in Japanese Patent Application Publication No. 2015-184689 (“the '689 Publication”).
  • Also known are content distribution systems that receive a request from a viewing user who is viewing contents, and in response to the request, display on a display screen a gift object corresponding to an item purchased by the viewing user. For example, in the video distribution system disclosed in Japanese Patent Application Publication No. 2012-120098 (“the '098 Publication”), a viewing user can purchase a gift item and provide the gift item to a performer (a content distributor) as a gift. The '098 Publication describes that the gift object is preferably displayed in a background region of a distributed view so as to avoid interference with the video.
  • Displaying a gift object to overlap with a video may deteriorate the viewing experience of a viewing user. For example, if a main part of the video is hidden behind the gift object, the viewer may feel that his/her viewing of the video is impeded. In particular, when a large number of gift objects are displayed to overlap with the video, this drawback may be more severe. Therefore, in the '098 Publication, gift objects are not displayed in the content display region that displays the video, but are displayed in the background region outside the content display region.
  • SUMMARY
  • It is an object of the present disclosure to provide a technical improvement which solves or alleviates at least part of the drawbacks of the prior art mentioned above. In particular, an object of the present invention is to provide a video distribution system, a video distribution method, and a storage medium storing a video distribution program, capable of displaying a gift object to overlap with a video without deteriorating the viewing experience of a viewing user.
  • A video distribution system according to one aspect is a video distribution system for distributing a video containing animation of a character object generated based on a motion of an actor, the video distribution system comprising: one or more computer processors; and a storage for storing a candidate list including candidates of decorative objects to be displayed in the video in association with the character object. The one or more computer processors execute computer-readable instructions to: in response to reception of a first display request from a viewing user, the first display request being sent for requesting display of a first decorative object among the decorative objects, add the first decorative object to the candidate list, and display the first decorative object in the video upon selection of the first decorative object from the candidate list.
  • In one aspect, the first decorative object is displayed in the video in association with a specific body part of the character object.
  • In one aspect, the first object is displayed in the video so as not to contact the character object.
  • In one aspect, the selection of the first decorative object from the candidate list is performed by someone other than the viewing user.
  • In one aspect, the selection of the first decorative object from the candidate list is performed by a supporter who supports distribution of the video.
  • In one aspect, the selection of the first decorative object from the candidate list is performed by the actor.
  • In one aspect, in response to reception of a second display request from the viewing user viewing the video, the second display request being sent for requesting display of a first object that is different from the decorative objects, the one or more computer processors display the first object in the video.
  • In one aspect, a no-display period is provided in a distribution period of the video, and the first object and the decorative objects are displayed in the video at a timing in the distribution period of the video other than the no-display period.
  • In one aspect, when the second display request is received in the no-display period, the first object is displayed in the video after an end of the no-display period.
  • In one aspect, the one or more computer processors are configured to: receive a purchase request from the viewing user, the purchase request being sent for purchasing the first decorative object, perform a payment process in response to the purchase request, and cancel the payment process when the first decorative object is not selected before distribution of the video is ended.
  • In one aspect, the one or more computer processors are configured to: receive a purchase request from the viewing user, the purchase request being sent for purchasing the first decorative object, perform a payment process in response to the purchase request, and provide the viewing user with points when the first decorative object is not selected before distribution of the video is ended.
  • In one aspect, the one or more computer processors are configured to: receive a purchase request from the viewing user, the purchase request being sent for purchasing the first decorative object, add the first decorative object to a possession list in response to the purchase request, the possession list being a list of objects possessed by the viewing user, in response to reception of the first display request from the viewing user, the first display request being sent for requesting display of the first decorative object, add the first decorative object to the candidate list and remove the first decorative object from the possession list, and add the first decorative object to the possession list when the first decorative object is not selected before distribution of the video is ended.
  • In one aspect, provided is a video distribution method performed by one or more computer processors executing computer-readable instructions to distribute a video containing animation of a character object generated based on a motion of an actor. The video distribution method comprises: storing a candidate list including candidates of decorative objects to be displayed in the video in association with the character object, in response to reception of a first display request from a viewing user, the first display request being sent for requesting display of a first decorative object among the decorative objects, adding the first decorative object to the candidate list, and displaying the first decorative object in the video upon selection of the first decorative object from the candidate list.
  • In one aspect, provided is a non-transitory computer-readable storage medium storing a video distribution program for distributing a video containing animation of a character object generated based on a motion of an actor. The video distribution program causes one or more computer processors to: store a candidate list including candidates of decorative objects to be displayed in the video in association with the character object, in response to reception of a first display request from a viewing user, the first display request being sent for requesting display of a first decorative object among the decorative objects, add the first decorative object to the candidate list, and display the first decorative object in the video upon selection of the first decorative object from the candidate list.
  • Advantages
  • According to the embodiments of the present disclosure, a gift object can be displayed to overlap with a video without deteriorating the viewing experience of a viewing user.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram illustrating a video distribution system according to one embodiment.
  • FIG. 2 schematically illustrates an installation of a studio where a video to be distributed in the video distribution system of FIG. 1 is produced.
  • FIG. 3 illustrates a possession list stored in the video distribution system of FIG. 1.
  • FIG. 4 illustrates a candidate list stored in the video distribution system of FIG. 1.
  • FIG. 5 illustrates an example of a video displayed on the client device 10 a in one embodiment. An animation of a character object is included in FIG. 5.
  • FIG. 6 illustrates an example of a video displayed on the client device 10 a in one embodiment. A normal object is included in FIG. 6.
  • FIG. 7 illustrates an example of a video displayed on the client device 10 a in one embodiment. A decorative object is included in FIG. 7.
  • FIG. 8 schematically illustrates an example of a decorative object selection screen for selecting a desired decorative object from among the decorative objects included in the candidate list.
  • FIG. 9 is a flowchart showing a flow of a video distribution process in one embodiment.
  • FIG. 10 is a flowchart of a process for displaying a normal object according to an embodiment.
  • FIG. 11 is a flowchart of a process for displaying a decorative object according to an embodiment.
  • FIG. 12 is a diagram for describing a no-display period set for a video distributed in the video distribution system of FIG. 1.
  • DESCRIPTION OF THE EMBODIMENTS
  • Various embodiments of the disclosure will be described hereinafter with reference to the accompanying drawings. Throughout the drawings, the same or similar elements are denoted by the same reference numerals.
  • With reference to FIGS. 1 to 4, a video distribution system according to an embodiment will be described. FIG. 1 is a block diagram illustrating a video distribution system 1 according to one embodiment, FIG. 2 schematically illustrates an installation of a studio where a video to be distributed in the video distribution system 1 is produced, and FIGS. 3 to 4 are for describing information stored in the video distribution system 1.
  • The video distribution system 1 includes client devices 10 a to 10 c, a server device 20, a studio unit 30, and a storage 60. The client devices 10 a to 10 c, the server device 20, and the storage 60 are communicably interconnected over a network 50. The server device 20 is configured to distribute a video including an animation of a character, as described later. The character included in the video may be motion-controlled in a virtual space.
  • The video may be distributed from the server device 20 to each of the client devices 10 a to 10 c. A first viewing user who is a user of the client device 10 a, a second viewing user who is a user of the client device 10 b, and a third viewing user who is a user of the client device 10 c are able to view the distributed video with their respective client devices. The video distribution system 1 may include less than three client devices, or may include more than three client devices.
  • The client devices 10 a to 10 c are information processing devices such as smartphones. In addition to the smartphone, the client devices 10 a to 10 c each may be a mobile phone, a tablet, a personal computer, an electronic book reader, a wearable computer, a game console, or any other information processing devices that are capable of playing videos. Each of the client devices 10 a to 10 c may include a computer processor, a memory unit, a communication I/F, a display, a sensor unit including various sensors such as a gyro sensor, a sound collecting device such as a microphone, and a storage for storing various information.
  • In the illustrated embodiment, the server device 20 includes a computer processor 21, a communication I/F 22, and a storage 23.
  • The computer processor 21 is a computing device which loads various programs realizing an operating system and various functions from the storage 23 or other storage into a memory unit and executes instructions included in the loaded programs. The computer processor 21 is, for example, a CPU, an MPU, a DSP, a GPU, any other computing device, or a combination thereof. The computer processor 21 may be realized by means of an integrated circuit such as ASIC, PLD, FPGA, MCU, or the like. Although the computer processor 21 is illustrated as a single component in FIG. 1, the computer processor 21 may be a collection of a plurality of physically separate computer processors. In this specification, a program or instructions included in the program that are described as being executed by the computer processor 21 may be executed by a single computer processor or executed by a plurality of computer processors distributively. Further, a program or instructions included in the program executed by the computer processor 21 may be executed by a plurality of virtual computer processors.
  • The communication I/F 22 may be implemented as hardware, firmware, or communication software such as a TCP/IP driver or a PPP driver, or a combination thereof. The server device 20 is able to transmit and receive data to and from other devices via the communication I/F 22.
  • The storage 23 is a storage device accessed by the computer processor 21. The storage 23 is, for example, a magnetic disk, an optical disk, a semiconductor memory, or various other storage device capable of storing data. Various programs may be stored in the storage 23. At least some of the programs and various data that may be stored in the storage 23 may be stored in a storage (for example, a storage 60) that is physically separated from the server device 20.
  • Most of the components of the studio unit 30 are disposed, for example, in a studio room R shown in FIG. 2. As illustrated in FIG. 2, an actor A1 and an actor A2 give performances in the studio room R. The studio unit 30 is configured to detect motions and expressions of the actor A1 and the actor A2, and to output the detection result information to the server device 20.
  • Both the actor A1 and the actor A2 are objects whose motions and expressions are captured by a sensor group provided in the studio unit 30, which will be described later. The actor A1 and the actor A2 are, for example, humans, animals, or moving objects that give performances. The actor A1 and the actor A2 may be, for example, autonomous robots. The number of actors in the studio room R may be one or three or more.
  • The studio unit 30 includes six motion sensors 31 a to 31 f attached to the actor A1, a controller 33 a held by the left hand of the actor A1, a controller 33 b held by the right hand of the actor A1, and a camera 37 a attached to the head of the actor A1 via an attachment 37 b. The studio unit 30 also includes six motion sensors 32 a to 32 f attached to the actor A2, a controller 34 a held by the left hand of the actor A2, a controller 34 b held by the right hand of the actor A2, and a camera 38 a attached to the head of the actor A2 via an attachment 38 b. A microphone for collecting audio data may be provided to each of the attachment 37 b and the attachment 38 b. The microphone can collect speeches of the actor A1 and the actor A2 as voice data. The microphones may be wearable microphones attached to the actor A1 and the actor A2 via the attachment 37 b and the attachment 38 b. Alternatively the microphones may be installed on the floor, wall or ceiling of the studio room R. In addition to the components described above, the studio unit 30 includes a base station 35 a, a base station 35 b, a tracking sensor 36 a, a tracking sensor 36 b, and a display 39. A supporter computer 40 is installed in a room next to the studio room R, and these two rooms are separated from each other by a glass window. The server device 20 may be installed in the same room as the room in which the supporter computer 40 is installed.
  • The motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f cooperate with the base station 35 a and the base station 35 b to detect their position and orientation. In one embodiment, the base station 35 a and the base station 35 b are multi-axis laser emitters. The base station 35 a emits flashing light for synchronization and then emits a laser beam about, for example, a vertical axis for scanning. The base station 35 b emits a laser beam about, for example, a horizontal axis for scanning. Each of the motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f may be provided with a plurality of optical sensors for detecting incidence of the flashing lights and the laser beams from the base station 35 a and the base station 35 b, respectively. The motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f each may detect its position and orientation based on a time difference between an incident timing of the flashing light and an incident timing of the laser beam, the time when each optical sensor receives the light and/or the beam, an incident angle of the laser light detected by each optical sensor, and any other information as necessary. The motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f may be, for example, Vive Trackers provided by HTC CORPORATION. The base station 35 a and the base station 35 b may be, for example, base stations provided by HTC CORPORATION.
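  • The timing-based detection described above can be sketched in a few lines. This is a minimal illustration of the general lighthouse-style principle, not the patented method: the 60 Hz sweep rate, the function names, and the single-axis simplification are all assumptions.

```python
# Hypothetical sketch: converting the time difference between the sync
# flash and the moment the laser sweep hits an optical sensor into a
# sweep angle about one axis. The rotation rate is an assumed value.

SWEEPS_PER_SECOND = 60                      # assumed laser rotation rate
DEGREES_PER_SECOND = 360.0 * SWEEPS_PER_SECOND

def sweep_angle(t_sync: float, t_hit: float) -> float:
    """Angle (degrees) of the sensor from the sweep start position."""
    return ((t_hit - t_sync) * DEGREES_PER_SECOND) % 360.0

# A sensor hit 4 ms after the sync flash lies 86.4 degrees into the sweep.
angle = sweep_angle(t_sync=0.0, t_hit=0.004)
```

Combining such angles from two base stations scanning about different axes, together with the known sensor geometry, is what allows each tracker to estimate its full position and orientation.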
  • Detection result information about the position and the orientation of each of the motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f that are estimated in the corresponding motion sensor is transmitted to the server device 20. The detection result information may be wirelessly transmitted to the server device 20 from each of the motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f. Since the base station 35 a and the base station 35 b emit flashing light and a laser light for scanning at regular intervals, the detection result information of each motion sensor is updated at each interval.
  • In the illustrated embodiment, the six motion sensors 31 a to 31 f are mounted on the actor A1. The motion sensors 31 a, 31 b, 31 c, 31 d, 31 e, and 31 f are attached to the left wrist, the right wrist, the left instep, the right instep, the hip, and top of the head of the actor A1, respectively. The motion sensors 31 a to 31 f may each be attached to the actor A1 via an attachment. The six motion sensors 32 a to 32 f are mounted on the actor A2. The motion sensors 32 a to 32 f may be attached to the actor A2 at the same positions as the motion sensors 31 a to 31 f. The motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f shown in FIG. 2 are merely an example. The motion sensors 31 a to 31 f may be attached to various parts of the body of the actor A1, and the motion sensors 32 a to 32 f may be attached to various parts of the body of the actor A2. The number of motion sensors attached to the actor A1 and the actor A2 may be less than or more than six. As described above, body motions of the actor A1 and the actor A2 are detected by detecting the position and the orientation of the motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f attached to the body parts of the actor A1 and the actor A2.
  • In one embodiment, a plurality of infrared LEDs are mounted on each of the motion sensors attached to the actor A1 and the actor A2, and light from the infrared LEDs is sensed by infrared cameras provided on the floor and/or wall of the studio room R to detect the position and the orientation of each of the motion sensors. Visible light LEDs may be used instead of the infrared LEDs, and in this case light from the visible light LEDs may be sensed by visible light cameras to detect the position and the orientation of each of the motion sensors. As described above, a light emitting unit (for example, the infrared LED or visible light LED) may be provided in each of the plurality of motion sensors attached to the actor, and a light receiving unit (for example, the infrared camera or visible light camera) provided in the studio room R senses the light from the light emitting unit to detect the position and the orientation of each of the motion sensors.
  • In one embodiment, a plurality of reflective markers may be used instead of the motion sensors 31 a-31 f and the motion sensors 32 a-32 f. The reflective markers may be attached to the actor A1 and the actor A2 using an adhesive tape or the like. The position and orientation of each reflective marker can be estimated by capturing images of the actor A1 and the actor A2 to which the reflective markers are attached to generate captured image data and performing image processing on the captured image data.
  • The controller 33 a and the controller 33 b supply, to the server device 20, control signals that correspond to operation of the actor A1. Similarly, the controller 34 a and the controller 34 b supply, to the server device 20, control signals that correspond to operation of the actor A2.
  • The tracking sensor 36 a and the tracking sensor 36 b generate tracking information for determining configuration information of a virtual camera used for constructing a virtual space included in the video. The tracking information of the tracking sensor 36 a and the tracking sensor 36 b is calculated as a position in a three-dimensional orthogonal coordinate system and an angle around each axis. The position and orientation of the tracking sensor 36 a may be changed according to operation of the operator. The tracking sensor 36 a transmits the tracking information indicating the position and the orientation of the tracking sensor 36 a to the server device 20. Similarly, the position and the orientation of the tracking sensor 36 b may be set according to operation of the operator. The tracking sensor 36 b transmits the tracking information indicating the position and the orientation of the tracking sensor 36 b to the server device 20.
  • The camera 37 a is attached to the head of the actor A1 as described above. For example, the camera 37 a is disposed so as to capture an image of the face of the actor A1. The camera 37 a continuously captures images of the face of the actor A1 to obtain imaging data of the face of the actor A1. Similarly, the camera 38 a is attached to the head of the actor A2. The camera 38 a is disposed so as to capture an image of the face of the actor A2 and continuously capture images of the face of the actor A2 to obtain captured image data of the face of the actor A2. The camera 37 a transmits the captured image data of the face of the actor A1 to the server device 20, and the camera 38 a transmits the captured image data of the face of the actor A2 to the server device 20. The camera 37 a and the camera 38 a may be 3D cameras capable of detecting the depth of a face of a person.
  • The display 39 is configured to display information received from the supporter computer 40. The information transmitted from the supporter computer 40 to the display 39 may include, for example, text information, image information, and various other information. The display 39 is disposed at a position where the actor A1 and the actor A2 are able to see the display 39.
  • In the illustrated embodiment, the supporter computer 40 is installed in the room next to the studio room R. Since the room in which the supporter computer 40 is installed and the studio room R are separated by the glass window, an operator of the supporter computer 40 (sometimes referred to as a “supporter” in the specification) is able to see the actor A1 and the actor A2. In the illustrated embodiment, supporters B1 and B2 are present in the room as the operators of the supporter computer 40.
  • The supporter computer 40 may be configured to be capable of changing the setting(s) of the component(s) of the studio unit 30 according to the operation by the supporter B1 and the supporter B2. The supporter computer 40 can change, for example, the setting of the scanning interval performed by the base station 35 a and the base station 35 b, the position or orientation of the tracking sensor 36 a and the tracking sensor 36 b, and various settings of other devices. At least one of the supporter B1 and the supporter B2 is able to input a message to the supporter computer 40, and the input message is displayed on the display 39.
  • The components and functions of the studio unit 30 shown in FIG. 2 are merely examples. The studio unit 30 applicable to the invention may include various constituent elements that are not shown. For example, the studio unit 30 may include a projector. The projector is able to project a video distributed to the client device 10 a or another client device onto the screen S.
  • Next, information stored in the storage 23 in one embodiment will be described. In the illustrated embodiment, the storage 23 stores model data 23 a, object data 23 b, a possession list 23 c, a candidate list 23 d, and any other information required for generation and distribution of a video to be distributed.
  • The model data 23 a is model data for generating animation of a character. The model data 23 a may be three-dimensional model data for generating three-dimensional animation, or may be two-dimensional model data for generating two-dimensional animation. The model data 23 a includes, for example, rig data (also referred to as “skeleton data”) indicating a skeleton of a character, and surface data indicating the shape or texture of a surface of the character. The model data 23 a may include two or more different pieces of model data. The pieces of model data may each have different rig data, or may have the same rig data. The pieces of model data may have different surface data or may have the same surface data. In the illustrated embodiment, in order to generate a character object corresponding to the actor A1 and a character object corresponding to the actor A2, the model data 23 a includes at least two types of model data different from each other. The model data for the character object corresponding to the actor A1 and the model data for the character object corresponding to the actor A2 may have, for example, the same rig data but different surface data from each other.
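  • The relationship described above, in which two pieces of model data may share rig data while having different surface data, can be illustrated with a simple data shape. This is a hedged sketch only; the field names and file names are assumptions, not part of this disclosure.

```python
# Illustrative organization of the model data 23a: the characters for the
# two actors share one rig (skeleton) but use different surface data.
shared_rig = {"bones": ["hips", "spine", "head", "l_hand", "r_hand"]}

model_data = {
    "actor_A1_character": {"rig": shared_rig, "surface": "surface_a1.png"},
    "actor_A2_character": {"rig": shared_rig, "surface": "surface_a2.png"},
}

# The same rig object is referenced by both characters, so body motion
# data defined against that rig can drive either character.
same_rig = (
    model_data["actor_A1_character"]["rig"]
    is model_data["actor_A2_character"]["rig"]
)
```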
  • The object data 23 b includes asset data used for constructing a virtual space in the video. The object data 23 b includes data for rendering a background of the virtual space in the video, data for rendering various objects displayed in the video, and data for rendering any other objects displayed in the video. The object data 23 b may include object position information indicating the position of an object in the virtual space.
  • In addition to the above, the object data 23 b may include a gift object displayed in the video in response to a display request from viewing users of the client devices 10 a to 10 c. The gift object may include an effect object, a normal object, and a decorative object. Viewing users are able to purchase a desired gift object.
  • The effect object is an object that affects the impression of the entire viewing screen of the distributed video, and is, for example, an object representing confetti. The object representing confetti may be displayed on the entire viewing screen, which can change the impression of the entire viewing screen. The effect object may be displayed so as to overlap with the character object, but it is different from the decorative object in that it is not displayed in association with a specific portion of the character object.
  • The normal object is an object functioning as a digital gift from a viewing user to an actor (for example, the actor A1 or the actor A2), for example, an object resembling a stuffed toy or a bouquet. In one embodiment, the normal object is displayed on the display screen of the video such that it does not contact the character object. In one embodiment, the normal object is displayed on the display screen of the video such that it does not overlap with the character object. The normal object may be displayed in the virtual space such that it overlaps with an object other than the character object. The normal object may be displayed so as to overlap with the character object, but it is different from the decorative object in that it is not displayed in association with a specific portion of the character object. In one embodiment, when the normal object is displayed such that it overlaps with the character object, the normal object may hide portions of the character object other than the head including the face of the character object but does not hide the head of the character object.
  • The decorative object is an object displayed on the display screen in association with a specific part of the character object. In one embodiment, the decorative object displayed on the display screen in association with a specific part of the character object is displayed adjacent to the specific part of the character object on the display screen. In one embodiment, the decorative object displayed on the display screen in association with a specific part of the character object is displayed such that it partially or entirely covers the specific part of the character object on the display screen. The specific part may be specified by three-dimensional position information that indicates a position in a three-dimensional coordinate space, or the specific part may be associated with position information in the three-dimensional coordinate space. For example, a specific part in the head of a character may be specified in the units of the front left side, the front right side, the rear left side, the rear right side, the middle front side, and the middle rear side of the head, the left eye, the right eye, the left ear, the right ear, and the whole hair.
  • The decorative object is an object that can be attached to a character object, for example, an accessory (such as a headband, a necklace, an earring, etc.), clothes (such as a T-shirt), a costume, and any other object which can be attached to the character object. The object data 23 b corresponding to the decorative object may include attachment position information indicating which part of the character object the decorative object is associated with. The attachment position information of a decorative object may indicate to which part of the character object the decorative object is attached. For example, when the decorative object is a headband, the attachment position information of the decorative object may indicate that the decorative object is attached to the “head” of the character object. When the attachment position of a decorative object is specified as a position in a three-dimensional coordinate space, the attachment position information may be associated with a plurality of positions in the three-dimensional coordinate space. For example, the attachment position information that indicates the position to which a decorative object representing “a headband” is attached may be associated with two parts of “the rear left side of the head” and “the rear right side of the head” of the character object. In other words, the decorative object representing “a headband” may be attached to both “the rear left side of the head” and “the rear right side of the head.” When the decorative object is a T-shirt, the attachment position information of the decorative object may indicate that the decorative object is attached to the “torso” of the character object.
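  • The attachment position information described above amounts to a mapping from each decorative object to the part or parts of the character object with which it is associated. The following is a minimal sketch of that mapping; the object IDs and part names are illustrative assumptions, not identifiers from this disclosure.

```python
# Hypothetical attachment position information for decorative objects.
# A headband attaches to two parts of the head; a T-shirt to the torso.
decorative_objects = {
    "gift_headband": {"attach_to": ["rear_left_head", "rear_right_head"]},
    "gift_tshirt":   {"attach_to": ["torso"]},
}

def attachment_parts(object_id: str) -> list:
    """Parts of the character object this decorative object attaches to."""
    return decorative_objects[object_id]["attach_to"]
```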
  • A display duration may be set for each gift object depending on its type. In one embodiment, the duration of displaying the decorative object may be set longer than the duration of displaying the effect object and the duration of displaying the normal object. For example, the duration of displaying the decorative object may be set to 60 seconds, while the duration of displaying the effect object may be set to 5 seconds and the duration of displaying the normal object may be set to 10 seconds.
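  • Using the example values given above (5, 10, and 60 seconds), the per-type durations can be expressed as a simple lookup. The function name and timestamp convention are assumptions made for illustration only.

```python
# Example display durations per gift type, taken from the text above.
DISPLAY_DURATION_SECONDS = {"effect": 5, "normal": 10, "decorative": 60}

def display_until(gift_type: str, now: float) -> float:
    """Timestamp at which a gift of the given type stops being shown."""
    return now + DISPLAY_DURATION_SECONDS[gift_type]
```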
  • The possession list 23 c is a list showing gift objects possessed by viewing users of a video. An example of the possession list 23 c is shown in FIG. 3. As illustrated, in the possession list 23 c, an object ID for identifying a gift object possessed by a viewing user is stored in association with account information of the viewing user (for example, user ID of the viewing user). The viewing users include, for example, the first to third viewing users of the client devices 10 a to 10 c.
  • The candidate list 23 d is a list of decorative objects for which a display request has been made from a viewing user. As will be described later, a viewing user who possesses a decorative object(s) is able to make a request to display his/her own decorative objects. In the candidate list 23 d, object IDs for identifying decorative objects are stored in association with the account information of the viewing user who has made a request to display the decorative objects. The candidate list 23 d may be created for each distributor. The candidate list 23 d may be stored, for example, in association with distributor identification information that identify a distributor(s) (the actor A1, the actor A2, the supporter B1, and/or the supporter B2).
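  • The two lists described above can be sketched together: the possession list 23 c maps each viewing user to owned object IDs, and the candidate list 23 d collects display-requested decorative objects per distributor. All IDs below are made up, and the ownership check is one plausible reading of how a display request might be validated.

```python
# Illustrative shapes for the possession list 23c and candidate list 23d.
possession_list = {
    "user_1": ["obj_headband", "obj_confetti"],
    "user_2": ["obj_tshirt"],
}

# Decorative objects for which display requests were made, per distributor.
candidate_list = {
    "distributor_A1": [("user_1", "obj_headband")],
}

def request_display(user_id: str, object_id: str, distributor: str) -> bool:
    """Append to the candidate list only if the user owns the object."""
    if object_id in possession_list.get(user_id, []):
        candidate_list.setdefault(distributor, []).append((user_id, object_id))
        return True
    return False
```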
  • Functions realized by the computer processor 21 will be now described more specifically. The computer processor 21 functions as a body motion data generation unit 21 a, a face motion data generation unit 21 b, an animation generation unit 21 c, a video generation unit 21 d, a video distribution unit 21 e, a display request processing unit 21 f, a decorative object selection unit 21 g, and an object purchase processing unit 21 h by executing computer-readable instructions included in a distributed program. At least some of the functions that can be realized by the computer processor 21 may be realized by a computer processor other than the computer processor 21 of the video distribution system 1. For example, at least some of the functions realized by the computer processor 21 may be realized by a computer processor mounted on the supporter computer 40.
  • The body motion data generation unit 21 a generates first body motion data, which is a digital representation of the position and the orientation of each part of the body of the actor A1, based on detection result information of the corresponding motion sensors 31 a to 31 f, and generates second body motion data, which is a digital representation of the position and the orientation of each part of the body of the actor A2, based on detection result information of the corresponding motion sensors 32 a to 32 f. In the specification, the first body motion data and the second body motion data may be collectively referred to simply as “body motion data.” The body motion data is serially generated with time as needed. For example, the body motion data may be generated at predetermined sampling time intervals. Thus, the body motion data can represent body motions of the actor A1 and the actor A2 in time series as digital data. In the illustrated embodiment, the motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f are attached to the left and right limbs, the waist, and the head of the actor A1 and the actor A2, respectively. Based on the detection result information of the motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f, it is possible to digitally represent the position and orientation of the substantially whole body of the actor A1 and the actor A2 in time series. The body motion data can define, for example, the position and rotation angle of bones corresponding to the rig data included in the model data 23 a.
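  • The time-series sampling described above can be sketched as follows. The 1/60-second interval, the sample structure, and the sensor names are assumptions chosen for illustration; the disclosure only states that samples are taken at predetermined intervals.

```python
# Sketch: body motion data as a time series of per-part pose samples
# taken at a fixed interval (an assumed 60 Hz).
SAMPLE_INTERVAL = 1.0 / 60.0

def make_sample(t: float, sensors: dict) -> dict:
    """One body-motion sample: timestamp plus pose of each tracked part."""
    return {"t": t, "pose": dict(sensors)}

# Three consecutive samples for a single tracked part, (position, rotation).
samples = [
    make_sample(i * SAMPLE_INTERVAL,
                {"left_wrist": ((0.0, 1.0, 0.0), (0.0, 0.0, 0.0, 1.0))})
    for i in range(3)
]
```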
  • The face motion data generation unit 21 b generates first face motion data, which is a digital representation of motions of the face of the actor A1, based on captured image data of the camera 37 a, and generates second face motion data, which is a digital representation of motions of the face of the actor A2, based on captured image data of the camera 38 a. In the specification, the first face motion data and the second face motion data may be collectively referred to simply as “face motion data.” The face motion data is serially generated with time as needed. For example, the face motion data may be generated at predetermined sampling time intervals. Thus, the face motion data can digitally represent facial motions (changes in facial expression) of the actor A1 and the actor A2 in time series.
  • The animation generation unit 21 c is configured to apply the body motion data generated by the body motion data generation unit 21 a and the face motion data generated by the face motion data generation unit 21 b to predetermined model data included in the model data 23 a in order to generate an animation of a character object that moves in a virtual space and whose facial expression changes. More specifically, the animation generation unit 21 c may generate an animation of a character object moving in synchronization with the motion of the body and facial expression of the actor A1 based on the first body motion data and the first face motion data related to the actor A1, and generate an animation of a character object moving in synchronization with the motion of the body and facial expression of the actor A2 based on the second body motion data and the second face motion data related to the actor A2. In the specification, a character object generated based on the motion and expression of the actor A1 may be referred to as a “first character object”, and a character object generated based on the motion and expression of the actor A2 may be referred to as a “second character object.”
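  • The combining step described above, in which body motion data and face motion data are applied to a piece of model data to produce a character animation, can be sketched per frame. This is an illustration of the idea only; the actual unit 21 c is not specified at this level of detail, and every field name here is assumed.

```python
# Sketch: each output frame of the character animation combines one body
# motion sample and one face motion sample with the character's model data.
def animate_frame(model: dict, body_sample: dict, face_sample: dict) -> dict:
    return {
        "rig": model["rig"],
        "surface": model["surface"],
        "pose": body_sample["pose"],            # drives body motion
        "expression": face_sample["expression"],  # drives facial expression
    }

frame = animate_frame(
    {"rig": "rig_a1", "surface": "surface_a1"},
    {"pose": {"head": (0.0, 1.6, 0.0)}},
    {"expression": "smile"},
)
```

Running this per sampling interval for each actor yields the first and second character objects moving in synchronization with the respective actor.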
  • The video generation unit 21 d constructs a virtual space using the object data 23 b, and generates a video that includes the virtual space, the animation of the first character object corresponding to the actor A1, and the animation of the second character object corresponding to the actor A2. The first character object is disposed in the virtual space so as to correspond to the position of the actor A1 with respect to the tracking sensor 36 a, and the second character object is disposed in the virtual space so as to correspond to the position of the actor A2 with respect to the tracking sensor 36 a. Thus, it is possible to change the position and the orientation of the first character object and the second character object in the virtual space by changing the position or the orientation of the tracking sensor 36 a.
  • In one embodiment, the video generation unit 21 d constructs a virtual space based on tracking information of the tracking sensor 36 a. For example, the video generation unit 21 d determines configuration information (the position in the virtual space, a gaze position, a gazing direction, and the angle of view) of the virtual camera based on the tracking information of the tracking sensor 36 a. Moreover, the video generation unit 21 d determines a rendering area in the entire virtual space based on the configuration information of the virtual camera and generates moving image information for displaying the rendering area in the virtual space.
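  • The rendering-area determination described above can be illustrated with a deliberately simplified visibility test: given a camera position, gaze direction, and angle of view, decide whether a point falls inside the viewed region. A 2D angle test stands in for full frustum culling here, and all names and values are assumptions.

```python
import math

# Sketch: is a point within the virtual camera's angle of view?
# camera_dir is assumed to be a unit vector in the horizontal plane.
def in_view(camera_pos, camera_dir, angle_of_view_deg, point) -> bool:
    vx, vy = point[0] - camera_pos[0], point[1] - camera_pos[1]
    norm = math.hypot(vx, vy) or 1.0
    dot = (vx * camera_dir[0] + vy * camera_dir[1]) / norm
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    return angle <= angle_of_view_deg / 2

visible = in_view((0, 0), (0, 1), 90.0, (0, 5))   # straight ahead
```

Changing the tracking sensor's position or orientation changes these camera parameters, which in turn changes which region of the virtual space is rendered into the video.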
  • The video generation unit 21 d may be configured to determine the position and the orientation of the first character object and the second character object in the virtual space, and the configuration information of the virtual camera based on tracking information of the tracking sensor 36 b instead of or in addition to the tracking information of the tracking sensor 36 a.
  • The video generation unit 21 d is able to include, in the generated video, the voices of the actor A1 and the actor A2 collected by the microphones in the studio unit 30.
  • As described above, the video generation unit 21 d generates an animation of the first character object moving in synchronization with the motion of the body and facial expression of the actor A1, and an animation of the second character moving in synchronization with the motion of the body and facial expression of the actor A2. The video generation unit 21 d then includes the voices of the actor A1 and the actor A2 with the animations respectively to generate a video for distribution.
  • The video distribution unit 21 e distributes the video generated by the video generation unit 21 d. The video is distributed to the client devices 10 a to 10 c and other client devices over the network 50. The received video is reproduced in the client devices 10 a to 10 c.
  • The video may be distributed to a client device (not shown) installed in the studio room R, and projected from the client device onto the screen S via a short focus projector. The video may also be distributed to the supporter computer 40. In this way, the supporter B1 and the supporter B2 can check the viewing screen of the distributed video.
  • An example of the screen on which the video distributed from the server device 20 to the client device 10 a and reproduced by the client device 10 a is displayed is illustrated in FIG. 5. As shown, a display image 70 of the video distributed from the server device 20 is displayed on the display of the client device 10 a. The display image 70 displayed on the client device 10 a includes, in a virtual space, a character object 71A corresponding to the actor A1, a character object 71B corresponding to the actor A2, and a table object 72 a representing a table. The table object 72 a is not a gift object, but is one of the objects used for constructing the virtual space that are included in the object data 23 b. The character object 71A is generated by applying the first body motion data and the first face motion data of the actor A1 to the model data for the actor A1 included in the model data 23 a. The character object 71A is motion-controlled based on the first body motion data and the first face motion data. The character object 71B is generated by applying the second body motion data and the second face motion data of the actor A2 to the model data for the actor A2 included in the model data 23 a. The character object 71B is motion-controlled based on the second body motion data and the second face motion data. Thus, the character object 71A is controlled to move in the screen in synchronization with the motions of the body and facial expression of the actor A1, and the character object 71B is controlled to move in the screen in synchronization with the motions of the body and facial expression of the actor A2.
  • As described above, the video from the server device 20 may be distributed to the supporter computer 40. The video distributed to the supporter computer 40 is displayed on the supporter computer 40 in the same manner as FIG. 5. The supporter B1 and the supporter B2 are able to change the configurations of the components of the studio unit 30 while viewing the video reproduced by the supporter computer 40. In one embodiment, when the supporter B1 and the supporter B2 wish to change the angle of the character object 71A and the character object 71B in the video being distributed, they can cause an instruction signal for changing the orientation of the tracking sensor 36 a to be sent from the supporter computer 40 to the tracking sensor 36 a. The tracking sensor 36 a is able to change its orientation in accordance with the instruction signal. For example, the tracking sensor 36 a may be rotatably attached to a stand via a pivoting mechanism that includes an actuator disposed around the axis of the stand. When the tracking sensor 36 a receives an instruction signal instructing it to change its orientation, the actuator of the pivoting mechanism may be driven based on the instruction signal, and the tracking sensor 36 a may be turned by an angle according to the instruction signal. In one embodiment, the supporter B1 and the supporter B2 may cause the supporter computer 40 to transmit, to the tracking sensor 36 a and the tracking sensor 36 b, an instruction to use the tracking information of the tracking sensor 36 b instead of the tracking information of the tracking sensor 36 a.
  • In one embodiment, when the supporter B1 and the supporter B2 determine that some instructions are needed for the actor A1 or the actor A2 as they are viewing the video reproduced on the supporter computer 40, they may input a message indicating the instruction(s) into the supporter computer 40, and the message may be output to the display 39. For example, the supporter B1 and the supporter B2 can instruct the actor A1 or the actor A2 to change his/her standing position through the message displayed on the display 39.
  • The display request processing unit 21 f receives a display request to display a gift object from a client device of a viewing user, and performs processing according to the display request. Each viewing user is able to transmit a display request to display a gift object to the server device 20 by operating his/her client device. For example, the first viewing user can transmit a display request to display a gift object to the server device 20 by operating the client device 10 a. The display request to display a gift object may include the user ID of the viewing user and the identification information (object ID) that identifies the object for which the display request is made.
  • As described above, the gift object may include the effect object, the normal object, and the decorative object. The effect object and the normal object are examples of the first object. In addition, a display request for requesting display of the effect object or the normal object is an example of a second display request. Upon receipt of a display request to display a gift object from a client device of a viewing user, the display request processing unit 21 f may determine what type of gift object the request is requesting to display. For example, the display request processing unit 21 f may determine which of the effect object, the normal object, or the decorative object the display request is requesting to display. The display request processing unit 21 f may determine what type of gift object the request is requesting to display based on the object ID included in the display request.
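The type determination described in the preceding paragraph can be illustrated with a minimal sketch, assuming a hypothetical lookup table from object IDs to gift object types. All object IDs and names below are illustrative and are not part of the specification:

```python
# Hypothetical mapping from object IDs to gift object types. The real
# system would derive this from the object data; these IDs are made up.
GIFT_OBJECT_TYPES = {
    "obj-confetti": "effect",
    "obj-stuffed-bear": "normal",
    "obj-headband": "decorative",
}


def classify_display_request(display_request: dict) -> str:
    """Return the gift object type ("effect", "normal", or "decorative")
    for the object ID contained in the display request."""
    object_id = display_request["object_id"]
    return GIFT_OBJECT_TYPES[object_id]
```

A display request carrying a user ID and an object ID would then be routed to the effect/normal handling or to the decorative-object candidate list according to the returned type.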
  • In one embodiment, when the display request processing unit 21 f receives a display request to display a specific effect object from a viewing user, the display request processing unit 21 f performs a process, in response to the display request, to display the effect object for which the display request is made in the display image 70 of the video. For example, when a display request to display an effect object simulating confetti is made, the display request processing unit 21 f displays in the display image 70 an effect object 73 simulating confetti based on the display request as shown in FIG. 6.
  • In one embodiment, when the display request processing unit 21 f receives a display request to display a specific normal object from a viewing user, the display request processing unit 21 f performs a process, in response to the display request, to display the normal object for which the display request is made in the display image 70 of the video. For example, when a display request to display a normal object simulating a stuffed bear is made, the display request processing unit 21 f displays a normal object 74 simulating a stuffed bear in the display image 70 based on the display request as shown in FIG. 6.
  • The display request for the normal object 74 may include a display position specifying parameter for specifying the display position of the normal object 74 in the virtual space. In this case, the display request processing unit 21 f displays the normal object 74 at the position in the virtual space specified by the display position specifying parameter. For example, the display position specifying parameter may specify the upper position of the table object 72 a representing a table as the display position of the normal object 74. A viewing user is able to specify the position where the normal object is to be displayed by using the display position specifying parameter while watching the layouts of the character object 71A, the character object 71B, the gift object, and other objects included in the video 70.
  • In one embodiment, the normal object 74 may be displayed such that it moves within the display image 70 of the video. For example, the normal object 74 may be displayed such that it falls from the top to the bottom of the screen. In this case, the normal object 74 may be displayed in the display image 70 during the fall, which is from when the object starts to fall and to when the object has fallen to the floor of the virtual space of the video 70, and may disappear from the display image 70 after it has fallen to the floor. A viewing user can view the falling normal object 74 from the start of the fall to the end of the fall. The moving direction of the normal object 74 in the screen can be specified as desired. For example, the normal object 74 may be displayed in the display image 70 so as to move from the left to the right, the right to the left, the upper left to the lower left, or any other direction in the video 70. The normal object 74 may move on various paths. For example, the normal object 74 can move on a linear path, a circular path, an elliptical path, a spiral path, or any other paths. The viewing user may include, in the display request to display the normal object, a moving direction parameter that specifies the moving direction of the normal object 74 and/or a path parameter that specifies the path on which the normal object 74 moves, in addition to or in place of the display position specifying parameter. In one embodiment, among the effect objects and the normal objects, those whose size in the virtual space is smaller than a reference size (for example, a piece of paper of confetti of the effect object 73) may be displayed such that a part or all of the object(s) is overlapped with the character object 71A and/or the character object 71B. 
In one embodiment, among the effect objects and the normal objects, those whose size in the virtual space is larger than the reference size (for example, the normal object 74 (the stuffed bear)) may be displayed at a position where the object does not overlap with the character object. In one embodiment, if an effect object or a normal object whose size in the virtual space is larger than the reference size (for example, the normal object 74 (the stuffed bear)) overlaps with the character object 71A and/or the character object 71B, the object is displayed behind the overlapping character object.
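The size-based display rule described above might be sketched as follows, assuming a hypothetical reference size and illustrative return values (none of these names appear in the specification):

```python
REFERENCE_SIZE = 1.0  # illustrative reference size in virtual-space units


def overlap_policy(object_size: float) -> str:
    """Decide how an effect or normal object may be drawn relative to a
    character object, per the size-based rule sketched above."""
    if object_size < REFERENCE_SIZE:
        # Small objects (e.g. a piece of confetti) may partly or wholly
        # overlap the character objects in front of them.
        return "may_overlap_in_front"
    # Larger objects (e.g. the stuffed bear) are placed so as not to
    # overlap, or are drawn behind the overlapping character object.
    return "behind_or_non_overlapping"
```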
  • In one embodiment, when the display request processing unit 21 f receives a display request to display a specific decorative object from a viewing user, the display request processing unit 21 f adds the decorative object for which the display request is made to the candidate list 23 d based on the display request. The display request to display the decorative object is an example of a first display request. For example, the display request processing unit 21 f may store, in the candidate list 23 d, identification information (object ID) identifying the specific decorative object for which the display request has been made from the viewing user, in association with the user ID of the viewing user (see FIG. 4). When more than one display request to display a decorative object is made, for each of the display requests, the user ID of the viewing user who made the display request and the decorative object ID of the decorative object for which the display request is made by the viewing user are associated with each other and stored in the candidate list 23 d.
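The handling of such a first display request (adding the requested decorative object to the candidate list rather than displaying it immediately) can be sketched minimally as below; the candidate list is modeled here as a plain list of user ID/object ID pairs, and all names are illustrative:

```python
# Illustrative stand-in for the candidate list 23d: each entry associates
# the requesting viewer's user ID with the requested decorative object ID.
candidate_list = []


def handle_decorative_display_request(user_id: str, object_id: str) -> None:
    """Record a decorative-object display request in the candidate list
    instead of displaying the object in the video right away."""
    candidate_list.append({"user_id": user_id, "object_id": object_id})
```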
  • In one embodiment, in response to one or more of the decorative objects included in the candidate list 23 d being selected, the decorative object selection unit 21 g performs a process to display the selected decorative object in the display image 70 of the video. In the specification, a decorative object selected from the candidate list 23 d may be referred to as a “selected decorative object”.
  • The selection of the decorative object from the candidate list 23 d is made, for example, by the supporter B1 and/or the supporter B2 who operate the supporter computer 40. In one embodiment, the supporter computer 40 displays a decorative object selection screen. FIG. 8 shows an example of a decorative object selection screen 80 in one embodiment. The decorative object selection screen 80 is displayed, for example, on the display of the supporter computer 40. The decorative object selection screen 80 shows, for example, each of the plurality of decorative objects included in the candidate list 23 d in a tabular form. As illustrated, the decorative object selection screen 80 in one embodiment includes a first column 81 showing the type of the decorative object, a second column 82 showing the image of the decorative object, and a third column 83 showing the body part of a character object associated with the decorative object. Further, on the decorative object selection screen 80, selection buttons 84 a to 84 c for selecting each decorative object are displayed. Thus, the decorative object selection screen 80 displays decorative objects that can be selected as the selected decorative object.
  • The supporters B1 and B2 are able to select one or more of the decorative objects shown on the decorative object selection screen 80. For example, the supporter B1 and the supporter B2 are able to select a headband by selecting the selection button 84 a. When it is detected by the decorative object selection unit 21 g that the headband is selected, the display request processing unit 21 f displays the selected decorative object 75 that simulates the selected headband on the display screen 70 of the video, as shown in FIG. 7. The selected decorative object 75 is displayed on the display image 70 in association with a specific body part of a character object. The selected decorative object 75 may be displayed such that it contacts the specific body part of the character object. For example, since the selected decorative object 75 simulating the headband is associated with the head of the character object, it is attached to the head of the character object 71A as shown in FIG. 7. The decorative object may be displayed on the display screen 70 such that it moves along with the motion of the specific part of the character object. For example, when the head of the character object 71A with the headband moves, the selected decorative object 75 simulating the headband moves in accordance with the motion of the head of the character object 71A as if the headband were attached to the head of the character object 71A.
  • As described above, the object data 23 b may include attachment position information indicating which part of the character object the decorative object is associated with. In one embodiment, the decorative object selection unit 21 g may prohibit selection of a decorative object included in the candidate list 23 d as the selected decorative object 75, if the decorative object is to be attached to a body part that overlaps with the body part indicated by the attachment position information of another decorative object already attached to the character object. For example, a headband associated with “the rear left side of the head” and “the rear right side of the head” and a hair accessory associated with “the rear left side of the head” cannot be attached at the same time since these decorative objects overlap with each other in “the rear left side of the head.” In contrast, a headband associated with “the rear left side of the head” and “the rear right side of the head” and an earring associated with “the left ear (of the head)” and “the right ear (of the head)” can be attached at the same time since these decorative objects do not overlap with each other in any specific body part of a character object.
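The attachment-position conflict rule above can be illustrated by representing each decorative object's attachment position information as a set of body parts: any shared part blocks simultaneous attachment. A minimal sketch, with all names and body-part strings illustrative:

```python
def can_attach(new_parts: set, attached: list) -> bool:
    """Return True if a decorative object occupying `new_parts` can be
    attached alongside the already-attached decorative objects, i.e. no
    body part overlaps with any of them."""
    for parts in attached:
        if new_parts & parts:  # any shared body part blocks selection
            return False
    return True


# Illustrative attachment position information for three decorative objects.
headband = {"rear left of head", "rear right of head"}
hair_accessory = {"rear left of head"}
earrings = {"left ear", "right ear"}
```

With the headband already attached, the hair accessory is blocked (shared "rear left of head") while the earrings remain selectable.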
  • The selected decorative object 75 may be displayed on the display screen 70 in association with the character object 71B instead of the character object 71A. Alternatively, the selected decorative object 75 may be displayed on the display screen 70 in association with the character object 71A and the character object 71B.
  • In one embodiment, the decorative object selection screen 80 may be configured to exclude information identifying a user who holds the decorative object or a user who has made a display request to display the decorative object. By configuring the decorative object selection screen 80 in this manner, it is possible to prevent a selector from giving preference to a particular user when selecting a decorative object.
  • In one embodiment, the decorative object selection screen 80 may display, for each decorative object, information regarding a user who holds the decorative object or a user who made a display request for the decorative object. Such information displayed for each decorative object may include, for example, the number of times the user who made the display request for the decorative object has made display requests for the decorative object so far and the number of times the decorative object has been actually selected (for example, information indicating that the display request to display the decorative object has been made five times and the decorative object has been selected two times among the five times), the number of times the user has viewed the video of the character object 71A and/or the character object 71B, the number of times the user has viewed videos (regardless of whether the character object 71A and/or the character object 71B appears in the videos or not), the amount of money which the user spent for the gift object, the number of times the user has purchased the objects, the points possessed by the user that can be used in the video distribution system 1, the level of the user in the video distribution system 1, and any other information about the user who made the display request to display the respective decorative object. According to this embodiment, it is possible to select the decorative object based on the behavior and/or the viewing history of the user who has made the display request for the decorative object in the video distribution system 1.
  • In one embodiment, a constraint(s) may be imposed on the display of decorative objects to eliminate overlapping. For example, with regard to the character object 71A, if a decorative object associated with the specific body part of the character object is already selected, selection of other decorative objects associated with the specific body part may be prohibited. As shown in the embodiment of FIG. 7, when the headband associated with the "head" of the character object 71B is already selected, the other decorative objects associated with the "head" (for example, a decorative object simulating a "hat" associated with the head) are not displayed on the decorative object selection screen 80, or a selection button for selecting the decorative object simulating the hat is disabled on the decorative object selection screen 80. According to this embodiment, it is possible to prevent the decorative object from being displayed so as to overlap with a specific part of the character object.
  • The decorative object selection screen 80 may be displayed on another device instead of or in addition to the supporter computer 40. For example, the decorative object selection screen 80 may be displayed on the display 39 and/or the screen S in the studio room R. In this case, the actor A1 and the actor A2 are able to select a desired decorative object based on the decorative object selection screen 80 displayed on the display 39 or the screen S. Selection of the decorative object by the actor A1 and the actor A2 may be made, for example, by operating the controller 33 a, the controller 33 b, the controller 34 a, or the controller 34 b.
  • In one embodiment, in response to a request from a viewing user of the video, the object purchase processing unit 21 h transmits, to a client device of the viewing user (for example, the client device 10 a), purchase information of each of the plurality of gift objects that can be purchased in relation to the video. The purchase information of each gift object may include the type of the gift object (the effect object, the normal object, or the decorative object), the image of the gift object, the price of the gift object, and any other information necessary to purchase the gift object. The viewing user is able to select a gift object to purchase in consideration of the gift object purchase information displayed on the client device 10 a. The selection of the gift objects to be purchased may be performed by operating the client device 10 a. When a gift object to be purchased is selected by the viewing user, a purchase request for the gift object is transmitted to the server device 20. The object purchase processing unit 21 h performs a payment process based on the purchase request. When the payment process is completed, the purchased gift object is held by the viewing user. In this case, the object ID of the purchased gift object is stored in the possession list 23 c in association with the user ID of the viewing user who purchased the object.
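The purchase flow (payment process followed by recording the object in the possession list) might be sketched as below. The `pay` callable stands in for the actual payment process, and the dictionary stands in for the possession list 23 c; all names are illustrative:

```python
# Illustrative stand-in for the possession list 23c:
# user ID -> list of gift object IDs held by that user.
possession_list = {}


def purchase_gift_object(user_id: str, object_id: str, pay) -> bool:
    """Run the payment step, then record the purchased gift object as held
    by the viewing user. Returns False if payment did not complete."""
    if not pay(user_id, object_id):
        return False
    possession_list.setdefault(user_id, []).append(object_id)
    return True
```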
  • Gift objects that can be purchased may be different for each video. The gift objects may be made purchasable in two or more different videos. That is, the purchasable gift objects may include a gift object unique to each video and a common gift object that can be purchased in multiple videos. For example, the effect object that simulates confetti may be the common gift object that can be purchased in the two or more different videos.
  • In one embodiment, when a user purchases an effect object while viewing a video, the purchased effect object may be displayed automatically in the video that the user is viewing in response to completion of the payment process for purchasing the effect object. In the same manner, when a user purchases a normal object while viewing a video, the purchased normal object may be automatically displayed in the video that the user is viewing in response to completion of the payment process for purchasing the normal object.
  • In another embodiment, in response to completion of the payment process performed by the object purchase processing unit 21 h for the effect object to be purchased, a notification of the completion of the payment process may be sent to the client device 10 a, and a confirmation screen may be displayed to confirm whether the viewing user wants to make a display request to display the purchased effect object on the client device 10 a. When the viewing user selects to make the display request for the purchased effect object, the display request to display the purchased effect object may be sent from the client device of the viewing user to the display request processing unit 21 f, and the display request processing unit 21 f may perform the process to display the purchased effect object in the display image 70 of the video. Even when the normal object is to be purchased, a confirmation screen may be displayed on the client device 10 a to confirm whether the viewing user wants to make a display request to display the purchased normal object, in the same manner as above.
  • Next, with reference to FIGS. 9 to 11, a video distribution process in one embodiment will be described. FIG. 9 is a flow chart showing a flow of a video distribution process in one embodiment, FIG. 10 is a flowchart of a process for displaying a normal object according to one embodiment, and FIG. 11 is a flowchart of a process for displaying a decorative object according to one embodiment. In the video distribution process, it is assumed that the actor A1 and the actor A2 are giving performances in the studio room R.
  • First, in step S11, body motion data, which is a digital representation of the body motions of the actor A1 and the actor A2, and face motion data, which is a digital representation of the facial motions (expression) of the actor A1 and the actor A2, are generated. Generation of the body motion data is performed, for example, by the body motion data generation unit 21 a described above, and generation of the face motion data is performed, for example, by the face motion data generation unit 21 b described above.
  • Next, in step S12, the body motion data and the face motion data of the actor A1 are applied to the model data for the actor A1 to generate animation of the first character object that moves in synchronization with the motions of the body and facial expression of the actor A1. Similarly, the body motion data and the face motion data of the actor A2 are applied to the model data for the actor A2 to generate animation of the second character object that moves in synchronization with the motions of the body and facial expression of the actor A2. The generation of the animation is performed, for example, by the above-described animation generation unit 21 c.
  • Next, in step S13, a video including the animation of the first character object corresponding to the actor A1 and the animation of the second character object corresponding to the actor A2 is generated. The voices of the actor A1 and the actor A2 may be included in the video. The animation of the first character object and the animation of the second character object may be provided in the virtual space. Generation of the video is performed, for example, by the above-described video generation unit 21 d.
  • Next, the process proceeds to step S14 where the video generated in step S13 is distributed. The video is distributed to the client devices 10 a to 10 c and other client devices over the network 50. The video may be distributed to the supporter computer 40 and/or may be projected on the screen S in the studio room R. The video is distributed continuously over a predetermined distribution period. The distribution period of the video may be set to, for example, 30 seconds, 1 minute, 5 minutes, 10 minutes, 30 minutes, 60 minutes, 120 minutes, and any other length of time.
  • Subsequently in step S15, it is determined whether a termination condition for ending the distribution of the video is satisfied. The termination condition is, for example, that the distribution ending time has come, that the supporter computer 40 has issued an instruction to end the distribution, or any other conditions. If the termination condition is not satisfied, the steps S11 to S14 of the process are repeatedly executed, and distribution of the video including the animation synchronized with the movements of the actor A1 and the actor A2 is continued. When it is determined that the termination condition is satisfied for the video, the distribution process of the video is ended.
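Steps S11 through S15 form a loop that repeats until the termination condition holds. A minimal sketch of that loop follows; every callable is a stand-in for the corresponding unit described above (motion data generation, animation generation, video generation, distribution, and the termination check), and none of these names come from the specification:

```python
def distribute(capture_motion, animate, compose, send, should_stop):
    """Repeat steps S11-S14 until the termination condition of step S15
    is satisfied. All arguments are illustrative stand-ins."""
    while not should_stop():               # S15: termination condition
        body, face = capture_motion()      # S11: body/face motion data
        animation = animate(body, face)    # S12: character animation
        video_frame = compose(animation)   # S13: video generation
        send(video_frame)                  # S14: distribution
```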
  • Next, with further reference to FIG. 10, a description is given of the display process of the normal object that is performed while a video is distributed. The display process of the normal object is performed in parallel with the distribution process of the video shown in FIG. 9.
  • In step S21, it is determined whether a display request for a normal object has been made while a video is distributed. For example, the first viewing user may select one or more specific normal objects from his/her own normal objects and send a display request to display the selected normal objects from the client device 10 a to the server device 20. As described above, a display request for a normal object may be generated in response to the purchase process or the payment process performed for the normal object. Step S21 may be performed by the display request processing unit 21 f described above.
  • When a display request for the normal object has been made, the display process proceeds to step S22. Step S22 is a process for displaying in the video being distributed the normal object for which the display request is made, based on the display request. For example, when a display request for the normal object 74 is made while a video is distributed, the normal object 74 for which the display request is made is displayed in the display screen 70 of the video, as shown in FIG. 6.
  • When no display request is made for a normal object, the display process of the normal object is ended. The display process of the normal object shown in FIG. 10 is performed repeatedly in the distribution period of the video.
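Steps S21 and S22 can be sketched as a simple drain of pending display requests; when no request is pending, the process simply ends (all names illustrative):

```python
def process_normal_object_requests(pending: list, display) -> None:
    """Steps S21-S22: display each requested normal object in the video
    being distributed; a no-op when no display request was made."""
    while pending:
        request = pending.pop(0)
        display(request["object_id"])
```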
  • The display process of the effect object is performed by the same procedure as described above for the normal object. For example, when a display request for the effect object 73 is made while a video is distributed, the effect object 73 for which the display request is made is displayed in the display screen 70 of the video, as shown in FIG. 6. The effect object 73 shown in FIG. 6 simulates confetti. The effect object 73 that simulates confetti may be displayed so as to overlap (or contact) with the character object 71A and the character object 71B, but it is different from the decorative object in that it is not displayed in association with a specific portion of the character object 71A and the character object 71B.
  • Next, with further reference to FIG. 11, a description is given of the display process of the decorative object that is performed while a video is distributed. The display process of the decorative object is performed in parallel with the distribution process of the video shown in FIG. 9. It is also possible that the display process of the decorative object is performed in parallel with the display process of the normal object shown in FIG. 10.
  • In step S31, it is determined whether a display request for a decorative object has been made while a video is distributed. For example, the first viewing user may select a first decorative object from his/her own decorative objects and send a display request to display the selected first decorative object from the client device 10 a to the server device 20. Step S31 may be performed by the display request processing unit 21 f described above.
  • When a display request for the first decorative object has been made, the display process proceeds to step S32. In step S32, the first decorative object for which the display request has been made is added to the candidate list based on the display request. The candidate list is a list of candidate objects for a decorative object to be displayed in the video being distributed, and one example of the candidate list is the candidate list 23 d described above.
  • Next, in step S33, it is determined whether a specific decorative object has been selected from the decorative objects included in the candidate list.
  • When a specific decorative object has been selected, the display process proceeds to step S34, where the specific decorative object that has been selected ("the selected decorative object") is removed from the candidate list and the selected decorative object is displayed in the display image of the video being distributed. For example, when the decorative object 75 is selected from the candidate list while the video shown in FIG. 5 is distributed, the decorative object 75 that has been selected is displayed in the display image 70, as shown in FIG. 7. If the first decorative object for which the display request was made in step S31 is selected from the candidate list while the video is distributed, the first decorative object is displayed in the display image 70, and if not selected, it is not displayed in the display image 70.
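Steps S33 and S34 can be sketched as below: the chosen entry is removed from the candidate list and displayed, while unselected entries remain in the list (all names illustrative):

```python
def select_decorative_object(candidate_list: list, object_id: str, display):
    """Steps S33-S34: remove the selected decorative object from the
    candidate list and display it; return None if no entry matched."""
    for entry in candidate_list:
        if entry["object_id"] == object_id:
            candidate_list.remove(entry)
            display(entry)
            return entry
    return None
```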
  • When no decorative object is selected from the candidate list in step S33 or the display process of the selected decorative object is completed in step S34, the display process of the decorative object proceeds to step S35. In step S35, it is determined whether the distribution of the video being distributed is completed. The determination made in step S35 may be based on the same criterion as in step S15, for example. When it is determined in step S35 that the distribution is not completed, the display process of the decorative object returns to step S31 and then repeats steps S31 to S35. When it is determined that the distribution is completed, the display process of the decorative object proceeds to step S36.
  • The process performed in step S36 is related to the decorative objects that remain in the candidate list when the distribution of the video is completed (these decorative objects may be herein referred to as “non-selected objects”). The process performed in step S36 may be herein referred to as the non-selected object process.
  • A non-selected object is an object which was purchased by a viewing user and for which a display request was made while a video is distributed. Therefore, the non-selected object process performed in step S36 may be a process to refund the expense for purchasing the non-selected object to the viewing user who made the display request for the non-selected object. In another embodiment, the non-selected object process may be a process to cancel the payment process for purchasing the non-selected object. In another embodiment, the non-selected object process may be a process to provide the viewing user who made the display request for the non-selected object with a decorative object that is different from the non-selected decorative object.
  • In another embodiment, the non-selected object process may be a process to provide the user who purchased the non-selected object with points that can be used in the video distribution system 1, instead of refunding the purchase expense or canceling the payment process. The video distribution system 1 may be configured such that users consume points to view videos. The points provided to the user who possesses the non-selected object in the non-selected object process may be usable for viewing videos in the video distribution system 1.
  • In another embodiment, the non-selected object process may be a process to add, to the possession list, the non-selected object as an object possessed by the first viewing user. Thus, the non-selected object can be returned to the first viewing user.
  • In another embodiment, the non-selected object process may be a process to retain the candidate list as of the end of the video distribution until the next time the same distributor distributes a video. Thus, in the next video distribution, the distributor can reuse the candidate list used in the previous video distribution. The reused candidate list includes the decorative object for which a display request was made in the previous video distribution and which was not actually displayed in the video (that is, the non-selected object). Thus, the next video distribution can be performed using the candidate list including the non-selected object that was not selected in the previous video distribution. The non-selected object may be selected and displayed in a video in the next video distribution.
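The alternative non-selected object processes described in the paragraphs above might be sketched as a simple dispatch over one leftover candidate-list entry. Each mode returns a short description of the action taken; the mode names and strings are illustrative, not from the specification:

```python
def process_non_selected(entry: dict, mode: str) -> str:
    """Step S36: apply one of the non-selected object process variants to
    a decorative object left in the candidate list at the end of
    distribution. All mode names are illustrative."""
    actions = {
        "refund": "refund purchase price to " + entry["user_id"],
        "cancel_payment": "cancel payment by " + entry["user_id"],
        "grant_points": "grant usable points to " + entry["user_id"],
        "return_object": "add " + entry["object_id"] + " back to the possession list",
        "carry_over": "keep " + entry["object_id"] + " in the next candidate list",
    }
    return actions[mode]
```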
  • After the process of step S36 is completed, the display process of the decorative object is ended.
  • In one embodiment, there may be provided a no-display period during which display of a gift object in a distributed video is prohibited. FIG. 12 is a schematic diagram for describing the no-display period. FIG. 12 shows that a video is distributed between the time t1 and the time t2. In other words, the time t1 is the start time of the video distribution, and the time t2 is the end time of the video distribution. In the time period for the video distribution, the time period between the time t3 and the time t4 is the no-display period 91. When a display request r1 for a gift object is made in the no-display period 91, the gift object is not displayed in the display image of the video during the no-display period 91. More specifically, when a display request for an effect object or a normal object among the gift objects is made in the no-display period 91, the effect object or the normal object for which the display request is made is not displayed in the distributed video during the no-display period 91, and this object is displayed in the video at a time after the end of the no-display period 91 (that is, after the time t4). When a decorative object is selected from the candidate list during the no-display period 91, the selected decorative object is not displayed in the distributed video during the no-display period 91 and is displayed in the video at a time after the end of the no-display period 91. The display request for the decorative object may be received in the no-display period 91. When a display request for the decorative object is made in the no-display period 91, the decorative object for which the display request is made may be added to the candidate list during the no-display period 91.
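The gating of effect and normal objects by the no-display period might be sketched as below, with times given as plain numbers corresponding to t3 (start) and t4 (end) of the no-display period 91 (all names illustrative):

```python
def display_time(request_time: float, no_display_start: float,
                 no_display_end: float) -> float:
    """Return when a requested effect/normal object may actually appear:
    immediately, unless the request falls within the no-display period,
    in which case display is deferred to the end of that period (t4)."""
    if no_display_start <= request_time < no_display_end:
        return no_display_end
    return request_time
```

A decorative-object display request received during the period would similarly be added to the candidate list right away, with actual display deferred until after the period ends.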
  • In the above embodiment, the gift objects include three types of objects: the decorative object, the normal object, and the effect object. Among them, only the decorative object is displayed in association with a character object. In a video containing an animation of a character object, the animation of the character object is an element that attracts viewing users' attention. For example, in the video shown in FIGS. 5 to 7, it is presumed that the character object 71A and the character object 71B attract attention. In the above embodiment, even when a display request is made for a decorative object to be displayed in association with the character object 71A and the character object 71B, the decorative object may be kept from being displayed in the video until it is selected from the candidate list 23 d, so as to prevent decorative objects from being displayed in a disorderly manner around or over the character objects. This keeps the viewing experience of the viewing users from deteriorating.
  • In the conventional video distribution systems, any type of gift object was displayed in a video in response to a display request for the gift object. Therefore, if gift objects are allowed to be displayed in a video in an overlapping manner, a large number of gift objects may be displayed in the video, resulting in a deteriorated viewing experience for the users viewing the video. In the above embodiment, the gift objects include the category of decorative objects to be displayed in association with a character object, making it possible to restrain the number of decorative objects displayed in association with a character object that constitutes the main part of a video.
  • Among the gift objects, the normal object 74 is displayed in a video in response to a display request from a viewing user. In the above embodiment, the normal object 74 is displayed in the display screen 70 of the video so as not to contact or overlap with the character object 71A and the character object 71B, and therefore, the visibility of the character object 71A and the character object 71B is less affected. With this arrangement, it is possible to prevent the viewing experience of users from being deteriorated due to reduced visibility of the character objects.
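One way to display a normal object so that it does not contact or overlap the character objects is a simple bounding-box scan over candidate screen positions; the rectangle representation and function names below are illustrative assumptions, not the claimed implementation:

```python
# Illustrative sketch of placing a normal object on the display screen so
# that it avoids the character objects' bounding boxes. Rectangles are
# (x, y, width, height) tuples; all names are assumptions.

def rects_overlap(a, b):
    """Return True if two rectangles overlap; touching edges do not count."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return not (ax + aw <= bx or bx + bw <= ax or
                ay + ah <= by or by + bh <= ay)

def find_position(obj_size, character_rects, screen_size, step=10):
    """Scan candidate positions and return the first that avoids all characters."""
    ow, oh = obj_size
    sw, sh = screen_size
    for y in range(0, sh - oh + 1, step):
        for x in range(0, sw - ow + 1, step):
            candidate = (x, y, ow, oh)
            if not any(rects_overlap(candidate, c) for c in character_rects):
                return candidate
    return None  # no free position found on the screen
```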
  • Among the gift objects, the effect object 73 and the normal object 74 are displayed in a video in response to a display request from a viewing user. In the above embodiment, the effect object 73 and the normal object 74 are displayed in the display screen 70 for a shorter duration than the decorative object 75, and therefore, the visibility of the character object 71A and the character object 71B is less affected. With this arrangement, it is possible to prevent the viewing experience of users from being deteriorated due to reduced visibility of the character objects.
  • In the above embodiment, a decorative object is selected from the candidate list 23 d by someone (for example, the supporter B1, the supporter B2, the actor A1, or the actor A2) other than the viewing user who has made the display request for the decorative object, and therefore, it is possible to restrain the number of displayed decorative objects.
  • In the above embodiment, a gift object is not displayed in a video during the no-display period 91. Thus, a produced video can be viewed without interruption by gift objects. For example, when the no-display period 91 is set to a time period within the video during which a visual performance is given by the actor A1 and the actor A2, the performance of the actors can be presented to the viewers without interruption by the gift objects.
  • In the above embodiment, it is presumed that a viewing user who views a video including character objects such as the character object 71A and the character object 71B is fond of these character objects. Therefore, the viewing user is more satisfied when the character objects wear a decorative object than when an effect object or a normal object is merely displayed. Thus, the decorative object that can be attached to the character objects induces the user to repeatedly view the video including the character objects.
  • In the video distribution system 1 according to the above embodiment, a user can present a decorative object to a character. Thus, it is possible to provide a system, and a service built on it, having higher originality than systems in which presenting a decorative object is not allowed. As a result, it is possible to attract many users with the video distribution system 1 and to increase the number of times the users view videos in the video distribution system 1.
  • Embodiments of the disclosure are not limited to the above embodiments, and various modifications are possible within the spirit of the invention. For example, capturing and generating the images of the video to be distributed may be performed at a site other than the studio room R. For example, the images for generating the video to be distributed may be captured at an actor's home or a supporter's home.
  • In the procedures described herein, particularly those described with a flowchart, some of the steps constituting a procedure may be omitted, steps not explicitly included may be added, and/or the order of the steps may be changed. A procedure subjected to such omission, addition, or reordering also falls within the scope of the present invention unless it diverges from the purport of the present invention.

Claims (18)

What is claimed is:
1. A system for causing a viewer user device of a viewer user to play a video during a distribution period, the video containing animation of a character object that is generated based on a motion of an actor, the system comprising one or more computer processors,
wherein the one or more computer processors execute computer-readable instructions to:
arrange, in response to receipt of a first display request sent from the viewer user device for requesting arrangement of a first decorative object in the video, the first decorative object in the video in association with the character object for a first display time; and
display, in response to receipt of a second display request sent from the viewer user device for requesting display of a first normal object in the video, the first normal object in the video for a second display time shorter than the first display time.
2. The system of claim 1, wherein the first decorative object is arranged in the video in association with a specific body part of the character object.
3. The system of claim 1, wherein the first normal object is displayed in the video so as not to contact the character object.
4. The system of claim 1,
wherein the second display request includes a position parameter for designating a display position of the first normal object, and
wherein the first normal object is displayed in the video at a position designated by the position parameter.
5. The system of claim 1,
wherein the second display request includes a direction parameter for designating a moving direction of the first normal object, and
wherein the first normal object is displayed in the video so as to move in a direction designated by the direction parameter.
6. The system of claim 1,
wherein the second display request includes a trajectory parameter for designating a trajectory of the first normal object, and
wherein the first normal object is displayed in the video so as to move along the trajectory designated by the trajectory parameter.
7. The system of claim 1,
wherein a no-display period is set in the distribution period of the video, and
wherein the first normal object and the first decorative object are displayed in the video at a timing in the distribution period of the video other than the no-display period.
8. The system of claim 7, wherein in case the second display request is received in the no-display period, the first normal object is displayed in the video after the no-display period is ended.
9. The system of claim 1, further comprising a storage configured to store a possession list and a candidate list, the possession list containing one or more objects owned by the viewer user, the candidate list containing one or more candidates of decorative objects;
wherein the one or more computer processors are configured further to:
receive, from the viewer user device, a purchase request for purchasing the first decorative object,
in response to receipt of the purchase request, add the first decorative object to the possession list,
in response to reception of the first display request from the viewer user device, add the first decorative object to the candidate list as one of the one or more candidates and remove the first decorative object from the possession list,
arrange, upon selection of the first decorative object from the candidate list, the first decorative object in the video, and
return the first decorative object to the possession list in case the first decorative object is not selected until the distribution period has lapsed.
10. The system of claim 1, wherein the one or more computer processors are configured to:
receive, from the viewer user device, a purchase request for purchasing the first decorative object,
perform a payment process in response to the purchase request, and
cancel the payment process in case the first decorative object is not selected until the distribution period has lapsed.
11. The system of claim 1, wherein the one or more computer processors are configured to:
provide the viewer user with points in case the first decorative object is not selected until the distribution period has lapsed.
12. The system of claim 2, wherein the specific body part is determined prior to playing the video.
13. The system of claim 12, further comprising a storage configured to store attachment position information designating the specific body part with which the decorative object is associated.
14. The system of claim 12, wherein the first normal object is displayed in the video without being associated with the specific body part of the character object.
15. The system of claim 1, wherein the first display request includes identification information identifying the viewer user.
16. The system of claim 1, wherein the first normal object is displayed in the video so as not to overlap the character object in case the first normal object is larger than a reference size.
17. A method performed by one or more computer processors executing computer-readable instructions to cause a viewer user device of a viewer user to play a video during a distribution period, the video containing animation of a character object that is generated based on a motion of an actor, the method comprising:
arranging, in response to receipt of a first display request sent from the viewer user device for requesting arrangement of a first decorative object in the video, the first decorative object in the video in association with the character object for a first display time; and
displaying, in response to receipt of a second display request sent from the viewer user device for requesting display of a first normal object in the video, the first normal object in the video for a second display time shorter than the first display time.
18. A non-transitory computer-readable storage medium storing a program for causing a viewer user device of a viewer user to play a video during a distribution period, the video containing animation of a character object that is generated based on a motion of an actor, the program causing one or more computer processors to:
arrange, in response to receipt of a first display request sent from the viewer user device for requesting arrangement of a first decorative object in the video, the first decorative object in the video in association with the character object for a first display time; and
display, in response to receipt of a second display request sent from the viewer user device for requesting display of a first normal object in the video, the first normal object in the video for a second display time shorter than the first display time.
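The object lifecycle recited in claim 9 (purchase into the possession list, display request into the candidate list, arrangement in the video upon selection, and return to the possession list when the distribution period lapses) can be sketched as follows; the class and method names are illustrative assumptions, not the claimed implementation:

```python
# Illustrative sketch of the decorative-object lifecycle of claim 9.
# All names are assumptions for illustration only.

class GiftLists:
    def __init__(self):
        self.possession = []   # objects owned by the viewer user
        self.candidates = []   # decorative objects awaiting selection
        self.arranged = []     # decorative objects arranged in the video

    def purchase(self, obj):
        # A purchase request adds the object to the possession list.
        self.possession.append(obj)

    def request_display(self, obj):
        # A display request moves the object from the possession list
        # to the candidate list.
        self.possession.remove(obj)
        self.candidates.append(obj)

    def select(self, obj):
        # Selection from the candidate list arranges the object in the video.
        self.candidates.remove(obj)
        self.arranged.append(obj)

    def end_distribution(self):
        # Non-selected objects return to the possession list when the
        # distribution period lapses.
        self.possession.extend(self.candidates)
        self.candidates.clear()
```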
US17/395,241 2018-05-08 2021-08-05 Video distribution system, video distribution method, and storage medium storing video distribution program for distributing video containing animation of character object generated based on motion of actor Pending US20210368228A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/395,241 US20210368228A1 (en) 2018-05-08 2021-08-05 Video distribution system, video distribution method, and storage medium storing video distribution program for distributing video containing animation of character object generated based on motion of actor

Applications Claiming Priority (14)

Application Number Priority Date Filing Date Title
JP2018089612A JP6382468B1 (en) 2018-05-08 2018-05-08 Movie distribution system, movie distribution method, and movie distribution program for distributing movie including animation of character object generated based on movement of actor
JP2018-089612 2018-05-08
JP2018-144683 2018-08-01
JP2018144681A JP6420930B1 (en) 2018-08-01 2018-08-01 Movie distribution system, movie distribution method, and movie distribution program for distributing movie including animation of character object generated based on movement of actor
JP2018-144681 2018-08-01
JP2018144682A JP2020005238A (en) 2018-08-01 2018-08-01 Video distribution system, video distribution method and video distribution program for distributing a video including animation of character object generated based on motion of actor
JP2018-144682 2018-08-01
JP2018144683A JP6764442B2 (en) 2018-08-01 2018-08-01 Video distribution system, video distribution method, and video distribution program that distributes videos including animations of character objects generated based on the movements of actors.
JP2018193258A JP2019198057A (en) 2018-10-12 2018-10-12 Moving image distribution system, moving image distribution method and moving image distribution program distributing moving image including animation of character object generated based on actor movement
JP2018-193258 2018-10-12
JP2019-009432 2019-01-23
JP2019009432A JP6847138B2 (en) 2019-01-23 2019-01-23 A video distribution system, video distribution method, and video distribution program that distributes videos containing animations of character objects generated based on the movements of actors.
US16/406,195 US11202118B2 (en) 2018-05-08 2019-05-08 Video distribution system, video distribution method, and storage medium storing video distribution program for distributing video containing animation of character object generated based on motion of actor
US17/395,241 US20210368228A1 (en) 2018-05-08 2021-08-05 Video distribution system, video distribution method, and storage medium storing video distribution program for distributing video containing animation of character object generated based on motion of actor

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/406,195 Continuation US11202118B2 (en) 2018-05-08 2019-05-08 Video distribution system, video distribution method, and storage medium storing video distribution program for distributing video containing animation of character object generated based on motion of actor

Publications (1)

Publication Number Publication Date
US20210368228A1 true US20210368228A1 (en) 2021-11-25

Family

ID=66476393

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/406,195 Active US11202118B2 (en) 2018-05-08 2019-05-08 Video distribution system, video distribution method, and storage medium storing video distribution program for distributing video containing animation of character object generated based on motion of actor
US17/395,241 Pending US20210368228A1 (en) 2018-05-08 2021-08-05 Video distribution system, video distribution method, and storage medium storing video distribution program for distributing video containing animation of character object generated based on motion of actor

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US16/406,195 Active US11202118B2 (en) 2018-05-08 2019-05-08 Video distribution system, video distribution method, and storage medium storing video distribution program for distributing video containing animation of character object generated based on motion of actor

Country Status (5)

Country Link
US (2) US11202118B2 (en)
EP (1) EP3567866A1 (en)
KR (2) KR102585051B1 (en)
CN (4) CN115002535A (en)
WO (1) WO2019216146A1 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11044535B2 (en) 2018-08-28 2021-06-22 Gree, Inc. Video distribution system for live distributing video containing animation of character object generated based on motion of distributor user, distribution method, and storage medium storing video distribution program
WO2020105568A1 (en) 2018-11-20 2020-05-28 グリー株式会社 System, method, and program for delivering moving-image
WO2020138143A1 (en) 2018-12-28 2020-07-02 グリー株式会社 Video delivery system, video delivery method, video delivery program, information processing terminal, and video viewing program
JP7070533B2 (en) * 2019-11-26 2022-05-18 セイコーエプソン株式会社 Image data generation method, program and information processing equipment
CN110933454B (en) * 2019-12-06 2021-11-02 广州酷狗计算机科技有限公司 Method, device, equipment and storage medium for processing live broadcast budding gift
JP7001719B2 (en) 2020-01-29 2022-02-04 グリー株式会社 Computer programs, server devices, terminal devices, and methods
US11633669B2 (en) 2020-06-22 2023-04-25 Gree, Inc. Video modification and transmission
US11583767B2 (en) 2020-06-23 2023-02-21 Gree, Inc. Video modification and transmission
JP6883140B1 (en) * 2020-12-18 2021-06-09 グリー株式会社 Information processing system, information processing method and computer program
JP7199791B2 (en) * 2020-12-18 2023-01-06 グリー株式会社 Information processing system, information processing method and computer program
CN113949921A (en) * 2021-08-31 2022-01-18 上海二三四五网络科技有限公司 Control method and control device for short video cache cleaning

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070197296A1 (en) * 2004-08-27 2007-08-23 Nhn Corporation Method and system for providing character having game item functions
US20090075726A1 (en) * 2007-09-17 2009-03-19 Merit Industries, Inc. Amusement device having electronic game and jukebox functionalities
US20100045697A1 (en) * 2008-08-22 2010-02-25 Microsoft Corporation Social Virtual Avatar Modification
US20110285703A1 (en) * 2009-09-10 2011-11-24 Tri-D Communications 3d avatar service providing system and method using background image
US20130215116A1 (en) * 2008-03-21 2013-08-22 Dressbot, Inc. System and Method for Collaborative Shopping, Business and Entertainment
US20130307875A1 (en) * 2012-02-08 2013-11-21 Glen J. Anderson Augmented reality creation using a real scene
US20140035913A1 (en) * 2012-08-03 2014-02-06 Ebay Inc. Virtual dressing room
US20170364860A1 (en) * 2016-06-17 2017-12-21 Wal-Mart Stores, Inc. Vector-based characterizations of products and individuals with respect to processing returns
US20180167427A1 (en) * 2016-12-12 2018-06-14 Facebook, Inc. Systems and methods for interactive broadcasting
US20180183844A1 (en) * 2016-12-28 2018-06-28 Facebook, Inc. Systems and methods for interactive broadcasting
US20180247446A1 (en) * 2015-09-28 2018-08-30 Infime Development Ltd. Method and system utilizing texture mapping
US20180342106A1 (en) * 2017-05-26 2018-11-29 Brandon Rosado Virtual reality system
US20190102929A1 (en) * 2017-10-03 2019-04-04 StarChat Inc. Methods and systems for mediating multimodule animation events
US20190313146A1 (en) * 2018-04-10 2019-10-10 General Workings Inc. System and methods for interactive filters in live streaming media
US20200143447A1 (en) * 2016-12-26 2020-05-07 Hong Kong Liveme Corporation Limited Method and device for recommending gift and mobile terminal

Family Cites Families (87)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5690407A (en) 1979-12-20 1981-07-22 Sony Corp Encoding circuit
JPS58152426A (en) 1982-03-08 1983-09-10 日新化成株式会社 Float tank for cultivation fishing
JPS6018894A (en) 1983-07-12 1985-01-30 Fujitsu Ltd Semiconductor storage device
JPH0825432B2 (en) 1986-09-24 1996-03-13 日産自動車株式会社 Wiping angle switching device for vehicle wipers
JPS63132727A (en) 1986-11-22 1988-06-04 Zeniya Alum Seisakusho:Kk Rotary machining device for press
US5923337A (en) 1996-04-23 1999-07-13 Image Link Co., Ltd. Systems and methods for communicating through computer animated images
JP2001087548A (en) 1999-02-25 2001-04-03 Snk Corp Hand carry type game machine, game method, and storage medium
JP2001137541A (en) 1999-11-17 2001-05-22 Square Co Ltd Method of displaying object, game device and memory medium
JP4047554B2 (en) 2001-05-11 2008-02-13 日本放送協会 Illumination method, illumination device, display method, display device, and photographing system
JP2002344755A (en) 2001-05-11 2002-11-29 Ricoh Co Ltd Color correction method
JP2003091345A (en) 2001-09-18 2003-03-28 Sony Corp Information processor, guidance presenting method, guidance presenting program and recording medium recording the guidance presenting program
JP2003255964A (en) 2002-02-28 2003-09-10 Daiichikosho Co Ltd Karaoke system
JP2004150972A (en) 2002-10-31 2004-05-27 Matsushita Electric Ind Co Ltd Navigation system
JP2009510886A (en) * 2005-09-30 2009-03-12 エスケー・シー・アンド・シー・カンパニー・リミテッド Digital fashion album service system directly produced by the user and its operation method
US8286218B2 (en) * 2006-06-08 2012-10-09 Ajp Enterprises, Llc Systems and methods of customized television programming over the internet
US20080052242A1 (en) * 2006-08-23 2008-02-28 Gofigure! Llc Systems and methods for exchanging graphics between communication devices
EP1912175A1 (en) 2006-10-09 2008-04-16 Muzlach AG System and method for generating a video signal
JP4673862B2 (en) 2007-03-02 2011-04-20 株式会社ドワンゴ Comment distribution system, comment distribution server, terminal device, comment distribution method, and program
US20090019053A1 (en) * 2007-07-13 2009-01-15 Yahoo! Inc. Method for searching for and marketing fashion garments online
JP4489800B2 (en) 2007-08-30 2010-06-23 株式会社スクウェア・エニックス Image generating apparatus and method, program, and recording medium
JP5397595B2 (en) 2008-05-27 2014-01-22 セイコーエプソン株式会社 Liquid ejecting head unit and liquid ejecting apparatus
US20090319601A1 (en) 2008-06-22 2009-12-24 Frayne Raymond Zvonaric Systems and methods for providing real-time video comparison
JP2010033298A (en) * 2008-07-28 2010-02-12 Namco Bandai Games Inc Program, information storage medium, and image generation system
JP5576612B2 (en) 2009-02-17 2014-08-20 株式会社タイトー Date / time linked image generation program and game machine
KR101671900B1 (en) * 2009-05-08 2016-11-03 삼성전자주식회사 System and method for control of object in virtual world and computer-readable recording medium
US8803889B2 (en) 2009-05-29 2014-08-12 Microsoft Corporation Systems and methods for applying animations or motions to a character
US20110025689A1 (en) * 2009-07-29 2011-02-03 Microsoft Corporation Auto-Generating A Visual Representation
US9098873B2 (en) * 2010-04-01 2015-08-04 Microsoft Technology Licensing, Llc Motion-based interactive shopping environment
US10805102B2 (en) 2010-05-21 2020-10-13 Comcast Cable Communications, Llc Content recommendation system
JP5498459B2 (en) 2010-09-30 2014-05-21 株式会社エクシング Video information distribution system
JP2012120098A (en) 2010-12-03 2012-06-21 Linkt Co Ltd Information provision system
GB201102128D0 (en) 2011-02-08 2011-03-23 Mustafa Bilal I Method and system for providing video game content
US9354763B2 (en) 2011-09-26 2016-05-31 The University Of North Carolina At Charlotte Multi-modal collaborative web-based video annotation system
KR20130053466A (en) * 2011-11-14 2013-05-24 한국전자통신연구원 Apparatus and method for playing contents to provide an interactive augmented space
WO2013082270A1 (en) * 2011-11-29 2013-06-06 Watchitoo, Inc. System and method for synchronized interactive layers for media broadcast
CN102595340A (en) 2012-03-15 2012-07-18 浙江大学城市学院 Method for managing contact person information and system thereof
CN102630033A (en) * 2012-03-31 2012-08-08 彩虹集团公司 Method for converting 2D (Two Dimension) into 3D (Three Dimension) based on dynamic object detection
US20140013200A1 (en) 2012-07-09 2014-01-09 Mobitude, LLC, a Delaware LLC Video comment feed with prioritization
CN103797812B (en) 2012-07-20 2018-10-12 松下知识产权经营株式会社 Band comments on moving image generating means and with comment moving image generation method
JP5713048B2 (en) 2013-05-01 2015-05-07 ブラザー工業株式会社 Karaoke system
JP6137935B2 (en) 2013-05-09 2017-05-31 株式会社モバダイ Body motion evaluation apparatus, karaoke system, and program
US20150082203A1 (en) * 2013-07-08 2015-03-19 Truestream Kk Real-time analytics, collaboration, from multiple video sources
JP5726987B2 (en) 2013-11-05 2015-06-03 株式会社 ディー・エヌ・エー Content distribution system, distribution program, and distribution method
JP6213920B2 (en) * 2013-12-13 2017-10-18 株式会社コナミデジタルエンタテインメント GAME SYSTEM, CONTROL METHOD AND COMPUTER PROGRAM USED FOR THE SAME
JP2015184689A (en) 2014-03-20 2015-10-22 株式会社Mugenup Moving image generation device and program
JP6209118B2 (en) 2014-03-28 2017-10-04 株式会社エクシング Karaoke device, karaoke system, and program
US10332311B2 (en) * 2014-09-29 2019-06-25 Amazon Technologies, Inc. Virtual world generation engine
JP2016143332A (en) 2015-02-04 2016-08-08 フォッグ株式会社 Content providing device, content providing program, and content providing method
US9473810B2 (en) 2015-03-02 2016-10-18 Calay Venture S.á r.l. System and method for enhancing live performances with digital content
WO2016145129A1 (en) 2015-03-09 2016-09-15 Ventana 3D, Llc Avatar control system
EP3272126A1 (en) 2015-03-20 2018-01-24 Twitter, Inc. Live video stream sharing
CN106034068A (en) 2015-03-20 2016-10-19 阿里巴巴集团控股有限公司 Method and device for private chat in group chat, client-side, server and system
JP6605827B2 (en) 2015-03-30 2019-11-13 株式会社バンダイナムコエンターテインメント Server system
JP2015146218A (en) 2015-04-16 2015-08-13 株式会社 ディー・エヌ・エー Content distribution system, distribution program, and distribution method
JP6605224B2 (en) 2015-04-22 2019-11-13 株式会社バンダイナムコエンターテインメント Server and program
JP5837249B2 (en) 2015-05-07 2015-12-24 グリー株式会社 COMMUNITY PROVIDING PROGRAM, COMPUTER CONTROL METHOD, AND COMPUTER
JP2017022555A (en) 2015-07-10 2017-01-26 日本電気株式会社 Relay broadcast system and control method thereof
US10324522B2 (en) 2015-11-25 2019-06-18 Jakob Balslev Methods and systems of a motion-capture body suit with wearable body-position sensors
WO2017159383A1 (en) 2016-03-16 2017-09-21 ソニー株式会社 Information processing device, information processing method, program, and moving-image delivery system
US10373381B2 (en) * 2016-03-30 2019-08-06 Microsoft Technology Licensing, Llc Virtual object manipulation within physical environment
JP2016174941A (en) * 2016-05-23 2016-10-06 株式会社コナミデジタルエンタテインメント Game system, control method used for the same, and computer program
US20170368454A1 (en) 2016-06-22 2017-12-28 Proletariat, Inc. Systems, methods and computer readable media for a viewer controller
JP6659479B2 (en) 2016-06-28 2020-03-04 Line株式会社 Information processing apparatus control method, information processing apparatus, and program
CN106131698A (en) * 2016-06-29 2016-11-16 北京金山安全软件有限公司 Information display method and device and electronic equipment
JP2018005005A (en) 2016-07-04 2018-01-11 ソニー株式会社 Information processing device, information processing method, and program
US10304244B2 (en) * 2016-07-08 2019-05-28 Microsoft Technology Licensing, Llc Motion capture and character synthesis
CN106210855B (en) * 2016-07-11 2019-12-13 网易(杭州)网络有限公司 object display method and device
CN106303578B (en) * 2016-08-18 2020-10-16 北京奇虎科技有限公司 Information processing method based on anchor program, electronic equipment and server
JP6546886B2 (en) 2016-09-01 2019-07-17 株式会社 ディー・エヌ・エー System, method, and program for distributing digital content
US10356340B2 (en) 2016-09-02 2019-07-16 Recruit Media, Inc. Video rendering with teleprompter overlay
US10109073B2 (en) 2016-09-21 2018-10-23 Verizon Patent And Licensing Inc. Feature tracking and dynamic feature addition in an augmented reality environment
CN106412614A (en) * 2016-10-26 2017-02-15 天脉聚源(北京)传媒科技有限公司 Electronic gift playing method and device
US9983684B2 (en) * 2016-11-02 2018-05-29 Microsoft Technology Licensing, Llc Virtual affordance display at virtual target
JP6906929B2 (en) 2016-11-10 2021-07-21 株式会社バンダイナムコエンターテインメント Game system and programs
CN106550278B (en) 2016-11-11 2020-03-27 广州华多网络科技有限公司 Method and device for grouping interaction of live broadcast platform
US10498794B1 (en) 2016-11-30 2019-12-03 Caffeine, Inc. Social entertainment platform
JP6965896B2 (en) 2017-01-31 2021-11-10 株式会社ニコン Display control system and display control method
JP6178941B1 (en) 2017-03-21 2017-08-09 株式会社ドワンゴ Reaction selection device, reaction selection method, reaction selection program
CN107194979A (en) * 2017-05-11 2017-09-22 上海微漫网络科技有限公司 The Scene Composition methods and system of a kind of virtual role
JP6999152B2 (en) 2017-07-14 2022-01-18 泰南雄 中野 Content distribution device and content distribution system
CN107493515B (en) 2017-08-30 2021-01-01 香港乐蜜有限公司 Event reminding method and device based on live broadcast
CN107680157B (en) * 2017-09-08 2020-05-12 广州华多网络科技有限公司 Live broadcast-based interaction method, live broadcast system and electronic equipment
CN107871339B (en) * 2017-11-08 2019-12-24 太平洋未来科技(深圳)有限公司 Rendering method and device for color effect of virtual object in video
KR102661019B1 (en) * 2018-02-23 2024-04-26 삼성전자주식회사 Electronic device providing image including 3d avatar in which motion of face is reflected by using 3d avatar corresponding to face and method for operating thefeof
JP6397595B1 (en) 2018-04-12 2018-09-26 株式会社ドワンゴ Content distribution server, content distribution system, content distribution method and program
JP6382468B1 (en) 2018-05-08 2018-08-29 グリー株式会社 Movie distribution system, movie distribution method, and movie distribution program for distributing movie including animation of character object generated based on movement of actor
JP6491388B1 (en) 2018-08-28 2019-03-27 グリー株式会社 Video distribution system, video distribution method, and video distribution program for live distribution of a video including animation of a character object generated based on the movement of a distribution user


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"OBS Studio - Adding Alerts for Follower, Subscriber, Donation" (i.e., Vid1), Published 03/08/2017, Available online at <https://www.youtube.com/watch?v=uBs7QK3OrBQ> *
"Streamlabs Facemasks", Published 2017, Available online at <https://discord.com/channels/346751285514469388/346799995262992404/380472926799003650> and <https://discord.com/channels/346751285514469388/346799995262992404/348233304774017036> *
"Streamlabs OBS Pause Unpause Alert Queue", Published 2017, Available online at: <https://twitter.com/streamlabs/status/908783936565788672?lang=en> and <https://www.reddit.com/r/Twitch/comments/8cb0mz/streamlabs_chatbot_pauseunpause_queue_notification/> *
"Unpacking Clothing Items from Inventory – Second Life" (i.e., Vid0), Published 09/08/2014, Available online at <youtube.com/watch?v=1Gy5niBqclg> *

Also Published As

Publication number Publication date
KR102481333B1 (en) 2022-12-23
US20190349625A1 (en) 2019-11-14
KR20210005183A (en) 2021-01-13
KR102585051B1 (en) 2023-10-04
KR20230006652A (en) 2023-01-10
US11202118B2 (en) 2021-12-14
CN115002533A (en) 2022-09-02
WO2019216146A1 (en) 2019-11-14
CN115002535A (en) 2022-09-02
EP3567866A1 (en) 2019-11-13
CN115002534A (en) 2022-09-02
CN110460892A (en) 2019-11-15
CN110460892B (en) 2022-06-14

Similar Documents

Publication Publication Date Title
US20210368228A1 (en) Video distribution system, video distribution method, and storage medium storing video distribution program for distributing video containing animation of character object generated based on motion of actor
JP6382468B1 (en) Movie distribution system, movie distribution method, and movie distribution program for distributing movie including animation of character object generated based on movement of actor
JP6420930B1 (en) Movie distribution system, movie distribution method, and movie distribution program for distributing movie including animation of character object generated based on movement of actor
JP6431233B1 (en) Video distribution system that distributes video including messages from viewing users
US20230119404A1 (en) Video distribution system for live distributing video containing animation of character object generated based on motion of distributor user, video distribution method, and storage medium storing thereon video distribution program
US20240114214A1 (en) Video distribution system distributing video that includes message from viewing user
US11778283B2 (en) Video distribution system for live distributing video containing animation of character object generated based on motion of actors
JP2024023273A (en) Video distribution system for distributing video including animation of character object generated based on motion of actor, video distribution method and video distribution program
JP6847138B2 (en) A video distribution system, video distribution method, and video distribution program that distributes videos containing animations of character objects generated based on the movements of actors.
JP2020017981A (en) Moving image distribution system distributing moving image including message from viewer user
JP6498832B1 (en) Video distribution system that distributes video including messages from viewing users
JP2020043578A (en) Moving image distribution system, moving image distribution method, and moving image distribution program, for distributing moving image including animation of character object generated on the basis of movement of actor
JP6764442B2 (en) Video distribution system, video distribution method, and video distribution program that distributes videos including animations of character objects generated based on the movements of actors.
JP6431242B1 (en) Video distribution system that distributes video including messages from viewing users
JP6592214B1 (en) Video distribution system that distributes video including messages from viewing users
JP7279114B2 (en) A video distribution system that distributes videos containing messages from viewing users
JP2019198057A (en) Moving image distribution system, moving image distribution method and moving image distribution program distributing moving image including animation of character object generated based on actor movement
JP2020005238A (en) Video distribution system, video distribution method and video distribution program for distributing a video including animation of character object generated based on motion of actor

Legal Events

Date Code Title Description
AS Assignment

Owner name: GREE, INC., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WATANABE, MASASHI;KURITA, YASUNORI;REEL/FRAME:057097/0278

Effective date: 20190410

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER