WO2018142494A1 - Display control system and display control method - Google Patents


Info

Publication number
WO2018142494A1
Authority
WO
WIPO (PCT)
Prior art keywords
item
user
performer
display
data
Prior art date
Application number
PCT/JP2017/003496
Other languages
French (fr)
Japanese (ja)
Inventor
幸司 細見
栗山 孝司
祐輝 勝俣
和明 出口
志茂 諭
Original Assignee
株式会社 ニコン
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社 ニコン filed Critical 株式会社 ニコン
Priority to CN201780084966.XA priority Critical patent/CN110249631B/en
Priority to PCT/JP2017/003496 priority patent/WO2018142494A1/en
Priority to JP2018565130A priority patent/JP6965896B2/en
Priority to TW107102798A priority patent/TWI701628B/en
Publication of WO2018142494A1 publication Critical patent/WO2018142494A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/239Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests

Definitions

  • the present invention relates to a display control system that performs live distribution and a display control method.
  • Patent Document 1 and the like describe a server for distributing content.
  • a display device control unit that causes a display device to display an image of a real space in which a performer exists as a target of live distribution, an acquisition unit that acquires three-dimensional position information of the real space, and a detection unit that detects a user action by which a user presents an item to the performer, and which, based on the three-dimensional position information acquired by the acquisition unit and the user action information of the user action detected by the detection unit,
  • a display control system including an item display control unit that calculates an item position at which the item is to be arranged in the real space and displays the calculated item position in the real space so that the performer can recognize it.
  • an image of a real space in which a performer exists is distributed live, three-dimensional position information of the real space is acquired, and a user action by which a user presents an item to the performer is detected. Then, based on the acquired three-dimensional position information and the user action information of the detected user action, an item position at which the item is to be arranged in the real space is calculated, and the calculated item position is displayed in the real space so that the performer can recognize it.
  • a display control method of this kind is provided.
  • FIG. 1 is a diagram showing the overall structure of the live distribution system.
  • (a) is a flowchart of the live distribution process, and
  • (b) is a diagram showing the display screen of a user terminal during live distribution.
  • (a) is a diagram showing the display screen of the user terminal when an item is selected,
  • (b) is a diagram showing the display screen of the user terminal when a performer is selected,
  • (c) is a diagram showing the display screen of the user terminal when an item and a performer have been selected,
  • (d) is a diagram showing the display screen of the studio monitor when an item and a performer have been selected, and
  • (e) is a diagram showing the user terminal and the studio monitor when the user has performed the operation.
  • A flowchart showing the acquisition process in which the performer acquires the item.
  • (a) is a diagram showing the display screens of the user terminal and the studio monitor when the performer picks up the cat-ear headband item,
  • (b) is a diagram showing the display screens of the user terminal and the studio monitor when the performer wears the headband item, and
  • (c) is a diagram showing the display screens of the user terminal and the studio monitor when the performer turns sideways. A flowchart of the return process from the performer to the user.
  • (a) is a diagram showing the display screens of the user terminal and the studio monitor when the performer holds a return sign ball in his hand,
  • (b) is a diagram showing the display screens of the user terminal and the studio monitor when the performer throws the sign ball, and
  • (c) is a diagram showing the display screens of the user terminal and the studio monitor when the user receives the sign ball.
  • (a) is a diagram showing the display screens of the user terminal and the studio monitor when an effect is added to a performer,
  • (b) is a diagram showing the display screens of the user terminal and the studio monitor when the user performs the operation,
  • (c) is a diagram showing the display screens of the user terminal and the studio monitor when the performer receives the item, and
  • (d) is a diagram showing the display screens of the user terminal and the studio monitor when a tower is displayed in the background image.
  • as shown in FIG. 1, the live distribution system 1 includes a studio 10 in which a performance such as a live performance is given, a server 20 that performs live distribution of content data acquired in the studio 10, and user terminals 40, 60, and 70 that view the content data distributed by the server 20.
  • the server 20 and the user terminals 40, 60, 70 are connected via the network 2.
  • the number of user terminals is not limited to the number shown here, and may be one, or may be tens or hundreds.
  • the studio 10 includes a playback device 11, a speaker 12, a microphone 13, an RGB camera 14, a depth camera 15, a projector 16, and a studio monitor 17.
  • the playback device 11 plays back the music data to be played and emits music based on the music data from the speaker 12 connected to the playback device 11.
  • the microphone 13 is owned by each performer A, B, C, and collects the voices of the performers A, B, C.
  • the RGB camera 14 is the first camera in the live distribution system 1.
  • the RGB camera 14 is a digital camera having a moving image shooting function.
  • the RGB camera 14 is a video camera.
  • the RGB camera 14 is a display data generation camera or the like.
  • the RGB camera 14 is a video camera that captures an actual space in which performers A, B, and C are playing.
  • the RGB camera 14 includes an image sensor such as a CCD or CMOS, detects light such as visible light, and outputs display data composed of color signals of three colors (red, green, and blue).
  • the RGB camera 14 images subjects such as performers A, B, and C, and outputs ornamental data that can display the captured subjects on the display unit such as the user terminals 40, 60, and 70 as display data.
  • the RGB camera 14 outputs imaging data to be displayed on the studio monitor 17 as display data.
  • the RGB camera 14 also outputs, as display data, video data to be displayed on a large-screen display device installed in a public place where the users A, B, and C are located, at a live venue, or in a concert hall.
  • the RGB camera 14 does not have to be a video camera, and may be, for example, a smart device terminal having a moving image capturing function. In this case, by fixing the smart device terminal to a tripod or the like, it can function in the same manner as a video camera.
  • the depth camera 15 is the second camera in the live distribution system 1.
  • the depth camera 15 is an infrared camera.
  • the depth camera 15 is a three-dimensional position information acquisition camera.
  • the depth camera 15 acquires depth information, which is the distance from the depth camera 15 to the subject.
  • the depth camera 15 is an acquisition unit that acquires depth information, which is the distance to the performers A, B, and C and the like as subjects.
  • the depth camera 15 acquires, respectively, the distance (depth information) to the performer A, who is a part of the subject, the distance (depth information) to the performer B, who is a part of the subject, and the distance (depth information) to the performer C, who is a part of the subject.
  • the depth camera 15 acquires depth information that is a distance to each point of a studio that is a part of the subject.
  • the depth camera 15 acquires three-dimensional position information of a real space including performers A, B, and C as a subject and a studio.
  • the depth camera 15 includes a light projecting unit that projects infrared rays and an infrared detection unit that detects infrared rays.
  • the depth camera 15 acquires three-dimensional position information such as depth information in real space from the time until the infrared pulse projected from the light projecting unit is reflected and returned.
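  • As an illustrative sketch (the patent gives no formula), the time-of-flight relationship between the round-trip time of the projected infrared pulse and the depth can be written as follows; the function and variable names are chosen only for illustration:

```python
# Minimal time-of-flight (TOF) depth sketch: the pulse travels to the
# subject and back, so the one-way distance is half of (speed of light x time).
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def depth_from_round_trip(round_trip_seconds: float) -> float:
    """Distance in metres from the depth camera to the reflecting surface."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2.0

# Example: a pulse returning after about 20 ns corresponds to roughly 3 m.
print(depth_from_round_trip(20e-9))  # ~3.0
```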
  • the RGB camera 14 and the depth camera 15 may be an integrated device or separate devices.
  • the projector 16 displays an object of an item that is a present for the performers A, B, and C on a stage by a technique such as projection mapping.
  • the studio monitor 17 is a display device that is disposed in the studio 10, which is a real space, and displays video. As an example, the studio monitor 17 is a display device installed in front of the stage so that it is visible to the performers A, B, and C.
  • the studio monitor 17 is a flat display as an example, and is an LCD display device or an organic EL display device.
  • the studio monitor 17 displays the performance images of the performers A, B, and C captured by the RGB camera 14.
  • the server 20 generates live data as content data played by the performers A, B, and C.
  • based on various data such as music data from the playback device 11, audio data from the microphone 13, and video data from the RGB camera 14, the server 20 generates live data of the performance by the performers A, B, and C and distributes the live data live to the user terminals 40, 60, and 70. That is, the server 20 relays live performances by the performers A, B, and C to the user terminals 40, 60, and 70.
  • Live data may be generated by the data generation device or the like in the studio 10 and transmitted to the server 20.
  • the users A, B, and C participating in the live distribution system 1 are fans of performers A, B, and C as an example, and can view live data using the user terminals 40, 60, and 70.
  • the user terminal 40 includes a desktop or laptop personal computer 40a and a smart watch 50, which is a wearable terminal (smart device terminal) connected to the personal computer 40a.
  • when the personal computer 40a is a desktop type,
  • the user terminal 40 includes the desktop personal computer 40a, a monitor connected to the personal computer 40a, and the smart watch 50 connected to the personal computer 40a.
  • when the personal computer 40a is a laptop computer,
  • the user terminal 40 includes the laptop personal computer 40a, which has a display unit, and the smart watch 50 connected to the laptop personal computer 40a.
  • the user A of the user terminal 40 wears the smart watch 50 on the dominant arm or the like, and the smart watch 50 is connected to the personal computer 40a by wire or wirelessly.
  • the smart watch 50 includes a detection unit such as an acceleration sensor or a gyro sensor. For example, when the user A performs an operation of throwing an object, the smart watch 50 detects the acceleration, angle (posture), and angular velocity as user operation information.
  • the personal computer 40a may be connected to a head mounted display (HMD) by wire or wirelessly. Further, the HMD may have a configuration as the personal computer 40a.
  • Examples of the HMD include an optical see-through head mounted display, a video see-through head mounted display, and a non-transmissive head mounted display.
  • on the HMD, an object of an item for a present or a return, which will be described later, can be displayed using AR (augmented reality) or VR (virtual reality).
  • the user terminal 60 is a smart device terminal such as a smartphone or a tablet as an example, and is a portable information processing terminal.
  • the smartphone includes a touch panel on the display surface.
  • the user terminal 60 includes a detection unit such as an acceleration sensor or a gyro sensor. For example, when the user B performs an operation of throwing an object, the user terminal 60 detects the acceleration, angle, and angular velocity as user operation information. Since the user terminal 60 is a small portable information processing terminal, the user of the user terminal 60 can view live data anywhere.
  • the user terminal 70 of the user C includes a smart device terminal 60a and a smart watch 50.
  • the smart device terminal 60a functions like the laptop personal computer 40a of the user A.
  • the user can view the video displayed on the display unit of the smart device terminal 60a held in the other hand.
  • the user terminals 40, 60, and 70 can virtually present items to performers A, B, and C who are actually performing at that time while viewing live data.
  • an item selection object is displayed, in which a list of objects, as first objects of items that can be presented to the performers A, B, and C, is shown together with the live data.
  • the items include ornaments such as bouquets and headbands, effects for directing the performer's actions when viewed on the user terminals 40, 60, and 70, and background images of the place where the performer performs.
  • the users A, B, and C select any one item from the object list in the item selection object, and further select performers to be presented from the performers A, B, and C.
  • FIG. 1 shows an example in which the performer A is selected as the performer and the cat-ear headband is selected as the item.
  • the user A of the user terminal 40 performs an operation of throwing an object by waving his arm while wearing the smart watch 50.
  • the user B of the user terminal 60 performs an operation of swinging an arm and throwing an object while holding the user terminal 60.
  • the user C of the user terminal 70 performs an operation of throwing an object by waving his arm while wearing the smart watch 50.
  • the user terminals 40, 60, and 70 transmit operation data such as acceleration data, angle (attitude) data, angular velocity data, and the like as user operation information as detection results to the server 20.
  • alternatively, in the user terminals 60 and 70, the user may trace, with a finger or stylus pen, the display surface on which the live data is displayed in the direction of the performers A, B, and C, and operation data such as the resulting coordinate data may be transmitted to the server 20.
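  • A hypothetical sketch of the user operation information a user terminal might send to the server 20 is shown below; the field names and the JSON transport are assumptions for illustration, while the listed quantities (user ID, item ID, performer ID, acceleration, angle, angular velocity, swipe coordinates) follow the description above:

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional, Tuple

@dataclass
class UserOperationData:
    user_id: str                                   # user registered in the system
    item_id: str                                   # selected item, e.g. "C" (cat-ear headband)
    performer_id: str                              # selected performer, e.g. "A"
    acceleration: Tuple[float, float, float]       # from the acceleration sensor
    angle: Tuple[float, float, float]              # posture of the device
    angular_velocity: Tuple[float, float, float]   # from the gyro sensor
    # start/end coordinates when a touch-panel swipe is used instead of a throw
    swipe_coords: Optional[Tuple[Tuple[float, float], Tuple[float, float]]] = None

payload = UserOperationData(
    user_id="B", item_id="C", performer_id="A",
    acceleration=(0.2, 9.1, 3.4), angle=(0.0, 35.0, 5.0),
    angular_velocity=(0.1, 4.2, 0.3))
message = json.dumps(asdict(payload))  # what would be sent via the network IF
```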
  • the server 20 displays the video of the item 18 indicated by the item ID transmitted from the user terminals 40, 60, and 70 on the floor surface of the studio 10 using the projector 16.
  • the item 18 displayed as a video is displayed in front of the performer indicated by the performer ID transmitted from the user terminals 40, 60, and 70.
  • FIG. 1 shows an example in which the headband item 18 is displayed in front of the performer A.
  • the projector 16 displays a video of the item 18 so that the item appears to have been thrown from the users A, B, and C side toward the performers A, B, and C, that is, to a position in front of the performers A, B, and C.
  • the position on the floor where the item falls is a specific position in the studio 10, and is determined based on operation data, such as acceleration data, angle data, and angular velocity data, transmitted from the user terminals 40, 60, and 70 as user operation information.
  • the item position is specified by three-dimensional position information such as depth information.
  • the item position can be specified by a three-dimensional coordinate system having the detection unit of the depth camera 15 as an origin.
  • the item position where the item 18 is displayed is far away in front of the performers A, B, and C.
  • the item position where the item 18 is displayed is a relatively close position in front of the performers A, B, and C.
  • as another example, the item position at which the item 18 is displayed may be a position in front of or behind the performers A, B, and C after the item bounces off the wall behind them.
  • the performers A, B, and C can visually recognize that items have been thrown toward them from the direction of the users A, B, and C, who are not actually in the studio 10.
  • the item object is displayed on the display surfaces of the user terminals 40, 60, and 70, together with the image captured by the RGB camera 14, at the item position within that image.
  • the users A, B, and C can also visually recognize that the item that the user has thrown has reached the performers A, B, and C.
  • the item position where the item 18 is displayed in the real space may be a position to the right or left of the performer A, corresponding to the direction in which the users A, B, and C threw the item.
  • the item position where the item 18 is displayed may be the front of the performer B or the performer C.
  • Such an item position is specified by three-dimensional position information determined based on operation data such as acceleration data, angle data, and angular velocity data as detection results detected by the user terminals 40, 60, and 70.
  • the studio monitor 17 may display the same image as the display screen of the user terminals 40, 60, and 70.
  • the depth camera 15 of the studio 10 constantly calculates three-dimensional position information such as depth information at various locations in the studio 10.
  • the depth camera 15 extracts the person areas of the performers A, B, and C and divides them into a person area and a non-person area.
  • the depth camera 15 acquires 25 skeleton positions of each of the performers A, B, and C as skeleton data, and further calculates depth information of each skeleton position.
  • the skeleton positions include skeleton positions such as left and right hands, heads, necks, left and right shoulders, left and right elbows, left and right knees, and left and right feet.
  • the number of skeleton positions to be acquired is not limited to 25.
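  • A minimal sketch of how the per-performer skeleton data described above could be held is shown below; the joint names are those mentioned in the text, while keeping them in a dictionary per performer is an assumption for illustration:

```python
# Each tracked joint gets a position in the depth camera's coordinate system
# (origin at its detection unit, as described later): (x, y, depth).
SKELETON_JOINTS = [
    "head", "neck",
    "left_shoulder", "right_shoulder",
    "left_elbow", "right_elbow",
    "left_hand", "right_hand",
    "left_knee", "right_knee",
    "left_foot", "right_foot",
    # ... in the example, 25 joints are tracked in total
]

# skeleton_data[performer_id][joint] -> (x, y, depth)
skeleton_data: dict[str, dict[str, tuple[float, float, float]]] = {
    "A": {"right_hand": (0.42, 1.05, 2.80), "head": (0.40, 1.60, 2.95)},
}
```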
  • the depth camera 15 calculates the distance from the wall or floor surface in the studio 10.
  • the depth information is, for example, the distance from the objective lens or sensor surface in front of the depth camera 15 to the measurement target position (places on the wall of the studio 10 or places on the floor).
  • the depth information is, for example, the distance from the objective lens or sensor surface in front of the depth camera 15 to the skeleton position of the performer as the subject.
  • the server 20 displays, on the display surfaces of the user terminals 40, 60, and 70, the picked-up headband object as if it were held in the right or left hand of the performer A, and then displays the headband object on the head of the performer A.
  • the users A, B, and C can recognize, on the display surfaces of the user terminals 40 and 60, the headband that was presented to the performer A, and can visually recognize how the performer A wears it.
  • the users A and B can visually recognize the performance of the performer A wearing the headband on the display surfaces of the user terminals 40, 60 and 70.
  • the direction of the object of the headband is displayed in accordance with the direction of the performer A.
  • the orientation of each of the performers A, B, and C is determined by detecting the faces of the performers A, B, and C from the display data of the RGB camera 14 and by calculating the skeleton positions of the performers A, B, and C from the depth camera 15.
  • the data for displaying the item object is also three-dimensional data.
  • the direction of the headband object is changed in accordance with the direction of the performer A.
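  • The patent does not state how the performer's orientation is computed; one plausible sketch, assuming the yaw is estimated from the left/right shoulder skeleton positions and then applied to the three-dimensional headband data, is the following:

```python
import math

def performer_yaw(left_shoulder, right_shoulder):
    """Yaw (radians) of the facing direction, estimated from the line across
    the shoulders in the horizontal (x, depth) plane; the facing direction is
    taken to be perpendicular to that line."""
    dx = right_shoulder[0] - left_shoulder[0]
    dz = right_shoulder[2] - left_shoulder[2]
    shoulder_angle = math.atan2(dz, dx)
    return shoulder_angle + math.pi / 2  # rotate 90 degrees to face forward

# The same yaw would then be applied to the headband's 3-D display data, so the
# ornament is drawn from the front when the performer faces front and from the
# side when the performer turns sideways.
```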
  • the state in which the performer A wears the headband is also displayed on the studio monitor 17 so that the performers A, B, and C can visually recognize the states displayed on the user terminals 40, 60, and 70.
  • when an item is presented to the performers A, B, and C by a user (when the item is received by the performers A, B, and C), the server 20 can identify all of the presented (received) items and the user IDs corresponding to them. Using this, the server 20 can specify the user ID of the user who presented an item when the performers A, B, and C pick up that item.
  • an image of the object of the user ID that presents the headband item to the performer A is displayed on the display screen of the studio monitor 17 of the studio 10.
  • the video of the object with the user ID is displayed on the floor surface of the studio 10 by the projector 16.
  • the server 20 identifies, by voice recognition processing of the voice data from the microphone 13, the user ID of the user to whom the return is to be given, so that the return can be made to that user.
  • This voice recognition process can also be performed by a device other than the microphone 13 (such as a server 20 or a device installed in the studio 10).
  • when a performer touches an item (for example, a headband), the user ID of the user who gave the item that the performers A, B, and C are touching is specified, so that a return can be given to that user.
  • the server 20 identifies all live distribution viewers by user ID, so that the performers A, B, and C can perform return processing for users who have purchased items of a certain amount or more.
  • the user terminal may include a line-of-sight detection unit that detects the line of sight of the user, and the time during which the user gazes at a specific performer may be calculated. In such a case, when the performer A performs the return process, the return can be made to a user with a user ID who has been watching the performer A for a certain period of time. In addition, among the users with user IDs who have not watched the performer A for a certain period of time, the return may be made to a user whose user ID is extracted at random.
  • a video of an item object, as a second object that is a return present from the performer A to the users A, B, and C, is displayed on the studio monitor 17 and on the display surfaces of the user terminals 40, 60, and 70.
  • the return item is the sign ball of the performer A
  • a video of the sign ball item object is displayed at the position of the right or left hand of the performer A, as if the sign ball were gripped by that hand, on the display surfaces of the studio monitor 17 and the user terminals 40, 60, and 70.
  • the performer A can thereby grasp that the return present is the sign ball, and the users A, B, and C can also grasp that the performer A is about to throw the sign ball back to them as a return.
  • the depth camera 15 detects that the performer A has thrown the sign ball based on a change in depth information of a hand holding the sign ball of the performer A. Then, the item object is displayed on the display screen of the user terminals 40, 60, and 70 as if the performer A threw a sign ball toward the users A, B, and C.
  • when the user terminals 40, 60, and 70 are operated so as to catch the sign ball displayed on the display surface, the users A, B, and C are regarded as having received the sign ball.
  • the gift of return is not limited to a sign ball.
  • the users A, B, and C can also throw the received return items, such as sign balls, to the performers A, B, and C when live distribution is performed again.
  • the present as a return item may be mailed to users A, B, and C at a later date.
  • it may be an actual sign ball, goods such as autograph boards or performer merchandise, albums such as CDs and DVDs, concert coupons, and the like.
  • the depth camera 15 includes, for example, a light projecting unit, such as a projector that projects pulse-modulated infrared light, and an infrared detection unit, such as an infrared camera, and calculates depth information from the time until the projected infrared pulse is reflected and returned (Time of Flight (TOF) method). As an example, the depth camera 15 constantly calculates three-dimensional position information, such as depth information, at various locations in the studio 10.
  • the depth camera 15 extracts the person areas of the performers A, B, and C and divides them into a person area and a non-person area.
  • the person area is calculated based on the difference value before and after the person appears in the same place (studio 10 as an example). Further, as an example, an area where the amount of detected infrared rays exceeds a threshold is determined as a person area.
  • the depth camera 15 detects the skeleton position.
  • the depth camera 15 obtains depth information at various places in the person area and, based on the depth and shape feature quantities, identifies each part of the person in the person area in the real space (left and right hands, head, neck, left and right shoulders, left and right elbows, left and right knees, left and right feet, etc.), and calculates the center position of each part as the skeleton position.
  • as an example, the depth camera 15 uses a feature amount dictionary stored in a storage unit, collating the feature amounts determined from the person area with the feature amounts of each part registered in the feature amount dictionary, and thereby identifies each part in the person area.
  • the depth camera 15 outputs the detection result of the infrared detection unit to other devices (server 20, user terminals 40, 60, 70, a calculation device installed in the studio 10, etc.), and the other devices send the depth information. Processing such as calculation, extraction of a person area, division into a person area and a non-person area, detection of a person area, detection of a skeleton position, and identification of each part in the person area may be performed.
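  • A minimal sketch of the two person-area criteria mentioned above (a difference value before and after the person appears, and a threshold on the detected infrared amount), assuming NumPy-style frames; the threshold values are placeholders, not from the patent:

```python
import numpy as np

def person_mask(ir_frame: np.ndarray,
                empty_studio_frame: np.ndarray,
                diff_threshold: float = 30.0,
                ir_threshold: float = 120.0) -> np.ndarray:
    """Boolean mask: True for pixels judged to belong to a person area."""
    ir = ir_frame.astype(float)
    empty = empty_studio_frame.astype(float)
    changed = np.abs(ir - empty) > diff_threshold  # difference before/after a person appears
    bright = ir > ir_threshold                     # detected infrared amount exceeds a threshold
    return changed & bright

# Skeleton positions would then be searched only inside this mask, by matching
# depth/shape feature amounts against the feature amount dictionary.
```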
  • the motion capture process as described above is performed without adding markers to the performers A, B, and C.
  • alternatively, the motion capture process may be performed with markers attached to the performers A, B, and C.
  • when calculating the depth information, a method may be used in which a projected infrared pattern is read and the depth information is obtained from the distortion of the pattern (Light Coding method). Furthermore, the depth information may be calculated from parallax information obtained by a twin-lens camera or a plurality of cameras. The depth information can also be calculated by recognizing an image obtained by the RGB camera 14 and analyzing it using a photogrammetry technique or the like. In this case, since the RGB camera 14 functions as the detection unit, the depth camera 15 becomes unnecessary.
  • the server 20 includes an interface (hereinafter simply referred to as “IF”) with each unit of the studio 10 and is connected by wire or wirelessly.
  • an audio IF 21, an RGB camera IF 22, a depth camera IF 23, a projector IF 24, and a display IF 25 are provided.
  • a database 26, a data storage unit 27, a network IF 28, a main memory 29, and a control unit 30 are provided.
  • the server 20 distributes live data to the user terminals 40, 60, and 70, and further functions as a display control device that controls the display of the projector 16, the studio monitor 17, and the user terminals 40, 60, and 70.
  • the audio IF 21 is connected to the playback device 11 and the microphone 13 of the studio 10.
  • the audio IF 21 receives music data to be played from the playback device 11 and voice data of the performers A, B, and C from the microphone 13.
  • the RGB camera IF 22 receives the video data of the studio 10 captured by the RGB camera 14.
  • the depth camera IF 23 receives depth information of various places in the studio 10 and the performers A, B, and C, data on the person area, depth information on the skeleton position, and the like.
  • the projector IF 24 controls the projector 16 to display items on the floor surface of the stage of the studio 10.
  • the display IF 25 controls the studio monitor 17 installed in the studio 10. As an example, the display IF 25 displays the item object and user ID presented to the performers A, B, and C on the display surface of the studio monitor 17. Thereby, performers A, B, and C can know from whom the item was presented.
  • the database 26 as a management unit manages the items of each user in association with the user ID of the user registered in this system for each live. Specifically, the database 26 manages item IDs, item destination IDs, presence / absence of item reception, presence / absence of return, and success / failure of return receipt in association with each user ID for each live.
  • the item ID is an ID that uniquely identifies an item purchased by the user, and is an ID that uniquely identifies an item presented by the user in each live.
  • the item destination ID is an ID that uniquely identifies the performer who presented the item.
  • the presence or absence of item receipt manages whether the performer selected by the user succeeded in receiving the item presented by the user.
  • the presence or absence of a return manages whether the performer who received the item presented by the user gave a return to the user.
  • the success or failure of return receipt manages whether the user succeeded in receiving the return.
  • the database 26 manages all users who can participate in live distribution in association with user IDs. A user who participates in each live is selected from all registered users.
  • the database 26 manages the price of each item in association with the item ID as an example. Further, as an example, the total purchase price corresponding to each user's performer is managed.
  • the database shows that the user A presented the item with item ID "A (bouquet)" to the performer C, but that the present was not received by the performer C.
  • it also shows that the user B presented the item with item ID "C (headband)" to the performer A, that the item was received by the performer A, that the performer A gave a return, and that the user B received the return.
  • it further shows that the user C presented the item with item ID "B (effect)" to the performer B, that the item was received by the performer B, and that the performer B gave a return, but that the user C failed to receive the return.
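  • A hypothetical sketch of the per-live records that the database 26 is described as managing; the column names are invented for illustration, while the managed fields follow the description above:

```python
from dataclasses import dataclass

@dataclass
class LiveItemRecord:
    user_id: str              # presenting user
    item_id: str              # e.g. "A" bouquet, "B" effect, "C" headband
    item_destination_id: str  # performer the item was presented to
    item_received: bool       # did the performer succeed in receiving it?
    return_given: bool        # did the performer give a return?
    return_received: bool     # did the user succeed in receiving the return?

# The three example rows described above.
records = [
    LiveItemRecord("A", "A", "C", item_received=False, return_given=False, return_received=False),
    LiveItemRecord("B", "C", "A", item_received=True,  return_given=True,  return_received=True),
    LiveItemRecord("C", "B", "B", item_received=True,  return_given=True,  return_received=False),
]
```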
  • the data storage unit 27 is a storage device such as a hard disk.
  • the data storage unit 27 stores control data related to the live distribution system 1, display data for displaying item objects, and the like.
  • the display data for displaying the object of an item is, for example, three-dimensional data when the item is a tangible object such as an ornament, and the ornament is displayed according to the orientation of the performer.
  • the ornament is displayed from the front when the performer is facing the front, and is displayed sideways when the performer is facing sideways.
  • the control program is a distribution program that distributes live data to the user terminals 40, 60, and 70.
  • as an example, the control program is an item display control program for displaying the object of the item presented by the user on the floor surface of the studio 10 by means of the projector 16. Further, as an example, the control program is a display device control program that displays the item object presented by the user, in association with the performers A, B, and C, on the display surfaces of the studio monitor 17 and the user terminals 40, 60, and 70.
  • the control program is also, for example, a display device control program that displays an object of the sender's user ID on the display surfaces of the studio monitor 17 and the user terminals 40, 60, and 70. Furthermore, as an example, the control program is a display device control program that, when the performer who was given an item gives a return to the user, displays the return item object on the display screens of the studio monitor 17 and the user terminals 40, 60, and 70.
  • the network IF 28 connects the server 20 and the user terminals 40, 60, 70 via the network 2 such as the Internet.
  • the main memory 29 is, for example, a RAM, and temporarily stores live data being distributed, a control program, and the like.
  • the control unit 30 is a CPU as an example, and controls the overall operation of the server 20.
  • the control unit 30 is a distribution unit that distributes live data to the user terminals 40, 60, and 70 in accordance with a distribution control program.
  • the control unit 30 is an item display control unit that displays the item presented by the user on the floor surface of the studio 10 by the projector 16 according to the item display control program.
  • the control unit 30 is a display device control unit that controls display of the user terminals 40, 60, and 70, and is a display device control unit that controls display of the studio monitor.
  • Such a control unit 30 generates and displays display data for displaying the item object presented by the user, in association with the performers A, B, and C, on the display surfaces of the studio monitor 17 and the user terminals 40, 60, and 70. Further, as an example, the control unit 30 generates display data for displaying the sender's user ID and displays it on the display surfaces of the studio monitor 17 and the user terminals 40, 60, and 70. Furthermore, as an example, when the performer who was presented with an item gives a return to the user, the control unit 30 generates and displays display data for displaying the return item on the display surfaces of the studio monitor 17 and the user terminals 40, 60, and 70.
  • when the item object is displayed on the display screens of the studio monitor 17 and the user terminals 40, 60, and 70, it is displayed at the item position where it should originally exist in the real space of the studio 10. That is, the item position is specified as a position in the real space of the studio 10 by the three-dimensional position information. Even if the orientation of the RGB camera 14 is changed, the item object displayed on the display surfaces of the studio monitor 17 and the user terminals 40, 60, and 70 is displayed at the appropriate item position within the video acquired in that orientation. Further, when the item position is outside the imaging range of the RGB camera 14, the item object is not displayed on the display screens of the studio monitor 17 and the user terminals 40, 60, and 70. On the display surfaces of the studio monitor 17 and the user terminals 40, 60, and 70, the item object is displayed in accordance with the actions of the performers A, B, and C, even when the performers A, B, and C squat down or jump up.
  • the control unit 30 does not have to perform all of the above processes, and may perform some processes in cooperation with other devices.
  • a control device such as a personal computer may be installed in a studio 10, and the control device installed in the studio 10 and the server 20 may be linked to perform a process as described above.
  • the server 20 includes a database 26, a main memory 29, and a control unit 30.
  • the control device includes an audio IF 21, an RGB camera IF 22, a depth camera IF 23, a projector IF 24, a display IF 25, a data storage unit 27, and a network IF 28.
  • the control device may perform processing other than the updating of the database 26, for example, a process of displaying the object of the item presented by the user with the projector 16, or a process of displaying the object on the display surfaces of the studio monitor 17 and the user terminals 40, 60, and 70.
  • a part of the above processing may be performed in cooperation with the user terminals 40, 60, and 70.
  • real space video data acquired by the RGB camera 14, three-dimensional position information acquired by the depth camera 15, and the like are transmitted to the user terminals 40, 60, and 70.
  • the user terminals 40, 60, and 70 detect the movements of the users A, B, and C and, based on the real space video data, the three-dimensional position information, and the detection results of the users' movements, display on their own display surfaces the trajectory of the item up to its final arrival position and the item object.
  • the user terminal 40 is a device managed by the user A, and includes a desktop or laptop personal computer 40a and a smart watch 50.
  • the laptop personal computer 40a includes an audio IF 41, a display IF 42, a network IF 43, a communication IF 44, a data storage unit 45, an operation IF 46, a main memory 47, and a control unit 48.
  • the audio IF 41 is connected to an audio output device such as a speaker, an earphone, and headphones, and an audio input device such as a microphone.
  • the display IF 42 is connected to a display unit 49 configured by a display device such as a liquid crystal display device.
  • the network IF 43 communicates with the server 20 via the network 2 as an example.
  • the communication IF 44 communicates with the smart watch 50 as an example.
  • the communication IF 44 and the smart watch 50 are connected by a wireless LAN or a wired LAN, and acceleration data, angle data, angular velocity data, and the like as user operation information are input from the smart watch 50.
  • the data storage unit 45 is a non-volatile memory, and is, for example, a hard disk or a flash memory.
  • the data storage unit 45 stores a live data reproduction program, a communication control program with the smart watch 50, and the like.
  • the operation IF 46 is connected to an operation device such as a keyboard and a mouse.
  • the main memory 47 is a RAM, for example, and temporarily stores live data being distributed, a control program, and the like.
  • the control unit 48 is a CPU, for example, and controls the overall operation of the user terminal 40. As an example, when playing live data, the control unit 48 transmits to the server 20 selection data for one or a plurality of the performers A, B, and C, and transmits to the server 20 selection data for one or a plurality of items from the list of item objects. Furthermore, as an example, the control unit 48 transmits to the server 20 operation data, such as acceleration data, angle data, and angular velocity data, as the user motion information detected by the smart watch 50.
  • Smart watch 50 is a wristwatch-type information processing terminal that is worn on the wrist of user A's dominant arm as an example.
  • the smart watch 50 includes a sensor 51, a communication IF 52, a data storage unit 53, a main memory 54, and a control unit 55.
  • the sensor 51 is, for example, an acceleration sensor or a gyro sensor.
  • the communication IF 52 transmits acceleration data detected by the sensor 51, angle data of the smart watch 50, and angular velocity data to the personal computer 40a.
  • the sensor 51 transmits operation data such as acceleration data, angle data, angular velocity data, etc. as user motion information related to the swing of the arm from the communication IF 52 to the personal computer 40a.
  • the data storage unit 53 is a non-volatile memory, and is, for example, a hard disk or a flash memory.
  • the data storage unit 53 stores a driver for driving the sensor 51, a communication control program with the personal computer 40a, and the like.
  • the control unit 55 is a CPU as an example, and controls the overall operation of the smart watch 50.
  • the terminal connected to the user terminal 40 may be a small and portable information processing terminal such as a smartphone provided with an acceleration sensor and a gyro sensor instead of the smart watch 50.
  • the user terminal 60 is a device managed by the user B, and is a smart device terminal such as a smartphone or a tablet.
  • the user terminal 60 includes an audio IF 61, a display IF 62, an operation IF 63, a sensor 64, a network IF 65, a data storage unit 66, a main memory 67, and a control unit 68.
  • the audio IF 61 is connected to a sound output device such as a built-in speaker or earphone or a sound input device such as a built-in microphone.
  • the audio IF 61 emits live data from an audio output device.
  • the display IF 62 is connected to a small display unit 69 such as a built-in liquid crystal panel or organic EL panel.
  • the display unit 69 is provided with a touch panel, and the operation IF 63 is connected to the touch panel.
  • the sensor 64 is, for example, an acceleration sensor or a gyro sensor.
  • the network IF 65 communicates with the server 20 via the network 2 as an example. For example, when the user performs an action of throwing an object, the network IF 65 transmits, to the server 20, acceleration data, angle data, and operation data of angular velocity data regarding arm swing as user operation information detected by the sensor 64.
  • the data storage unit 66 is a non-volatile memory, for example, a flash memory.
  • the data storage unit 66 stores a live data reproduction program and the like.
  • the main memory 67 is a RAM, for example, and temporarily stores live data being distributed, a control program, and the like.
  • the control unit 68 is a CPU as an example, and controls the overall operation of the user terminal 60. As an example, when playing live data, the control unit 68 transmits to the server 20 selection data for one or a plurality of the performers A, B, and C, and transmits to the server 20 selection data for one or a plurality of items from the list of item objects. Further, as an example, when the user performs an operation of throwing an object with the user terminal 60, the control unit 68 transmits to the server 20 operation data such as acceleration data, angle data, angular velocity data, and coordinate data of the arm swing.
  • prior to the live distribution, first, in the studio 10, the depth camera 15 acquires depth information of each place in the studio 10, calculates the person areas, and then calculates the skeleton positions in the person areas and their depth information. Thereafter, the depth camera 15 executes motion capture processing. In addition, the user terminals 40, 60, and 70 log in to the server 20 so that the live distribution can be viewed.
  • when the performers A, B, and C start playing, in step S1, the server 20 generates live data as content data. Specifically, real space video data of the performance by the performers A, B, and C in the studio 10, captured by the RGB camera 14, is input to the server 20. In addition, music data is input to the server 20 from the playback device 11, and voice data of the performers A, B, and C is input from the microphone 13. The server 20 then generates the live data from these data.
  • in step S3, the server 20 delivers live data to the user terminals 40, 60, and 70 live. That is, the server 20 delivers performances by the performers A, B, and C to the user terminals 40, 60, and 70 in real time. As a result, as shown in FIG. 3B, the user terminals 40, 60, and 70 display the live video 71 based on the live data on the display surface and output live audio.
  • the server 20 displays an item selection object 72 that displays a list of selectable item objects on the display surface of the user terminal 60 so as to be superimposed on the live video 71 (see FIG. 5A).
  • the item selection object 72 lists in a line, in order from the left, an object 72a indicating a bouquet item, an object 72b indicating an item that adds an effect for directing the performer's action, an object 72c indicating a cat-ear headband item, and an object 72d indicating an item of a background image for the live distribution.
  • the items listed in the item selection object 72 are items prepared by the operator.
  • the prepared items may be different for each live, or may be common to all live. Moreover, some items may overlap in a plurality of live performances.
  • the price of each item is managed in association with the item ID.
  • the server 20 stores video data, image data, audio data, music data, and the like as item data for displaying the item in association with the item ID.
  • the item data is three-dimensional data as an example.
  • Each item is charged, the amount is determined according to the item, and the price is associated with the item ID.
  • the item of the bouquet with the item ID “A” is 200 yen.
  • the item to which the effect of item ID “B” is added is 300 yen.
  • the item of the cat ear headband of item ID “C” is 500 yen.
  • the item “D” of the background image with the item ID “D” is 1000 yen.
  • user A shown in the database of FIG. 2 purchases a bouquet of item ID “A” for 200 yen
  • user B purchases a headband of item ID “C” for 500 yen.
  • the user C has purchased the effect of the item ID “B” for 300 yen.
  • the users A, B, and C can present items to the performers A, B, and C by purchasing these items through the user terminals 40, 60, and 70.
  • the performers A, B, C and the operator can obtain sales corresponding to the items presented by the users A, B, C.
  • all items presented by the users A, B, and C become sales for the performers A, B, and C and their operators.
  • one user may purchase one item in one live, and may purchase several items.
  • the total amount of purchases corresponding to the performers of each user is managed.
  • the server 20 managed by the operator can perform a process of giving a preferential return from the performer to a user who has purchased many items.
  • the user terminal 60 sends item selection data including the user ID and the item ID of the selected item to the server 20.
  • the server 20 performs an item selection process based on the item selection data.
  • the server 20 transmits, to the user terminal 60, selection data for displaying only the object 72c of the selected item on the user terminal 60, and the object 72c is displayed superimposed on the live video 71 on the display surface of the user terminal 60.
  • FIG. 5B shows a state in which the server 20 selects an object 72c indicating a headband item and displays it on the lower corner of the display surface of the user terminal 60 as an example.
  • the server 20 displays a similar display on the studio monitor 17 in order to inform the performers A, B, and C that the item selection process is currently in progress.
  • the server 20 displays the performer selection object 73 surrounding the performers A, B, and C one by one on the display surface of the user terminal 60 as an example.
  • a first teaching object 73a, which informs the user B of the performer selection operation to be performed next, is also displayed.
  • the user terminal 60 transmits performer selection data including the user ID and the performer ID of the selected performer to the server 20.
  • the server 20 performs a performer selection process based on the performer selection data.
  • the server 20 displays the performer determination object 74 for the selected performer A so as to be superimposed on the live video 71 on the display surface of the user terminal 60.
  • FIG. 5C shows, as an example, a state in which the server 20 has selected the performer A and displays this on the display screen of the user terminal 60.
  • the performer selection object 73 and the performer determination object 74 may be rectangular, but are not limited thereto, and may be, for example, circular or triangular.
  • the server 20 displays the same on the studio monitor 17 in order to inform the performers A, B, and C that they are currently in the performer selection process.
  • the studio monitor 17 displays a second teaching object 74a indicating that the item selected by the user B is about to be sent.
  • the server 20 registers the selected item and the performer in the database 26 when the item and the performer are selected. Thereafter, the user of the user terminal 60 is in a state in which items can be presented to the performers A, B, and C in the studio 10 while operating the user terminal 60. Specifically, the user B of the user terminal 60 can have the pseudo-experience of throwing the selected item toward the performer he or she has selected by making a throwing motion while holding the user terminal 60 in his or her hand. Specifically, the server 20 starts synchronization processing with the user terminal 60, and the user terminal 60 transmits to the server 20, for each unit time, operation data such as acceleration data, angle data, and angular velocity data as the user motion information detected by the sensor 64.
  • in step S14, the server 20 stores a threshold value for determining that the user has performed a throwing motion, and determines that the throwing motion has been performed at the user terminal 60 when the threshold value is exceeded.
  • the server 20 stores threshold values such as acceleration data, angle data, and angular velocity data in order to specify a throwing motion.
  • the server 20 determines that a throwing motion has been performed when acceleration data, angle data, angular velocity data, etc. exceed a threshold value. Further, when the distance between the start point and the end point when the touch panel swipe operation is performed exceeds a threshold value, it is determined that the throwing operation has been performed.
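  • A minimal sketch of this threshold rule; the threshold values themselves are placeholder assumptions, since the patent only states that thresholds are stored:

```python
import math

ACCELERATION_THRESHOLD = 15.0      # m/s^2, placeholder
ANGULAR_VELOCITY_THRESHOLD = 6.0   # rad/s, placeholder
SWIPE_DISTANCE_THRESHOLD = 200.0   # pixels on the touch panel, placeholder

def is_throwing_motion(acceleration, angular_velocity,
                       swipe_start=None, swipe_end=None) -> bool:
    """Return True when any of the detection criteria exceeds its threshold."""
    if math.hypot(*acceleration) > ACCELERATION_THRESHOLD:
        return True
    if math.hypot(*angular_velocity) > ANGULAR_VELOCITY_THRESHOLD:
        return True
    if swipe_start is not None and swipe_end is not None:
        if math.dist(swipe_start, swipe_end) > SWIPE_DISTANCE_THRESHOLD:
            return True
    return False
```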
  • in step S15, the server 20 analyzes the direction and speed of the user's arm swing based on the operation data, such as acceleration data, angle data, and angular velocity data related to the arm swing, transmitted from the user terminal 60. Thereby, the server 20 calculates the item position, that is, the trajectory and drop position of the thrown item.
  • the item position can be specified by a three-dimensional coordinate system having the detection unit of the depth camera 15 as the origin.
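  • The patent does not specify the physics model; one plausible sketch of turning the analyzed swing direction and speed into a drop position, using simple projectile motion in that coordinate system, is the following:

```python
import math

GRAVITY = 9.81  # m/s^2

def drop_position(release_height: float, speed: float,
                  elevation_deg: float, azimuth_deg: float):
    """Return (x, z) on the studio floor where the thrown item lands, with x
    lateral and z the distance from the depth camera's detection unit."""
    elev = math.radians(elevation_deg)
    v_horizontal = speed * math.cos(elev)
    v_vertical = speed * math.sin(elev)
    # time until the item falls back to floor level (y = 0)
    t = (v_vertical + math.sqrt(v_vertical ** 2 + 2 * GRAVITY * release_height)) / GRAVITY
    distance = v_horizontal * t
    az = math.radians(azimuth_deg)
    return (distance * math.sin(az), distance * math.cos(az))
```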
  • in step S16, based on the analysis result, the server 20 generates display data of a falling object 75 indicating the falling headband item to be displayed on the display surface of the user terminal 60, and transmits the display data to the user terminal 60.
  • the falling object 75 is displayed in real time on the display surface of the user terminal 60 so as to come toward the performer A.
  • similar display data of the falling object 75 is transmitted to the studio monitor 17 of the studio 10, and the falling object 75 is also displayed on the studio monitor 17 in real time.
  • when the falling object 75 is displayed on the display screens of the studio monitor 17 and the user terminal 60, it is displayed at the item position where the item should originally exist in the real space of the studio 10.
  • the item position in the real space of the studio 10 is specified by the three-dimensional position information. Therefore, even if the orientation of the RGB camera 14 is changed, the falling object 75 is displayed at an appropriate item object position in the image acquired in the orientation of the RGB camera 14. Further, when the item position is out of the imaging range of the RGB camera 14, the falling object 75 is not displayed.
  • the process of displaying the falling object 75 on the studio monitor 17 may be omitted because the falling object 75 is displayed on the floor surface of the studio 10 by the projector 16 in the next step S17.
  • in step S17, the server 20 transmits the display data of the falling object 75 to the projector 16, and the projector 16 displays in real time, on the floor surface of the studio 10, the object of the item flying toward the performer A and the object of the item that has fallen at the item position.
  • thereby, the performers A, B, and C can grasp the falling position of the falling object 75.
  • the falling object 75 only needs to be displayed at least at the item position, and it is not necessary to display its state or locus while it flies toward the item position. Further, the falling object 75 may be displayed so that the item position does not fall within the performer action range in which the performers A, B, and C act. The object may enter the performer action range on its way to the item position, but the falling object 75 may be displayed so that the object of the item presented by the user does not finally enter the performer action range. Further, even if the item position calculated by detecting the operation of presenting the item to the performer (for example, throwing) is located within the performer action range, the object may not be displayed within the performer action range.
  • in that case, the object is displayed near the outside of the performer action range (at the position outside the performer action range that is closest to the calculated item position). According to such a display form, it is possible to prevent the performers A, B, and C from accidentally stepping on the presented items.
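  • A minimal sketch of this display rule, assuming for illustration that the performer action range is modelled as an axis-aligned rectangle on the floor:

```python
def clamp_outside_range(item_xy, range_min, range_max, margin=0.1):
    """If the calculated item position lies inside the performer action range,
    return the nearest point just outside that range; otherwise return it unchanged."""
    x, y = item_xy
    (min_x, min_y), (max_x, max_y) = range_min, range_max
    inside = min_x <= x <= max_x and min_y <= y <= max_y
    if not inside:
        return item_xy  # already outside: display at the calculated position
    # push the point out through the nearest edge of the rectangle
    candidates = [
        (x - min_x, (min_x - margin, y)),   # left edge
        (max_x - x, (max_x + margin, y)),   # right edge
        (y - min_y, (x, min_y - margin)),   # near edge
        (max_y - y, (x, max_y + margin)),   # far edge
    ]
    _, nearest = min(candidates, key=lambda c: c[0])
    return nearest
```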
  • the performer action range is, for example, a stage in the studio 10 or the like.
  • the performer action range may be set to a range that is different between the periods of the prelude, interlude, and postlude in one song, and other periods.
  • different ranges may be set for the period during which a song is played and the period during which no song is played (for example, during the period in which a song is played, the performer action range is the range in which the performer acts in accordance with the song, and during the period in which no song is played, no performer action range is set).
  • different ranges may be set according to the music being played, or the same range may be set throughout the live distribution period. Not only the performance but also a different range may be set for the acting period and the period before and after the acting.
  • the server 20 determines that the performer A has picked up the headband item. As an example, in order to identify the action of picking up an item that has fallen on the floor of the studio 10, the server 20 stores threshold values relating to the distance between the skeleton position of either the left or right hand and the skeleton position of either the left or right foot, and to the distance between the skeleton position of either the left or right hand and the floor surface.
  • when the calculated distance between the skeleton position of the left or right hand and the skeleton position of the left or right foot, or the distance between the skeleton position of the left or right hand and the floor surface at the item position, or the like, exceeds its threshold value, the server 20 determines that the performer A has picked up the headband item. As an example, the server 20 determines that the performer A has picked up the headband item when the position of either the left or right hand overlaps the item position on the floor. In other words, it is determined whether the performer is positioned within the range of the object of the item presented to the performer by the user.
  • the range of the object of the item is determined by the three-dimensional position information, the user action information, and the item type.
  • The determination of whether the position of either the left or right hand overlaps the item position on the floor is made, as an example, by determining whether the three-dimensional information (one point) of the performer's hand position and the three-dimensional information (one point) of the item position overlap. As another example, it is determined whether the three-dimensional information (plural points) of the performer's hand position overlaps the three-dimensional information (one point) of the item position, or whether the three-dimensional information (one point) of the performer's hand position overlaps the three-dimensional information (plural points) of the item position. Furthermore, as an example, it is determined whether the three-dimensional information (plural points) of the performer's hand position overlaps the three-dimensional information (plural points) of the item position. Further, as an example, it is determined whether the three-dimensional information of the performer's hand position (a region such as the fingertips) overlaps the three-dimensional information of the item position (the region where the falling object 75 is displayed).
  • Whether the position of either the left or right hand overlaps the item position on the floor is easier to determine when the three-dimensional information of the performer's hand position and the three-dimensional information of the item position are treated not as single points but as plural points or regions, and their overlap is checked. A sketch of such a region-based check follows.
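  • The following Python sketch illustrates one possible region-based overlap test of the kind described above. The radii, coordinate conventions, and helper names are assumptions for illustration; the patent itself does not prescribe a concrete formula.
```python
import math

def regions_overlap(hand_points, item_center, item_radius, hand_radius=0.05):
    """Return True if any sampled hand point (e.g. fingertip skeleton points,
    given as (x, y, z) in metres) comes within item_radius + hand_radius of
    the centre of the region where the falling object 75 is displayed."""
    for hx, hy, hz in hand_points:
        dist = math.dist((hx, hy, hz), item_center)
        if dist <= item_radius + hand_radius:
            return True
    return False


def picked_up(hand_points, item_center, item_radius, floor_z=0.0, floor_margin=0.15):
    """Combine two cues mentioned in the text: the hand is near the floor
    and the hand region overlaps the displayed item region."""
    near_floor = any(abs(hz - floor_z) < floor_margin for _, _, hz in hand_points)
    return near_floor and regions_overlap(hand_points, item_center, item_radius)


# Example: a fingertip 3 cm above the floor, about 4 cm from the item centre.
print(picked_up([(1.02, 0.50, 0.03)], item_center=(1.0, 0.5, 0.0), item_radius=0.1))
```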
  • In step S22, the server 20 performs control to hide the falling object 75 displayed on the floor surface of the studio 10. This is because, once the item has been picked up by the performer A, the item should disappear from the floor.
  • In step S23, the server 20 analyzes the acquisition operation of the performer A. That is, the server 20 analyzes, from each skeleton position of the performer A, the depth information of the skeleton positions, and the position of the face of the performer A, the motion from picking up the headband item to mounting it on the head.
  • In step S24, based on the analysis result, the server 20 generates display data of the acquisition object 76 showing the headband item from the moment it is picked up until it is worn on the head, for the display surfaces of the studio monitor 17 and the user terminal 60, and transmits the display data to the studio monitor 17 and the user terminal 60. Thereby, as an example, the acquisition object 76 is displayed on the display surfaces of the studio monitor 17 and the user terminal 60 in association with the hand that picked it up, until the acquisition object 76 moves from the item position on the floor surface to the head.
  • In step S25, the server 20 analyzes the mounting operation in which the performer A mounts the headband item on the head. That is, the server 20 analyzes the operation of mounting the headband item on the head from each skeleton position of the performer A, the depth information of the skeleton positions, and the position of the face of the performer A. As an example, the server 20 detects the mounting operation when the position of either the left or right hand overlaps the position of the head.
  • In step S26, the server 20 generates display data for displaying the acquisition object 76, on the display surfaces of the studio monitor 17 and the user terminal 60, at the mounting position on the head of the performer A, B, or C, and transmits the display data to the studio monitor 17 and the user terminal 60.
  • the server 20 generates display data for displaying the acquisition object 76 along the boundary between the hair color and the background.
  • the state where the acquisition object 76 is mounted on the head of the performer A is displayed on the display surfaces of the studio monitor 17 and the user terminal 60 (see FIG. 7B).
  • the server 20 tracks the head of the performer A and displays the headband item as always being worn even if the performer A moves.
  • The performer A may turn sideways depending on the choreography. Even in such a case, the server 20 displays the acquisition object 76 in accordance with the orientation of the performer A (see FIG. 7C).
  • The orientation of the performer A can be determined by detecting the face of the performer A from the display data from the RGB camera 14 and by calculating the skeleton positions of the performer A from the depth camera 15; the data for displaying the object of the item is also three-dimensional data and can be displayed from any orientation. Based on these data, when it is detected that the performer A has turned sideways, the orientation of the headband object is also changed in accordance with the orientation of the performer A.
  • Similarly, the acquisition object 76 is displayed in accordance with the action of the performer A even when the performer A squats down or jumps up. A sketch of this head tracking follows.
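  • Purely as an illustrative sketch (the skeleton format, the yaw estimate from the shoulder line, and the renderer call are assumptions not taken from the patent), the headband object could be kept attached to the tracked head like this:
```python
import math

def head_pose_from_skeleton(skeleton):
    """Estimate head position and yaw from skeleton data.

    skeleton -- dict mapping joint names to (x, y, z) positions, e.g. the
                25 joints obtained from the depth camera 15.
    Returns (head_position, yaw_radians), where yaw is derived from the
    left/right shoulder line as a rough stand-in for face orientation.
    """
    head = skeleton["head"]
    ls, rs = skeleton["shoulder_left"], skeleton["shoulder_right"]
    yaw = math.atan2(rs[2] - ls[2], rs[0] - ls[0])  # rotation about the vertical axis
    return head, yaw


def update_headband(renderer, skeleton, headband_offset=(0.0, 0.12, 0.0)):
    """Re-position and re-orient the 3D headband object every frame so that it
    appears worn even while the performer moves, squats, jumps, or turns.
    `renderer.place_object` is a hypothetical rendering call."""
    head, yaw = head_pose_from_skeleton(skeleton)
    position = tuple(h + o for h, o in zip(head, headband_offset))
    renderer.place_object("acquisition_object_76", position=position, yaw=yaw)
```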
  • When the selected performer acquires the item in this way, the server 20 registers the successful acquisition in the database 26.
  • In step S27, the server 20 displays the ID object 76a of the user ID indicating the user B, who presented the headband item to the performer A, on the display surfaces of the studio monitor 17 and the user terminal 60.
  • Thereby, the performer A can visually recognize the user ID of the user who presented the headband item, and the user B can also visually confirm, through the display of his/her own user ID, that the headband item he/she presented is worn by the performer A.
  • Note that the ID object 76a may also be displayed on the floor surface of the studio 10 by the projector 16.
  • The period in which the item object is displayed in association with the performer may be the entire remaining period of the live distribution after acquisition, or may be set for each song. Further, the item object may be hidden during interludes.
  • Further, an item once worn by the performer can be removed and stored or placed in a box (the box may be a real object installed in the studio, or a virtual object displayed in the same way as the items).
  • the performer can wear a plurality of presents.
  • the performer can wear a plurality of headbands at once, or the headband previously worn by the performer can be removed and a new headband can be picked up and worn.
  • Further, the box can be used like a desk or a storage box, so that an operation such as replacing a headband can be staged without a sense of incongruity.
  • In step S31, the server 20 determines whether the music being played live has entered an interlude. For example, the server 20 can determine that an interlude has started when there is no voice input from the microphone 13 for a predetermined period. As another example, the server 20 can determine that an interlude has started when a detection signal indicating the start of an interlude is input from the playback device 11. As yet another example, the start of an interlude can be determined by detecting an action indicating that an interlude has started. When an interlude is detected, as an example, the server 20 starts synchronization processing with the user terminal 60 so that the display on the user terminal 60 and the return receiving operation performed on the user terminal 60 can be detected.
  • Note that the return process may be performed between songs instead of during an interlude.
  • Alternatively, it may be performed between the Nth act and the (N+1)th act.
  • The end of the interlude can be determined when there is voice input from the microphone 13 again, or when a detection signal indicating the end of the interlude is input from the playback device 11. It can also be determined that the interlude has ended by detecting an action indicating that the interlude has ended. A sketch of the silence-based interlude detection follows.
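  • The following is only a rough sketch of the silence-based interlude detection described above; the sampling interval, the threshold, and the helper that reads the microphone level are assumptions.
```python
import time

SILENCE_THRESHOLD = 0.01   # assumed RMS level below which input counts as silence
SILENCE_PERIOD = 3.0       # assumed "predetermined period" in seconds

def wait_for_interlude(read_mic_rms, poll_interval=0.1):
    """Block until no voice input has been observed for SILENCE_PERIOD seconds.

    read_mic_rms -- callable returning the current RMS level of microphone 13
                    (a hypothetical helper; any audio capture API could supply it).
    """
    silent_since = None
    while True:
        level = read_mic_rms()
        if level < SILENCE_THRESHOLD:
            if silent_since is None:
                silent_since = time.monotonic()
            elif time.monotonic() - silent_since >= SILENCE_PERIOD:
                return  # interlude assumed to have started
        else:
            silent_since = None  # voice resumed, reset the timer
        time.sleep(poll_interval)
```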
  • In step S27 described above, the server 20 displays the user ID of the user B, who presented the headband item to the performer A, on the display surfaces of the studio monitor 17 and the user terminal 60. Therefore, the performer A calls out, toward the microphone 13, the user ID of the user B who gave the headband item. Then, in step S32, the server 20 performs voice recognition on the audio data collected by the microphone 13 and identifies the user ID of the user B. Note that the server 20 registers in the database 26 the user who is to receive the return.
  • Next, the server 20 detects the return operation of the performer A. For example, the server 20 detects a characteristic specific action indicating that the performer is shifting to a return action. As an example, the server 20 stores a threshold value for each skeleton position for determining the specific action, and determines that the performer A has performed the specific action when the data of each skeleton position exceeds the threshold value.
  • The return item from the performer A to the user B is, for example, a sign ball of the performer A. Following the specific action, the performer A makes the motion of throwing the sign ball from the studio 10 toward the user B, who is not actually in the studio 10.
  • In step S34, the server 20 analyzes the return operation from each skeleton position of the performer A, the depth information of the skeleton positions, and the position of the face of the performer A.
  • In step S35, as an example, the server 20 generates display data for displaying a return object 77 representing the sign ball at the position of the left or right hand of the performer A at the start of the throwing motion, for the display surfaces of the studio monitor 17 and the user terminal 60, and transmits the display data to the studio monitor 17 and the user terminal 60. Thereby, as shown in FIG. 9A, the return object 77 is displayed in real time on the display surfaces of the studio monitor 17 and the user terminal 60, as an example.
  • Further, the server 20 generates display data for displaying a receiving object 78 imitating the hand of the user B on the display surfaces of the studio monitor 17 and the user terminal 60, and transmits the display data to the studio monitor 17 and the user terminal 60. The receiving object 78 serves as a virtual target at which the sign ball is thrown.
  • In step S36, the server 20 analyzes the throwing motion by the performer A. Specifically, the server 20 detects the swing of the performer A's arm from each skeleton position of the performer A, the depth information of the skeleton positions, and the position of the face of the performer A.
  • In step S37, the server 20 generates display data for displaying the return object 77 at the position of the left or right hand of the performer A during the throwing motion, and display data for displaying the return object 77 flying away from the left or right hand, for the display surfaces of the studio monitor 17 and the user terminal 60, and transmits them to the studio monitor 17 and the user terminal 60. Thereby, as shown in FIG. 9B, as an example, the display surfaces of the studio monitor 17 and the user terminal 60 show in real time the sign ball being thrown in the direction of the receiving object 78.
  • an effect may be given to the arm of the performer A who throws the sign ball.
  • This effect is produced by detecting the movement of the performer and displaying, as an effect corresponding to the detected movement, a visual effect at the downstream edge of the movement direction of the performer A's arm.
  • As an example, when the return operation is detected, the effect is displayed in association with the arm with which the performer A throws the sign ball. Then, when the operation shifts to the throwing motion, the effect is displayed at the downstream edge of the arm's movement direction in accordance with the movement of the swinging arm. Further, when the return operation is detected, the background image may be changed to a specific image displayed during the return process. A sketch of placing such an effect at the edge of the arm movement follows.
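  • As an illustrative sketch only (the offset, smoothing, and interpretation of "downstream edge" are assumptions), the effect position could be derived from the hand skeleton positions of two consecutive frames like this:
```python
def effect_position(prev_hand, curr_hand, lead=0.1):
    """Place the effect slightly ahead of the hand along its direction of motion.

    prev_hand, curr_hand -- (x, y, z) hand skeleton positions of two consecutive frames
    lead                 -- how far from the hand (in metres) the effect is drawn;
                            use a negative value to trail behind the hand instead
    """
    dx = [c - p for c, p in zip(curr_hand, prev_hand)]
    length = sum(d * d for d in dx) ** 0.5
    if length == 0:
        return curr_hand  # arm not moving: draw the effect on the hand itself
    direction = [d / length for d in dx]
    return tuple(c + lead * d for c, d in zip(curr_hand, direction))


# Example: the hand moved forward along x, so the effect leads it along x.
print(effect_position((0.0, 1.2, 0.5), (0.1, 1.2, 0.5)))  # -> (0.2, 1.2, 0.5)
```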
  • the user terminal 60 transmits the received data including the user ID to the server 20.
  • The receiving operation is an operation of catching the ball; as an example, it is an operation of clicking an arbitrary position on the screen, or the receiving object 78, with the mouse.
  • As another example, the receiving operation is an operation of touching the touch panel.
  • When the server 20 receives the receiving data, it registers in the database 26 that the return has been received. At this time, the server 20 generates display data for displaying the state in which the return object 77 is caught by the receiving object 78 on the display surfaces of the studio monitor 17 and the user terminal 60, and transmits the display data to the studio monitor 17 and the user terminal 60. Thereby, as shown in FIG. 9C, the state in which the sign ball is caught by the hand is displayed on the display surfaces of the studio monitor 17 and the user terminal 60, as an example.
  • When the receiving operation is not successfully performed, receiving failure data including the user ID is transmitted to the server 20, and when the server 20 receives the receiving failure data, the fact that receipt of the return has failed is registered in the database 26. A sketch of this receiving flow follows.
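  • A minimal server-side sketch of the receiving flow described above (the message format, the database helper, and the display helper are all assumptions, not interfaces defined by the patent):
```python
def handle_receive_message(db, display, message):
    """Process a receiving report sent from a user terminal.

    message -- assumed dict such as {"user_id": "userB", "caught": True}
    db      -- object wrapping database 26 (hypothetical interface)
    display -- object that pushes display data to studio monitor 17 and the
               user terminals (hypothetical interface)
    """
    user_id = message["user_id"]
    if message.get("caught"):
        db.register_return_received(user_id)
        # Show the return object 77 caught by the receiving object 78 (FIG. 9(c)).
        display.show_catch(user_id)
    else:
        db.register_return_failed(user_id)


# Example usage with stand-in objects:
# handle_receive_message(database_26, display_controller,
#                        {"user_id": "userB", "caught": True})
```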
  • As shown in FIG. 10A, an effect object 81 is added to the selected performer A.
  • For the video of the effect object 81, the movement of the performer is detected, and as an effect corresponding to the detected movement, a large number of blinking star-shaped figures are displayed moving at the downstream edge of the movement direction of the performer A's arm.
  • Such an effect object 81, unlike the above-mentioned headband, is not a tangible object. For this reason, as shown in FIG. 10B, when it is thrown to the selected performer A, a box object 82 with a present ribbon or the like is used.
  • As shown in FIG. 10C, when the performer A acquires the thrown item, that is, when the position of either the left or right hand overlaps the item position, the box object 82 is hidden, and thereafter control is performed to display the effect object 81.
  • Even when the performer A squats down or jumps up, the effect is displayed in accordance with the action of the performer A.
  • the effect may be changed before and after the performers A, B, and C jump.
  • As an example, the effect is displayed as blinking star shapes before the jump, and figures of a different shape are displayed blinking after the jump.
  • As an example, a plurality of specific actions are defined in advance, and when one specific action is detected, a display for giving the specific effect associated with that action is performed. As an example, when a specific action is detected, the display for applying the effect is stopped. Further, as an example, the display for applying the effect is not performed until a specific action is detected. A sketch of such an action-to-effect mapping follows.
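  • The following sketch illustrates one way the predefined specific actions could be mapped to effect changes; the action names, effect names, and the display interface are assumptions used only for illustration.
```python
# Hypothetical mapping from detected specific actions to effect commands.
ACTION_EFFECTS = {
    "jump_start": ("show", "blinking_stars"),
    "jump_land":  ("show", "blinking_diamonds"),   # a different shape after the jump
    "stop_pose":  ("hide", None),                  # a specific action can stop the effect
}

class EffectController:
    def __init__(self, display):
        self.display = display        # pushes display data to the terminals (hypothetical)
        self.current_effect = None    # no effect until a specific action is detected

    def on_specific_action(self, action: str):
        """Called whenever the analysis of skeleton data detects a specific action."""
        command = ACTION_EFFECTS.get(action)
        if command is None:
            return
        verb, effect = command
        if verb == "show":
            self.current_effect = effect
            self.display.show_effect(effect)
        else:
            self.current_effect = None
            self.display.hide_effect()
```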
  • FIG. 10D shows a state in which an object 72d indicating an item of a live distribution background image is selected.
  • The background image object 72d is not a tangible object like the above-described headband. For this reason, when it is thrown to the selected performer A, it is preferable to use the present box object 82.
  • a present box object 82 may be displayed.
  • Note that the return object 77 may be displayed only on the user terminal 40, 60, or 70 of the user to whom the return is made. Thereby, one-to-one communication between the performer and the user can be realized.
  • According to the live distribution system described above, the effects listed below can be obtained.
  • (User desire) For example, a user purchases an item and presents it to the performer out of a desire for the performer to receive it, and that desire continues until the performer actually receives the item.
  • The user wants to increase the possibility that the performer will receive the item, and therefore tries to throw the item close to the performer.
  • (Competitive consciousness between users) For example, a competitive consciousness arises over whether one's item has been received by the performer or whether one has received a return. Thereby, item purchases by users can be prompted. In this way, the profits of the operator and the performers can be increased.
  • The performers A, B, and C can visually recognize the user IDs of the users A, B, and C who presented the items. (6) The performers A, B, and C can give a return to the users A, B, and C who gave them presents. Thereby, two-way communication can be realized between the performers A, B, and C and the users.
  • Return items can also be displayed on the user terminals 40, 60, and 70 according to the actions of the performers A, B, and C. Furthermore, by making it possible to perform an operation for catching a return item with good timing on the user terminals 40, 60, and 70, it is possible to further enhance entertainment.
  • The above live distribution system can also be modified and implemented as follows, as appropriate.
  • When the performers A, B, and C give a return to the users A, B, and C who gave presents, the performers A, B, and C need not perform a return operation using the return object 77.
  • a tangible gift for return may be mailed to the users A, B, and C who gave the gift.
  • the process of the live delivery system 1 can be simplified.
  • tangible gifts may be mailed to users A, B, and C at a later date.
  • The mailed gift may be an actual sign ball, goods such as colored paper (shikishi) or performer merchandise, albums such as CDs and DVDs, concert coupons, and the like.
  • the sender in this case may be performers A, B, and C, or may be the operator of this system.
  • Alternatively, the user may not receive a tangible present (nothing is mailed).
  • Performers A, B, and C do not have to give back to users A, B, and C who gave them presents. That is, the server 20 may omit the return process, and even if an item is received, the return item need not be mailed.
  • the user ID may not be managed in association with the gift item. Thereby, the process of the live delivery system 1 can be simplified.
  • When the user terminal is provided with a touch panel, like the user terminal 60, the operation of presenting an item to the selected performer A, B, or C may be performed by tracing the display surface on which the live data is displayed in the direction of the performers A, B, and C with a finger or a stylus pen. In this case, the user terminal does not require an acceleration sensor or a gyro sensor.
  • Alternatively, the operation of presenting an item to the selected performer A, B, or C may be performed by moving the pointer with a mouse in the direction of the displayed performers A, B, and C. A sketch of converting such a trace or drag into throw data follows.
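  • Only as an illustration of how such a trace or pointer drag could be turned into the operation data sent to the server (the scaling constant and the message layout are assumptions):
```python
import math

def trace_to_throw_data(start, end, duration_s, user_id, item_id, performer_id,
                        speed_to_strength=0.002):
    """Convert a swipe/drag on the live view into throw operation data.

    start, end  -- (x, y) screen coordinates in pixels at the start and end of the trace
    duration_s  -- how long the trace took, in seconds
    Returns a dict that a user terminal could send to the server in place of
    acceleration/angle/angular-velocity data.
    """
    dx, dy = end[0] - start[0], end[1] - start[1]
    distance = math.hypot(dx, dy)
    speed = distance / max(duration_s, 1e-3)             # pixels per second
    return {
        "user_id": user_id,
        "item_id": item_id,
        "performer_id": performer_id,
        "direction_deg": math.degrees(math.atan2(dy, dx)),
        "strength": speed * speed_to_strength,            # dimensionless throw strength
    }


print(trace_to_throw_data((100, 400), (300, 100), 0.25, "userB", "headband", "A"))
```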
  • The falling object of the item thrown to the performers A, B, and C only needs to be displayed at least at the item position, and the trajectory until the falling object reaches the item position may be omitted.
  • The real space in which the performers perform is not limited to the studio 10 and may be, for example, a live venue or a concert venue.
  • In this case, the projector 16 displays the item object on the stage, and the user, while in the audience seats, performs the operation of throwing an item to present it to the performer using his/her small portable information processing terminal, such as the user terminal 60.
  • the means for displaying the item object on the studio 10 is not limited to the projector 16.
  • For example, the floor of the studio 10 may be configured by arranging a plurality of flat display panels, such as liquid crystal display panels, with their display surfaces facing upward to form the floor surface and laying a transparent synthetic resin plate over the display surfaces, and the object of the item may be displayed on these panels. Further, the item position may simply be indicated by a laser pointer.
  • Alternatively, the item may be displayed using an aerial display technology or an aerial imaging technology.
  • the item may be displayed as a two-dimensional image (computer graphics (CG)) or a three-dimensional image (computer graphics (CG)).
  • an object of an item may be represented by spreading a large number of rods on the floor surface and raising and lowering the rods in a direction perpendicular to the floor surface to change the floor surface into a wave shape.
  • the means for displaying the item object on the studio 10 may be a combination of these devices.
  • Return items may be registered in the database 26, and the return items may be thrown to the performer at the next live.
  • Such items are non-sale items that cannot be purchased by the user.
  • control may be performed so that the performer wears the item with priority.
  • the non-sold item may be a decoration, an effect, or a background image.
  • The detection of a specific action by a performer or a user is not limited to determination (detection) based on the detection unit of the smart watch 50 or the detection unit of the smart device terminal 60a. For example, the determination may be made by calculating an inter-frame difference or a motion vector from video acquired by a camera.
  • a camera having a video shooting function such as a web camera or a video camera is installed in front of the user.
  • The camera here is, for example, a web camera built into a laptop personal computer, a web camera or video camera connected to a desktop personal computer, or a camera built into a smart device terminal.
  • The user terminals 40, 60, and 70, the server 20, or another device calculates motion data of the user's throwing action based on the inter-frame differences between the frames constituting the video data, and detects the throwing motion based on that motion data. As another example, the motion vector of an object relative to a reference frame is detected, and the user's throwing motion is detected from it. A sketch of the frame-difference approach follows.
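  • A rough sketch of frame-difference-based throw detection, assuming grayscale frames supplied as NumPy arrays; the thresholds and the simple rule for deciding that a throw occurred are assumptions, not the patent's algorithm.
```python
import numpy as np

MOTION_PIXEL_THRESHOLD = 30     # assumed per-pixel intensity change counted as motion
THROW_MOTION_RATIO = 0.05       # assumed fraction of moving pixels that indicates a throw

def frame_difference_motion(prev_gray: np.ndarray, curr_gray: np.ndarray) -> float:
    """Return the fraction of pixels whose brightness changed noticeably."""
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    return float(np.count_nonzero(diff > MOTION_PIXEL_THRESHOLD)) / diff.size


def detect_throw(frames) -> bool:
    """Detect a throwing motion as a burst of large motion between frames.

    frames -- iterable of grayscale frames (2-D uint8 arrays) from the user's camera.
    """
    prev = None
    for frame in frames:
        if prev is not None:
            if frame_difference_motion(prev, frame) > THROW_MOTION_RATIO:
                return True
        prev = frame
    return False
```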
  • In this case as well, the trajectory of the item and the item itself are displayed at the item position on the floor of the studio 10 and on the display surfaces of the user terminals 40, 60, and 70 so that the performers A, B, and C can recognize them.
  • The action in which the performers A, B, and C acquire the items presented to them by the users A, B, and C may also be detected using the image analysis based on the inter-frame difference and the motion vector described above.
  • That is, the image analysis described above detects the performers A, B, and C squatting and bending over when picking up an item, and touching the item or the item position where the item is displayed. Thereafter, processing for attaching the item to the performers A, B, and C or adding an effect to them can be performed.
  • the operation of wearing items acquired by the performers A, B, and C may be detected using the image analysis or the like.
  • the movement of the item to the heads of the performers A, B, and C can be detected by the image analysis.
  • an operation in which performers A, B, and C return items to users A, B, and C may be detected using the image analysis process.
  • performers A, B, and C can be detected in the studio 10 by the image analysis.
  • That is, the actions performed by the performers A, B, and C can be detected by the image analysis process without using depth information, and the actions performed by the users A, B, and C can likewise be detected by the image analysis process.
  • As the motion data, it is not necessary to use all of the acceleration data, angle data, and angular velocity data; at least the acceleration data may be used. This is because the flying distance of the thrown item can be calculated from the acceleration data. A sketch of such a calculation follows.
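  • As a purely illustrative sketch (the patent gives no formula), the flying distance could be estimated by integrating the acceleration samples of the swing into a release speed and applying a simple projectile model; the sampling rate, launch angle, and gravity handling are assumptions.
```python
import math

G = 9.81  # m/s^2

def flying_distance(accel_samples, dt=0.01, launch_angle_deg=45.0):
    """Estimate how far a virtually thrown item flies.

    accel_samples    -- acceleration magnitudes (m/s^2) measured during the swing,
                        e.g. from the smart watch 50 or the smart device terminal 60
    dt               -- sampling interval in seconds
    launch_angle_deg -- assumed release angle of the throw
    """
    # Integrate acceleration over the swing to get an approximate release speed.
    speed = sum(a * dt for a in accel_samples)
    angle = math.radians(launch_angle_deg)
    # Ideal projectile range on flat ground: v^2 * sin(2*theta) / g.
    return speed * speed * math.sin(2 * angle) / G


# Example: a 0.3 s swing at roughly 20 m/s^2 gives a release speed near 6 m/s.
print(round(flying_distance([20.0] * 30), 2))  # -> about 3.67 m
```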
  • the studio monitor 17 may be omitted.
  • In this case, the ID object 76a of the user ID of the user who threw the item object may be displayed by the projector 16.
  • As an example, the server 20 randomly extracts user terminals, and the objects of the items from the extracted user terminals are displayed on the display surfaces of the studio monitor 17 and the user terminals 40, 60, and 70. Further, the server 20 displays the objects of the items from the extracted user terminals on the floor surface of the studio 10 by the projector 16.
  • Items presented to the performers A, B, and C by the users A, B, and C may simply be displayed on the display surfaces of the user terminals 40, 60, and 70 and the display surface of the studio monitor 17. Such an item can be selected from the item selection object 72 shown in FIG. 5A, or it may be an item such as image data created by the user, independently of the item selection object 72, that makes the performer's stage gorgeous. Such items that are simply displayed on the display surfaces of the user terminals 40, 60, and 70 or the display surface of the studio monitor 17 are, for example, charged for when the user purchases the item. In the case of a self-made item, it may be free of charge.
  • For wearable items and effect items worn by the performers A, B, and C, such as the headband, a further charge may be made when the performers A, B, and C actually acquire the item. That is, the user A, B, or C may be charged twice, once when purchasing the item and once when the performers A, B, and C acquire the item. Alternatively, the charge may be made only when the performers A, B, and C acquire the item.
  • The return from the performers A, B, and C to the users A, B, and C may be a simple display or effect that can be seen only by the user who received the return.
  • the operation of returning from the performers A, B, and C to the users A, B, and C may not be performed.
  • a real object such as a sign ball may not be mailed to the users A, B, and C.
  • The item may be a simple program, including image data and moving image data, created by the users A, B, and C using software, or an effect program such as an effect that makes the performer's stage, including the movement of objects, gorgeous.
  • DESCRIPTION OF SYMBOLS 1 ... Live delivery system, 2 ... Network, 10 ... Studio, 11 ... Playback apparatus, 12 ... Speaker, 13 ... Microphone, 14 ... RGB camera, 15 ... Depth camera, 16 ... Projector, 17 ... Studio monitor, 18 ... Item, DESCRIPTION OF SYMBOLS 20 ... Server, 21 ... Audio IF, 22 ... RGB camera IF, 23 ... Depth camera IF, 24 ... Projector IF, 25 ... Display IF, 26 ... Database, 27 ... Data storage part, 28 ... Network IF, 29 ... Main memory , 30 ... control unit, 40 ... user terminal, 41 ... audio IF, 42 ... display IF, 43 ... network IF, 44 ...

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • User Interface Of Digital Computer (AREA)
  • Studio Devices (AREA)

Abstract

A display control system that: delivers live a real-space video in which a player is present; acquires three-dimensional position information of the real space; detects a user motion for a user to present an item to the player; calculates, on the basis of the acquired three-dimensional position information and user motion information pertaining to the detected user motion, an item position in the real space at which the item is to be disposed; and displays the calculated item position in the real space in such a way as to be recognizable by the player.

Description

表示制御システム、及び、表示制御方法Display control system and display control method
 本発明は、ライブ配信を行う表示制御システム、及び、表示制御方法に関する。 The present invention relates to a display control system that performs live distribution and a display control method.
 特許文献1等には、コンテンツを配信するサーバが記載されている。 Patent Document 1 and the like describe a server for distributing content.
特許第5530557号公報Japanese Patent No. 5530557
 本発明の一態様によれば、演者が存在する実空間の映像をライブ配信の対象として表示装置に表示させる表示装置制御部と、前記実空間の三次元位置情報を取得する取得部と、ユーザが前記演者へアイテムをプレゼントするためのユーザ動作を検出する検出部と、前記取得部が取得した三次元位置情報と、前記検出部が検出したユーザ動作のユーザ動作情報とに基づいて、前記実空間上において前記アイテムを配置すべきアイテム位置を算出し、算出した前記アイテム位置を前記演者が認識できるように前記実空間上に表示するアイテム表示制御部とを備えた表示制御システムが提供される。 According to an aspect of the present invention, a display device control unit that causes a display device to display an image of a real space in which a performer exists as a target of live distribution, an acquisition unit that acquires three-dimensional position information of the real space, and a user Based on the detection unit that detects a user action for presenting an item to the performer, the three-dimensional position information acquired by the acquisition unit, and the user action information of the user action detected by the detection unit. There is provided a display control system including an item display control unit that calculates an item position where the item is to be arranged in space and displays the calculated item position on the real space so that the performer can recognize the calculated item position. .
 本発明の他の態様によれば、演者が存在する実空間の映像をライブ配信し、前記実空間の三次元位置情報を取得し、ユーザが前記演者へアイテムをプレゼントするためのユーザ動作を検出し、前記取得した三次元位置情報と、前記検出したユーザ動作のユーザ動作情報とに基づいて、前記実空間上において前記アイテムを配置すべきアイテム位置を算出し、算出した前記アイテム位置を前記演者が認識できるように前記実空間上に表示する表示制御方法が提供される。 According to another aspect of the present invention, an image of a real space in which a performer exists is distributed live, three-dimensional position information of the real space is acquired, and a user action for a user to present an item to the performer is detected. Then, based on the acquired three-dimensional position information and user action information of the detected user action, an item position where the item is to be arranged in the real space is calculated, and the calculated item position is calculated as the performer. A display control method for displaying on the real space is provided.
ライブ配信システムの全体構成を示す図。The figure which shows the whole structure of a live delivery system. サーバとユーザ端末のブロック図。The block diagram of a server and a user terminal. (a)はライブ配信処理のフローチャート、(b)はライブ配信時におけるユーザ端末の表示画面を示す図。(A) is a flowchart of a live delivery process, (b) is a figure which shows the display screen of a user terminal at the time of live delivery. ユーザによるアイテム/演者選択処理のフローチャート。The flowchart of the item / actor selection process by a user. (a)はアイテムを選択する際のユーザ端末の表示画面を示す図、(b)は演者を選択する際のユーザ端末の表示画面を示す図、(c)はアイテムと演者が選択された際のユーザ端末の表示画面を示す図、(d)はアイテムと演者が選択された際のスタジオモニタの表示画面を示す図、(e)はユーザがアイテムを投げる動作をした際のユーザ端末及びスタジオモニタの表示画面を示す図。(A) is a figure which shows the display screen of the user terminal at the time of selecting an item, (b) is a figure which shows the display screen of the user terminal at the time of selecting a performer, (c) is when an item and a performer are selected The figure which shows the display screen of a user terminal, (d) is a figure which shows the display screen of a studio monitor when an item and a performer are selected, (e) is the user terminal and studio which a user performed the operation | movement which throws an item The figure which shows the display screen of a monitor. 演者が当該アイテムを取得する取得処理を示すフローチャート。The flowchart which shows the acquisition process in which a performer acquires the said item. (a)は演者が猫耳のヘッドバンドのアイテムを拾った際のユーザ端末及びスタジオモニタの表示画面を示す図、(b)は演者がヘッドバンドのアイテムを装着した際のユーザ端末及びスタジオモニタの表示画面を示す図、(c)は演者が横を向いた際のユーザ端末及びスタジオモニタの表示画面を示す図。(A) is a figure which shows the display screen of a user terminal and a studio monitor when a performer picks up the headband item of a cat ear, (b) is a user terminal and a studio monitor when a performer wears the item of a headband. (C) is a figure which shows the display screen of a user terminal and a studio monitor when a performer turns sideways. 演者からユーザへの返礼処理のフローチャート。The flowchart of the return process from a performer to a user. (a)は演者が返礼のサインボールを手に持った際のユーザ端末及びスタジオモニタの表示画面を示す図、(b)は演者がサインボールを投げた際のユーザ端末及びスタジオモニタの表示画面を示す図、(c)はユーザがサインボールを受け取った際のユーザ端末及びスタジオモニタの表示画面を示す図。(A) The figure which shows the display screen of a user terminal and a studio monitor when a performer has a return sign ball in his hand, (b) is the display screen of a user terminal and a studio monitor when a performer throws a sign ball (C) is a figure which shows the display screen of a user terminal and a studio monitor when a user receives a sign ball. (a)は演者に対してエフェクトが付加された際のユーザ端末及びスタジオモニタの表示画面を示す図、(b)はユーザがアイテムを投げる動作をした際のユーザ端末及びスタジオモニタの表示画面を示す図、(c)は演者がアイテムを受け取った際のユーザ端末及びスタジオモニタの表示画面を示す図、(d)は背景画像にタワーが表示された際のユーザ端末及びスタジオモニタの表示画面を示す図。(A) is a figure which shows the display screen of a user terminal and a studio monitor when an effect is added with respect to a performer, (b) is the display screen of a user terminal and a studio monitor when a user performs the operation | movement which throws an item. The figure which shows, (c) is the figure which shows the display screen of the user terminal and studio monitor when the performer receives the item, (d) is the display screen of the user terminal and studio monitor when the tower is displayed in the background image FIG.
 以下、図1~図10を用いて、本発明が適用されたライブ配信システムを図面を参照して説明する。
 〔ライブ配信システムの概要〕
 図1に示すように、ライブ配信システム1は、生演奏等のパフォーマンスが実演されるスタジオ10と、スタジオ10で取得されたコンテンツデータをライブ配信するサーバ20と、サーバ20で配信されたコンテンツデータを視聴するユーザ端末40,60,70とを備えている。サーバ20とユーザ端末40,60,70とは、ネットワーク2を介して接続されている。なお、ユーザ端末の数は、ここで示す2台に限定されるものではなく、1台であっても、数十台でも数百台であってもよい。
A live distribution system to which the present invention is applied will be described below with reference to the drawings with reference to FIGS.
[Overview of live distribution system]
As shown in FIG. 1, the live distribution system 1 includes a studio 10 in which a performance such as a live performance is demonstrated, a server 20 that performs live distribution of content data acquired in the studio 10, and content data distributed by the server 20. User terminals 40, 60, and 70 are provided. The server 20 and the user terminals 40, 60, 70 are connected via the network 2. The number of user terminals is not limited to two as shown here, and may be one, tens or hundreds.
 スタジオ10内の実空間では、一例として、被写体としての3人の演者A,B,Cがステージで楽曲を演奏し、歌唱している。勿論、演者の人数は3人に限定されるものではなく、1人や2人でもよいし、4人以上であってもよい。また、演者A,B,Cは、1つのバンド等のグループであってもよいし、それぞれが独立して活動する演者A,B,Cの集まりであってもよい。スタジオ10には、再生装置11と、スピーカ12と、マイク13と、RGBカメラ14と、デプスカメラ15と、プロジェクタ16と、スタジオモニタ17とを備えている。 In the real space in the studio 10, as an example, three performers A, B, and C as subjects perform music on the stage and sing. Of course, the number of performers is not limited to three, but may be one, two, or four or more. Further, the performers A, B, and C may be a group such as one band, or may be a group of performers A, B, and C that are active independently. The studio 10 includes a playback device 11, a speaker 12, a microphone 13, an RGB camera 14, a depth camera 15, a projector 16, and a studio monitor 17.
 再生装置11は、演奏する楽曲データを再生し、楽曲データに基づく楽曲を、再生装置11と接続されたスピーカ12から放音する。マイク13は、各演者A,B,Cが所持しており、演者A,B,Cの音声を集音する。一例として、RGBカメラ14は、ライブ配信システム1において、第1カメラである。RGBカメラ14は、動画撮影機能を有するデジタルカメラである。一例として、RGBカメラ14は、ビデオカメラである。一例として、RGBカメラ14は、表示データ生成用カメラ等である。RGBカメラ14は、演者A,B,Cが演奏している実空間を撮像するビデオカメラである。RGBカメラ14は、一例として、CCDやCMOS等の撮像素子を備え、可視光等の光を検出し、3色(赤、緑、青)のカラー信号で構成された表示データを出力する。一例として、RGBカメラ14は、演者A,B,C等の被写体を撮像し、表示データとして、ユーザ端末40,60,70等の表示部に撮像した被写体を表示できる観賞用データを出力する。また、一例として、RGBカメラ14は、表示データとして、スタジオモニタ17に表示されるように撮像データを出力する。更に、一例として、RGBカメラ14は、表示データとして、ユーザA,B,Cが居る公共場所やライブ会場やコンサートホールに設置された大型スクリーンを備えた大型表示装置に表示される映像データを出力する。なお、RGBカメラ14は、ビデオカメラである必要はなく、例えば動画撮像機能を備えたスマートデバイス端末であってもよい。この場合、スマートデバイス端末を三脚等に固定することによって、ビデオカメラと同様に機能させることができる。 The playback device 11 plays back the music data to be played and emits music based on the music data from the speaker 12 connected to the playback device 11. The microphone 13 is owned by each performer A, B, C, and collects the voices of the performers A, B, C. As an example, the RGB camera 14 is the first camera in the live distribution system 1. The RGB camera 14 is a digital camera having a moving image shooting function. As an example, the RGB camera 14 is a video camera. As an example, the RGB camera 14 is a display data generation camera or the like. The RGB camera 14 is a video camera that captures an actual space in which performers A, B, and C are playing. For example, the RGB camera 14 includes an image sensor such as a CCD or CMOS, detects light such as visible light, and outputs display data composed of color signals of three colors (red, green, and blue). As an example, the RGB camera 14 images subjects such as performers A, B, and C, and outputs ornamental data that can display the captured subjects on the display unit such as the user terminals 40, 60, and 70 as display data. Further, as an example, the RGB camera 14 outputs imaging data to be displayed on the studio monitor 17 as display data. Further, as an example, the RGB camera 14 outputs, as display data, video data displayed on a large display device having a large screen installed in a public place where users A, B, and C are located, a live venue, or a concert hall. To do. Note that the RGB camera 14 does not have to be a video camera, and may be, for example, a smart device terminal having a moving image capturing function. In this case, by fixing the smart device terminal to a tripod or the like, it can function in the same manner as a video camera.
 一例として、デプスカメラ15は、ライブ配信システム1において、第2カメラである。一例として、デプスカメラ15は、赤外線カメラである。一例として、デプスカメラ15は、三次元位置情報取得用カメラである。デプスカメラ15は、自身から被写体までの距離である深度情報等を取得する。デプスカメラ15は、一例として、被写体となる演者A,B,C等との距離である深度情報等を取得する取得部である。一例として、デプスカメラ15は、被写体の一部である演者Aまでの距離(深度情報)、被写体の一部である演者Bまでの距離(深度情報)及び被写体の一部である演者Cまでの距離(深度情報)をそれぞれ取得する。一例として、デプスカメラ15は、被写体の一部であるスタジオの各点までの距離である深度情報等を取得する。一例として、デプスカメラ15は、被写体として演者A、B及びCとスタジオとを含む実空間の三次元位置情報を取得する。デプスカメラ15は、赤外線を投光する投光部と、赤外線を検出する赤外線検出部とを備えている。デプスカメラ15は、一例として、投光部から投光した赤外線パルスが反射して戻ってくるまでの時間から実空間における深度情報等の三次元位置情報を取得する。なお、RGBカメラ14とデプスカメラ15とは、一体の装置であってもよいし、別体の装置であってもよい。 As an example, the depth camera 15 is the second camera in the live distribution system 1. As an example, the depth camera 15 is an infrared camera. As an example, the depth camera 15 is a three-dimensional position information acquisition camera. The depth camera 15 acquires depth information that is the distance from the subject to the subject. For example, the depth camera 15 is an acquisition unit that acquires depth information, which is a distance from performers A, B, C, and the like that are subjects. As an example, the depth camera 15 may include a distance to a performer A that is a part of the subject (depth information), a distance to a performer B that is a part of the subject (depth information), and a performer C that is a part of the subject. Each distance (depth information) is acquired. As an example, the depth camera 15 acquires depth information that is a distance to each point of a studio that is a part of the subject. As an example, the depth camera 15 acquires three-dimensional position information of a real space including performers A, B, and C as a subject and a studio. The depth camera 15 includes a light projecting unit that projects infrared rays and an infrared detection unit that detects infrared rays. For example, the depth camera 15 acquires three-dimensional position information such as depth information in real space from the time until the infrared pulse projected from the light projecting unit is reflected and returned. The RGB camera 14 and the depth camera 15 may be an integrated device or separate devices.
 プロジェクタ16は、一例としてステージに演者A,B,Cへのプレゼントであるアイテムのオブジェクトをプロジェクションマッピング等の手法で表示する。スタジオモニタ17は、実空間であるスタジオ10に配置され、映像が表示される表示装置であり、一例として、主に演者A,B,Cが視認可能なようにステージの前に設置される表示装置である。スタジオモニタ17は、一例としてフラットディスプレイであり、LCD表示装置、有機EL表示装置である。そして、スタジオモニタ17は、RGBカメラ14で撮像された演者A,B,Cのパフォーマンスの映像が表示される。 As an example, the projector 16 displays an object of an item that is a present for the performers A, B, and C on a stage by a technique such as projection mapping. The studio monitor 17 is a display device that is disposed in the studio 10 that is a real space and displays video. As an example, the studio monitor 17 is a display installed in front of the stage so that the performers A, B, and C are visible. Device. The studio monitor 17 is a flat display as an example, and is an LCD display device or an organic EL display device. The studio monitor 17 displays the performance images of the performers A, B, and C captured by the RGB camera 14.
 サーバ20は、演者A,B,Cによって演奏されたコンテンツデータとしてのライブデータを生成する。一例として、サーバ20は、再生装置11からの楽曲データ、マイク13からの音声データ、RGBカメラ14からの映像データ等の各種のデータに基づいて、ユーザ端末40,60,70に対して配信するための演者A,B,Cによるパフォーマンスのライブデータを生成し、ライブデータをユーザ端末40,60,70に対してライブ配信する。すなわち、サーバ20は、ユーザ端末40,60,70に対して、演者A,B,Cによるパフォーマンスを生中継する。 The server 20 generates live data as content data played by the performers A, B, and C. As an example, the server 20 delivers to the user terminals 40, 60, and 70 based on various data such as music data from the playback device 11, audio data from the microphone 13, and video data from the RGB camera 14. Performance live data by the performers A, B, and C is generated, and the live data is distributed live to the user terminals 40, 60, and 70. That is, the server 20 relays live performances by the performers A, B, and C to the user terminals 40, 60, and 70.
 なお、演者A,B,Cによるパフォーマンスは、伴奏を再生装置11により再生するのではなく、実際に演者A,B,Cがギターやドラム等の楽器を演奏し、その音をマイクで集音するものであってもよい。スタジオ10でライブデータをデータ生成装置等で生成し、サーバ20に送信するようにしてもよい。 In the performance by the performers A, B, and C, the accompaniment is not played back by the playback device 11, but the performers A, B, and C actually play instruments such as guitars and drums, and the sound is collected by the microphone. You may do. Live data may be generated by the data generation device or the like in the studio 10 and transmitted to the server 20.
 ライブ配信システム1に参加するユーザA,B,Cは、一例として、演者A,B,Cのファンであり、ユーザ端末40,60,70を用いてライブデータを視聴することができる。ユーザ端末40は、一例として、デスクトップ型又はラップトップ型のパーソナルコンピュータ40aと、パーソナルコンピュータ40aと接続されたウェアラブル端末やスマートデバイス端末としてのスマートウォッチ50とを備えている。パーソナルコンピュータ40aがデスクトップ型である場合、ユーザ端末40は、デスクトップ型のパーソナルコンピュータ40aと、パーソナルコンピュータ40aと接続されたモニタと、パーソナルコンピュータ40aと接続されたスマートウォッチ50とを備えている。また、パーソナルコンピュータ40aがラップトップ型である場合、ユーザ端末40は、表示部を備えたラップトップ型のパーソナルコンピュータ40aと、ラップトップ型のパーソナルコンピュータ40aと接続されたスマートウォッチ50とを備えている。一例として、ユーザ端末40のユーザAは、スマートウォッチ50を利き腕等に装着しており、スマートウォッチ50は、パーソナルコンピュータ40aと有線又は無線で接続されている。スマートウォッチ50は、加速度センサやジャイロセンサ等の検出部を備えており、一例としてユーザAが物を投げる動作をしたとき、ユーザ動作情報として、その加速度や角度(姿勢)や角速度を検出する。 The users A, B, and C participating in the live distribution system 1 are fans of performers A, B, and C as an example, and can view live data using the user terminals 40, 60, and 70. As an example, the user terminal 40 includes a desktop or laptop personal computer 40a, and a wearable terminal connected to the personal computer 40a or a smart watch 50 as a smart device terminal. When the personal computer 40a is a desktop type, the user terminal 40 includes a desktop type personal computer 40a, a monitor connected to the personal computer 40a, and a smart watch 50 connected to the personal computer 40a. When the personal computer 40a is a laptop computer, the user terminal 40 includes a laptop personal computer 40a having a display unit and a smart watch 50 connected to the laptop personal computer 40a. Yes. As an example, the user A of the user terminal 40 wears the smart watch 50 on the dominant arm or the like, and the smart watch 50 is connected to the personal computer 40a by wire or wirelessly. The smart watch 50 includes a detection unit such as an acceleration sensor or a gyro sensor. For example, when the user A performs an operation of throwing an object, the smart watch 50 detects the acceleration, angle (posture), and angular velocity as user operation information.
 なお、パーソナルコンピュータ40aは、ヘッドマウントディスプレイ(HMD)が有線又は無線で接続されていてもよい。また、HMDは、パーソナルコンピュータ40aとしての構成を備えていてもよい。HMDとしては、光学シースルー型ヘッドマウントディスプレイ、ビデオシースルー型ヘッドマウントディスプレイ、非透過型ヘッドマウントディスプレイ等が挙げられる。光学シースルー型ヘッドマウントディスプレイの場合はAR(拡張現実)による表示を行うことができる。ビデオシースルー型ヘッドマウントディスプレイや非透過型ヘッドマウントディスプレイの場合はVR(仮想現実)による表示を行うことができる。HMDには、後述するプレゼントや返礼ためのアイテムのオブジェクトを表示することができる。 The personal computer 40a may be connected to a head mounted display (HMD) by wire or wirelessly. Further, the HMD may have a configuration as the personal computer 40a. Examples of the HMD include an optical see-through head mounted display, a video see-through head mounted display, and a non-transmissive head mounted display. In the case of an optical see-through head mounted display, display by AR (augmented reality) can be performed. In the case of a video see-through type head mounted display or a non-transmissive type head mounted display, display by VR (virtual reality) can be performed. On the HMD, an object of an item for a gift or a return to be described later can be displayed.
 また、ユーザ端末60は、一例として、スマートフォンやタブレット等のスマートデバイス端末であり、携帯可能な小型の情報処理端末である。スマートフォンは、一例として、表示面にタッチパネルを備えている。また、ユーザ端末60は、加速度センサやジャイロセンサ等の検出部を備えており、一例として、ユーザBが物を投げる動作をしたとき、ユーザ動作情報として、その加速度や角度や角速度を検出する。ユーザ端末60は、携帯可能な小型の情報処理端末であることから、ユーザ端末60のユーザは、どこでもライブデータを視聴することができる。 The user terminal 60 is a smart device terminal such as a smartphone or a tablet as an example, and is a portable information processing terminal. As an example, the smartphone includes a touch panel on the display surface. In addition, the user terminal 60 includes a detection unit such as an acceleration sensor or a gyro sensor. For example, when the user B performs an operation of throwing an object, the user terminal 60 detects the acceleration, angle, and angular velocity as user operation information. Since the user terminal 60 is a small portable information processing terminal, the user of the user terminal 60 can view live data anywhere.
 ユーザCのユーザ端末70は、スマートデバイス端末60aとスマートウォッチ50とを備えている。この場合、スマートデバイス端末60aがユーザAのラップトップ型のパーソナルコンピュータ40aのような機能を担う。これにより、ユーザCがスマートウォッチ50を装着した利き腕で物を投げる動作をしているときにも、もう一方の手に持ったスマートデバイス端末60aの表示部に表示された映像を視認することができる。また、スマートデバイス端末60aを机等に置いたり三脚に固定した状態で、物を投げる動作を行うことができる。これにより、スマートデバイス端末60aの表示部に表示された映像を見ながらスマートウォッチ50を装着した利き腕で投げ動作等をすることができる。 The user terminal 70 of the user C includes a smart device terminal 60a and a smart watch 50. In this case, the smart device terminal 60a functions like the laptop personal computer 40a of the user A. Thus, even when the user C is throwing an object with the dominant arm wearing the smart watch 50, the user can view the video displayed on the display unit of the smart device terminal 60a held in the other hand. it can. Further, it is possible to perform an operation of throwing an object with the smart device terminal 60a placed on a desk or the like or fixed to a tripod. Thereby, it is possible to perform a throwing operation or the like with the dominant arm on which the smart watch 50 is worn while watching the video displayed on the display unit of the smart device terminal 60a.
 ユーザ端末40,60,70では、ライブデータを視聴しながら、そのとき実際にパフォーマンスしている演者A,B,Cに対して、アイテムを仮想的にプレゼントすることができる。一例として、ユーザ端末40,60,70の表示面には、ライブデータとともに、演者A,B,Cに対してプレゼントすることが可能なアイテムの第1オブジェクトとしてのオブジェクトが一覧表示されたアイテム選択オブジェクトが表示される。アイテムとしては、花束、ヘッドバンド等の装飾具、ユーザ端末40,60,70で視聴する際演者の動作を演出するエフェクト、演者が実演する場所の背景画像等を挙げることができる。そして、ユーザA,B,Cは、アイテム選択オブジェクト内のオブジェクト一覧の中から何れか1つのアイテムを選択し、さらに、プレゼントする演者を演者A,B,Cの中から選択する。図1では、演者として演者Aが選択され、また、アイテムとして猫耳のヘッドバンドが選択されている例を示している。 The user terminals 40, 60, and 70 can virtually present items to performers A, B, and C who are actually performing at that time while viewing live data. As an example, on the display surface of the user terminals 40, 60, and 70, an item selection in which a list of objects as first objects of items that can be presented to the performers A, B, and C together with live data is displayed. The object is displayed. Examples of the items include ornaments such as bouquets and headbands, effects for directing the performer's actions when viewed on the user terminals 40, 60, and 70, background images of the place where the performer performs. Then, the users A, B, and C select any one item from the object list in the item selection object, and further select performers to be presented from the performers A, B, and C. FIG. 1 shows an example in which the performer A is selected as the performer and the cat-ear headband is selected as the item.
 この後、ユーザ端末40のユーザAは、スマートウォッチ50を装着した状態で腕を振って物を投げる動作をする。ユーザ端末60のユーザBは、ユーザ端末60を持った状態で腕を振って物を投げる動作をする。ユーザ端末70のユーザCは、スマートウォッチ50を装着した状態で腕を振って物を投げる動作をする。すると、ユーザ端末40,60,70は、サーバ20に対して検出結果であるユーザ動作情報としての加速度データ、角度(姿勢)データ、角速度データ等の操作データをサーバ20に送信する。なお、ユーザ端末60,70では、ライブデータが表示されている表示面を指やスタイラスペンを用いて、表示されている演者A,B,Cの方向へなぞる操作を行い、その座標データ等の操作データをサーバ20に送信するようにしてもよい。 Thereafter, the user A of the user terminal 40 performs an operation of throwing an object by waving his arm while wearing the smart watch 50. The user B of the user terminal 60 performs an operation of swinging an arm and throwing an object while holding the user terminal 60. The user C of the user terminal 70 performs an operation of throwing an object by waving his arm while wearing the smart watch 50. Then, the user terminals 40, 60, and 70 transmit operation data such as acceleration data, angle (attitude) data, angular velocity data, and the like as user operation information as detection results to the server 20. The user terminals 60 and 70 use the finger or stylus pen to trace the display surface on which live data is displayed in the direction of the performers A, B, and C, and the coordinate data and the like are displayed. The operation data may be transmitted to the server 20.
 すると、サーバ20は、ユーザ端末40,60,70から送信されたアイテムIDが示すアイテム18の映像をプロジェクタ16を使ってスタジオ10の床面に映像表示する。映像表示されるアイテム18は、一例として、ユーザ端末40,60,70から送信された演者IDが示す演者の前に表示される。図1では、演者Aの前にヘッドバンドのアイテム18が映像表示されている例を示している。一例として、プロジェクタ16は、表示されるアイテム18を、アイテムがユーザA,B,Cの側、すなわち演者A,B,Cの前方位置から演者A,B,Cの方向に投げられたように映像を表示する。実空間において、アイテムが落下する床面の位置、すなわち最終的にアイテム18が表示されるアイテム位置は、スタジオ10内で特定される特定位置であって、ユーザ端末40,60,70から送信されたユーザ動作情報としての加速度データ、角度データ、角速度データ等の操作データに基づいて決まる。アイテム位置は、深度情報等の三次元位置情報で特定される。アイテム位置は、一例として、デプスカメラ15の検出部を原点とする三次元座標系等で特定することができる。 Then, the server 20 displays the video of the item 18 indicated by the item ID transmitted from the user terminals 40, 60, and 70 on the floor surface of the studio 10 using the projector 16. As an example, the item 18 displayed as a video is displayed in front of the performer indicated by the performer ID transmitted from the user terminals 40, 60, and 70. FIG. 1 shows an example in which the headband item 18 is displayed in front of the performer A. As an example, the projector 16 causes the item 18 to be displayed as if the item was thrown in the direction of the performers A, B, C from the user A, B, C side, i.e., the position in front of the performers A, B, C. Display video. In the real space, the position of the floor where the item falls, that is, the item position where the item 18 is finally displayed is a specific position specified in the studio 10 and is transmitted from the user terminals 40, 60, and 70. It is determined based on operation data such as acceleration data, angle data, angular velocity data, etc. as user operation information. The item position is specified by three-dimensional position information such as depth information. As an example, the item position can be specified by a three-dimensional coordinate system having the detection unit of the depth camera 15 as an origin.
 一例として、ユーザA,B,Cがアイテムを弱く投げるような動作をした場合、アイテム18が表示されるアイテム位置は、演者A,B,Cの前方で遠く離れた位置となる。ユーザA,B,Cがアイテムを強く投げるような動作をした場合、アイテム18が表示されるアイテム位置は、演者A,B,Cの前方で比較的近い位置となる。更に、アイテムがユーザA,B,Cによって強く投げられ過ぎた場合、アイテム18が表示されるアイテム位置は、アイテムが演者A,B,Cの後方にある壁で跳ね返った後、演者A,B,Cに対して前方や後方の位置となる。これにより、演者A,B,Cは、実際にはスタジオ10にはいないユーザA,B,Cから自分たちの方向にアイテムが投げられたように視認することができる。演者A,B,Cに対してアイテムが投げられた様子は、ユーザ端末40,60,70の表示面にも映像とともに、アイテムオブジェクトがRGBカメラ14が撮像した映像の中におけるアイテムオブジェクト位置に表示される。これにより、ユーザA,B,Cも、自分の投げたアイテムが演者A,B,Cの近くに届いたことを視認することができる。更に、一例として、ユーザA,B,Cが特定の演者Aに対して右側又は左側にアイテムを投げる動作をした場合、実空間において、アイテム18が表示されるアイテム位置は、ユーザA,B,Cがアイテムを投げた方向に対応して、演者Aに対して右側又は左側の位置となる。一例として、演者Aに対して右側にアイテムを投げてしまった場合、アイテム18が表示されるアイテム位置は、演者Bや演者Cの正面となってもよい。このようなアイテム位置は、ユーザ端末40,60,70で検出された検出結果としての加速度データ、角度データ、角速度データ等の操作データに基づいて決まる三次元位置情報で特定される。 As an example, when the user A, B, or C performs an action of throwing an item weakly, the item position where the item 18 is displayed is far away in front of the performers A, B, and C. When the users A, B, and C perform an action of strongly throwing an item, the item position where the item 18 is displayed is a relatively close position in front of the performers A, B, and C. In addition, if the item is thrown too hard by users A, B, C, the item position at which item 18 is displayed is the performer A, B after the item bounces off the wall behind performers A, B, C. , C, the front and rear positions. Thus, the performers A, B, and C can visually recognize that the items are thrown in the direction of the users A, B, and C that are not actually in the studio 10. When an item is thrown at the performers A, B, and C, the item object is displayed on the display surface of the user terminals 40, 60, and 70 at the item object position in the image captured by the RGB camera 14 along with the image. Is done. Accordingly, the users A, B, and C can also visually recognize that the item that the user has thrown has reached the performers A, B, and C. Furthermore, as an example, when the user A, B, or C performs an action of throwing an item to the right side or the left side with respect to a specific performer A, the item position where the item 18 is displayed in the real space is the user A, B, Corresponding to the direction in which C threw the item, the position is on the right or left side with respect to performer A. As an example, when an item is thrown to the right side with respect to the performer A, the item position where the item 18 is displayed may be the front of the performer B or the performer C. Such an item position is specified by three-dimensional position information determined based on operation data such as acceleration data, angle data, and angular velocity data as detection results detected by the user terminals 40, 60, and 70.
 なお、スタジオモニタ17にもユーザ端末40,60,70の表示面と同様な映像表示を行ってもよい。
 スタジオ10のデプスカメラ15は、一例として、常時、スタジオ10における各所の深度情報等の三次元位置情報を算出している。デプスカメラ15は、一例として、演者A,B,Cの人物領域を抽出し、人物領域と非人物領域とに区分する。デプスカメラ15は、一例として、演者A,B,Cの各々の25カ所の骨格位置を骨格データとして取得し、さらに、各骨格位置の深度情報を算出している。骨格位置としては、一例として左右の手、頭部、首、左右の肩、左右の肘、左右の膝、左右の足等の骨格位置が含まれている。なお、取得する骨格位置の数は、25カ所に限定されるものではない。また、デプスカメラ15は、スタジオ10における壁や床面との距離を算出している。ここで、深度情報は、一例としてデプスカメラ15の前方の対物レンズ又はセンサ面から測定対象位置(スタジオ10の壁の各所や床の各所)との距離である。また、深度情報は、一例としてデプスカメラ15の前方の対物レンズ又はセンサ面から被写体となる演者の骨格位置までの距離である。
Note that the studio monitor 17 may display the same image as the display screen of the user terminals 40, 60, and 70.
As an example, the depth camera 15 of the studio 10 constantly calculates three-dimensional position information such as depth information at various locations in the studio 10. For example, the depth camera 15 extracts the person areas of the performers A, B, and C and divides them into a person area and a non-person area. For example, the depth camera 15 acquires 25 skeleton positions of each of the performers A, B, and C as skeleton data, and further calculates depth information of each skeleton position. Examples of the skeleton positions include skeleton positions such as left and right hands, heads, necks, left and right shoulders, left and right elbows, left and right knees, and left and right feet. The number of skeleton positions to be acquired is not limited to 25. Further, the depth camera 15 calculates the distance from the wall or floor surface in the studio 10. Here, the depth information is, for example, the distance from the objective lens or sensor surface in front of the depth camera 15 to the measurement target position (places on the wall of the studio 10 or places on the floor). The depth information is, for example, the distance from the objective lens or sensor surface in front of the depth camera 15 to the skeleton position of the performer as the subject.
 アイテム18が表示されているアイテム位置に対して演者Aの右手又は左手、一例として演者Aの右手又は左手の深度情報に基づく位置がアイテム位置に重なったときは、演者Aがアイテムを拾ったときである。そこで、プロジェクタ16は、アイテム位置に演者Aの右手又は左手が重なったとき、アイテム18を非表示とする。ここでのアイテムは、ヘッドバンドである。そこで、演者Aが拾ったヘッドバンドを頭部に装着するように振る舞う。この間、サーバ20は、ユーザ端末40,60,70の表示面に対して、演者Aの右手又は左手に把持されたように拾ったヘッドバンドのオブジェクトを映像表示し、ヘッドバンドのオブジェクトが演者Aの頭部に装着される様子を表示する。これにより、ユーザA,B,Cは、ユーザ端末40,60の表示面で、自分がプレゼントしたヘッドバンドが演者Aに認識され、演者Aがそのヘッドバンドが装着する様子を視認することができる。 When the position based on the depth information of the right hand or left hand of the performer A overlaps the item position with respect to the item position where the item 18 is displayed, for example, when the performer A picks up the item It is. Therefore, the projector 16 hides the item 18 when the right hand or the left hand of the performer A overlaps the item position. The item here is a headband. Therefore, the headband picked up by the performer A behaves so as to be worn on the head. During this time, the server 20 displays the headband object picked up as if held by the right or left hand of the performer A on the display surface of the user terminals 40, 60, and 70, and the headband object is displayed by the performer A. Is displayed on the head. As a result, the users A, B, and C can recognize the headband that the presenter has given to the performer A on the display surfaces of the user terminals 40 and 60, and the performer A can visually recognize how the headband is worn. .
 これ以降、ユーザA,Bは、ユーザ端末40,60,70の表示面で、演者Aがヘッドバンドを装着した状態で演奏している様子を視認することができる。一例として、演者Aが横向いたとき、ヘッドバンドのオブジェクトの向きも演者Aの向きに合わせて表示される。各演者A,B,Cの向きは、RGBカメラ14からの表示データから演者A,B,Cの顔検出をし、デプスカメラ15から演者A,B,Cの骨格位置を算出することで判定する。また、アイテムのオブジェクトを表示するデータも三次元データである。一例として、これらのデータに基づいて、演者Aが横向きになったことを検出した場合には、演者Aの向きに合わせてヘッドバンドのオブジェクトの向きも変化される。演者Aがヘッドバンドを装着した状態は、スタジオモニタ17にも表示され、演者A,B,Cに、ユーザ端末40,60,70に表示されている状態を視認させることができる。 Thereafter, the users A and B can visually recognize the performance of the performer A wearing the headband on the display surfaces of the user terminals 40, 60 and 70. As an example, when the performer A turns sideways, the direction of the object of the headband is displayed in accordance with the direction of the performer A. The orientation of each performer A, B, C is determined by detecting the faces of the performers A, B, C from the display data from the RGB camera 14 and calculating the skeleton positions of the performers A, B, C from the depth camera 15. To do. Further, the data for displaying the item object is also three-dimensional data. As an example, when it is detected that the performer A is turned sideways based on these data, the direction of the headband object is changed in accordance with the direction of the performer A. The state in which the performer A wears the headband is also displayed on the studio monitor 17 so that the performers A, B, and C can visually recognize the states displayed on the user terminals 40, 60, and 70.
As an example, at the time an item is presented by a user to performer A, B, or C (the time at which the performer receives the item), the server 20 can identify every item presented to (received by) performers A, B, and C and the user ID corresponding to each item. Using this, the server 20 can identify the user ID of the user who presented an item at the moment performer A, B, or C picks that item up.
When an interlude begins, as an example, a video of an object showing the user ID of the user who presented the headband item to performer A is displayed on the display surface of the studio monitor 17 of the studio 10. As another example, the video of the user ID object is projected onto the floor of the studio 10 by the projector 16. This allows performer A to identify the users A, B, and C who presented the headband.
As an example, when performer A speaks a user ID and the speech is picked up by the microphone 13, the server 20 identifies, through voice recognition processing on the audio data from the microphone 13, the user ID of the user to whom a return should be made, so that a return can be given to that user. This voice recognition processing can also be performed by a device other than the microphone 13 (such as the server 20 or a device installed in the studio 10). As another example of return processing, when a performer makes a motion of throwing a return item while touching an item (for example, a headband) that performer A, B, or C is wearing, the user ID of the user who presented the item being touched is identified, so that a return can be made to that user. Furthermore, as an example, the server 20 identifies all live distribution viewers by their user IDs, and performers A, B, and C can perform return processing for users who have purchased items above a certain amount. Furthermore, as an example, a user terminal may include a line-of-sight detection unit that detects the user's line of sight, so that the time during which the user gazed at a specific performer can be calculated. In such a case, when performer A performs return processing, the return can be made to a user whose user ID indicates that he or she kept watching performer A for at least a certain time. For users who have not kept watching performer A for that time, the return is made to a user ID selected at random.
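As a rough sketch of this gaze-based recipient selection (the function and threshold below are illustrative assumptions, not values given in this description), the server could pick among viewers who kept watching the performer for at least a set time and fall back to a random viewer otherwise:

```python
import random

# Hypothetical sketch: gaze_seconds maps each viewer's user ID to the time
# (in seconds) the viewer spent gazing at the performer, as reported by the
# terminals' line-of-sight detection units.
def choose_return_recipient(gaze_seconds, min_gaze_seconds=60.0):
    # Viewers who kept watching the performer for at least the threshold time.
    attentive = [uid for uid, t in gaze_seconds.items() if t >= min_gaze_seconds]
    if attentive:
        return random.choice(attentive)
    # Otherwise pick a user ID at random from all live-distribution viewers.
    return random.choice(list(gaze_seconds))

print(choose_return_recipient({"userA": 120.0, "userB": 10.0, "userC": 45.0}))
```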
As described above, when a return is made, a video of an item object, a second object serving as performer A's return present to users A, B, and C, is displayed on the studio monitor 17 and on the display surfaces of the user terminals 40, 60, and 70. Specifically, when the return item is performer A's signed ball, the item object of the signed ball is displayed at the position of performer A's right or left hand on the studio monitor 17 and the display surfaces of the user terminals 40, 60, and 70, as if the signed ball were held in that hand. This allows performer A to grasp that he or she is currently holding the signed ball, and allows users A, B, and C to grasp that performer A is about to throw them the signed ball as a return.
When performer A makes a motion of throwing the signed ball, the depth camera 15 detects that performer A has thrown the signed ball based on changes in, for example, the depth information of the hand holding it. The item object is then displayed on the display surfaces of the user terminals 40, 60, and 70 as if performer A had thrown the signed ball toward users A, B, and C. When an operation of catching the signed ball displayed on the display surface is performed on the user terminal 40, 60, or 70, the corresponding user A, B, or C has received the signed ball. The return present is not limited to a signed ball. Users A, B, and C can also throw a received return item, such as the signed ball, back to performers A, B, and C the next time live distribution takes place. An actual article corresponding to the return present may also be mailed to users A, B, and C at a later date. The mailed article need not be an actual signed ball; it may be an autograph board, goods related to the performer, an album such as a CD or DVD, a concert coupon, or the like.
[Depth camera 15]
As an example, the depth camera 15 includes a light projecting unit, such as a projector that projects pulse-modulated infrared light, and an infrared detection unit, such as an infrared camera, and calculates depth information from the time it takes for the projected infrared pulse to be reflected and return (Time of Flight (TOF) method). As an example, the depth camera 15 constantly calculates three-dimensional position information, such as depth information, at various locations in the studio 10.
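As a minimal sketch of the TOF principle described here (not an implementation of any particular depth camera), the round-trip time of the reflected infrared pulse converts to a one-way distance as follows:

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_depth(round_trip_seconds: float) -> float:
    """Distance to the reflecting surface: the pulse travels there and back,
    so the one-way depth is half of the total distance travelled."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# Example: a pulse returning after 20 nanoseconds corresponds to about 3 m.
print(f"{tof_depth(20e-9):.2f} m")
```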
The depth camera 15 also extracts the person areas of performers A, B, and C and segments the scene into person areas and non-person areas. As an example, a person area is calculated from the difference between images captured at the same place (as an example, the studio 10) before and after a person appears. As another example, an area where the detected amount of infrared light exceeds a threshold is determined to be a person area.
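The two person-area criteria mentioned above (differencing against a frame captured before the person appeared, and thresholding the detected infrared amount) can be sketched roughly as follows; the array shapes and threshold values are illustrative assumptions:

```python
import numpy as np

def person_mask_by_difference(background_depth, current_depth, diff_threshold=0.05):
    # Pixels whose depth changed by more than the threshold (in metres)
    # after the person entered the scene are treated as the person area.
    return np.abs(current_depth - background_depth) > diff_threshold

def person_mask_by_infrared(ir_frame, ir_threshold=1000):
    # Pixels whose detected infrared amount exceeds the threshold
    # are treated as the person area.
    return ir_frame > ir_threshold

bg = np.ones((4, 4)) * 3.0            # empty studio: every pixel about 3 m away
cur = bg.copy(); cur[1:3, 1:3] = 1.5  # a person now standing about 1.5 m away
print(person_mask_by_difference(bg, cur))
```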
Furthermore, the depth camera 15 detects skeleton positions. The depth camera 15 acquires depth information at various points within a person area and, based on depth and shape feature quantities, calculates the real-space body parts of the person appearing in the person area (the left and right hands, head, neck, left and right shoulders, left and right elbows, left and right knees, left and right feet, and so on), and calculates the center position of each part as its skeleton position. Using a feature-quantity dictionary stored in a storage unit, the depth camera 15 identifies each part within the person area by comparing feature quantities determined from the person area with the feature quantities of each part registered in the dictionary.
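A very simplified sketch of this dictionary-based part identification might compare a feature vector computed for a candidate region against the registered per-part feature vectors and pick the closest match; the feature values and part names below are purely illustrative assumptions:

```python
import numpy as np

# Hypothetical feature-quantity dictionary: one reference feature vector per body part.
FEATURE_DICTIONARY = {
    "head":       np.array([0.9, 0.2, 0.1]),
    "right_hand": np.array([0.1, 0.8, 0.3]),
    "left_knee":  np.array([0.3, 0.3, 0.9]),
}

def identify_part(region_features: np.ndarray) -> str:
    # Return the part whose registered feature vector is closest (Euclidean distance).
    return min(FEATURE_DICTIONARY,
               key=lambda part: np.linalg.norm(FEATURE_DICTIONARY[part] - region_features))

print(identify_part(np.array([0.15, 0.75, 0.25])))  # -> "right_hand"
```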
Alternatively, the depth camera 15 may output the detection results of its infrared detection unit to another device (the server 20, the user terminals 40, 60, 70, a calculation device installed in the studio 10, or the like), and that other device may perform processing such as calculating the depth information, extracting person areas, segmenting person and non-person areas, detecting person areas, detecting skeleton positions, and identifying each part within a person area.
The motion capture processing described above is performed without attaching markers to performers A, B, and C, but it may also be performed with markers attached to performers A, B, and C.
Furthermore, the depth information may be calculated by a method in which a projected infrared pattern is read and the depth information is obtained from the distortion of the pattern (Light Coding method).
Furthermore, the depth information may be calculated from parallax information obtained by a stereo camera or by multiple cameras. The depth information can also be calculated by recognizing the video acquired by the RGB camera 14 and analyzing the images using photogrammetry techniques or the like. In this case, the RGB camera 14 functions as the detection unit, so the depth camera 15 becomes unnecessary.
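For the stereo (parallax) alternative mentioned above, depth follows the standard relation Z = f·B/d between focal length f, baseline B, and disparity d; the numbers in this sketch are illustrative only:

```python
def depth_from_disparity(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    """Standard stereo relation: depth Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Example: f = 700 px, baseline = 0.1 m, disparity = 35 px -> depth = 2.0 m
print(depth_from_disparity(700.0, 0.1, 35.0))
```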
[Server 20]
As shown in FIG. 2, the server 20 includes interfaces (hereinafter simply referred to as "IF") to the units of the studio 10 and is connected to them by wire or wirelessly. The IFs connected to the units of the studio 10 are an audio IF 21, an RGB camera IF 22, a depth camera IF 23, a projector IF 24, and a display IF 25. The server 20 further includes a database 26, a data storage unit 27, a network IF 28, a main memory 29, and a control unit 30. The server 20 distributes live data to the user terminals 40, 60, and 70 and also functions as a display control device that controls the display of the projector 16, the studio monitor 17, and the user terminals 40, 60, and 70.
The audio IF 21 is connected to the playback device 11, the microphone 13, and the like of the studio 10. Music data to be played is input to the audio IF 21 from the playback device 11, and the voice data of performers A, B, and C is input from the microphone 13.
Video data of the studio 10 captured by the RGB camera 14 is input to the RGB camera IF 22. The depth information of various locations in the studio 10 and of performers A, B, and C, person-area data, the depth information of the skeleton positions, and the like are input to the depth camera IF 23. The projector IF 24 controls the projector 16 so as to display items on the floor of the stage of the studio 10 and elsewhere. The display IF 25 controls the studio monitor 17 installed in the studio 10. As an example, the display IF 25 displays the item objects presented to performers A, B, and C, together with the corresponding user IDs, on the display surface of the studio monitor 17. This allows performers A, B, and C to know who presented each item.
The database 26, which serves as a management unit, manages each user's items for each live show in association with the user ID of the user registered in this system. Specifically, for each live show, the database 26 manages, in association with each user ID, an item ID, an item destination ID, an item reception status, a return status, and a return reception status.
The item ID is an ID that uniquely identifies an item purchased by a user and, in each live show, uniquely identifies an item presented by that user.
The item destination ID is an ID that uniquely identifies the performer to whom the item was presented.
The item reception status manages whether the performer selected by the user successfully received the item presented by the user.
The return status manages whether the performer who received the item presented by the user made a return to that user.
The return reception status manages whether the return was successfully received.
In addition, the database 26 manages all users who can participate in live distribution, in association with their user IDs. The users who participate in each live show are selected from all registered users. As an example, the database 26 also manages the price of each item in association with its item ID. As an example, it also manages each user's total purchase amount for each performer.
FIG. 2 shows that user A presented the item with item ID "A (bouquet)" to performer C, but the present was not received by performer C. User B presented the item with item ID "C (headband)" to performer A; performer A received the item and made a return, and user B received that return. User C presented the item with item ID "B (effect)" to performer B; performer B received the item and made a return, but user C failed to receive the return.
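As a rough sketch of the per-live management described for the database 26 (the field names below are illustrative, not taken from this description), the example state of FIG. 2 could be represented as follows:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ItemRecord:
    user_id: str                      # user who purchased / presented the item
    item_id: str                      # uniquely identifies the presented item
    destination_id: str               # performer the item was presented to
    item_received: bool               # did the performer receive (e.g. pick up) the item?
    return_made: bool                 # did the performer make a return to the user?
    return_received: Optional[bool]   # did the user catch the return? (None if no return)

# The example state shown in FIG. 2, expressed with these illustrative fields.
records = [
    ItemRecord("userA", "A (bouquet)",  "performerC", item_received=False, return_made=False, return_received=None),
    ItemRecord("userB", "C (headband)", "performerA", item_received=True,  return_made=True,  return_received=True),
    ItemRecord("userC", "B (effect)",   "performerB", item_received=True,  return_made=True,  return_received=False),
]

# The server can then look up who presented a given item when a performer picks it up.
def presenter_of(item_id: str) -> str:
    return next(r.user_id for r in records if r.item_id == item_id)

print(presenter_of("C (headband)"))  # -> "userB"
```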
The data storage unit 27 is a storage device such as a hard disk. The data storage unit 27 stores control programs related to the live distribution system 1, display data for displaying item objects, and the like. As an example, when an item is a tangible object such as an accessory, the display data for displaying the item object is three-dimensional data, so that the accessory is displayed to match the orientation of the performer. As an example, the accessory data is displayed from the front of the accessory when the performer faces forward, and the accessory is displayed sideways when the performer faces sideways. As an example, the control programs include a distribution program that distributes live data to the user terminals 40, 60, and 70. As an example, they include an item display control program that uses the projector 16 to display the object of an item presented by a user on the floor of the studio 10. As an example, they include a display device control program that displays, on the display surfaces of the studio monitor 17 and the user terminals 40, 60, and 70, the item object presented by a user in association with performer A, B, or C. As an example, they include a display device control program that displays an object showing the sender's user ID on the display surfaces of the studio monitor 17 and the user terminals 40, 60, and 70. Furthermore, as an example, they include a display device control program that, when a performer who was presented an item makes a return to the user, displays the return item object on the display surfaces of the studio monitor 17 and the user terminals 40, 60, and 70.
The network IF 28 connects the server 20 and the user terminals 40, 60, and 70 via the network 2, such as the Internet. The main memory 29 is, as an example, a RAM and temporarily stores the live data being distributed, the control programs, and the like.
The control unit 30 is, as an example, a CPU and controls the overall operation of the server 20. As an example, the control unit 30 is a distribution unit that distributes live data to the user terminals 40, 60, and 70 in accordance with the distribution control program. As an example, the control unit 30 is an item display control unit that uses the projector 16 to display an item presented by a user on the floor of the studio 10 in accordance with the item display control program. As an example, the control unit 30 is a display device control unit that controls the display of the user terminals 40, 60, and 70, and also a display device control unit that controls the display of the studio monitor.
The control unit 30 generates and displays display data that shows, on the display surfaces of the studio monitor 17 and the user terminals 40, 60, and 70, the item object presented by a user in association with performer A, B, or C. Furthermore, as an example, it generates and displays display data that shows the sender's user ID on the display surfaces of the studio monitor 17 and the user terminals 40, 60, and 70. Furthermore, as an example, when a performer who was presented an item makes a return to the user, the control unit 30 generates and displays display data for the return item on the display surfaces of the studio monitor 17 and the user terminals 40, 60, and 70.
When an item object is displayed on the display surfaces of the studio monitor 17 and the user terminals 40, 60, and 70, it is displayed at the item object position where it should exist in the real space of the studio 10. That is, the item position in the real space of the studio 10 is specified by three-dimensional position information. Even if the orientation of the RGB camera 14 is changed, the item object displayed on the display surfaces of the studio monitor 17 and the user terminals 40, 60, and 70 appears at the appropriate item object position within the video acquired at that orientation. When the item position falls outside the imaging range of the RGB camera 14, the item object is no longer displayed on the display surfaces of the studio monitor 17 and the user terminals 40, 60, and 70. On the display surfaces of the studio monitor 17 and the user terminals 40, 60, and 70, the item object is displayed in accordance with the movements of performers A, B, and C, even when they crouch down or jump up.
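A minimal sketch of anchoring the item object to its real-space position is to project the fixed three-dimensional item position into the current camera view with a pinhole model and draw it only if it falls inside the image; the camera parameters here are illustrative assumptions, not values from this description:

```python
import numpy as np

def project_item(item_pos_world, cam_rotation, cam_translation, focal_px, image_size):
    """Project a fixed world-space item position into the current camera image.
    Returns pixel coordinates, or None if the item is outside the imaging range."""
    p_cam = cam_rotation @ (item_pos_world - cam_translation)
    if p_cam[2] <= 0:                      # behind the camera
        return None
    u = focal_px * p_cam[0] / p_cam[2] + image_size[0] / 2
    v = focal_px * p_cam[1] / p_cam[2] + image_size[1] / 2
    if 0 <= u < image_size[0] and 0 <= v < image_size[1]:
        return (u, v)                      # draw the item object here
    return None                            # out of range: do not display it

item = np.array([0.5, 0.0, 3.0])           # item position fixed in studio space
print(project_item(item, np.eye(3), np.zeros(3), 800.0, (1280, 720)))
```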
The control unit 30 does not need to perform all of the above processing by itself and may perform part of it in cooperation with other devices. As an example, a control device such as a personal computer may be installed in the studio 10, and the control device installed in the studio 10 and the server 20 may cooperate to perform the above processing. In this case, for example, the server 20 includes the database 26, the main memory 29, and the control unit 30, while the control device includes the audio IF 21, the RGB camera IF 22, the depth camera IF 23, the projector IF 24, the display IF 25, the data storage unit 27, and the network IF 28. As an example, the control device may then perform the processing other than updating the database 26, for example the processing of displaying the object of an item presented by a user with the projector 16 and of displaying it on the display surfaces of the studio monitor 17 and the user terminals 40, 60, and 70.
Part of the above processing may also be performed in cooperation with the user terminals 40, 60, and 70. For example, the real-space video data acquired by the RGB camera 14, the three-dimensional position information acquired by the depth camera 15, and the like are transmitted to the user terminals 40, 60, and 70. The user terminals 40, 60, and 70 detect the actions of users A, B, and C and, based on the real-space video data, the three-dimensional position information, and the detection results of the users' actions, display on their own display surfaces the trajectory of the item up to its final arrival position, the item object, and so on.
[User terminal 40]
As an example, the user terminal 40 is a device managed by user A and includes a desktop or laptop personal computer 40a and a smartwatch 50. As an example, the laptop personal computer 40a includes an audio IF 41, a display IF 42, a network IF 43, a communication IF 44, a data storage unit 45, an operation IF 46, a main memory 47, and a control unit 48. The audio IF 41 is connected to audio output devices such as speakers, earphones, and headphones and to audio input devices such as a microphone. As an example, the display IF 42 is connected to a display unit 49 composed of a display device such as a liquid crystal display device.
As an example, the network IF 43 communicates with the server 20 via the network 2. As an example, the communication IF 44 communicates with the smartwatch 50. The communication IF 44 and the smartwatch 50 are connected by a wireless or wired LAN, and acceleration data, angle data, angular velocity data, and the like are input from the smartwatch 50 as user action information. The data storage unit 45 is a non-volatile memory, for example a hard disk or a flash memory, and stores a live data playback program, a communication control program for the smartwatch 50, and the like. The operation IF 46 is connected to operation devices such as a keyboard and a mouse; when a touch panel is provided on the display surface of the display unit 49 connected to the display IF 42, the operation IF 46 is also connected to that touch panel. The main memory 47 is, as an example, a RAM and temporarily stores the live data being distributed, control programs, and the like. The control unit 48 is, as an example, a CPU and controls the overall operation of the user terminal 40. As an example, while live data is being played, the control unit 48 selects one or more of performers A, B, and C and transmits performer selection data to the server 20, and transmits to the server 20 item selection data for one or more items selected from the list of item objects. Furthermore, as an example, the control unit 48 transmits to the server 20 operation data such as the acceleration data, angle data, and angular velocity data detected by the smartwatch 50 as user action information.
The smartwatch 50 is, as an example, a wristwatch-type information processing terminal worn on the wrist of user A's dominant arm. The smartwatch 50 includes a sensor 51, a communication IF 52, a data storage unit 53, a main memory 54, and a control unit 55. The sensor 51 is, for example, an acceleration sensor or a gyro sensor. As an example, the communication IF 52 transmits the acceleration data detected by the sensor 51 and the angle data and angular velocity data of the smartwatch 50 to the personal computer 40a. As an example, when user A makes a motion of throwing an object, the sensor 51 transmits operation data such as acceleration data, angle data, and angular velocity data relating to the arm swing, as user action information, from the communication IF 52 to the personal computer 40a. The data storage unit 53 is a non-volatile memory, for example a hard disk or a flash memory, and stores a driver for driving the sensor 51, a communication control program for the personal computer 40a, and the like. The control unit 55 is, as an example, a CPU and controls the overall operation of the smartwatch 50.
The terminal connected to the user terminal 40 need not be the smartwatch 50; it may instead be a small, portable information processing terminal, such as a smartphone, equipped with an acceleration sensor and a gyro sensor.
[User terminal 60]
As an example, the user terminal 60 is a device managed by user B and is a smart device terminal such as a smartphone or a tablet. As an example, the user terminal 60 includes an audio IF 61, a display IF 62, an operation IF 63, a sensor 64, a network IF 65, a data storage unit 66, a main memory 67, and a control unit 68. The audio IF 61 is connected to audio output devices such as a built-in speaker or earphones and to audio input devices such as a built-in microphone. As an example, the audio IF 61 outputs the sound of the live data from the audio output device. The display IF 62 is connected to a small built-in display unit 69 such as a liquid crystal panel or an organic EL panel. The display unit 69 is provided with a touch panel, and the operation IF 63 is connected to the touch panel. The sensor 64 is, for example, an acceleration sensor or a gyro sensor. As an example, the network IF 65 communicates with the server 20 via the network 2. As an example, when the user makes a motion of throwing an object, the network IF 65 transmits to the server 20 the acceleration data, angle data, and angular velocity data relating to the arm swing detected by the sensor 64, as operation data constituting user action information. When a swipe operation is performed on the display surface showing the live data, tracing toward the displayed performer A, B, or C with a finger or a stylus pen, operation data such as its coordinate data is likewise transmitted to the server 20. The data storage unit 66 is a non-volatile memory, for example a flash memory, and stores a live data playback program and the like. The main memory 67 is, as an example, a RAM and temporarily stores the live data being distributed, control programs, and the like.
The control unit 68 is, as an example, a CPU and controls the overall operation of the user terminal 60. As an example, while live data is being played, the control unit 68 selects one or more of performers A, B, and C and transmits the selection data to the server 20, and transmits to the server 20 selection data for one or more items from the list of item objects. Furthermore, as an example, when the user holds the user terminal 60 and makes a motion of throwing an object, the control unit 68 transmits to the server 20 operation data such as the acceleration data, angle data, angular velocity data, and coordinate data of the arm swing.
The operation of the live distribution system 1 will now be described.
[Live distribution processing]
Prior to live distribution, in the studio 10, the depth camera 15 first acquires depth information of various locations in the studio 10, calculates the person areas, then calculates the skeleton positions within the person areas, and becomes able to calculate the depth information at each skeleton position. Thereafter, the depth camera 15 executes motion capture processing. The user terminals 40, 60, and 70 log in to the server 20 so that the live distribution can be viewed.
As shown in FIG. 3(a), when performers A, B, and C start their performance, in step S1 the server 20 generates live data as content data. Specifically, video data of the real space in which performers A, B, and C are performing in the studio 10, captured by the RGB camera 14, is input to the server 20. Music data is also input to the server 20 from the playback device 11, and the voice data of performers A, B, and C is input from the microphone 13. Based on these various data, the server 20 generates live data of the performance by performers A, B, and C for distribution to the user terminals 40, 60, and 70. In step S2, the depth information of the skeleton positions at various locations in the studio 10 and of performers A, B, and C is input to the server 20. In step S3, the server 20 distributes the live data to the user terminals 40, 60, and 70. That is, the server 20 distributes the performance by performers A, B, and C to the user terminals 40, 60, and 70 in real time. As a result, as shown in FIG. 3(b), the user terminals 40, 60, and 70 display a live video 71 based on the live data on their display surfaces and output the live audio.
[Item / performer selection processing]
Next, the item / performer selection processing will be described with reference to FIG. 4. Here, a case where user B views and operates live data on the user terminal 60 is described as an example. In step S11, the server 20 displays, on the display surface of the user terminal 60 and superimposed on the live video 71, an item selection object 72 that lists the objects of the selectable items (see FIG. 5(a)). As an example, in FIG. 5(a), the item selection object 72 lists in a row, from the left, an object 72a representing a bouquet item, an object 72b representing an item that adds an effect accentuating the performer's movements, an object 72c representing a cat-ear headband item, and an object 72d representing a background image item for the live distribution.
The items listed in the item selection object 72 are items prepared in plurality by the operator. The prepared items may differ for each live show or may be common to all live shows, and some items may be shared among multiple live shows. The database of the server 20 manages the price of each item in association with its item ID. As an example, the server 20 stores video data, image data, audio data, music data, and the like as item data for displaying each item, associated with its item ID. As an example, the item data is three-dimensional data.
Each item is a paid item; the amount is determined by the item, and the price is associated with the item ID. As an example, the bouquet item with item ID "A" costs 200 yen, the effect-adding item with item ID "B" costs 300 yen, the cat-ear headband item with item ID "C" costs 500 yen, and the background image item with item ID "D" costs 1,000 yen. As an example, user A shown in the database of FIG. 2 has purchased the bouquet with item ID "A" for 200 yen, user B has purchased the headband with item ID "C" for 500 yen, and user C has purchased the effect with item ID "B" for 300 yen. In this way, users A, B, and C can present items to performers A, B, and C by purchasing them via the user terminals 40, 60, and 70. This allows performers A, B, and C and the operator to earn revenue according to the items presented by users A, B, and C. Regardless of whether performers A, B, and C receive (for example, pick up) an item, every item presented by users A, B, and C contributes to the revenue of performers A, B, and C and the operator. Some items may be free. In one live show, a single user may purchase one item or multiple items. The database manages each user's total purchase amount for each performer. As a result, as an example, the server 20 managed by the operator can perform processing such as having the performer preferentially make returns to users who have purchased many items.
When user B selects one item from the list in the item selection object 72 on the user terminal 60, the user terminal 60 transmits item selection data including the user ID and the item ID of the selected item to the server 20. In step S12, the server 20 performs item selection processing based on the item selection data. The server 20 transmits to the user terminal 60 selection data for displaying only the object 72c of the selected item, and the object 72c is displayed on the display surface of the user terminal 60, superimposed on the live video 71. FIG. 5(b) shows, as an example, a state in which the server 20 has selected the object 72c representing the headband item and displayed it in the lower corner of the display surface of the user terminal 60. At the same time, the server 20 performs a similar display on the studio monitor 17 in order to inform performers A, B, and C that item selection processing is currently in progress.
The server 20 also displays on the display surface of the user terminal 60, as an example, performer selection objects 73 that each enclose one of performers A, B, and C. Here, as an example, a first teaching object 73a informing user B of the next performer selection operation is also displayed on the display surface of the user terminal 60. When one performer selection object 73 is selected on the user terminal 60, the user terminal 60 transmits performer selection data including the user ID and the performer ID of the selected performer to the server 20. In step S13, the server 20 performs performer selection processing based on the performer selection data. The server 20 displays a performer determination object 74 for the selected performer A on the display surface of the user terminal 60, superimposed on the live video 71. FIG. 5(c) shows, as an example, a state in which the server 20 has selected performer A and displayed this on the display surface of the user terminal 60. The performer selection object 73 and the performer determination object 74 may be rectangular, but are not limited to this shape and may be, for example, circular or triangular. In addition, as shown in FIG. 5(d), the server 20 performs a similar display on the studio monitor 17 in order to inform performers A, B, and C that performer selection processing is currently in progress. The studio monitor 17 displays a second teaching object 74a indicating that user B is about to send the selected item.
When an item and a performer have been selected, the server 20 registers the selected item and performer in the database 26.
After this, the user of the user terminal 60 can present an item to performers A, B, and C in the studio 10 while operating the user terminal 60. Specifically, user B of the user terminal 60 can have the pseudo-experience of throwing the selected item to the selected performer by holding the user terminal 60 and making a throwing motion. Specifically, the server 20 starts synchronization processing with the user terminal 60, and the user terminal 60 transmits to the server 20, at every unit time, operation data such as the acceleration data, angle data, and angular velocity data detected by the sensor 64 as user action information. In step S14, the server 20 stores thresholds for determining that the user has made a throwing motion, and determines that a throwing motion has been performed at the user terminal 60 when a threshold is exceeded. As an example, the server 20 stores thresholds for acceleration data, angle data, angular velocity data, and the like in order to identify a throwing motion, and determines that a throwing motion has been performed when the acceleration data, angle data, angular velocity data, or the like exceeds its threshold. It also determines that a throwing motion has been performed when, for example, the distance between the start point and end point of a swipe operation on the touch panel exceeds a threshold. In the case of the user terminal 40, a throwing motion is determined to have been performed when the acceleration data, angle data, angular velocity data, or the like exceeds its threshold.
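The threshold-based throw detection of step S14 can be sketched roughly as follows; the threshold values and field names are illustrative assumptions, not values from this description:

```python
import math

# Illustrative thresholds for deciding that a throwing motion occurred.
ACCELERATION_THRESHOLD = 15.0   # m/s^2, magnitude of the arm-swing acceleration
SWIPE_DISTANCE_THRESHOLD = 200  # pixels, start-to-end distance of a swipe

def throw_detected_from_sensor(acc_xyz) -> bool:
    # Throwing motion if the acceleration magnitude exceeds the threshold.
    return math.sqrt(sum(a * a for a in acc_xyz)) > ACCELERATION_THRESHOLD

def throw_detected_from_swipe(start_xy, end_xy) -> bool:
    # Throwing motion if the swipe covers more than the threshold distance.
    return math.dist(start_xy, end_xy) > SWIPE_DISTANCE_THRESHOLD

print(throw_detected_from_sensor((12.0, 9.0, 4.0)))        # True: |a| is about 15.5 m/s^2
print(throw_detected_from_swipe((100, 500), (400, 200)))   # True: about 424 px
```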
In step S15, the server 20 analyzes the direction, speed, and other characteristics of the user's arm swing based on the operation data, such as the acceleration data, angle data, and angular velocity data relating to the arm swing, transmitted from the user terminal 60. From this, the server 20 calculates the trajectory of the thrown item and the item position, that is, where it lands. As an example, the item position can be specified in a three-dimensional coordinate system whose origin is the detection unit of the depth camera 15.
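Under simple assumptions (the item treated as a projectile launched from a fixed point, with a speed and direction estimated from the arm swing), the landing-position calculation of step S15 could be sketched as follows; all parameters are illustrative:

```python
import math

GRAVITY = 9.81  # m/s^2

def landing_position(launch_pos, speed, elevation_deg, azimuth_deg):
    """Landing point on the floor (z = 0) for a simple ballistic trajectory,
    with the launch speed and direction estimated from the arm-swing data."""
    x0, y0, z0 = launch_pos
    el, az = math.radians(elevation_deg), math.radians(azimuth_deg)
    vz = speed * math.sin(el)
    vh = speed * math.cos(el)
    # Flight time until the item falls back to floor height z = 0.
    t = (vz + math.sqrt(vz * vz + 2 * GRAVITY * z0)) / GRAVITY
    return (x0 + vh * math.cos(az) * t, y0 + vh * math.sin(az) * t, 0.0)

# Example: thrown from 1.5 m height at 5 m/s, 30 degrees upward, straight ahead.
print(landing_position((0.0, 0.0, 1.5), 5.0, 30.0, 0.0))
```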
In step S16, the server 20 generates, based on the analysis result, display data of a falling object 75 representing the falling headband item to be displayed on the display surface of the user terminal 60, and transmits it to the user terminal 60. As a result, as shown in FIG. 5(e), the falling object 75 is displayed in real time on the display surface of the user terminal 60 so that it appears to come toward performer A. Similar display data of the falling object 75 is transmitted to the studio monitor 17 of the studio 10, and the falling object 75 is also displayed on the studio monitor 17 in real time. When displayed on the display surfaces of the studio monitor 17 and the user terminal 60, the falling object 75 is displayed at the item object position where it should exist in the real space of the studio 10. That is, the item position in the real space of the studio 10 is specified by three-dimensional position information. Therefore, even if the orientation of the RGB camera 14 is changed, the falling object 75 is displayed at the appropriate item object position within the video acquired at that orientation of the RGB camera 14. When the item position falls outside the imaging range of the RGB camera 14, the falling object 75 is no longer displayed.
The processing of displaying the falling object 75 on the studio monitor 17 may be omitted, because the falling object 75 is displayed on the floor of the studio 10 by the projector 16 in the next step, S17.
In step S17, the server 20 transmits the display data of the falling object 75 to the projector 16, and the projector 16 displays in real time, on the floor of the studio 10, the object of the item flying toward performer A and the object of the item that has landed at the item position. This allows performers A, B, and C to grasp the landing position of the falling object 75.
The falling object 75 only needs to be displayed at least at the item position; the intermediate states and trajectory of its flight toward the item position need not be displayed.
The falling object 75 may also be displayed so that the item position does not fall within the performer action range in which performers A, B, and C move. The object may enter the performer action range on its way to the item position, but the falling object 75 may be displayed so that the object of the item presented by the user does not finally come to rest within the performer action range. Even if the item position calculated by detecting the user's action of presenting (for example, throwing) an item to the performer lies within the performer action range, the object need not be displayed within that range; in this case, for example, taking the calculated item position into account, the object is displayed at a nearby position outside the performer action range (the position outside the performer action range that is closest to the calculated item position). This form of display can prevent performers A, B, and C from accidentally stepping on a presented item.
The performer action range is, as an example, the stage in the studio 10 or the like. Different performer action ranges may be set for the prelude, interlude, and postlude of a song and for its other parts. Different ranges may also be set for the period during which a song is being played and the period during which it is not (for example, the range in which the performers move along with the song while it is being played, and no performer action range while it is not). Furthermore, a different range may be set for each song being played, or the same range may be kept for the entire live distribution period. This is not limited to musical performance; different ranges may be set for the period of a performance and the periods before and after it.
Furthermore, when an item hits a performer, the item may be dropped on the spot, its flight direction may change, or it may be moved.
[Item acquisition processing]
Next, the operation by which performer A acquires the headband item presented by user B will be described with reference to FIG. 6.
The skeleton positions of performers A, B, and C and the depth information of each skeleton position are constantly input to the server 20 from the depth camera 15. The server 20 also detects the faces of performers A, B, and C from the video from the RGB camera 14. The server 20 thereby tracks each skeleton position of performer A, the depth information of those skeleton positions, and the position of performer A's face. In step S21, the server 20 determines that performer A has picked up the headband item. As an example, in order to identify the action of picking up an item lying on the floor of the studio 10, the server 20 stores thresholds relating to the distance between the skeleton position of either hand and the skeleton position of either foot, the distance between the skeleton position of either hand and the floor surface, and so on. The server 20 determines that performer A has picked up the headband item when data such as the calculated distance between the skeleton position of either hand and the skeleton position of either foot, or the distance between the skeleton position of either hand and the floor at the item position, crosses the corresponding threshold. As an example, the server 20 determines that performer A has picked up the headband item when the position of either hand overlaps the item position on the floor. In other words, it determines whether the performer has come within the range of the object of the item presented to the performer by the user. The range of the item object is determined, as an example, by the three-dimensional position information, the user action information, and the type of the item; in the case of a headband, for example, it is the range of the outer shape of the headband. The determination of whether the position of either hand overlaps the item position on the floor may be made, as examples, by checking whether the three-dimensional information of the performer's hand position and the three-dimensional information of the item position overlap as one point against one point, as multiple points against one point, as one point against multiple points, as multiple points against multiple points, or as a region (such as the fingertips) against a region (such as the area where the falling object 75 is displayed). This determination is easier to make when the three-dimensional information of the hand position and of the item position is compared not as single points but as multiple points or regions.
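The overlap test of step S21 (does the performer's hand come within the range of the item object on the floor?) can be sketched as a simple distance check between the hand's skeleton position and the item position, both in the depth camera's coordinate system; the tolerance value is an illustrative assumption:

```python
import math

def hand_overlaps_item(hand_pos, item_pos, tolerance_m=0.15):
    """True if the hand skeleton position lies within the item object's range,
    approximated here as a sphere of radius `tolerance_m` around the item position."""
    return math.dist(hand_pos, item_pos) <= tolerance_m

def picked_up(left_hand, right_hand, item_pos):
    # Either hand overlapping the item position counts as picking up the item.
    return hand_overlaps_item(left_hand, item_pos) or hand_overlaps_item(right_hand, item_pos)

item = (1.20, 0.80, 0.00)  # landing position on the studio floor
print(picked_up((0.40, 0.30, 1.10), (1.25, 0.85, 0.05), item))  # True: the right hand reaches the item
```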
In step S22, the server 20 performs control to hide the falling object 75 displayed on the floor of the studio 10, because once the item has been picked up by performer A it is no longer on the floor.
From the time the item is picked up by performer A until it is put on performer A's head, it is held in performer A's hand. As an example, FIG. 7(a) shows a state in which performer A has picked up the headband item. In step S23, the server 20 therefore analyzes performer A's acquisition motion. That is, the server 20 analyzes, from each skeleton position of performer A, the depth information of those skeleton positions, and the position of performer A's face, the motion up to the point at which the headband item is placed on the head. In step S24, the server 20 generates, based on the analysis result, display data of an acquisition object 76 representing the headband item from the moment it is picked up until it is placed on the head, for the display surfaces of the studio monitor 17 and the user terminal 60, and transmits it to the studio monitor 17 and the user terminal 60. As a result, as an example, the acquisition object 76 is displayed on the display surfaces of the studio monitor 17 and the user terminal 60 in association with the hand that picked it up while it moves from the item position on the floor to the head.
 In step S25, the server 20 analyzes the wearing action in which the performer A puts the headband item on the head. That is, the server 20 analyzes the wearing action from each skeleton position of the performer A, the depth information of those skeleton positions, and the position of the performer A's face. As one example, the server 20 detects the wearing action when the position of either hand overlaps the position of the head. In step S26, the server 20 generates display data that displays the acquisition object 76 at the head wearing position of the performer on the display surfaces of the studio monitor 17 and the user terminal 60, and transmits the data to the studio monitor 17 and the user terminal 60. As one example, the server 20 generates display data that displays the acquisition object 76 along the boundary between the hair color and the background. As a result, as one example, the display surfaces of the studio monitor 17 and the user terminal 60 show the acquisition object 76 worn on the head of the performer A (see FIG. 7(b)). The server 20 tracks the head of the performer A and displays the headband item so that it always appears to be worn even when the performer A moves.
 Depending on the choreography, the performer A may turn sideways. Even in such a case, the server 20 displays the acquisition object 76 to match the orientation of the performer A (see FIG. 7(c)). The orientation of the performer A can be determined by detecting the performer A's face in the video from the RGB camera 14 and calculating the performer A's skeleton positions from the depth camera 15, and since the data for displaying the item object is three-dimensional, the object can be displayed from any orientation. When it is detected from these data that the performer A has turned sideways, the orientation of the headband object is changed to match the orientation of the performer A. The acquisition object 76 is also displayed to follow the performer A's movement when the performer A crouches or jumps.
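 A minimal sketch of how the acquisition object could be kept anchored to the tracked head is shown below. The structure of the tracking record, the vertical offset, and the yaw-only rotation are assumptions made for illustration; the embodiment only states that the object follows the head position and orientation.

```python
import math

# Hypothetical per-frame head track: position taken from the depth camera's
# skeleton data and a yaw angle (degrees) derived from face detection in the
# RGB video.
head_track = {"x": 1.2, "y": 0.4, "z": 2.0, "yaw_deg": 35.0}

def headband_overlay_pose(track, offset_up_m=0.10):
    """Return the pose at which the acquisition object 76 could be drawn.

    The object is placed slightly above the head joint (roughly along the
    hair/background boundary) and rotated to match the performer's facing
    direction, so it keeps looking "worn" even when the performer turns
    sideways, crouches, or jumps.
    """
    yaw = math.radians(track["yaw_deg"])
    return {
        "position": (track["x"], track["y"] + offset_up_m, track["z"]),
        "rotation_y_rad": yaw,
    }

print(headband_overlay_pose(head_track))
```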
 When the item is acquired by the selected performer, the server 20 registers the success in the database 26.
 When the headband item is worn by the performer A, in step S27 the server 20 displays, on the display surfaces of the studio monitor 17 and the user terminal 60, an ID object 76a showing the user ID of the user B who presented the headband item to the performer A. The performer A can thereby see the user ID of the user who presented the headband item, and the user B, seeing his or her own user ID displayed, can confirm that the headband item he or she presented has been worn by the performer A. The server 20 may also display this on the floor of the studio 10 using the projector 16.
 The period during which the item object is displayed in association with the performer may be the entire remainder of the live distribution after acquisition, or it may be a single song. The object may also be hidden during interludes.
 An item once worn by the performer can also be removed and stored or placed in, for example, a box (which may be a real object installed in the studio, or a virtual object like the item itself). Thus, when many items are presented to the performer, the performer can wear multiple presents; for example, the performer can wear several headbands at once, or remove a headband currently worn and then pick up and wear a new one. In such cases the box can be used like a desk or a storage case, so that changing headbands and the like can be staged without appearing unnatural.
 [Return processing]
 Since the performers A, B, and C do not sing during an interlude, a return action can be performed during the interlude as a reward for the user B who presented the item. In step S31, the server 20 determines whether the music being played live has entered an interlude. As one example, the server 20 can determine that an interlude has started when there has been no voice input from the microphone 13 for a predetermined period. As another example, the server 20 can determine that an interlude has started when a detection signal indicating the start of an interlude is input from the playback device 11. As a further example, the start of an interlude can be determined by detecting an action indicating that an interlude has started. As one example, the server 20 starts synchronization processing with the user terminal 60 so that the display on the user terminal 60 and the return receiving operation performed on the user terminal 60 can be detected.
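 A minimal sketch of the silence-based interlude check is given below. The three-second window and the normalised microphone level are assumptions for illustration; the embodiment only states that an interlude is judged to have started when there is no voice input for a predetermined period.

```python
import time

class InterludeDetector:
    """Judge that an interlude has started when no voice input has been seen
    for a predetermined period (assumed here to be 3 seconds)."""

    def __init__(self, silence_threshold=0.05, silence_period_s=3.0):
        self.silence_threshold = silence_threshold
        self.silence_period_s = silence_period_s
        self._last_voice_time = time.monotonic()

    def feed_mic_level(self, level):
        # 'level' is a normalised microphone amplitude in [0.0, 1.0].
        if level > self.silence_threshold:
            self._last_voice_time = time.monotonic()

    def in_interlude(self):
        return time.monotonic() - self._last_voice_time >= self.silence_period_s

detector = InterludeDetector()
detector.feed_mic_level(0.6)      # singing
print(detector.in_interlude())    # False immediately after voice input
```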
 The return processing may be performed between songs rather than during an interlude. When a play, a musical, or the like is being performed on a stage, it may be performed, for example, between act N and act N+1.
 The end of the interlude can be determined when voice input from the microphone 13 resumes or when a detection signal indicating that the interlude is ending is input from the playback device 11. Alternatively, the end of the interlude can be determined by detecting an action indicating that the interlude has ended.
 In step S27 described above, the server 20 displays the user ID of the user B, who presented the headband item to the performer A, on the display surfaces of the studio monitor 17 and the user terminal 60. The performer A therefore calls out the user ID of the user B toward the microphone 13. Then, in step S32, the server 20 performs speech recognition on the audio data collected by the microphone 13 and identifies the user ID of the user B. The server 20 also registers in the database 26 that a return will be made to the user who is to receive it.
 In step S33, the server 20 detects the return action of the performer A. As one example, the server 20 detects a characteristic specific action indicating that the performer is moving into the return action. As one example, the server 20 stores thresholds for each skeleton position for determining the specific action, and determines that the performer A has performed the specific action when the data for each skeleton position crosses the threshold. Here, the return gift from the performer A to the user B is, as one example, a ball signed by the performer A, and following the specific action the performer A makes a motion of throwing the signed ball from the studio 10 toward the user B, who is not actually in the studio 10. In step S34, the server 20 analyzes the return action from each skeleton position of the performer A, the depth information of those skeleton positions, and the position of the performer A's face. In step S35, as one example, the server 20 generates display data that displays a return object 77 representing the signed ball at the position of either hand of the performer A at the start of the throwing motion on the display surfaces of the studio monitor 17 and the user terminal 60, and transmits the data to the studio monitor 17 and the user terminal 60. As a result, as shown in FIG. 9(a), the return object 77 is displayed in real time on the display surfaces of the studio monitor 17 and the user terminal 60. The server 20 also generates display data that displays a receiving object 78 imitating the hand of the user B on the display surfaces of the studio monitor 17 and the user terminal 60, and transmits it to them. The receiving object 78 serves as the virtual target at which the signed ball is thrown.
 In step S36, the server 20 analyzes the throwing motion by the performer A. Specifically, the server 20 detects the swing of the performer A's arm and the like from each skeleton position of the performer A, the depth information of those skeleton positions, and the position of the performer A's face. In step S37, the server 20 generates display data that displays the return object 77 at the position of either hand of the performer A partway through the throwing motion on the display surfaces of the studio monitor 17 and the user terminal 60, and display data that displays the return object 77 in flight after it leaves the hand, and transmits these to the studio monitor 17 and the user terminal 60. As a result, as shown in FIG. 9(b), the display surfaces of the studio monitor 17 and the user terminal 60 show, in real time, the signed ball being thrown toward the receiving object 78.
 An effect may be applied to the arm with which the performer A throws the signed ball. As one example, this effect detects the performer's movement and, as an effect corresponding to the detected movement, displays many blinking star shapes on the edge on the downstream side of the direction of movement of the performer A's moving arm. As one example, the effect is displayed in association with the arm that throws the signed ball at the moment the return action is detected; then, when the motion shifts to the throw, the effect is displayed on the downstream edge of the arm's direction of movement, following the swing of the arm. Also, at the moment the return action is detected, the background image may be changed to a specific image displayed during the return processing.
 When the user performs a receiving operation on the user terminal 60 at the timing at which the return object 77 reaches the receiving object 78, the user terminal 60 transmits receipt data including the user ID to the server 20. The receiving operation represents catching the ball and is, for example, an operation of clicking any position on the screen or the receiving object 78 with a mouse, or, as another example, an operation of touching the touch panel. In step S38, when the server 20 receives the receipt data, it registers in the database 26 that the return has been received. At this time, the server 20 generates display data that displays the return object 77 being caught by the receiving object 78 on the display surfaces of the studio monitor 17 and the user terminal 60, and transmits the data to the studio monitor 17 and the user terminal 60. As a result, as shown in FIG. 9(c), the display surfaces of the studio monitor 17 and the user terminal 60 show the signed ball caught in the hand.
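 The timing check behind the receiving operation could look like the following sketch. The flight duration, the catch window, and the payload fields are assumptions for illustration; the embodiment only states that receipt data (or receipt failure data) including the user ID is sent depending on whether the operation is performed at the right timing.

```python
# Hypothetical sketch: the user terminal accepts the catch only while the
# return object 77 is close enough to the receiving object 78.

FLIGHT_DURATION_S = 1.2   # assumed time for the thrown ball to reach the target
CATCH_WINDOW_S = 0.3      # assumed tolerance around the arrival time

def receipt_payload(user_id, throw_started_at, tapped_at):
    """Build the data the user terminal sends to the server for step S38."""
    arrival = throw_started_at + FLIGHT_DURATION_S
    caught = abs(tapped_at - arrival) <= CATCH_WINDOW_S
    return {"user_id": user_id, "received": caught}

print(receipt_payload("userB", throw_started_at=10.0, tapped_at=11.1))
# {'user_id': 'userB', 'received': True}  -> server registers the return as received
print(receipt_payload("userB", throw_started_at=10.0, tapped_at=12.0))
# {'user_id': 'userB', 'received': False} -> server registers a receipt failure
```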
 If the receiving operation is not performed at the timing at which the return object 77 reaches the receiving object 78, receipt failure data including the user ID is transmitted to the server 20, and when the server 20 receives the receipt failure data, it registers in the database 26 that receipt of the return failed.
 [Other items]
 The item object 72b shown in FIG. 5(a), which adds an effect embellishing the performer's movement, adds the following kind of effect to the performers A, B, and C. In FIG. 10(a), an effect object 81 has been added to the selected performer A. The image of the effect object 81 is produced by detecting the performer's movement and, as an effect corresponding to the detected movement, displaying many blinking star shapes on the edge on the downstream side of the direction of movement of the performer A's moving arm. Such an effect object 81 is not a tangible object like the headband described above. Therefore, as shown in FIG. 10(b), when it is thrown toward the selected performer A it is represented as a box object 82 with a present ribbon or the like attached. Then, as shown in FIG. 10(c), when it is acquired by the selected performer A, that is, when the position of either hand overlaps the item position, the box object 82 is hidden and the effect object 81 is displayed from then on.
 As one example, the effect is displayed to follow the performer A's movement even when the performer A crouches or jumps. As one example, the effect may be changed before and after the performers A, B, and C jump; for instance, blinking star shapes are displayed before the jump and a different blinking figure is displayed after the jump. As another example, a plurality of specific actions are defined in advance, and when one specific action is detected, a display applying the specific effect associated with that action is performed. As another example, the display applying the effect is stopped when a specific action is detected. As a further example, the display applying the effect is not performed until a specific action is detected.
 FIG. 10(d) shows a state in which an object 72d indicating a background image item for the live distribution has been selected. Such a background image object 72d is also not a tangible object like the headband described above. For this reason, it is preferable to use the present box object 82 when it is thrown toward the selected performer A.
 Even when the headband item is selected, the present box object 82 may be displayed when it is thrown toward the selected performer A.
 The return object 77 may also be displayed only on the user terminals 40, 60, and 70 of the user receiving the return. This makes it possible to realize one-to-one communication between the performer and the user.
 [Other item/performer selection processing]
 In the example above, the case where the user B selects the performer A has been described; however, the performer B or the performer C can be selected instead of the performer A, and one user may select multiple performers, for example the performer B or the performer C together with the performer A, from a single user terminal 40, 60, or 70. When multiple performers are selected, different items may be selected or the same item may be selected, except in the case of the object 72d indicating the background image item for the live distribution.
 According to the live distribution system 1 described above, the effects listed below can be obtained.
 (1) In the live distribution system 1, as a user desire, a user who wants the performer to receive an item will, for example, keep purchasing items and presenting them to the performer until the performer receives one. Also, because the user wants to raise the chance of the item being received even slightly, the user tries to throw the item as close to the performer as possible. In addition, a sense of competition arises between users, for example over having had an item received by a performer or having received a return. This encourages users to purchase items, and thus increases the revenue of the operator and the performers.
 (2) The users A, B, and C can present items to the performers A, B, and C performing in the studio 10 by making a motion as if throwing the item toward the studio 10. A presented item is displayed as a falling object 75 on the floor of the studio 10. The performers A, B, and C can thereby see that an item has been presented by the users A, B, and C, such as fans. Because the item is also displayed in front of the performers A, B, and C on the user terminals 40, 60, and 70, the users A, B, and C can see that the items they threw have arrived in front of the performers A, B, and C. In this way, even without actually being in the studio 10, the users A, B, and C can present items to the performers A, B, and C from the user terminals 40, 60, and 70 as if they were in the studio 10, and the performers A, B, and C can behave as if receiving an actual present.
 (3) When the performer A, B, or C who received a present pretends to pick up the item object displayed on the floor of the studio 10, the user terminals 40, 60, and 70 display that performer as if actually wearing the item. The presenting user can therefore see that the item he or she presented has been received.
 (4) When a user presents an item to the performers A, B, and C performing in the studio 10 by making a motion as if throwing it toward the studio 10, the user terminals 40, 60, and 70 transmit operation data such as acceleration data, angle data, and angular velocity data relating to the arm swing. The item position where the item falls can therefore be changed according to the speed of the arm swing and the like. The users A, B, and C can thus adjust the speed of their arm swing and the like so that the item falls as close as possible in front of the selected performer A, B, or C, which enhances the entertainment value.
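 One way the swing data could be turned into a landing position is sketched below. The conversion from peak acceleration to release speed and the simple projectile model are assumptions for illustration; the embodiment only states that the item position depends on the transmitted swing data.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def landing_distance(peak_accel_ms2, swing_duration_s, release_angle_deg=30.0,
                     release_height_m=1.5):
    """Estimate how far in front of the user's virtual position the item lands.

    The release speed is approximated as peak acceleration multiplied by the
    swing duration (an assumed simplification), and the flight follows ideal
    projectile motion from the release height.
    """
    v = peak_accel_ms2 * swing_duration_s
    angle = math.radians(release_angle_deg)
    vx, vy = v * math.cos(angle), v * math.sin(angle)
    # Time until the item reaches the floor (y = 0) from the release height.
    t = (vy + math.sqrt(vy * vy + 2 * G * release_height_m)) / G
    return vx * t

# A faster swing carries the item further toward the selected performer.
print(round(landing_distance(20.0, 0.25), 2))  # gentle swing
print(round(landing_distance(40.0, 0.25), 2))  # stronger swing lands further away
```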
 (5) The performers A, B, and C can be shown the user IDs of the users A, B, and C who presented the items.
 (6) The performers A, B, and C can make a return to the users A, B, and C who presented items. This makes two-way communication possible between the performers A, B, and C and the users.
 (7) Return items can also be displayed on the user terminals 40, 60, and 70 according to the actions of the performers A, B, and C. Furthermore, allowing the user terminals 40, 60, and 70 to perform an operation of catching the return item with good timing enhances the entertainment value even more.
 The live distribution system described above may also be modified as appropriate and implemented as follows.
 ・When the performers A, B, and C make a return to the users A, B, and C who presented items, the performers A, B, and C need not perform the return action using the return object 77. In this case, a tangible return present may be mailed to the users A, B, and C who gave the present. This simplifies the processing of the live distribution system 1.
 ・As for return gifts, the actual tangible present may be mailed to the users A, B, and C at a later date. What is mailed need not be the actual signed ball; it may be a signed autograph board, goods related to the performer, an album such as a CD or DVD, a concert voucher, or the like. When a return is mailed, it is sent to the users registered in the database 26 as having received the return. The sender in this case may be the performers A, B, and C, or may be the operator of the system. In the case of a receipt failure, the user may also be prevented from receiving the tangible present (it is not mailed).
 ・The performers A, B, and C need not make a return to the users A, B, and C who presented items. That is, the server 20 may omit the return processing, and even when an item is received, a return gift need not be mailed.
 ・As one example, when no return is made to the users A, B, and C who presented items, the user IDs need not be managed in association with the presented items. This simplifies the processing of the live distribution system 1.
 ・When the user terminal has a touch panel, as with the user terminal 60, the operation of presenting an item to the selected performer A, B, or C may be performed by tracing the display surface on which the live data is displayed toward the displayed performer A, B, or C with a finger or a stylus pen. In this case, the user terminal does not need an acceleration sensor or a gyro sensor. When the user is using a user terminal without a touch panel, the operation of presenting an item to the selected performer A, B, or C may be performed by using a mouse to move a pointer toward the displayed performer A, B, or C.
 ・It is sufficient that the falling object of an item thrown to the performers A, B, and C is displayed at least at the item position; the trajectory until the falling object reaches the item position may be omitted.
 ・The real space in which the performers perform may be somewhere other than the studio 10, such as a live venue or a concert hall. In this case, the projector 16 displays the item objects on the stage, and a user can perform the operation of throwing an item to a performer using his or her own small, portable information processing terminal, such as the user terminal 60, while sitting in the audience.
 ・The means for displaying item objects in the studio 10 is not limited to the projector 16. As one example, the floor of the studio 10 may be configured by arranging a plurality of flat display panels, such as liquid crystal display panels, so that their display surfaces face toward the floor surface, and laying a transparent synthetic resin plate over the display surfaces, and the item objects may be displayed on this floor. The item position may also simply be indicated with a laser pointer. Items may be displayed using aerial display, aerial imaging, or aerial image-forming technology, and may be displayed as two-dimensional images (computer graphics (CG)) or three-dimensional images (CG). Furthermore, an item object may be represented by laying a large number of rods over the floor and raising and lowering them perpendicular to the floor so that the floor surface changes into a wave shape. The means for displaying item objects in the studio 10 may also be a combination of these devices.
 ・A return item may be registered in the database 26 and thrown to the performer at the next live performance. Such an item is a not-for-sale item that users cannot purchase. When this item competes with another user's item, control may be performed so that the performer wears this item with priority. The not-for-sale item may be an accessory, something that applies an effect, or a background image.
 ・A specific action by a performer or a user (for example, the action of throwing an object, and in which direction and with how much force it was thrown) is not limited to being determined (detected) based on the detection results of the detection unit of the smartwatch 50 or the detection unit of the smart device terminal 60a. For example, it may be determined by calculating inter-frame differences or motion vectors from video captured by a camera.
 As one example, instead of using an acceleration sensor or a gyro sensor to detect the action the user A, B, or C performs when presenting an item to the performers A, B, and C, the detection is performed as follows. A camera with a video capture function, such as a web camera or a video camera, is installed in front of the user. The camera here is, as one example, a web camera built into a laptop personal computer, or, as another example, a web camera or video camera connected to a desktop personal computer. As a further example, it is a camera built into a smart device terminal. Then the user terminal 40, 60, or 70, the server 20, or some other device calculates motion data for the user's throwing action from the inter-frame differences of the frames constituting the video data, and detects the throwing action based on that motion data. Alternatively, the motion vector of an object relative to a reference frame is detected to detect the user's throwing action. After this, the item's trajectory and the item are displayed at the item position on the floor of the studio 10 and on the display surfaces of the user terminals 40, 60, and 70 so that the performers A, B, and C can recognize them.
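 A minimal sketch of the inter-frame-difference approach with a webcam is shown below, using OpenCV. The per-pixel motion threshold, the fraction of changed pixels used to flag a throw, and the camera index are assumptions made for illustration.

```python
import cv2

# Hypothetical sketch: flag a throwing motion when the fraction of pixels that
# changed between consecutive webcam frames exceeds an assumed threshold.

MOTION_PIXEL_THRESHOLD = 25       # per-pixel intensity change treated as motion
MOTION_FRACTION_FOR_THROW = 0.12  # assumed fraction of the frame that must move

def detect_throw(camera_index=0, max_frames=300):
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    if not ok:
        cap.release()
        return False
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    try:
        for _ in range(max_frames):
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            diff = cv2.absdiff(gray, prev)
            moving = (diff > MOTION_PIXEL_THRESHOLD).mean()
            if moving > MOTION_FRACTION_FOR_THROW:
                return True   # treat the burst of motion as the throwing action
            prev = gray
    finally:
        cap.release()
    return False
```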
 The action in which the performers A, B, and C acquire an item presented to them by the users A, B, and C may also be detected using the image analysis based on the inter-frame differences and motion vectors described above. As one example, the image analysis detects the action of the performer A, B, or C crouching or bending down when picking up the item, or touching the item or the item position where the item is displayed. After this, processing can be performed for the performers A, B, and C to wear the item or for effects to be added to the performers A, B, and C.
 Furthermore, the action of the performers A, B, and C putting on an acquired item may be detected using the image analysis or the like described above. As one example, the action of the performer A, B, or C moving the item to his or her own head can be detected by the image analysis.
 Furthermore, the action of the performers A, B, and C returning an item to the users A, B, and C may be detected using the image analysis processing described above. As one example, that action of the performers A, B, and C in the studio 10 can be detected by the image analysis.
 In other words, the actions performed by the performers A, B, and C can be detected by the image analysis processing without using depth information, and the actions performed by the users A, B, and C can likewise be detected by the image analysis processing.
 ・The operation data need not include all of the acceleration data, angle data, and angular velocity data; at least the acceleration data, as motion data, is sufficient, because the flight distance and the like of a thrown item can be calculated from the acceleration data.
 ・The studio monitor 17 in the studio 10 may be omitted. In this case, the ID object 76a of the user ID of the user who threw the item object, and the like, may be displayed by the projector 16.
 ・If too many items are presented to the performers A, B, and C, too many item objects are displayed on the display surfaces of the studio monitor 17 and the user terminals 40, 60, and 70. Likewise, too many item objects are displayed on the floor of the studio 10 by the projector 16. In such a case, when the number of participating user terminals exceeds a threshold, the server 20 randomly samples user terminals and displays only the item objects from the sampled user terminals on the display surfaces of the studio monitor 17 and the user terminals 40, 60, and 70. The server 20 also displays only the item objects from the sampled user terminals on the floor of the studio 10 by the projector 16.
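 The random sampling of user terminals when the audience becomes too large could look like the following sketch. The threshold value and the use of the sampled set both for the monitors and for the floor projection are assumptions for illustration.

```python
import random

# Hypothetical sketch: when too many terminals are participating, only a random
# subset of their item objects is drawn on the monitors and the studio floor.

DISPLAY_THRESHOLD = 200  # assumed maximum number of terminals shown at once

def terminals_to_display(participating_terminal_ids):
    ids = list(participating_terminal_ids)
    if len(ids) <= DISPLAY_THRESHOLD:
        return ids
    return random.sample(ids, DISPLAY_THRESHOLD)

shown = terminals_to_display([f"user{i}" for i in range(1000)])
print(len(shown))  # 200 terminals' item objects are displayed
```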
 ・Items given as presents from the users A, B, and C to the performers A, B, and C, and return items from the performers A, B, and C to the users A, B, and C, are, as one example, simply "sent" from the users A, B, and C to the performers A, B, and C. As another example, they are "presented" from the users A, B, and C to the performers A, B, and C with feelings of gratitude, congratulation, or support. As a further example, items (possessions) purchased by the users A, B, and C are "given" to the performers A, B, and C. As yet another example, items (ownership) purchased by the users A, B, and C are "handed over" to the performers A, B, and C.
 ・As for items, an item presented from the users A, B, and C to the performers A, B, and C may be one that is simply displayed on the display surfaces of the user terminals 40, 60, and 70 and the display surface of the studio monitor 17. Such an item can be selected from the item selection object 72 shown in FIG. 5(a), or, independently of the item selection object 72, it may be an item such as image data created by the user to embellish the performer's stage. For an item that is simply displayed on the display surfaces of the user terminals 40, 60, and 70 or of the studio monitor 17, the user is charged, as one example, when purchasing the item. A user-created item may be free of charge.
 For items worn by the performers A, B, and C, such as headbands, and for effect items, an additional charge may be made when the performer A, B, or C actually acquires the item. That is, the user may be charged twice: when the user A, B, or C purchases the item and when the performer A, B, or C acquires it. Alternatively, the charge may be made only when the performer A, B, or C acquires the item.
 ・The return from the performers A, B, and C to the users A, B, and C may be a simple display or presentation that only the user receiving the return can see. In this case, as one example, the performers A, B, and C need not perform the return action toward the users A, B, and C, and an actual object such as a signed ball need not be mailed to the users A, B, and C.
 ・An item may also be a simple program containing image data or video data created by the users A, B, and C using software. As one example, the simple program is an effect program, such as a presentation that embellishes the performer's stage and includes object motion.
 DESCRIPTION OF REFERENCE NUMERALS: 1…live distribution system, 2…network, 10…studio, 11…playback device, 12…speaker, 13…microphone, 14…RGB camera, 15…depth camera, 16…projector, 17…studio monitor, 18…item, 20…server, 21…audio IF, 22…RGB camera IF, 23…depth camera IF, 24…projector IF, 25…display IF, 26…database, 27…data storage unit, 28…network IF, 29…main memory, 30…control unit, 40…user terminal, 41…audio IF, 42…display IF, 43…network IF, 44…communication IF, 45…data storage unit, 46…operation IF, 47…main memory, 48…control unit, 49…display unit, 50…smartwatch, 51…sensor, 52…communication IF, 53…data storage unit, 54…main memory, 55…control unit, 60…user terminal, 60a…smart device terminal, 61…audio IF, 62…display IF, 63…operation IF, 64…sensor, 65…network IF, 66…data storage unit, 67…main memory, 68…control unit, 69…display unit, 70…user terminal, 71…live video, 72…item selection object, 72a…object, 72b…object, 72c…object, 72d…object, 73…performer selection object, 73a…first teaching object, 74…performer determination object, 74a…second teaching object, 75…falling object, 76…acquisition object, 76a…ID object, 77…return object, 78…receiving object, 81…effect object, 82…box object.

Claims (13)

  1.  A display control system comprising:
      a display device control unit that causes a display device to display, as a target of live distribution, video of a real space in which a performer is present;
      an acquisition unit that acquires three-dimensional position information of the real space;
      a detection unit that detects a user action by which a user presents an item to the performer; and
      an item display control unit that calculates, based on the three-dimensional position information acquired by the acquisition unit and user action information of the user action detected by the detection unit, an item position at which the item is to be placed in the real space, and displays the calculated item position in the real space so that the performer can recognize it.
  2.  The display control system according to claim 1, wherein the user action is detected by a smart device terminal held by the user.
  3.  The display control system according to claim 2, wherein the user action is an action in which the user throws an object.
  4.  The display control system according to claim 3, wherein the user action information is motion data of the user.
  5.  The display control system according to any one of claims 1 to 4, wherein the display device control unit calculates, in the video, an item object position at which an item object corresponding to the item position is to be placed, and causes the display device to display the video and the item object such that the item object is displayed at the item object position.
  6.  The display control system according to claim 5, wherein the display device control unit displays the item object on the video in real time.
  7.  The display control system according to claim 5 or 6, wherein, when the detection unit determines that the performer is positioned within the range of the item presented to the performer by the user, the display device control unit causes the display device to display the item object in association with the performer.
  8.  The display control system according to claim 7, wherein the range of the item is determined based on the three-dimensional position information acquired by the acquisition unit, the user action information detected by the detection unit, and the type of the item.
  9.  The display control system according to any one of claims 1 to 8, wherein, when the display device control unit determines that the performer has performed an action for presenting a return item to the user, the display device control unit gives the return item to the user.
  10.  The display control system according to claim 9, wherein the display device control unit gives the return item to the user when it determines that the user has performed a receiving operation, and does not give the return item to the user when it determines that the user has not performed a receiving operation.
  11.  The display control system according to claim 10, wherein the display device control unit displays a display related to the return only on the display device of the user terminal of the user who is the recipient of the return item, and does not display it on the display devices of the user terminals of other users.
  12.  The display control system according to any one of claims 1 to 11, wherein the item is an item purchased by the user.
  13.  A display control method comprising:
      live distributing video of a real space in which a performer is present;
      acquiring three-dimensional position information of the real space;
      detecting a user action by which a user presents an item to the performer; and
      calculating, based on the acquired three-dimensional position information and user action information of the detected user action, an item position at which the item is to be placed in the real space, and displaying the calculated item position in the real space so that the performer can recognize it.
PCT/JP2017/003496 2017-01-31 2017-01-31 Display control system and display control method WO2018142494A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201780084966.XA CN110249631B (en) 2017-01-31 2017-01-31 Display control system and display control method
PCT/JP2017/003496 WO2018142494A1 (en) 2017-01-31 2017-01-31 Display control system and display control method
JP2018565130A JP6965896B2 (en) 2017-01-31 2017-01-31 Display control system and display control method
TW107102798A TWI701628B (en) 2017-01-31 2018-01-26 Display control system and display control method for live broadcast

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2017/003496 WO2018142494A1 (en) 2017-01-31 2017-01-31 Display control system and display control method

Publications (1)

Publication Number Publication Date
WO2018142494A1 true WO2018142494A1 (en) 2018-08-09

Family

ID=63040375

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/003496 WO2018142494A1 (en) 2017-01-31 2017-01-31 Display control system and display control method

Country Status (4)

Country Link
JP (1) JP6965896B2 (en)
CN (1) CN110249631B (en)
TW (1) TWI701628B (en)
WO (1) WO2018142494A1 (en)

Cited By (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6491388B1 (en) * 2018-08-28 2019-03-27 グリー株式会社 Video distribution system, video distribution method, and video distribution program for live distribution of a video including animation of a character object generated based on the movement of a distribution user
JP2020036303A (en) * 2019-02-28 2020-03-05 グリー株式会社 Moving image distribution system, moving image distribution method, and moving image distribution program for live distribution of moving image including animation of character object generated based on movement of distribution user
WO2020044749A1 (en) * 2018-08-28 2020-03-05 グリー株式会社 Moving-image delivery system for delivering moving-image live that includes animation of character object generated on the basis of motion of delivering user, moving-image delivery method, and moving-image delivery program
JP2020036305A (en) * 2019-04-25 2020-03-05 グリー株式会社 Moving image distribution system, moving image distribution method, and moving image distribution program for live distribution of moving image including animation of character object generated based on movement of distribution user
US20200077157A1 (en) * 2018-08-28 2020-03-05 Gree, Inc. Video distribution system for live distributing video containing animation of character object generated based on motion of distributor user, distribution method, and storage medium storing video distribution program
JP2020036309A (en) * 2019-07-01 2020-03-05 グリー株式会社 Moving image distribution system, moving image distribution method, and moving image distribution program for live distribution of moving image including animation of character object generated based on movement of distribution user
WO2020121909A1 (en) * 2018-12-12 2020-06-18 グリー株式会社 Video distribution system, video distribution method, and video distribution program
JP2020096341A (en) * 2019-07-01 2020-06-18 グリー株式会社 Video distribution system, video distribution method, and video distribution program
JP2020096335A (en) * 2019-03-26 2020-06-18 グリー株式会社 Video distribution system, video distribution method, and video distribution program
JP2020096269A (en) * 2018-12-12 2020-06-18 グリー株式会社 Video distribution system, video distribution method, and video distribution program
JP2020113857A (en) * 2019-01-10 2020-07-27 株式会社Zizai Live communication system using character
CN111523545A (en) * 2020-05-06 2020-08-11 青岛联合创智科技有限公司 Article searching method combined with depth information
WO2020166594A1 (en) * 2019-02-15 2020-08-20 ステルスバリュー合同会社 Information processing device and program
JP2020160798A (en) * 2019-03-26 2020-10-01 株式会社ミクシィ Server device, program for server device, and program for terminal device
WO2020202860A1 (en) * 2019-04-02 2020-10-08 株式会社ディー・エヌ・エー System, method, and program for distributing live moving image
JP2020178350A (en) * 2020-06-02 2020-10-29 グリー株式会社 Video distribution system, video distribution method, and video distribution program for live distribution of video containing animation of character object generated based on movement of distributing user
JP6788756B1 (en) * 2020-01-27 2020-11-25 グリー株式会社 Information processing system, information processing method and computer program
JP6798733B1 (en) * 2020-01-20 2020-12-09 合同会社Mdk Consideration-linked motion induction method and consideration-linked motion induction program
JP2020205062A (en) * 2020-08-07 2020-12-24 株式会社 ディー・エヌ・エー System, method and program for delivering live video
JP2021057701A (en) * 2019-09-27 2021-04-08 グリー株式会社 Computer program, server device, terminal device, and method
JP2021071942A (en) * 2019-10-31 2021-05-06 グリー株式会社 Moving image processing method, server device, and computer program
JP2021086639A (en) * 2019-11-29 2021-06-03 グリー株式会社 Information processing system, information processing method, and computer program
WO2021145451A1 (en) * 2020-01-16 2021-07-22 ソニーグループ株式会社 Information processing device and information processing terminal
WO2021153369A1 (en) * 2020-01-30 2021-08-05 株式会社ドワンゴ Management server, user terminal, gift system, and information processing method
JP2021118539A (en) * 2020-01-27 2021-08-10 グリー株式会社 Computer program, method, and server device
US11128932B2 (en) 2018-05-09 2021-09-21 Gree, Inc. Video distribution system for live distributing video containing animation of character object generated based on motion of actors
US11190848B2 (en) 2018-05-08 2021-11-30 Gree, Inc. Video distribution system distributing video that includes message from viewing user
US11202118B2 (en) 2018-05-08 2021-12-14 Gree, Inc. Video distribution system, video distribution method, and storage medium storing video distribution program for distributing video containing animation of character object generated based on motion of actor
US20210392412A1 (en) * 2018-11-20 2021-12-16 Gree, Inc. System, method, and program for distributing video
JP2021197730A (en) * 2020-06-11 2021-12-27 グリー株式会社 Information processing system, information processing method and computer program
JPWO2022059686A1 (en) * 2020-09-16 2022-03-24
JP2022059738A (en) * 2020-10-02 2022-04-14 合同会社Mdk Counter value interlocking type operation induction method and counter value interlocking type operation induction program
CN114503598A (en) * 2019-12-19 2022-05-13 多玩国株式会社 Management server, user terminal, gift system, and information processing method
JP2022091811A (en) * 2020-05-01 2022-06-21 グリー株式会社 Moving image distribution system, information processing method and computer program
JP2022096096A (en) * 2020-12-17 2022-06-29 株式会社ティーアンドエス Video distribution method and program for the same
WO2022149517A1 (en) * 2021-01-05 2022-07-14 株式会社コルク Livestreaming distribution system and method therefor
JP2022132299A (en) * 2020-08-13 2022-09-08 グリー株式会社 Moving image processing method, server device, and computer program
JP2022133429A (en) * 2020-06-02 2022-09-13 グリー株式会社 Moving image distribution system, moving image distribution method, and moving image distribution program for live distribution of moving image including animation of character object generated on basis of motion of distribution user
JP7156735B1 (en) 2021-10-26 2022-10-20 合同会社Mdk Program, management server device, content distribution management method, content distribution method
US11559740B2 (en) 2019-09-13 2023-01-24 Gree, Inc. Video modification and transmission using tokens
US11559745B2 (en) * 2019-11-08 2023-01-24 Gree, Inc. Video modification and transmission using tokens
US11682154B2 (en) 2019-10-31 2023-06-20 Gree, Inc. Moving image processing method of a moving image viewed by a viewing user, a server device controlling the moving image, and a computer program thereof
JP7349689B1 (en) * 2022-09-07 2023-09-25 義博 矢野 Information processing method and information processing system
JP7428334B1 (en) 2022-11-04 2024-02-06 17Live株式会社 Gift box event for live streamers and viewers
JP7437480B2 (en) 2020-08-21 2024-02-22 株式会社コロプラ Programs, methods, and computers

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115039410A (en) * 2020-02-12 2022-09-09 索尼集团公司 Information processing system, information processing method, and program
JP2021197614A (en) * 2020-06-12 2021-12-27 株式会社コナミデジタルエンタテインメント Video distribution system, computer program used therefor, and control method
CN112929685B (en) * 2021-02-02 2023-10-17 广州虎牙科技有限公司 Interaction method and device for VR live broadcast room, electronic device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012120098A (en) * 2010-12-03 2012-06-21 Linkt Co Ltd Information provision system
WO2015068442A1 (en) * 2013-11-05 2015-05-14 株式会社ディー・エヌ・エー Content delivery system, delivery program, and delivery method
JP2016024682A (en) * 2014-07-22 2016-02-08 トモヤ 高柳 Content distribution system

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4696018B2 (en) * 2006-04-13 2011-06-08 日本電信電話株式会社 Observation position following video presentation device, observation position following video presentation program, video presentation device, and video presentation program
WO2008153599A1 (en) * 2006-12-07 2008-12-18 Adapx, Inc. Systems and methods for data annotation, recordation, and communication
CN104516492A (en) * 2013-09-28 2015-04-15 南京专创知识产权服务有限公司 Man-machine interaction technology based on 3D (three dimensional) holographic projection
JP5530557B1 (en) * 2013-12-13 2014-06-25 株式会社 ディー・エヌ・エー Server, program and method for distributing content
CN104363519B (en) * 2014-11-21 2017-12-15 广州华多网络科技有限公司 It is a kind of based on online live method for information display, relevant apparatus and system
US9846968B2 (en) * 2015-01-20 2017-12-19 Microsoft Technology Licensing, Llc Holographic bird's eye view camera
US20160330522A1 (en) * 2015-05-06 2016-11-10 Echostar Technologies L.L.C. Apparatus, systems and methods for a content commentary community
CN105373306B (en) * 2015-10-13 2018-10-30 广州酷狗计算机科技有限公司 Virtual objects presentation method and device
CN106231368B (en) * 2015-12-30 2019-03-26 深圳超多维科技有限公司 Main broadcaster's class interaction platform stage property rendering method and its device, client
CN106231435B (en) * 2016-07-26 2019-08-02 广州华多网络科技有限公司 The method, apparatus and terminal device of electronics present are given in network direct broadcasting
CN106131536A (en) * 2016-08-15 2016-11-16 万象三维视觉科技(北京)有限公司 A kind of bore hole 3D augmented reality interactive exhibition system and methods of exhibiting thereof
CN106331735B (en) * 2016-08-18 2020-04-21 北京奇虎科技有限公司 Special effect processing method, electronic equipment and server
CN106355440A (en) * 2016-08-29 2017-01-25 广州华多网络科技有限公司 Control method and device for giving away electronic gifts in group

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012120098A (en) * 2010-12-03 2012-06-21 Linkt Co Ltd Information provision system
WO2015068442A1 (en) * 2013-11-05 2015-05-14 株式会社ディー・エヌ・エー Content delivery system, delivery program, and delivery method
JP2016024682A (en) * 2014-07-22 2016-02-08 トモヤ 高柳 Content distribution system

Cited By (87)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11190848B2 (en) 2018-05-08 2021-11-30 Gree, Inc. Video distribution system distributing video that includes message from viewing user
US11202118B2 (en) 2018-05-08 2021-12-14 Gree, Inc. Video distribution system, video distribution method, and storage medium storing video distribution program for distributing video containing animation of character object generated based on motion of actor
US11128932B2 (en) 2018-05-09 2021-09-21 Gree, Inc. Video distribution system for live distributing video containing animation of character object generated based on motion of actors
US20220201371A1 (en) * 2018-08-28 2022-06-23 Gree, Inc. Video distribution system for live distributing video containing animation of character object generated based on motion of distributor user, distribution method, and storage medium storing video distribution program
US11336969B2 (en) 2018-08-28 2022-05-17 Gree, Inc. Video distribution system for live distributing video containing animation of character object generated based on motion of distributor user, distribution method, and storage medium storing video distribution program
US20200077157A1 (en) * 2018-08-28 2020-03-05 Gree, Inc. Video distribution system for live distributing video containing animation of character object generated based on motion of distributor user, distribution method, and storage medium storing video distribution program
JP2020036134A (en) * 2018-08-28 2020-03-05 グリー株式会社 Moving image distribution system, moving image distribution method, and moving image distribution program for live distribution of moving image including animation of character object generated based on movement of distribution user
CN110866963A (en) * 2018-08-28 2020-03-06 日本聚逸株式会社 Moving image distribution system, moving image distribution method, and recording medium
KR20210025102A (en) * 2018-08-28 2021-03-08 그리 가부시키가이샤 A moving image distribution system, a moving image distribution method, and a moving image distribution program for live distribution of moving images including animations of character objects generated based on the movement of the distribution user
WO2020044749A1 (en) * 2018-08-28 2020-03-05 グリー株式会社 Moving-image delivery system for delivering moving-image live that includes animation of character object generated on the basis of motion of delivering user, moving-image delivery method, and moving-image delivery program
JP6491388B1 (en) * 2018-08-28 2019-03-27 グリー株式会社 Video distribution system, video distribution method, and video distribution program for live distribution of a video including animation of a character object generated based on the movement of a distribution user
CN110866963B (en) * 2018-08-28 2024-02-02 日本聚逸株式会社 Moving image distribution system, moving image distribution method, and recording medium
US11044535B2 (en) 2018-08-28 2021-06-22 Gree, Inc. Video distribution system for live distributing video containing animation of character object generated based on motion of distributor user, distribution method, and storage medium storing video distribution program
US11838603B2 (en) 2018-08-28 2023-12-05 Gree, Inc. Video distribution system for live distributing video containing animation of character object generated based on motion of distributor user, distribution method, and storage medium storing video distribution program
KR102490402B1 (en) * 2018-08-28 2023-01-18 그리 가부시키가이샤 A moving image distribution system, a moving image distribution method, and a moving image distribution program for live distribution of a moving image including animation of a character object generated based on a distribution user's movement.
US20210392412A1 (en) * 2018-11-20 2021-12-16 Gree, Inc. System, method, and program for distributing video
US11736779B2 (en) 2018-11-20 2023-08-22 Gree, Inc. System, method, and program for distributing video
JP2020096269A (en) * 2018-12-12 2020-06-18 グリー株式会社 Video distribution system, video distribution method, and video distribution program
WO2020121909A1 (en) * 2018-12-12 2020-06-18 グリー株式会社 Video distribution system, video distribution method, and video distribution program
JP2020113857A (en) * 2019-01-10 2020-07-27 株式会社Zizai Live communication system using character
JP7277145B2 (en) 2019-01-10 2023-05-18 株式会社Iriam Live communication system with characters
JP2020135238A (en) * 2019-02-15 2020-08-31 ステルスバリュー合同会社 Information processing device and program
WO2020166594A1 (en) * 2019-02-15 2020-08-20 ステルスバリュー合同会社 Information processing device and program
JP2020036303A (en) * 2019-02-28 2020-03-05 グリー株式会社 Moving image distribution system, moving image distribution method, and moving image distribution program for live distribution of moving image including animation of character object generated based on movement of distribution user
JP2020160798A (en) * 2019-03-26 2020-10-01 株式会社ミクシィ Server device, program for server device, and program for terminal device
JP7236632B2 (en) 2019-03-26 2023-03-10 株式会社Mixi Server device, server device program and terminal device program
JP7440794B2 (en) 2019-03-26 2024-02-29 株式会社Mixi Server devices, programs for server devices, and programs for terminal devices
JP2020096335A (en) * 2019-03-26 2020-06-18 グリー株式会社 Video distribution system, video distribution method, and video distribution program
US11818437B2 (en) 2019-04-02 2023-11-14 DeNA Co., Ltd. System, method, and computer-readable medium including program for distributing live video
JP2020170283A (en) * 2019-04-02 2020-10-15 株式会社 ディー・エヌ・エー System for delivering live moving image and method and program
WO2020202860A1 (en) * 2019-04-02 2020-10-08 株式会社ディー・エヌ・エー System, method, and program for distributing live moving image
JP2020036305A (en) * 2019-04-25 2020-03-05 グリー株式会社 Moving image distribution system, moving image distribution method, and moving image distribution program for live distribution of moving image including animation of character object generated based on movement of distribution user
JP2020096341A (en) * 2019-07-01 2020-06-18 グリー株式会社 Video distribution system, video distribution method, and video distribution program
JP2020036309A (en) * 2019-07-01 2020-03-05 グリー株式会社 Moving image distribution system, moving image distribution method, and moving image distribution program for live distribution of moving image including animation of character object generated based on movement of distribution user
US11559740B2 (en) 2019-09-13 2023-01-24 Gree, Inc. Video modification and transmission using tokens
JP2021057701A (en) * 2019-09-27 2021-04-08 グリー株式会社 Computer program, server device, terminal device, and method
JP7360112B2 (en) 2019-09-27 2023-10-12 グリー株式会社 Computer program, server device, terminal device, and method
US11930228B2 (en) 2019-09-27 2024-03-12 Gree, Inc. Computer program, server device, terminal device and method
JP2021071942A (en) * 2019-10-31 2021-05-06 グリー株式会社 Moving image processing method, server device, and computer program
US11682154B2 (en) 2019-10-31 2023-06-20 Gree, Inc. Moving image processing method of a moving image viewed by a viewing user, a server device controlling the moving image, and a computer program thereof
US11559745B2 (en) * 2019-11-08 2023-01-24 Gree, Inc. Video modification and transmission using tokens
US11944906B2 (en) 2019-11-08 2024-04-02 Gree, Inc. Video modification and transmission using tokens
JP7336798B2 (en) 2019-11-29 2023-09-01 グリー株式会社 Information processing system, information processing method and computer program
JP2021086639A (en) * 2019-11-29 2021-06-03 グリー株式会社 Information processing system, information processing method, and computer program
CN114503598B (en) * 2019-12-19 2024-01-16 多玩国株式会社 Management server, user terminal, gift system, and information processing method
CN114503598A (en) * 2019-12-19 2022-05-13 多玩国株式会社 Management server, user terminal, gift system, and information processing method
WO2021145451A1 (en) * 2020-01-16 2021-07-22 ソニーグループ株式会社 Information processing device and information processing terminal
JP6798733B1 (en) * 2020-01-20 2020-12-09 合同会社Mdk Consideration-linked motion induction method and consideration-linked motion induction program
JP2021114711A (en) * 2020-01-20 2021-08-05 合同会社Mdk Consideration-linked motion induction method and consideration-linked motion induction program
JP7165176B2 (en) 2020-01-27 2022-11-02 グリー株式会社 Information processing system, information processing method and computer program
JP7418708B2 (en) 2020-01-27 2024-01-22 グリー株式会社 Information processing system, information processing method and computer program
JP6788756B1 (en) * 2020-01-27 2020-11-25 グリー株式会社 Information processing system, information processing method and computer program
JP7285244B2 (en) 2020-01-27 2023-06-01 グリー株式会社 Computer program, method and server device
JP2021117721A (en) * 2020-01-27 2021-08-10 グリー株式会社 Information processing system, information processing method and computer program
JP2021118539A (en) * 2020-01-27 2021-08-10 グリー株式会社 Computer program, method, and server device
JP2021118540A (en) * 2020-01-27 2021-08-10 グリー株式会社 System and method for information processing, and computer program
JP2023012486A (en) * 2020-01-27 2023-01-25 グリー株式会社 Information processing system, information processing method, and computer program
WO2021153369A1 (en) * 2020-01-30 2021-08-05 株式会社ドワンゴ Management server, user terminal, gift system, and information processing method
US20230123269A1 (en) * 2020-01-30 2023-04-20 Dwango Co., Ltd. Management server, user terminal, gift system, and information processing method
JP7034191B2 (en) 2020-01-30 2022-03-11 株式会社ドワンゴ Management server, gift system and information processing method
JP2021120795A (en) * 2020-01-30 2021-08-19 株式会社ドワンゴ Management server, user terminal, gift system and information processing method
JP7455298B2 (en) 2020-05-01 2024-03-26 グリー株式会社 Video distribution system, information processing method and computer program
JP2022091811A (en) * 2020-05-01 2022-06-21 グリー株式会社 Moving image distribution system, information processing method and computer program
CN111523545A (en) * 2020-05-06 2020-08-11 青岛联合创智科技有限公司 Article searching method combined with depth information
JP2022133429A (en) * 2020-06-02 2022-09-13 グリー株式会社 Moving image distribution system, moving image distribution method, and moving image distribution program for live distribution of moving image including animation of character object generated on basis of motion of distribution user
JP2020178350A (en) * 2020-06-02 2020-10-29 グリー株式会社 Video distribution system, video distribution method, and video distribution program for live distribution of video containing animation of character object generated based on movement of distributing user
JP7104097B2 (en) 2020-06-02 2022-07-20 グリー株式会社 Video distribution system, video distribution method, and video distribution program that delivers live videos including animations of character objects generated based on the movements of the distribution user
JP7284329B2 (en) 2020-06-02 2023-05-30 グリー株式会社 Video distribution system, video distribution method, and video distribution program for live distribution of video containing animation of character object generated based on movement of distribution user
JP2021197730A (en) * 2020-06-11 2021-12-27 グリー株式会社 Information processing system, information processing method and computer program
JP7439170B2 (en) 2020-08-07 2024-02-27 株式会社 ディー・エヌ・エー System, method, and program for delivering live video
JP2020205062A (en) * 2020-08-07 2020-12-24 株式会社 ディー・エヌ・エー System, method and program for delivering live video
JP2022113881A (en) * 2020-08-07 2022-08-04 株式会社 ディー・エヌ・エー System for delivering live moving image and method and program
JP7093383B2 (en) 2020-08-07 2022-06-29 株式会社 ディー・エヌ・エー Systems, methods, and programs for delivering live video
JP7430353B2 (en) 2020-08-13 2024-02-13 グリー株式会社 Video processing method, server device and computer program
JP2022132299A (en) * 2020-08-13 2022-09-08 グリー株式会社 Moving image processing method, server device, and computer program
JP7437480B2 (en) 2020-08-21 2024-02-22 株式会社コロプラ Programs, methods, and computers
WO2022059686A1 (en) * 2020-09-16 2022-03-24 日本紙工株式会社 Video evaluation system, video evaluation program, and video evaluation method
JP7255918B2 (en) 2020-09-16 2023-04-11 日本紙工株式会社 Video evaluation system, video evaluation program, video evaluation method
JPWO2022059686A1 (en) * 2020-09-16 2022-03-24
JP2022059738A (en) * 2020-10-02 2022-04-14 合同会社Mdk Counter value interlocking type operation induction method and counter value interlocking type operation induction program
JP2022096096A (en) * 2020-12-17 2022-06-29 株式会社ティーアンドエス Video distribution method and program for the same
WO2022149517A1 (en) * 2021-01-05 2022-07-14 株式会社コルク Livestreaming distribution system and method therefor
JP2023064199A (en) * 2021-10-26 2023-05-11 合同会社Mdk Program, management server device, content distribution management method, and content distribution method
JP7156735B1 (en) 2021-10-26 2022-10-20 合同会社Mdk Program, management server device, content distribution management method, content distribution method
JP7402476B1 (en) * 2022-09-07 2023-12-21 義博 矢野 Information processing method and information processing system
JP7349689B1 (en) * 2022-09-07 2023-09-25 義博 矢野 Information processing method and information processing system
JP7428334B1 (en) 2022-11-04 2024-02-06 17Live株式会社 Gift box event for live streamers and viewers

Also Published As

Publication number Publication date
JP6965896B2 (en) 2021-11-10
JPWO2018142494A1 (en) 2019-11-21
TW201832161A (en) 2018-09-01
CN110249631B (en) 2022-02-11
CN110249631A (en) 2019-09-17
TWI701628B (en) 2020-08-11

Similar Documents

Publication Publication Date Title
WO2018142494A1 (en) Display control system and display control method
JP6382468B1 (en) Movie distribution system, movie distribution method, and movie distribution program for distributing movie including animation of character object generated based on movement of actor
JP6431233B1 (en) Video distribution system that distributes video including messages from viewing users
JP6420930B1 (en) Movie distribution system, movie distribution method, and movie distribution program for distributing movie including animation of character object generated based on movement of actor
JP6955861B2 (en) Event control system and program
US10380798B2 (en) Projectile object rendering for a virtual reality spectator
WO2020027226A1 (en) Display control system, display control method, and display control program
WO2019216146A1 (en) Moving picture delivery system for delivering moving picture including animation of character object generated based on motions of actor, moving picture delivery method, and moving picture delivery program
JPWO2019234879A1 (en) Information processing system, information processing method and computer program
CN114253393A (en) Information processing apparatus, terminal, method, and computer-readable recording medium
JP2024023273A (en) Video distribution system for distributing video including animation of character object generated based on motion of actor, video distribution method and video distribution program
JP6951394B2 (en) Video distribution system that distributes videos including messages from viewers
JP6847138B2 (en) A video distribution system, video distribution method, and video distribution program that distributes videos containing animations of character objects generated based on the movements of actors.
JP6498832B1 (en) Video distribution system that distributes video including messages from viewing users
JP2020043578A (en) Moving image distribution system, moving image distribution method, and moving image distribution program, for distributing moving image including animation of character object generated on the basis of movement of actor
WO2021095576A1 (en) Information processing device, information processing method, and program
JP6592214B1 (en) Video distribution system that distributes video including messages from viewing users
JP7357865B1 (en) Program, information processing method, and information processing device
JP6431242B1 (en) Video distribution system that distributes video including messages from viewing users
JP6764442B2 (en) Video distribution system, video distribution method, and video distribution program that distributes videos including animations of character objects generated based on the movements of actors.
JP2019198057A (en) Moving image distribution system, moving image distribution method and moving image distribution program distributing moving image including animation of character object generated based on actor movement
JP2020005238A (en) Video distribution system, video distribution method and video distribution program for distributing a video including animation of character object generated based on motion of actor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 17894983; Country of ref document: EP; Kind code of ref document: A1
ENP Entry into the national phase
    Ref document number: 2018565130; Country of ref document: JP; Kind code of ref document: A
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 17894983; Country of ref document: EP; Kind code of ref document: A1