WO2023062802A1 - Content distribution method and content distribution system - Google Patents


Info

Publication number
WO2023062802A1
WO2023062802A1 (PCT/JP2021/038157; JP2021038157W)
Authority
WO
WIPO (PCT)
Prior art keywords
information
tactile
distribution
video
distributed
Prior art date
Application number
PCT/JP2021/038157
Other languages
French (fr)
Japanese (ja)
Inventor
悠二 米原
帝聡 黒木
由希子 浅野
久幸 三木
大倫 佐藤
Original Assignee
豊田合成株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 豊田合成株式会社 filed Critical 豊田合成株式会社
Priority to PCT/JP2021/038157 priority Critical patent/WO2023062802A1/en
Publication of WO2023062802A1 publication Critical patent/WO2023062802A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B6/00Tactile signalling systems, e.g. personal calling systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs

Definitions

  • the present invention relates to a content distribution method and a content distribution system.
  • Patent Literature 1 discloses a technique in which tactile information based on a performer's biometric information is superimposed on the video and sound information of a concert and distributed to a user via a wearable device worn by the user.
  • the performer's biometric information is, for example, the heartbeat.
  • a user watching the live distribution is thus presented with a tactile sensation based on the performer's biometric information in addition to the visual and auditory information. This enhances the user's sense of immersion and presence in the live distribution.
  • a content distribution method for solving the above problems is a content distribution method for live-distributing content information including a performer's video, sound information, and tactile information for operating a tactile presentation device, in which the performer is photographed.
  • a delay time is provided before the tactile information after switching is distributed.
  • the delivery of tactile information after switching is delayed.
  • by controlling the operation of the tactile presentation device during the delay time, the tactile sensation presented to the user from the tactile presentation device can be adjusted, thereby controlling the impression of the tactile sensation based on the tactile signal delivered after the delay time.
  • during the delay time, modulated tactile information obtained by modulating the tactile information distributed before the delay time, after the delay time, or both, or connecting tactile information prepared in advance for bridging, may be distributed; after the delay time, distribution is switched from the modulated tactile information or the connecting tactile information to the tactile information.
  • the first obtaining step may be a step of obtaining a plurality of videos at the same time, in which case the content distribution method performs at least part of the processing of the determination step simultaneously for each of the videos in the distribution step.
  • the determination item is that the performer appearing in the video is different from the performer from which the currently distributed tactile information is derived.
  • a content distribution system that solves the above problems includes a photographing device that acquires the video and the sound information, a tactile sensation acquisition device that acquires or generates the performer's tactile information, a viewing device configured to be used by a user for viewing the distributed video and sound information, a tactile presentation device configured to present to the user a tactile sensation based on the distributed tactile information, and an information processing device for delivering the content information to the viewing device and the tactile presentation device. The information processing device includes a determination unit that determines whether or not the video captured by the photographing device includes the determination item, a first distribution control unit that determines and distributes the video and sound information to be distributed, and a second distribution control unit that distributes the tactile information corresponding to the video determined by the first distribution control unit.
  • the second distribution control unit starts distribution of the tactile information after a predetermined delay time has elapsed from the start of distribution of the video and the sound information by the first distribution control unit.
  • FIG. 1 is a block diagram of the content distribution system. The drawings also include an explanatory drawing of the tactile presentation device and a schematic cross-sectional view of the tactile presentation device.
  • FIG. 4 is a flow chart of the first and second patterns of the content distribution method, followed by timing charts of the first, second, and third patterns of the content distribution method.
  • FIG. 11 is a flow chart of the third pattern of the content distribution method.
  • the content distribution system 10 includes a photographing device 11, a tactile sensation acquisition device 12, a viewing device 13, a tactile sensation presentation device 14, and an information processing device 15 that distributes content information.
  • the photographing device 11 is not particularly limited, and a known device such as a video camera that acquires the performers' video and sound information (e.g., voices) can be used.
  • the image capturing device 11 included in the content distribution system 10 may be singular or plural.
  • the image and sound information acquired by the photographing device 11 is transmitted to the information processing device 15 .
  • the tactile sense acquisition device 12 is a device for acquiring the tactile sense information of the actor.
  • the tactile information acquired by the tactile acquisition device 12 includes, for example, vibrations generated based on the actions of the performer, and biological information such as pulse, heartbeat, respiration, and blood pressure.
  • the tactile sense acquisition device 12 may be a contact type device worn by the performer, or may be a non-contact type device.
  • examples of the tactile sensation acquisition device 12 include a pressure sensor, pulse meter, heart rate monitor, sphygmomanometer, and thermometer. The tactile sensation acquisition device 12 may also be a device that acquires tactile information by analyzing the performer's video captured by the photographing device 11.
  • the content distribution system 10 may include a single tactile sensation acquisition device 12 or a plurality of tactile sensation acquisition devices. When there are a plurality of performers, a tactile sensation acquisition device 12 is provided for each performer. Also, a plurality of tactile sensation acquisition devices 12 may be provided for one performer. The tactile information acquired by the tactile acquisition device 12 is transmitted to the information processing device 15 .
  • the viewing device 13 is used by the user to view the performer's video and sound information distributed from the information processing device 15.
  • the viewing device 13 includes a display device having an audio output function, for example, a tablet or mobile terminal display, a stationary display, or a head-mounted display. The viewing device 13 may also be a combination of a display device without an audio output function and a separate audio output device.
  • the tactile sensation presentation device 14 is a device for presenting a tactile sensation to the user watching the live distribution.
  • the tactile sensation presentation device 14 presents a tactile sensation based on the tactile information to the user by vibrating based on the tactile information distributed from the information processing device 15 .
  • An example of the tactile presentation device 14 is shown in FIGS. 2 and 3.
  • the tactile sensation presentation device 14 shown in FIGS. 2 and 3 is a device that is used while being held in one hand by the user.
  • the tactile sensation presentation device 14 includes a box-shaped housing 20 sized to fit in the user's hand, and a presentation unit 21 provided on the upper surface of the housing 20 .
  • the presentation unit 21 includes a soft layer 22 protruding in a hemispherical shape from the upper surface of the housing 20 and one or more vibration actuators 23 arranged on the upper surface of the soft layer 22 .
  • the soft layer 22 is made of a soft material such as urethane.
  • the vibration actuator 23 is a sheet-like dielectric elastomer actuator (DEA: Dielectric Elastomer Actuator).
  • the DEA as the vibration actuator 23 causes the user who touches the upper surface of the presentation unit 21 to perceive vibrations based on deformation such as expansion and contraction of the DEA as a specific tactile sensation.
  • the soft layer 22 is a layer that deforms following deformation of the DEA. By arranging the soft layer 22 between the DEA and the housing 20 , it is possible to suppress deformation of the DEA from being restricted by the housing 20 .
  • a DEA is a multi-layer structure in which a sheet-like dielectric layer made of a dielectric elastomer and a plurality of positive electrodes and negative electrodes as electrode layers arranged on both sides of the dielectric layer in the thickness direction are laminated. An insulating layer is laminated on the outermost layer of the DEA.
  • in a DEA, when a DC voltage is applied between the positive electrode and the negative electrode, the dielectric layer is compressed in the thickness direction and, depending on the magnitude of the applied voltage, expands in the in-plane direction of the DEA.
  • the DEA allows the user to perceive vibrations and the like based on the expansion and contraction of the DEA as a tactile sensation.
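  • The electrostatic pressure driving this thickness-direction compression is commonly approximated by the Maxwell stress used in dielectric elastomer actuator analysis; this formula is standard DEA background, not taken from the patent itself:

$$p = \varepsilon_0 \varepsilon_r E^2 = \varepsilon_0 \varepsilon_r \left(\frac{V}{t}\right)^2$$

where $\varepsilon_0$ is the vacuum permittivity, $\varepsilon_r$ the relative permittivity of the dielectric elastomer, $V$ the applied voltage, and $t$ the dielectric layer thickness. Because the pressure scales with $1/t^2$ at fixed voltage, thin dielectric layers (tens of micrometers, as given below) allow perceptible deformation at practical voltages.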
  • the dielectric elastomer constituting the dielectric layer is not particularly limited, and known dielectric elastomers used in DEA can be used.
  • Examples of the dielectric elastomer include crosslinked polyrotaxane, silicone elastomer, acrylic elastomer, and urethane elastomer.
  • One type of these dielectric elastomers may be used, or a plurality of types may be used in combination.
  • the thickness of the dielectric layer is, for example, 20 to 200 µm.
  • Examples of materials that make up the positive and negative electrodes include conductive elastomers, carbon nanotubes, Ketjenblack (registered trademark), and vapor-deposited metal films.
  • Examples of the conductive elastomer include a conductive elastomer containing an insulating polymer and a conductive filler.
  • Examples of the insulating polymer include crosslinked polyrotaxane, silicone elastomer, acrylic elastomer, and urethane elastomer. One type of these insulating polymers may be used, or a plurality of types may be used in combination.
  • Examples of the conductive filler include carbon nanotubes, Ketjenblack (registered trademark), carbon black, and metal particles such as copper and silver. One type of these conductive fillers may be used, or a plurality of types may be used in combination.
  • the thickness of the positive electrode and negative electrode is, for example, 1 to 100 µm.
  • the insulating elastomer constituting the insulating layer is not particularly limited, and known insulating elastomers used for the insulating portion of known DEA can be used.
  • the insulating elastomer include crosslinked polyrotaxane, silicone elastomer, acrylic elastomer, and urethane elastomer.
  • One type of these insulating elastomers may be used, or a plurality of types may be used in combination.
  • the thickness of the insulating layer is, for example, 10 to 100 µm.
  • the thickness of the entire DEA is preferably, for example, 0.3 to 3 mm from the viewpoint of ensuring flexibility and strength.
  • a coating layer (not shown) that covers the vibration actuator 23 is arranged on the vibration actuator 23 as necessary. Inside the housing 20 of the tactile presentation device 14, a driving unit 24 is provided for applying a voltage, from a power supply (not shown) such as a battery, between the pair of electrodes constituted by the positive and negative electrodes of the vibration actuator 23.
  • the tactile presentation device 14 is not limited to the form shown in FIGS. 2 and 3.
  • it may be a penlight-type tactile presentation device 14 in which a vibration actuator 23 is arranged in the grip of a penlight used at concerts and the like, or a hugging-type tactile presentation device 14 in which the vibration actuator 23 is arranged on a cushioning material that can be held like a cushion or pillow.
  • the penlight-type tactile sensation presentation device 14 is suitable for enhancing the presence of live distribution.
  • the hugging type tactile sense presentation device 14 is suitable for viewing live distribution in a relaxed state.
  • the information processing device 15 sets the content information to be distributed to the viewing device 13 and the tactile presentation device 14 based on the acquired video, sound information, and tactile information. The information processing device 15 then delivers the set content information, that is, the performer's video, sound information, and tactile information, so that the viewing device 13 can play the video and sound and the tactile presentation device 14 can present the tactile sensation. Examples of the information processing device 15 include a server, a PC (personal computer), a mobile phone such as a smartphone, a tablet terminal, and a game machine.
  • the information processing device 15 includes a transmission/reception section 30 , a signal generation section 31 , a determination section 32 , a first distribution control section 33 , a second distribution control section 34 and a storage section 35 .
  • the transmission/reception unit 30 is, for example, a communication interface for communicating with the photographing device 11, the tactile sensation acquisition device 12, the viewing device 13, and the tactile sensation presentation device 14 via the Internet as a wide area communication network.
  • the transmission/reception unit 30 may be compatible with a mobile communication system so as to be directly connected to the Internet, or may communicate with devices connected to the Internet.
  • the wide area communication network is arbitrary, and may be, for example, a telephone line. Further, transmission and reception between the transmission/reception unit 30 and the imaging device 11 and the tactile sensation acquisition device 12 may be performed by wire.
  • the signal generation unit 31, the determination unit 32, the first distribution control unit 33, and the second distribution control unit 34 can be configured as 1) one or more processors that operate according to a computer program (software), 2) one or more dedicated hardware circuits, such as an application-specific integrated circuit (ASIC), that execute at least part of the processing, or 3) any combination thereof.
  • a processor includes, for example, a CPU.
  • the signal generation unit 31 generates a tactile signal from the tactile information acquired by the tactile acquisition device 12 .
  • the tactile sensation signal is a signal for driving the vibration actuator 23 of the tactile sensation presentation device 14 so as to present a tactile sensation based on the acquired tactile information.
  • the signal generation unit 31 generates a tactile signal from the tactile information acquired for each tactile sensation acquisition device 12 .
  • the corresponding tactile signals are generated for all the tactile information acquired by the tactile sense acquisition device 12 .
  • the signal generation unit 31 also generates a modulated tactile signal by modulating the generated tactile signal. The details of the modulated tactile signal will be described later.
  • Each tactile signal described above is a signal indicating a voltage waveform, and the volume of the tactile signal is the amplitude of the voltage waveform.
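  • As an illustrative sketch only (the function name, waveform shape, and parameters are assumptions for illustration, not the patent's implementation), a tactile signal can be modeled as a sampled voltage waveform whose "volume" is the amplitude of that waveform:

```python
import math

def tactile_signal(heart_rate_bpm, volume, duration_s=1.0, fs=1000):
    """Sampled voltage waveform pulsing at the heartbeat frequency.
    The signal's 'volume' is the amplitude of the voltage waveform."""
    f = heart_rate_bpm / 60.0                     # beats per second
    n = int(duration_s * fs)
    # one sample per 1/fs seconds; amplitude scaled by 'volume'
    return [volume * math.sin(2 * math.pi * f * i / fs) for i in range(n)]
```

Scaling `volume` changes only the amplitude of the voltage waveform, which is how the "volume" of a tactile signal is described above.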
  • the determination unit 32 determines whether or not the pre-distribution video acquired by the imaging device 11 includes preset determination items.
  • the determination item is a requirement set in advance for determining switching of tactile information to be distributed.
  • the determination item of the present embodiment is that the main performer appearing in the video determined to be distributed is different from the performer from which the tactile information currently distributed is derived. If the main performer appearing in the video determined to be distributed is different from the performer from which the tactile information currently distributed is derived, the determination unit 32 makes an affirmative determination. Then, when the main performer appearing in the video decided to be distributed is the same as the performer from which the tactile information currently distributed is derived, the determination unit 32 makes a negative determination.
  • examples of the determination method used by the determination unit 32 include known methods such as AI-based image recognition. The timing of the determination can be set arbitrarily within the period before the video used for the determination is distributed.
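  • As a minimal sketch (not the patent's implementation; the function and argument names are hypothetical, and the performer recognized in the video is assumed to come from a separate image-recognition step), the determination reduces to comparing the main performer in the video to be distributed with the source of the currently distributed tactile signal:

```python
def judge_switch(main_performer_in_video, current_tactile_source):
    """Affirmative (True) when the main performer in the video to be distributed
    differs from the performer the currently distributed tactile signal derives from.
    At the start of distribution there is no current source, so the result is negative."""
    if current_tactile_source is None:  # no tactile information being distributed yet
        return False
    return main_performer_in_video != current_tactile_source
```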
  • the first distribution control unit 33 determines video and audio information to be distributed from the video and audio information acquired by the photographing device 11, and distributes the determined video and audio information so that it can be viewed by the viewing device 13. .
  • the first distribution control unit 33 determines video and audio information acquired by one imaging device 11 as video and audio information to be distributed. Then, the first distribution control unit 33 performs control for distributing the determined video and audio information.
  • the first distribution control unit 33 distributes the image and sound information with a slight time difference between the acquisition of the image and sound information by the photographing device 11 and the distribution. This time difference is, for example, several seconds to ten minutes.
  • the method of providing this time difference is not particularly limited, and an existing distribution technology such as delay broadcasting may be used.
  • during this time difference, the determination process by the determination unit 32 is performed on the video immediately before distribution.
  • the first distribution control unit 33 determines the video to be distributed from the plurality of videos acquired by the plurality of photographing devices 11. Also, the first distribution control unit 33 determines sound information to be distributed from a plurality of pieces of sound information acquired by a plurality of photographing devices 11 . Then, the first distribution control unit 33 distributes the determined video and audio information while providing the time difference.
  • the process of determining the video to be distributed from the plurality of videos and the process of determining the sound information to be distributed from the plurality of pieces of sound information are performed, for example, based on a switching operation by an operator who switches the content information to be distributed.
  • the second distribution control unit 34 distributes the tactile signal generated from the tactile information so that the tactile sensation presentation device 14 can present the tactile sensation.
  • the tactile signal distributed by the second distribution control unit 34 is a tactile signal based on the main performer appearing in the video determined to be distributed by the first distribution control unit 33 .
  • the second distribution control unit 34 performs delay control to delay the start of distribution of the tactile signal when the determination by the determination unit 32 for the video determined to be distributed by the first distribution control unit 33 is positive.
  • the delay control is control for starting distribution of the tactile signal after a predetermined delay time has elapsed since the first distribution control unit started distributing the video and audio information.
  • the delay time is, for example, 100 milliseconds to 3 seconds. If the video to be distributed has a transition effect, the delay time is preferably set to match the length of the transition effect.
  • the delay time is provided to expand the range of tactile sensations presented to the user based on the tactile signals delivered after the delay time. Whether or not to distribute a signal for operating the tactile sensation presentation device 14 during the delay time can be set arbitrarily. By controlling the operation of the tactile sense presentation device 14 during the delay time, it is possible to control the impression of the tactile sense presented to the user based on the tactile sense signal delivered after the delay time.
  • the delay time is the time during which no tactile signal is distributed so as not to operate the tactile sensation presentation device 14 .
  • the tactile sensation presented to the user from the tactile sensation presentation device 14 after the delay time is emphasized.
  • the delay time is defined as the time for delivering the connecting tactile information for operating the tactile sensation presentation device 14 so as to present the connecting tactile sensation to the user. In this case, it is possible to reduce the discomfort that the user experiences due to the sudden change in the tactile sensation presented to the user by the tactile sensation presentation device 14 .
  • examples of the connecting tactile information include modulated tactile information obtained by modulating the tactile information distributed before the delay time, after the delay time, or both, and connecting tactile information created in advance to have high affinity with various tactile sensations, as shown in FIG.
  • the delay time is the time to distribute the modulated tactile information for operating the tactile sensation presentation device 14 so as to present a connecting tactile sensation to the user.
  • the second distribution control unit 34 distributes the modulated tactile signal as the modulated tactile information during the delay time so that the tactile sensation presentation device 14 can present the tactile sensation.
  • the modulated tactile signal is a signal obtained by superimposing a fade-out signal and a fade-in signal.
  • the fade-out signal is a signal obtained by modulating the tactile signal distributed before the delay time so that it gradually decreases during the delay time.
  • the fade-in signal is a signal obtained by modulating the tactile signal distributed after the delay time so that it gradually increases during the delay time.
  • the second distribution control unit 34 switches from distribution of the modulated tactile signal to distribution of the tactile signal.
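  • The superposition of a fade-out and a fade-in can be sketched as a simple crossfade over the delay time; this is an illustrative example with assumed linear envelopes, not the patent's specific modulation:

```python
import numpy as np

def modulated_tactile_signal(prev_tail, next_head):
    """Superimpose a fade-out of the tactile signal distributed before the delay
    time and a fade-in of the one distributed after it. Both inputs are arrays
    sampled over the delay time and must have equal length."""
    n = len(prev_tail)
    fade_out = np.linspace(1.0, 0.0, n)  # envelope shrinking over the delay time
    fade_in = 1.0 - fade_out             # envelope growing over the delay time
    return prev_tail * fade_out + next_head * fade_in
```

At the start of the delay time the output equals the previous performer's signal; at the end it equals the next performer's, avoiding the abrupt change in tactile sensation described above.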
  • a control program is stored in the storage unit 35 .
  • the storage unit 35 is, for example, a nonvolatile memory.
  • <Content distribution method> Next, a content distribution method for live-distributing content information including performers' video, sound information, and tactile information using the content distribution system 10 described above will be described.
  • the first pattern of the content distribution method is a content distribution method in which there are two performers, A and B, and the photographing device 11 is a single camera CA.
  • Step S1 is a first obtaining step of photographing performers A and B with a camera CA to obtain video and sound information.
  • the cameraman photographs performer A with camera CA, and after a predetermined time has elapsed, changes the subject from performer A to performer B and photographs performer B.
  • while camera CA is photographing performer A, video and sound information based on performer A is transmitted from camera CA to the information processing device 15.
  • while camera CA is photographing performer B, video and sound information based on performer B is transmitted from camera CA to the information processing device 15.
  • Step S2 is a second acquisition step in which the tactile information of performer A and the tactile information of performer B are acquired by the tactile acquisition device 12 respectively. Acquisition of each tactile information of actor A and actor B by the tactile sensation acquisition device 12 is performed continuously or intermittently throughout the live distribution. Then, each acquired tactile information is transmitted from the tactile sensation acquisition device 12 to the information processing device 15 . Note that steps S1 and S2 are executed in parallel.
  • Step S3 is a step of determining the video and audio information to be distributed acquired in step S1.
  • since the only camera is camera CA, only one type of video and sound information is distributed. Therefore, in step S3, the first distribution control unit 33 always determines the video and sound information acquired by camera CA as the video and sound information to be distributed.
  • Step S4 is a determination step for determining whether or not the pre-distribution video determined for distribution in step S3 contains the determination item.
  • in step S4, the determination unit 32 makes an affirmative determination (YES) or a negative determination (NO) based on whether the main performer appearing in the pre-distribution video determined for distribution in step S3 is different from the performer from which the currently distributed tactile information is derived. At the start of distribution, no tactile information is yet being distributed, so the determination is negative.
  • Steps S5a and S5b are steps for starting distribution of the determined video and audio information.
  • Step S5a is a step performed when the determination in step S4 is negative
  • step S5b is a step performed when the determination in step S4 is positive.
  • Steps S5a and S5b differ only in whether they are performed when the determination in step S4 is negative (S5a) or affirmative (S5b); the processing itself is the same.
  • the first distribution control unit 33 starts distributing the video and audio information with a slight time lag between the acquisition of the video and audio information by the photographing device 11 and the distribution thereof.
  • Step S6 is a step of distributing the tactile signal based on the tactile information acquired by the tactile sensation acquisition device 12.
  • in step S6, during the period in which camera CA continues to photograph performer A, if performer A is the main performer appearing in the video distributed in steps S5a and S5b, the second distribution control unit 34 distributes a tactile signal based on performer A's tactile information in synchronization with the video. When the main performer is performer B, the second distribution control unit 34 distributes a tactile signal based on performer B's tactile information in synchronization with the video.
  • Step S7 is a step of distributing a modulated tactile signal as modulated tactile information for a predetermined delay time. Step S7 is performed following step S5a when the determination in step S4 is affirmative.
  • the second distribution control unit 34 distributes a modulated tactile signal obtained by superimposing a fade-out signal and a fade-in signal.
  • The fade-out signal is obtained by modulating the tactile signal distributed before the delay time, that is, the tactile signal based on the tactile information of the previous performer (for example, performer A).
  • The fade-in signal is obtained by modulating the tactile signal based on the tactile information of the performer appearing in the video distributed in step S5b (for example, performer B).
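The superimposed fade-out and fade-in signals described above can be illustrated with a minimal sketch (Python; the function name and the representation of tactile signals as lists of samples are illustrative assumptions, not part of the disclosed system):

```python
def modulated_tactile(prev_signal, next_signal):
    """Crossfade sketch for step S7: superimpose a fade-out of the previous
    performer's tactile signal with a fade-in of the next performer's signal
    over the delay time (both given as equal-length sample lists)."""
    n = len(prev_signal)
    out = []
    for i, (p, q) in enumerate(zip(prev_signal, next_signal)):
        w = i / (n - 1) if n > 1 else 1.0  # fade-in weight ramps from 0 to 1
        out.append((1.0 - w) * p + w * q)
    return out

# Performer A's signal fades out while performer B's signal fades in.
print(modulated_tactile([1.0, 1.0, 1.0], [0.0, 0.0, 0.0]))  # [1.0, 0.5, 0.0]
```

A linear ramp is only one possible modulation; the disclosure requires only that the fade-out and fade-in signals be superimposed during the delay time.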
  • If the determination in step S4 is negative, the process moves from step S4 to steps S5a and S6. If the determination in step S4 is positive, the process proceeds from step S4 to steps S5b and S7, and then from step S7 to step S6. After step S6, the information processing device 15 returns to step S4 and repeats the processes from step S4 onward. In this way, the information processing device 15 continuously distributes the video, sound information, and tactile information.
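The branching from step S4 onward can be sketched as follows (Python; the step labels are from the flowchart, while the function and variable names are illustrative assumptions):

```python
def distribution_cycle(video_performer, current_tactile_source):
    """One cycle from step S4: returns the ordered list of steps performed.

    Step S4 is positive when the performer in the video determined for
    distribution differs from the performer from whom the currently
    distributed tactile information is derived.
    """
    switching = (current_tactile_source is not None
                 and video_performer != current_tactile_source)
    if not switching:
        # Negative determination: distribute video/sound (S5a) and the
        # tactile signal of the same performer (S6).
        return ["S5a", "S6"]
    # Positive determination: distribute video/sound (S5b), the modulated
    # tactile signal for the delay time (S7), then the new performer's
    # tactile signal (S6).
    return ["S5b", "S7", "S6"]

# At the start of distribution no tactile information is being distributed,
# so the determination is negative.
print(distribution_cycle("A", None))  # ['S5a', 'S6']
print(distribution_cycle("B", "A"))   # ['S5b', 'S7', 'S6']
```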
  • During this period, the performer appearing in the video determined for distribution in step S3 is performer A, and in step S6, performer A is also the performer from whom the currently distributed tactile information is derived.
  • Therefore, the determination by the determination unit 32 in step S4 during this period is negative, including at the start of distribution, when no tactile information is yet being distributed.
  • In step S5a, the first distribution control unit 33 starts distributing the video determined for distribution in step S3, that is, the video of performer A and the corresponding sound information.
  • In step S6, the second distribution control unit 34 distributes a tactile signal based on the tactile information of performer A, whose video is distributed in step S5a.
  • In step S1, after a predetermined period has elapsed from the start of distribution, the cameraman changes the subject of photography from performer A to performer B. As a result, video and sound information based on performer B is transmitted from camera CA to the information processing device 15.
  • In step S4 of the cycle immediately after the shooting target is changed from performer A to performer B, performer B is the performer appearing in the video determined for distribution in step S3.
  • Performer A, however, is still the performer from whom the currently distributed tactile information is derived.
  • Therefore, the determination by the determination unit 32 in step S4 is positive.
  • In step S5b, the first distribution control unit 33 starts distributing the video determined for distribution in step S3, that is, the video of performer B and the corresponding sound information.
  • In step S7, the second distribution control unit 34 distributes the modulated tactile signal for a predetermined delay time from the start of distribution of the video of performer B. Then, in step S6, it distributes a tactile signal based on the tactile information of performer B, whose video is distributed in step S5b.
  • Thereafter, the first distribution control unit 33 continues distributing the video of performer B acquired in step S1.
  • During this period, the performer appearing in the video determined for distribution in step S3 is performer B, and in step S6, performer B is also the performer from whom the currently distributed tactile information is derived.
  • Therefore, the determination by the determination unit 32 in step S4 during this period is negative.
  • Accordingly, following step S4, distribution is performed in steps S5a and S6.
  • The user can use the viewing device 13 to view the video and sound information distributed in steps S5a and S5b.
  • The vibration actuator 23 of the tactile sensation presentation device 14 vibrates based on the tactile signal distributed in step S6 and the modulated tactile signal distributed in step S7.
  • As a result, a tactile sensation based on performer A or performer B is presented to the user touching the presentation unit 21 of the tactile sensation presentation device 14.
  • the first distribution steps are step S3 and steps S5a and S5b.
  • the second distribution steps are steps S6 and S7.
  • In the second pattern, the content information of performer A is distributed for a while after the start of distribution. After that, distribution of the content information of performer A is switched to distribution of the content information of performer B.
  • In step S1, acquisition of video and sound information based on performer A by camera CA and acquisition of video and sound information based on performer B by camera CB are performed simultaneously.
  • The acquired video and sound information based on performer A is transmitted from camera CA to the information processing device 15.
  • The acquired video and sound information based on performer B is transmitted from camera CB to the information processing device 15.
  • Step S2 is similar to the first pattern.
  • Step S3 is a step of determining the video and sound information to be distributed from among those acquired in step S1.
  • In step S3, the first distribution control unit 33 determines the video and sound information to be distributed from the video and sound information based on performer A acquired by camera CA and the video and sound information based on performer B acquired by camera CB. This determination is made based on the input of a switching operation performed by the operator to switch the content information to be distributed.
  • Steps S4 to S7 are the same as in the first pattern.
  • the content information of performer A is distributed for a while after the start of distribution.
  • During this period, in step S3, the first distribution control unit 33 determines the video and sound information based on performer A acquired by camera CA as the video and sound information to be distributed.
  • Accordingly, the performer appearing in the video determined for distribution in step S3 is performer A, and performer A is also the performer from whom the currently distributed tactile information is derived.
  • The processing from step S4 onward in this period is the same as in the first pattern; that is, the determination by the determination unit 32 in step S4 is negative.
  • In step S5a, distribution of the video determined in step S3, that is, the video of performer A and the corresponding sound information, is started.
  • In step S6, the second distribution control unit 34 distributes a tactile signal based on the tactile information of performer A, whose video is distributed in step S5a.
  • the content information of performer B is distributed for a while after a predetermined time has elapsed from the start of distribution.
  • In step S3 after the predetermined time has elapsed from the start of distribution, the first distribution control unit 33 determines the video and sound information based on performer B acquired by camera CB as the video and sound information to be distributed. That is, it determines the video and sound information so as to switch from the video and sound information based on performer A to that based on performer B.
  • In step S4 of the cycle immediately after the video and sound information determined in step S3 is changed to that based on performer B, the performer appearing in the video determined for distribution is performer B.
  • Performer A, however, is still the performer from whom the currently distributed tactile information is derived.
  • Therefore, the determination by the determination unit 32 in step S4 is positive, and the process moves from step S4 to steps S5b and S7, and after step S7 to step S6.
  • Thereafter, the first distribution control unit 33 continues to determine the video and sound information based on performer B acquired by camera CB as the video and sound information to be distributed.
  • During this period, the performer appearing in the video determined for distribution in step S3 is performer B, and in step S6, performer B is also the performer from whom the currently distributed tactile information is derived. Therefore, the determination by the determination unit 32 in step S4 during this period is negative.
  • Accordingly, following step S4, distribution is performed in steps S5a and S6. The specific processing in steps S4 to S7 is the same as in the first pattern.
  • a third pattern of the content distribution method will be described based on the flowchart shown in FIG.
  • the third pattern differs from the second pattern in the configuration of the determination step.
  • the description of the parts common to the second pattern will be omitted.
  • Step S4a is a step of determining whether or not each image acquired in step S1 includes a determination item, and storing the determination result.
  • The determination unit 32 intermittently determines, at predetermined intervals after acquisition, whether or not each video acquired in step S1 includes a determination item. The determination unit 32 then stores the determination result for each video acquired in step S1 in the storage unit 35, or updates the determination result stored in the storage unit 35.
  • Step S4b is a step of determining whether or not the video determined for distribution in step S3 includes a determination item, based on the most recent determination result stored in the storage unit 35.
  • In step S4b, for the video determined for distribution in step S3, the determination unit 32 determines that the determination item is included when the most recent stored determination result is affirmative, and that it is not included when the result is negative. If the determination in step S4b is negative, the process proceeds from step S4b to steps S5a and S6. If the determination is affirmative, the process moves from step S4b to steps S5b and S7, and after step S7 to step S6.
  • In the third pattern, the video of performer B that is not currently being distributed is also processed by the determination unit 32 at the same time.
  • Suppose that in step S3 the first distribution control unit 33 determines the video and sound information so as to switch from the video and sound information based on performer A to that based on performer B. At this timing, that is, when the video to be distributed is switched to the video based on performer B, an affirmative or negative determination result for the video based on performer B has already been obtained.
  • In step S4b, after the video based on performer B is determined for distribution, whether or not the video includes a determination item can be judged simply by referring to the most recent determination result stored in the storage unit 35. In this case, there is no need to perform a substantial determination process after the video is determined in step S3. Therefore, the time lag provided between the acquisition of video and sound information by the photographing device 11 and its distribution can be reduced or eliminated.
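The cached determination of the third pattern (steps S4a/S4b) can be sketched as follows (Python; the class and method names are illustrative assumptions for how results might be stored in the storage unit 35):

```python
class DeterminationCache:
    """Sketch of steps S4a/S4b: determination results for every acquired
    video are computed intermittently and stored, so that when the video
    to be distributed is switched, only a lookup is needed."""

    def __init__(self):
        self._latest = {}  # video/camera id -> most recent boolean result

    def update(self, video_id, contains_item):
        # Step S4a: store or overwrite the most recent determination result.
        self._latest[video_id] = contains_item

    def query(self, video_id):
        # Step S4b: no substantial determination at switch time, just a lookup.
        return self._latest.get(video_id, False)

cache = DeterminationCache()
cache.update("camera_B", True)  # performer B differs from the current source
print(cache.query("camera_B"))  # True
```

Because `query` only reads a stored value, the time lag between acquisition and distribution can be reduced, as the passage above explains.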
  • As described above, a content distribution method for live distribution of content information including a performer's video, sound information, and tactile information includes a first acquisition step, a second acquisition step, a determination step, and a distribution step.
  • The first acquisition step is a step of acquiring video and sound information by photographing the performer.
  • the second obtaining step is a step of obtaining the performer's tactile information.
  • the determination step is a step of determining whether or not the image acquired in the first acquisition step includes a determination item related to switching of the tactile information to be distributed.
  • the distribution step is a step of distributing the acquired content information.
  • The distribution step includes a first distribution step of determining and distributing the video and sound information to be distributed, and a second distribution step of distributing the tactile information corresponding to the video determined in the first distribution step. When the video determined in the first distribution step includes the determination item, the second distribution step starts distributing the tactile information after a predetermined delay time has elapsed from the start of distribution of the video and sound information in the first distribution step.
  • According to this configuration, when the tactile information to be distributed is switched, a delay time is provided before the tactile information after switching is distributed.
  • In other words, the distribution of the tactile information after switching is delayed.
  • the content delivery method comprises delivering, during the delay time, modulated tactile information obtained by modulating the tactile information delivered before and after the delay time.
  • the content delivery method comprises switching from delivering modulated tactile information to delivering said tactile information after a delay time.
  • The first acquisition step may be a step of simultaneously acquiring a plurality of videos using a plurality of photographing devices 11.
  • the content distribution method comprises, during the distribution step, simultaneously performing the processing of the determination step on each of the plurality of videos.
  • the tactile information delivered in the second delivery step is not limited to the tactile signal based on the tactile information acquired from the performer.
  • the tactile information itself acquired from the performer may be distributed.
  • tactile information generated based on the image captured by the imaging device 11 may be distributed.
  • Alternatively, a tactile information library containing a plurality of pieces of tactile information may be created in advance and stored in the storage unit 35, and tactile information selected from the library based on the video captured by the photographing device 11 may be distributed.
  • In these cases, a device having, for example, a function of generating tactile information based on the video captured by the photographing device 11, or a function of selecting tactile information corresponding to the video from a plurality of pieces of tactile information stored in advance, is used as the tactile sensation acquisition device 12. The tactile sensation acquisition device 12 in this case may be part of the information processing device 15.
  • the content information is not limited to the actor's video, sound information, and tactile information.
  • Other content information includes, for example, guidance information for superimposing a video effect that visually conveys changes in tactile information on the video displayed on the viewing device 13 .
  • The tactile information to be distributed may be selected based on the user's operation. For example, when a plurality of pieces of tactile information are acquired from one performer, the information processing device 15 distributes all of the acquired pieces. By operating the tactile sensation presentation device 14, the user selects arbitrary tactile information from the distributed pieces, and the tactile sensation presentation device 14 is driven based on the selected tactile information.
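The user selection described above could be sketched as follows (Python; the stream names and the dictionary representation are hypothetical, chosen only for illustration):

```python
def select_tactile(distributed_streams, user_choice):
    """Sketch: all acquired tactile streams are distributed; the tactile
    presentation device is driven by the one stream the user selects."""
    return distributed_streams[user_choice]

# Hypothetical tactile streams acquired from one performer.
streams = {"heartbeat": [0.1, 0.2, 0.1], "movement": [0.5, 0.4, 0.6]}
print(select_tactile(streams, "heartbeat"))  # [0.1, 0.2, 0.1]
```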
  • When the tactile information to be distributed cannot be distributed normally, pseudo tactile information created in advance may be distributed instead.
  • Cases where the tactile information cannot be distributed normally include, for example, a case where acquisition of the tactile information from the performer fails and a case where the tactile information acquired from the performer contains noise.
  • the determination items used in the determination step are not limited to the items in the above embodiment as long as they are items that can determine the switching of the delivered tactile information. Moreover, a plurality of items may be determined.
  • The processing of the determination step performed during the distribution step may be only part of the processing of the determination step.
  • For example, the determination step may include a first step and a second step performed in sequence, where only the first step is performed during the distribution step and the second step is performed after the video to be distributed is determined. The first step is, for example, a step of determining based on a first determination item, and the second step is a step of determining based on a second determination item different from the first determination item.
  • Tactile information may be edited and distributed based on the performer's state and actions and the content of the video to be distributed. For example, when a predetermined condition is satisfied, the information processing device 15 distributes tactile information edited so that the tactile sensation presented to the user by the tactile sensation presentation device 14 is strengthened or weakened.
  • Examples of the predetermined condition include a case where the performer's facial expression becomes a specific expression, such as a smile with the corners of the mouth raised, a case where the performer's face is displayed in a close-up image, and a case where the performer performs a specific action, such as vibrato.
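The strengthening or weakening described above could be sketched as a simple gain applied when the condition holds (Python; the function name and the gain value are illustrative assumptions, not part of the disclosure):

```python
def edit_tactile(signal, condition_met, gain=1.5):
    """Sketch: scale the tactile signal samples when a predetermined
    condition (e.g. a smile or a close-up of the performer's face) is
    detected; otherwise pass the signal through unchanged."""
    factor = gain if condition_met else 1.0
    return [s * factor for s in signal]

print(edit_tactile([2.0, 4.0], condition_met=True))   # [3.0, 6.0]
print(edit_tactile([2.0, 4.0], condition_met=False))  # [2.0, 4.0]
```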
  • the vibration actuator 23 is not limited to the DEA, and may be a known vibration actuator used in a tactile presentation device.
  • Known vibration actuators include, for example, other electroactive polymer actuators (EPA: Electroactive Polymer Actuator) such as ionic polymer-metal composites (IPMC: Ionic Polymer Metal Composite), eccentric motors, linear resonant actuators, voice coil actuators, and piezo actuators.

Abstract

This content distribution method includes a first acquisition step, a second acquisition step, a determination step, and a distribution step. The first acquisition step includes acquiring a video and sound information by filming a performer. The second acquisition step includes acquiring tactile information of the performer. The determination step includes determining whether or not the video acquired in the first acquisition step contains a determination matter related to switching of the tactile information to be distributed. The distribution step includes distributing acquired content information. The distribution step includes a first distribution step for deciding and distributing the video and the sound information to be distributed and a second distribution step for distributing the tactile information corresponding to the video decided in the first distribution step. When the video decided in the first distribution step contains the determination matter, distribution of the tactile information is started in the second distribution step after a predetermined delay time has passed from the start of distribution of the video and the sound information in the first distribution step.

Description

Content Distribution Method and Content Distribution System
 The present invention relates to a content distribution method and a content distribution system.
 Techniques are known for enhancing the sense of immersion and realism of users watching a live distribution of a concert. For example, Patent Literature 1 discloses a technique for distributing tactile information based on a performer's biometric information, superimposed on the video and sound information of a concert, to a user via a wearable device worn by the user. The performer's biometric information is, for example, the heartbeat. In this case, the user watching the live distribution is presented with a tactile sensation based on the performer's biometric information in addition to visual and auditory information. This enhances the user's sense of immersion and presence in the live distribution.
JP 2016-197350 A
 Conventional content distribution methods have room for improvement in terms of enhancing the sense of immersion and realism in live distribution.
 A content distribution method that solves the above problem is a content distribution method for live distribution of content information including a performer's video, sound information, and tactile information for operating a tactile sensation presentation device. The method includes: a first acquisition step of photographing the performer to acquire the video and sound information; a second acquisition step of acquiring or generating the performer's tactile information; a determination step of determining whether the video acquired in the first acquisition step includes a determination item for judging switching of the tactile information to be distributed; and a distribution step of distributing the acquired content information. The distribution step includes a first distribution step of determining and distributing the video and sound information to be distributed, and a second distribution step of distributing the tactile information corresponding to the video determined in the first distribution step. When the video determined in the first distribution step includes the determination item, the second distribution step starts distributing the tactile information after a predetermined delay time has elapsed from the start of distribution of the video and sound information in the first distribution step.
 According to the above configuration, when the tactile information distributed in the second distribution step is switched, a delay time is provided before the tactile information after switching is distributed. In other words, the distribution of the tactile information after switching is delayed. By adjusting the tactile sensation presented to the user by the tactile sensation presentation device during this delay time, and thereby controlling the operation of the device during the delay time, the impression of the tactile sensation based on the tactile signal distributed after the delay time can be controlled. This broadens the range of tactile expression based on that signal and effectively improves the user's sense of immersion and presence in the live distribution.
 In one aspect of the content distribution method, the method includes: distributing, during the delay time, modulated tactile information obtained by modulating the tactile information distributed before the delay time, after the delay time, or both before and after the delay time, or connecting tactile information created in advance for bridging; and switching, after the delay time, from the distribution of the modulated tactile information or the connecting tactile information to the distribution of the tactile information.
 In one aspect of the content distribution method, the first acquisition step is a step of simultaneously acquiring a plurality of videos, and the method includes performing at least a part of the processing of the determination step simultaneously on each of the plurality of videos during the distribution step.
 In one aspect of the content distribution method, there are a plurality of performers, and the determination item is that the performer appearing in the video differs from the performer from whom the currently distributed tactile information is derived.
 A content distribution system that solves the above problem includes: a photographing device that acquires the video and sound information; a tactile sensation acquisition device that acquires or generates the performer's tactile information; a viewing device configured to be used by a user to view the distributed video and sound information; a tactile sensation presentation device configured to present to the user a tactile sensation based on the distributed tactile information; and an information processing device that distributes the acquired content information to the viewing device and the tactile sensation presentation device. The information processing device includes: a determination unit that determines whether the video captured by the photographing device includes the determination item; a first distribution control unit that determines and distributes the video and sound information to be distributed; and a second distribution control unit that distributes the tactile information corresponding to the video determined by the first distribution control unit. When the video determined by the first distribution control unit includes the determination item, the second distribution control unit starts distributing the tactile information after a predetermined delay time has elapsed from the start of distribution of the video and sound information by the first distribution control unit.
 FIG. 1 is a block diagram of a content distribution system.
 FIG. 2 is an explanatory diagram of a tactile sensation presentation device.
 FIG. 3 is a schematic cross-sectional view of the tactile sensation presentation device.
 FIG. 4 is a flowchart of the first and second patterns of the content distribution method.
 FIG. 5 is a timing chart of the first pattern of the content distribution method.
 FIG. 6 is a timing chart of the second pattern of the content distribution method.
 FIG. 7 is a timing chart of the third pattern of the content distribution method.
 FIG. 8 is a flowchart of the third pattern of the content distribution method.
 An embodiment of a content distribution system for live distribution of content information including a performer's video, sound information, and tactile information will be described below.
 <Content distribution system>
 As shown in FIG. 1, the content distribution system 10 includes a photographing device 11, a tactile sensation acquisition device 12, a viewing device 13, a tactile sensation presentation device 14, and an information processing device 15 that distributes content information.
 (Photographing device)
 The photographing device 11 is not particularly limited, and a known photographing device, such as a camera for acquiring a performer's video and sound information such as voice, can be used. The content distribution system 10 may include one or more photographing devices 11. The video and sound information acquired by the photographing device 11 is transmitted to the information processing device 15.
 (Tactile sensation acquisition device)
 The tactile sensation acquisition device 12 is a device for acquiring the performer's tactile information. The tactile information acquired by the tactile sensation acquisition device 12 includes, for example, vibrations generated by the performer's movements and biometric information such as pulse, heartbeat, respiration, and blood pressure. A known measurement device suited to the tactile information to be acquired can be used as the tactile sensation acquisition device 12. The tactile sensation acquisition device 12 may be a contact-type device worn by the performer or a non-contact-type device.
 Examples of the tactile sensation acquisition device 12 include a pressure sensor, a pulse meter, a heart rate meter, a sphygmomanometer, and a thermometer. The tactile sensation acquisition device 12 may also be a device that acquires tactile information by analyzing the performer's video captured by the photographing device 11. The content distribution system 10 may include one or more tactile sensation acquisition devices 12. When there are a plurality of performers, a tactile sensation acquisition device 12 is provided for each performer. A plurality of tactile sensation acquisition devices 12 may also be provided for a single performer. The tactile information acquired by the tactile sensation acquisition device 12 is transmitted to the information processing device 15.
 (Viewing device)
 The viewing device 13 is a display device having an audio output function that allows the user to view the performer's video and sound information distributed from the information processing device 15. That is, the viewing device 13 is used by the user to view the distributed video and sound information. Examples of the viewing device 13 include display devices having an audio output function, such as displays of tablet terminals and mobile terminals, stationary displays, and head-mounted display devices such as head-mounted displays. The viewing device 13 may also be a combination of a display device without an audio output function and a separate audio output device.
 (触感提示装置)
 触感提示装置14は、ライブ配信を視聴するユーザーに触感を提示するための装置である。触感提示装置14は、情報処理装置15から配信された触感情報に基づいて振動することにより、触感情報に基づく触感をユーザーに提示する。触感提示装置14の一例を図2及び図3に示す。
(tactile sensation presentation device)
The tactile sensation presentation device 14 is a device for presenting tactile sensations to the user watching the live distribution. The tactile sensation presentation device 14 presents a tactile sensation based on the tactile information to the user by vibrating based on the tactile information distributed from the information processing device 15. An example of the tactile sensation presentation device 14 is shown in FIGS. 2 and 3.
 図2及び図3に示す触感提示装置14は、ユーザーが片手に持った状態で使用される装置である。触感提示装置14は、ユーザーの手に収まる大きさの箱状の筐体20と、筐体20の上面に設けられた提示部21とを備えている。提示部21は、筐体20の上面から半球状に突出する軟質層22と、軟質層22の上面に配置される単数又は複数の振動アクチュエータ23とを備えている。軟質層22は、ウレタン等の軟質材料により形成されている。振動アクチュエータ23は、シート状の誘電エラストマーアクチュエータ(DEA:Dielectric Elastomer Actuator)である。 The tactile sensation presentation device 14 shown in FIGS. 2 and 3 is a device used while held in one hand by the user. The tactile sensation presentation device 14 includes a box-shaped housing 20 sized to fit in the user's hand, and a presentation unit 21 provided on the upper surface of the housing 20. The presentation unit 21 includes a soft layer 22 protruding in a hemispherical shape from the upper surface of the housing 20, and one or more vibration actuators 23 arranged on the upper surface of the soft layer 22. The soft layer 22 is made of a soft material such as urethane. The vibration actuator 23 is a sheet-like dielectric elastomer actuator (DEA: Dielectric Elastomer Actuator).
 振動アクチュエータ23としてのDEAは、提示部21の上面に触れたユーザーに対して、DEAの伸縮等の変形に基づく振動等を特定の触感として認識させる。軟質層22は、DEAの変形に追従して変形する層である。DEAと筐体20との間に軟質層22を配置することにより、DEAの変形が筐体20によって制限されることを抑制できる。 The DEA serving as the vibration actuator 23 causes a user touching the upper surface of the presentation unit 21 to perceive, as a specific tactile sensation, vibrations and the like based on deformation of the DEA such as its expansion and contraction. The soft layer 22 is a layer that deforms following the deformation of the DEA. By arranging the soft layer 22 between the DEA and the housing 20, the housing 20 can be kept from restricting the deformation of the DEA.
 DEAは、誘電エラストマーからなるシート状の誘電層と、誘電層の厚さ方向の両側に配置された電極層としての正極電極及び負極電極とが複数積層された多層構造体である。DEAの最外層には絶縁層が積層されている。DEAでは、正極電極と負極電極との間に直流電圧が印加されると、印加電圧の大きさに応じて、誘電層が厚さ方向に圧縮されるとともに誘電層の面に沿った方向であるDEAの面方向に伸長するように変形する。DEAは、DEAの伸縮に基づく振動等を触感としてユーザーに認識させる。 A DEA is a multilayer structure in which a plurality of sheet-like dielectric layers made of a dielectric elastomer and positive and negative electrodes, serving as electrode layers arranged on both sides of each dielectric layer in the thickness direction, are laminated. An insulating layer is laminated on the outermost layer of the DEA. In the DEA, when a DC voltage is applied between the positive electrode and the negative electrode, the dielectric layer is compressed in the thickness direction and stretched in the in-plane direction of the DEA, that is, the direction along the surface of the dielectric layer, according to the magnitude of the applied voltage. The DEA allows the user to perceive vibrations and the like based on the expansion and contraction of the DEA as a tactile sensation.
 誘電層を構成する誘電エラストマーは特に限定されるものではなく、公知のDEAに用いられる誘電エラストマーを用いることができる。上記誘電エラストマーとしては、例えば、架橋されたポリロタキサン、シリコーンエラストマー、アクリルエラストマー、ウレタンエラストマーが挙げられる。これら誘電エラストマーのうちの一種を用いてもよいし、複数種を併用してもよい。誘電層の厚さは、例えば、20~200μmである。 The dielectric elastomer constituting the dielectric layer is not particularly limited, and dielectric elastomers used in known DEAs can be used. Examples of the dielectric elastomer include crosslinked polyrotaxane, silicone elastomer, acrylic elastomer, and urethane elastomer. One of these dielectric elastomers may be used alone, or a plurality of them may be used in combination. The thickness of the dielectric layer is, for example, 20 to 200 μm.
 正極電極及び負極電極を構成する材料としては、例えば、導電エラストマー、カーボンナノチューブ、ケッチェンブラック(登録商標)、金属蒸着膜が挙げられる。上記導電エラストマーとしては、例えば、絶縁性高分子及び導電性フィラーを含有する導電エラストマーが挙げられる。 Examples of materials that make up the positive and negative electrodes include conductive elastomers, carbon nanotubes, Ketjenblack (registered trademark), and vapor-deposited metal films. Examples of the conductive elastomer include a conductive elastomer containing an insulating polymer and a conductive filler.
 上記絶縁性高分子としては、例えば、架橋されたポリロタキサン、シリコーンエラストマー、アクリルエラストマー、ウレタンエラストマーが挙げられる。これら絶縁性高分子のうちの一種を用いてもよいし、複数種を併用してもよい。上記導電性フィラーとしては、例えば、カーボンナノチューブ、ケッチェンブラック(登録商標)、カーボンブラック、銅や銀等の金属粒子が挙げられる。これら導電性フィラーのうちの一種を用いてもよいし、複数種を併用してもよい。正極電極及び負極電極の厚さは、例えば、1~100μmである。 Examples of the insulating polymer include crosslinked polyrotaxane, silicone elastomer, acrylic elastomer, and urethane elastomer. One type of these insulating polymers may be used, or a plurality of types may be used in combination. Examples of the conductive filler include carbon nanotubes, Ketjenblack (registered trademark), carbon black, and metal particles such as copper and silver. One type of these conductive fillers may be used, or a plurality of types may be used in combination. The thickness of the positive electrode and negative electrode is, for example, 1 to 100 μm.
 絶縁層を構成する絶縁エラストマーは特に限定されるものではなく、公知のDEAの絶縁部分に用いられる公知の絶縁エラストマーを用いることができる。上記絶縁エラストマーとしては、例えば、架橋されたポリロタキサン、シリコーンエラストマー、アクリルエラストマー、ウレタンエラストマーが挙げられる。これら絶縁エラストマーのうちの一種を用いてもよいし、複数種を併用してもよい。絶縁層の厚さは、例えば、10~100μmである。また、DEA全体の厚さは、柔軟性及び強度の確保の観点から、例えば、0.3~3mmであることが好ましい。 The insulating elastomer constituting the insulating layer is not particularly limited, and known insulating elastomers used for the insulating portions of known DEAs can be used. Examples of the insulating elastomer include crosslinked polyrotaxane, silicone elastomer, acrylic elastomer, and urethane elastomer. One of these insulating elastomers may be used alone, or a plurality of them may be used in combination. The thickness of the insulating layer is, for example, 10 to 100 μm. The thickness of the entire DEA is preferably, for example, 0.3 to 3 mm from the viewpoint of ensuring flexibility and strength.
 振動アクチュエータ23の上には、必要に応じて、振動アクチュエータ23を覆う被覆層(図示略)が配置される。また、触感提示装置14の筐体20の内部には、バッテリ等の電源(図示略)から振動アクチュエータ23の正極電極及び負極電極により構成される一対の電極の間に電圧を印加する駆動部24が設けられている。 A covering layer (not shown) that covers the vibration actuator 23 is arranged over the vibration actuator 23 as necessary. In addition, a drive unit 24 that applies a voltage from a power supply (not shown) such as a battery between the pair of electrodes constituted by the positive and negative electrodes of the vibration actuator 23 is provided inside the housing 20 of the tactile sensation presentation device 14.
 なお、触感提示装置14は、図2及び図3に示す触感提示装置14に限定されない。例えば、コンサートなどで使用されるペンライトのグリップ部分に振動アクチュエータ23を配置したペンライト型の触感提示装置14であってもよいし、クッションや枕のように抱き込んで使用することが可能であるクッション材に振動アクチュエータ23を配置した抱き込み型の触感提示装置14であってもよい。ペンライト型の触感提示装置14は、ライブ配信の臨場感を高めた場合に適している。また、抱き込み型の触感提示装置14は、リラックスした状態でライブ配信を視聴する場合に適している。 Note that the tactile sensation presentation device 14 is not limited to the one shown in FIGS. 2 and 3. For example, it may be a penlight-type tactile sensation presentation device 14 in which the vibration actuator 23 is arranged in the grip of a penlight of the kind used at concerts, or a hug-type tactile sensation presentation device 14 in which the vibration actuator 23 is arranged in a cushioning material that can be held in the arms like a cushion or pillow. The penlight-type tactile sensation presentation device 14 is suitable for enhancing the sense of presence of the live distribution. The hug-type tactile sensation presentation device 14 is suitable for viewing the live distribution in a relaxed state.
 (情報処理装置)
 情報処理装置15は、取得した映像、音情報、及び触感情報に基づいて、視聴装置13及び触感提示装置14へ配信するコンテンツ情報を設定する。また、情報処理装置15は、設定したコンテンツ情報、即ち、演技者の映像、音情報、及び触感情報を、視聴装置13により視聴可能な状態及び触感提示装置14により触感提示可能な状態となるように配信する。情報処理装置15としては、例えば、サーバ、PC(Personal Computer)、スマートフォンなどの携帯電話、タブレット端末、ゲーム機が挙げられる。
(Information processing device)
The information processing device 15 sets the content information to be distributed to the viewing device 13 and the tactile sensation presentation device 14 based on the acquired video, sound information, and tactile information. The information processing device 15 then distributes the set content information, that is, the performer's video, sound information, and tactile information, in a state in which it can be viewed on the viewing device 13 and presented as tactile sensations by the tactile sensation presentation device 14. Examples of the information processing device 15 include a server, a PC (Personal Computer), a mobile phone such as a smartphone, a tablet terminal, and a game console.
 図1に示すように、情報処理装置15は、送受信部30、信号生成部31、判定部32、第1配信制御部33、第2配信制御部34、及び記憶部35を備えている。
 送受信部30は、例えば、広域通信網としてのインターネットを介して、撮影装置11、触感取得装置12、視聴装置13、及び触感提示装置14と通信を行うための通信インターフェースである。送受信部30は、例えば、直接インターネットと繋がるように移動通信システムに対応するものであってもよいし、インターネットに繋がっている機器と通信を行うものであってもよい。なお、広域通信網とは任意であり、例えば電話回線でもよい。また、送受信部30と撮影装置11及び触感取得装置12との間の送受信は、有線により行われるものであってもよい。
As shown in FIG. 1, the information processing device 15 includes a transmission/reception section 30, a signal generation section 31, a determination section 32, a first distribution control section 33, a second distribution control section 34, and a storage section 35.
The transmission/reception unit 30 is, for example, a communication interface for communicating with the photographing device 11, the tactile sensation acquisition device 12, the viewing device 13, and the tactile sensation presentation device 14 via the Internet as a wide area communication network. The transmission/reception unit 30 may, for example, support a mobile communication system so as to connect directly to the Internet, or may communicate with a device that is connected to the Internet. Any wide area communication network may be used; it may be, for example, a telephone line. Transmission and reception between the transmission/reception unit 30 and the photographing device 11 and the tactile sensation acquisition device 12 may also be performed by wire.
 信号生成部31、判定部32、第1配信制御部33、及び第2配信制御部34は、1)コンピュータプログラム(ソフトウェア)に従って動作する1つ以上のプロセッサ、2)各種処理のうち少なくとも一部の処理を実行する特定用途向け集積回路(ASIC)等の1つ以上の専用のハードウェア回路、或いは3)それらの組み合わせ、を含む回路(circuitry)として構成し得る。プロセッサは、例えば、CPUを含む。以下に記載する信号生成部31、判定部32、第1配信制御部33、及び第2配信制御部34により実行される各処理は、記憶部35に記憶されている制御プログラムを用いて実行される。 The signal generation unit 31, the determination unit 32, the first distribution control unit 33, and the second distribution control unit 34 may be configured as circuitry including 1) one or more processors that operate according to a computer program (software), 2) one or more dedicated hardware circuits, such as application-specific integrated circuits (ASICs), that execute at least part of the various processes, or 3) a combination thereof. The processor includes, for example, a CPU. Each process executed by the signal generation unit 31, the determination unit 32, the first distribution control unit 33, and the second distribution control unit 34 described below is executed using a control program stored in the storage unit 35.
 信号生成部31は、触感取得装置12により取得された触感情報から触感信号を生成する。触感信号は、取得された触感情報に基づく触感を提示するように触感提示装置14の振動アクチュエータ23を駆動するための信号である。触感取得装置12が複数である場合、信号生成部31は、触感取得装置12ごとに、取得された触感情報から触感信号を生成する。つまり、触感取得装置12により取得された全ての触感情報に対して、対応する触感信号を生成する。また、信号生成部31は、生成した触感信号を変調した変調触感信号を生成する。変調触感信号の詳細は後述する。上記した各触感信号は、電圧波形を示す信号であり、触感信号のボリュームは、電圧波形の振幅である。 The signal generation unit 31 generates a tactile signal from the tactile information acquired by the tactile sensation acquisition device 12. The tactile signal is a signal for driving the vibration actuator 23 of the tactile sensation presentation device 14 so as to present a tactile sensation based on the acquired tactile information. When there are a plurality of tactile sensation acquisition devices 12, the signal generation unit 31 generates a tactile signal from the acquired tactile information for each tactile sensation acquisition device 12. In other words, a corresponding tactile signal is generated for every piece of tactile information acquired by the tactile sensation acquisition devices 12. The signal generation unit 31 also generates a modulated tactile signal by modulating the generated tactile signal. The details of the modulated tactile signal will be described later. Each tactile signal described above is a signal representing a voltage waveform, and the volume of the tactile signal is the amplitude of the voltage waveform.
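To make the relationship between a tactile signal and its volume concrete, the following is a minimal Python sketch of how the signal generation unit 31 might map sampled tactile information to a voltage waveform whose amplitude is the signal's volume. The function names, the normalization, and the scaling are assumptions for illustration only; the publication does not specify them.

```python
# Hypothetical sketch only: the publication does not specify how the
# signal generation unit 31 maps tactile information to a waveform.

def generate_tactile_signal(samples, max_voltage=1.0):
    """Normalize raw tactile-sensor samples into a drive-voltage waveform.

    The "volume" of the tactile signal is the amplitude of this voltage
    waveform, so scaling the output scales the intensity of the
    presented tactile sensation.
    """
    peak = max(abs(s) for s in samples) or 1.0  # avoid division by zero
    return [max_voltage * s / peak for s in samples]

def set_volume(signal, gain):
    """Adjust the tactile signal's volume (waveform amplitude)."""
    return [gain * v for v in signal]
```

A pressure-sensor trace, for instance, would come out as a waveform in the range ±max_voltage, which the drive unit 24 could then apply between the DEA's electrode pair.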
 判定部32は、撮影装置11により取得した配信前の映像に、予め設定された判定事項が含まれているか否かを判定する。判定事項は、配信される触感情報の切り替わりを判定するために予め設定された要件である。本実施形態の判定事項は、配信が決定された映像に映っている主たる演技者が、現在、配信されている触感情報の由来となる演技者と異なることである。配信が決定された映像に映っている主たる演技者が、現在、配信されている触感情報の由来となる演技者と異なる場合、判定部32は、肯定判定を行う。そして、配信が決定された映像に映っている主たる演技者が、現在、配信されている触感情報の由来となる演技者と同じである場合、判定部32は、否定判定を行う。なお、配信開始直後のように、現在、配信されている触感情報がないタイミングにおいては、判定部32は、否定判定を行う。判定部32による判定方法としては、例えば、画像認識用AIを用いた判定方法などの公知の判定方法が挙げられる。また、判定部32により判定を行うタイミングは、判定に用いる映像の配信前の期間において任意に設定できる。 The determination unit 32 determines whether or not the pre-distribution video acquired by the imaging device 11 includes preset determination items. The determination item is a requirement set in advance for determining switching of tactile information to be distributed. The determination item of the present embodiment is that the main performer appearing in the video determined to be distributed is different from the performer from which the tactile information currently distributed is derived. If the main performer appearing in the video determined to be distributed is different from the performer from which the tactile information currently distributed is derived, the determination unit 32 makes an affirmative determination. Then, when the main performer appearing in the video decided to be distributed is the same as the performer from which the tactile information currently distributed is derived, the determination unit 32 makes a negative determination. It should be noted that at a timing when there is no tactile information being distributed at present, such as immediately after the start of distribution, the determination unit 32 makes a negative determination. Examples of the determination method by the determination unit 32 include a known determination method such as a determination method using AI for image recognition. Also, the timing of the determination by the determination unit 32 can be arbitrarily set during the period before the video used for determination is distributed.
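The determination described above can be sketched as a small predicate. This is a minimal illustration, assuming the main performer in the pre-distribution video and the source performer of the currently distributed tactile information have already been identified (for example, by image-recognition AI); the function name and its inputs are hypothetical.

```python
# Hypothetical sketch of the determination in the determination unit 32.

def determination(main_performer_in_video, current_tactile_source):
    """Affirmative (True) when the main performer in the video about to
    be distributed differs from the performer from whom the currently
    distributed tactile information originates.

    Negative (False) when they match, or when no tactile information is
    currently being distributed (current_tactile_source is None), such
    as immediately after distribution starts.
    """
    if current_tactile_source is None:
        return False
    return main_performer_in_video != current_tactile_source
```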
 第1配信制御部33は、撮影装置11により取得した映像及び音情報から配信する映像及び音情報を決定し、決定した映像及び音情報を視聴装置13により視聴可能な状態となるように配信する。撮影装置11が単数である場合、第1配信制御部33は、一つの撮影装置11により取得した映像及び音情報を、配信する映像及び音情報として決定する。そして、第1配信制御部33は、決定した映像及び音情報を配信する制御を行う。第1配信制御部33は、撮影装置11により映像及び音情報を取得してから配信するまでの間に僅かな時間差を設けて配信する。この時間差は、例えば、数秒から10分程度である。この時間差を設ける方法は特に限定されるものではなく、既存のディレイ放送等の配信技術を用いてもよい。撮影装置11により映像及び音情報を取得してから配信するまでの上記時間差に基づくタイミングにて、配信直前の映像に対する判定部32による判定処理が行われる。 The first distribution control unit 33 determines video and audio information to be distributed from the video and audio information acquired by the photographing device 11, and distributes the determined video and audio information so that it can be viewed by the viewing device 13. . When there is a single imaging device 11, the first distribution control unit 33 determines video and audio information acquired by one imaging device 11 as video and audio information to be distributed. Then, the first distribution control unit 33 performs control for distributing the determined video and audio information. The first distribution control unit 33 distributes the image and sound information with a slight time difference between the acquisition of the image and sound information by the photographing device 11 and the distribution. This time difference is, for example, several seconds to ten minutes. The method of providing this time difference is not particularly limited, and an existing distribution technology such as delay broadcasting may be used. At the timing based on the time difference from when the image and sound information is acquired by the photographing device 11 to when the information is distributed, the determination process by the determination unit 32 for the image immediately before distribution is performed.
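One way to picture the slight time difference between acquisition and distribution is a first-in first-out buffer keyed by acquisition time, sketched below. This is only an illustrative assumption about one possible implementation; as the text notes, an existing delayed-broadcast distribution technique may equally be used.

```python
from collections import deque

# Hypothetical sketch of the acquisition-to-distribution time difference
# in the first distribution control unit 33. Timestamps are passed in
# explicitly so the behaviour is deterministic.

class DelayBuffer:
    def __init__(self, delay_seconds):
        self.delay = delay_seconds
        self._queue = deque()

    def acquire(self, timestamp, frame):
        """Store a frame together with its acquisition time (step S1)."""
        self._queue.append((timestamp, frame))

    def due_for_distribution(self, now):
        """Pop and return every frame whose delay has elapsed.

        The window between acquisition and distribution is when the
        determination unit 32 can inspect the frame before it goes out.
        """
        out = []
        while self._queue and now - self._queue[0][0] >= self.delay:
            out.append(self._queue.popleft()[1])
        return out
```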
 撮影装置11が複数である場合、第1配信制御部33は、複数の撮影装置11により取得された複数の映像から配信する映像を決定する。また、第1配信制御部33は、複数の撮影装置11により取得された複数の音情報から配信する音情報を決定する。そして、第1配信制御部33は、決定した映像及び音情報を、上記時間差を設けつつ配信する。複数の映像から配信する映像を決定する処理、及び複数の音情報から配信する音情報を決定する処理は、例えば、配信するコンテンツ情報を切り替えるために作業者が行うスイッチング操作に基づいて実行される。 When there are a plurality of photographing devices 11, the first distribution control unit 33 determines the video to be distributed from the plurality of videos acquired by the plurality of photographing devices 11. The first distribution control unit 33 also determines the sound information to be distributed from the plurality of pieces of sound information acquired by the plurality of photographing devices 11. The first distribution control unit 33 then distributes the determined video and sound information while providing the above time difference. The process of determining the video to be distributed from a plurality of videos and the process of determining the sound information to be distributed from a plurality of pieces of sound information are executed, for example, based on a switching operation performed by an operator to switch the content information to be distributed.
 第2配信制御部34は、触感情報から生成された触感信号を、触感提示装置14により触感提示可能な状態となるように配信する。第2配信制御部34により配信される触感信号は、第1配信制御部33により配信が決定された映像に映る主たる演技者に基づく触感信号である。 The second distribution control unit 34 distributes the tactile signal generated from the tactile information in a state in which the tactile sensation presentation device 14 can present the tactile sensation. The tactile signal distributed by the second distribution control unit 34 is a tactile signal based on the main performer appearing in the video determined to be distributed by the first distribution control unit 33.
 第2配信制御部34は、第1配信制御部33により配信が決定された映像に対する判定部32の判定が肯定判定である場合、触感信号の配信の開始を遅らせる遅延制御を行う。遅延制御は、第1配信制御部による映像及び音情報の配信の開始から所定の遅延時間が経過した後に、触感信号の配信を開始する制御である。遅延時間は、例えば、100ミリ秒~3秒である。配信映像にトランジション効果がある場合、遅延時間は、トランジション効果の長さに合わせた時間に設定されることが好ましい。 The second distribution control unit 34 performs delay control to delay the start of distribution of the tactile signal when the determination by the determination unit 32 for the video determined to be distributed by the first distribution control unit 33 is positive. The delay control is control for starting distribution of the tactile signal after a predetermined delay time has elapsed since the first distribution control unit started distributing the video and audio information. The delay time is, for example, 100 milliseconds to 3 seconds. If the video to be distributed has a transition effect, the delay time is preferably set to match the length of the transition effect.
 遅延時間は、遅延時間の後に配信される触感信号に基づいてユーザーに提示される触感の表現の幅を広げるために設けられている。遅延時間において、触感提示装置14を動作させるための信号を配信するか否かは任意に設定できる。遅延時間における触感提示装置14の動作を制御することにより、遅延時間の後に配信される触感信号に基づいてユーザーに提示される触感の印象を制御できる。 The delay time is provided to expand the range of tactile sensations presented to the user based on the tactile signals delivered after the delay time. Whether or not to distribute a signal for operating the tactile sensation presentation device 14 during the delay time can be set arbitrarily. By controlling the operation of the tactile sense presentation device 14 during the delay time, it is possible to control the impression of the tactile sense presented to the user based on the tactile sense signal delivered after the delay time.
 例えば、遅延時間を、触感提示装置14を動作させないように触感信号を配信しない時間とする。この場合、遅延時間の後に触感提示装置14からユーザーに提示される触感が強調される。また、触感提示装置14からユーザーに提示される触感が切り替わったことを強調できる。遅延時間を、繋ぎ用の触感をユーザーに提示するように触感提示装置14を動作させるための繋ぎ用の触感情報を配信する時間とする。この場合、触感提示装置14からユーザーに提示される触感が急激に変化することに起因してユーザーに生じる違和感を低減できる。繋ぎ用の触感情報としては、例えば、遅延時間の前、遅延時間の後、或いは遅延時間の前後に配信される触感情報を変調した変調触感情報、及び多様な触感に対して親和性が高くなるように予め作成した繋ぎ用の触感情報が挙げられる。 For example, the delay time may be a period during which no tactile signal is distributed so that the tactile sensation presentation device 14 does not operate. In this case, the tactile sensation presented to the user from the tactile sensation presentation device 14 after the delay time is emphasized, and the fact that the presented tactile sensation has switched can also be emphasized. Alternatively, the delay time may be a period during which bridging tactile information for operating the tactile sensation presentation device 14 so as to present a bridging tactile sensation to the user is distributed. In this case, it is possible to reduce the discomfort the user would otherwise feel due to an abrupt change in the tactile sensation presented from the tactile sensation presentation device 14. Examples of the bridging tactile information include modulated tactile information obtained by modulating the tactile information distributed before the delay time, after the delay time, or both before and after it, and bridging tactile information created in advance so as to have high affinity with a wide variety of tactile sensations.
 なお、本実施形態では、一例として、遅延時間を、繋ぎ用の触感をユーザーに提示するように触感提示装置14を動作させるための変調触感情報を配信する時間とした場合について具体的に記載している。第2配信制御部34は、遅延時間において、変調触感情報としての変調触感信号を、触感提示装置14により触感提示可能な状態となるように配信する。変調触感信号は、フェードアウト信号とフェードイン信号とを重ね合わせた信号である。フェードアウト信号は、遅延時間の前に配信されていた触感信号を、遅延時間の間に徐々に小さくなるように変調した信号である。フェードイン信号は、遅延時間後に配信する触感信号を、遅延時間の間に徐々に大きくなるように変調した信号である。遅延時間が経過した後、第2配信制御部34は、変調触感信号の配信から触感信号の配信に切り替える。 In the present embodiment, as an example, the case where the delay time is a period during which modulated tactile information for operating the tactile sensation presentation device 14 so as to present a bridging tactile sensation to the user is distributed is specifically described. During the delay time, the second distribution control unit 34 distributes a modulated tactile signal, as the modulated tactile information, in a state in which the tactile sensation presentation device 14 can present the tactile sensation. The modulated tactile signal is a signal obtained by superimposing a fade-out signal and a fade-in signal. The fade-out signal is obtained by modulating the tactile signal that was being distributed before the delay time so that it gradually decreases during the delay time. The fade-in signal is obtained by modulating the tactile signal to be distributed after the delay time so that it gradually increases during the delay time. After the delay time has elapsed, the second distribution control unit 34 switches from distributing the modulated tactile signal to distributing the tactile signal.
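The superposition of a fade-out signal and a fade-in signal over the delay time can be sketched as a simple crossfade. Linear ramps and the per-sample indexing are assumptions for illustration; the embodiment does not fix the modulation shape.

```python
# Hypothetical sketch of the modulated tactile signal: the previous
# performer's tactile signal fades out while the next performer's
# fades in, and the two are superimposed over the delay time.

def modulated_tactile_signal(prev_signal, next_signal, n_samples):
    """Crossfade two tactile (voltage-waveform) signals over n_samples."""
    out = []
    for i in range(n_samples):
        ramp = i / (n_samples - 1) if n_samples > 1 else 1.0
        fade_out = (1.0 - ramp) * prev_signal[i % len(prev_signal)]
        fade_in = ramp * next_signal[i % len(next_signal)]
        out.append(fade_out + fade_in)  # superposition of both parts
    return out
```

With a constant previous signal of amplitude 1.0 and a next signal of amplitude 3.0, for example, three samples crossfade as 1.0, 2.0, 3.0.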
 記憶部35には、制御プログラムが記憶されている。記憶部35は、例えば、不揮発性メモリである。
 <コンテンツ配信方法>
 次に、上記のコンテンツ配信システム10を用いて、演技者の映像、音情報、及び触感情報を含むコンテンツ情報をライブ配信するコンテンツ配信方法について説明する。
A control program is stored in the storage unit 35. The storage unit 35 is, for example, a nonvolatile memory.
<Content delivery method>
Next, a content distribution method for live-distributing content information including video, sound information, and tactile information of actors using the content distribution system 10 described above will be described.
 (第1パターン)
 図5のタイミングチャートに示すように、コンテンツ配信方法の第1パターンは、演技者が演技者A,Bの2名であり、撮影装置11がカメラCAの1台である場合のコンテンツ配信方法である。
(first pattern)
As shown in the timing chart of FIG. 5, the first pattern of the content distribution method is a content distribution method for the case where there are two performers, performers A and B, and the photographing device 11 is a single camera CA.
 図4のフローチャートに基づいて、以下の場合を例に挙げて説明する。図5に示すように、配信開始からしばらくの間は、カメラCAにより演技者Aを撮影し、演技者Aのコンテンツ情報を配信する。その後、カメラCAにより演技者Bを撮影し、演技者Aのコンテンツ情報の配信から演技者Bのコンテンツ情報の配信に切り替える。 Based on the flowchart of FIG. 4, the following case will be described as an example. As shown in FIG. 5, for a while after the start of distribution, performer A is photographed by camera CA and the content information of performer A is distributed. After that, performer B is photographed by camera CA, and distribution is switched from the content information of performer A to the content information of performer B.
 ステップS1は、カメラCAにより演技者A,Bを撮影して映像及び音情報を取得する第1取得ステップである。ステップS1において、カメラマンは、カメラCAにより演技者Aを撮影し、所定時間が経過した後、撮影対象を演技者Aから演技者Bに変更し、演技者Bを撮影する。カメラCAにより演技者Aを撮影している間は、演技者Aに基づく映像及び音情報がカメラCAから情報処理装置15へ送信される。また、カメラCAにより演技者Bを撮影している間は、演技者Bに基づく映像及び音情報がカメラCAから情報処理装置15へ送信される。 Step S1 is a first acquisition step of photographing performers A and B with camera CA to acquire video and sound information. In step S1, the cameraman photographs performer A with camera CA, and after a predetermined time has elapsed, changes the photographing target from performer A to performer B and photographs performer B. While camera CA is photographing performer A, video and sound information based on performer A is transmitted from camera CA to the information processing device 15. While camera CA is photographing performer B, video and sound information based on performer B is transmitted from camera CA to the information processing device 15.
 ステップS2は、触感取得装置12により演技者Aの触感情報及び演技者Bの触感情報をそれぞれ取得する第2取得ステップである。触感取得装置12による演技者A及び演技者Bの各触感情報の取得は、ライブ配信の間を通して連続的又は間欠的に実行される。そして、取得された各触感情報は、触感取得装置12から情報処理装置15へ送信される。なお、ステップS1とステップS2は並行して実行される。 Step S2 is a second acquisition step of acquiring the tactile information of performer A and the tactile information of performer B with the tactile sensation acquisition device 12. The acquisition of the tactile information of performers A and B by the tactile sensation acquisition device 12 is executed continuously or intermittently throughout the live distribution. Each piece of acquired tactile information is then transmitted from the tactile sensation acquisition device 12 to the information processing device 15. Note that steps S1 and S2 are executed in parallel.
 ステップS3は、ステップS1にて取得した配信する映像及び音情報を決定するステップである。カメラがカメラCAの1台である本パターンにおいては、配信する映像及び音情報は1種類である。そのため、ステップS3において、第1配信制御部33は、常に、1台のカメラCAにより取得された映像及び当該映像に対応する音情報を、配信する映像及び音情報に決定する。 Step S3 is a step of determining the video and sound information to be distributed from among the video and sound information acquired in step S1. In this pattern, in which the only camera is camera CA, there is only one kind of video and sound information to be distributed. Therefore, in step S3, the first distribution control unit 33 always determines the video acquired by the single camera CA and the sound information corresponding to that video as the video and sound information to be distributed.
 ステップS4は、ステップS3にて配信が決定された配信前の映像に判定事項が含まれているか否かを判定する判定ステップである。ステップS4において、判定部32は、ステップS3にて配信が決定された配信前の映像について、配信するまでのタイミングにて、当該映像に映っている主たる演技者が、現在、配信されている触感情報の由来となる演技者と異なるか否かに基づいて肯定判定(YES)又は否定判定(NO)を行う。配信開始時は、現在、配信されている触感情報がないため、判定部32の判定は、否定判定となる。 Step S4 is a determination step of determining whether the pre-distribution video determined for distribution in step S3 includes the determination item. In step S4, the determination unit 32 makes, at some timing before the video determined for distribution in step S3 is distributed, an affirmative determination (YES) or a negative determination (NO) based on whether the main performer appearing in that video differs from the performer from whom the currently distributed tactile information originates. At the start of distribution, there is no tactile information currently being distributed, so the determination by the determination unit 32 is negative.
 ステップS5a,S5bは、決定された映像及び音情報の配信を開始するステップである。ステップS5aは、ステップS4の判定が否定判定である場合に行われるステップであり、ステップS5bは、ステップS4の判定が肯定判定である場合に行われるステップである。ステップS5a,S5bは、ステップS4の判定が肯定判定である場合に行われる処理であるか、否定判定である場合に行われる処理であるかの点で異なるものであり、ステップS5a,S5bにおける処理は同じである。 Steps S5a and S5b are steps of starting distribution of the determined video and sound information. Step S5a is performed when the determination in step S4 is negative, and step S5b is performed when the determination in step S4 is affirmative. Steps S5a and S5b differ only in whether they follow a negative or an affirmative determination in step S4; the processing in steps S5a and S5b is the same.
 ステップS5a,S5bにおいて、第1配信制御部33は、撮影装置11により映像及び音情報を取得してから配信するまでの間に僅かな時間差を設けて映像及び音情報の配信を開始する。 In steps S5a and S5b, the first distribution control unit 33 starts distributing the video and audio information with a slight time lag between the acquisition of the video and audio information by the photographing device 11 and the distribution thereof.
 ステップS6は、触感取得装置12により取得した触感情報に基づく触感信号を配信するステップである。カメラCAにより演技者Aを撮影し続けている期間のステップS6において、ステップS5a,S5bにて配信される映像に映る主たる演技者が演技者Aである場合、第2配信制御部34は、演技者Aの触感情報に基づく触感信号を、当該映像に同期させて配信する。そして、上記の主たる演技者が演技者Bである場合、第2配信制御部34は、演技者Bの触感情報に基づく触感信号を、当該映像に同期させて配信する。 Step S6 is a step of distributing a tactile signal based on the tactile information acquired by the tactile sensation acquisition device 12. In step S6 during the period in which camera CA continues to photograph performer A, when the main performer appearing in the video distributed in steps S5a and S5b is performer A, the second distribution control unit 34 distributes a tactile signal based on the tactile information of performer A in synchronization with that video. When the main performer is performer B, the second distribution control unit 34 distributes a tactile signal based on the tactile information of performer B in synchronization with that video.
 ステップS7は、所定の遅延時間、変調触感情報としての変調触感信号を配信するステップである。ステップS7は、ステップS4の判定が肯定判定である場合に、ステップS5bに続いて行われる。図4に示すように、第2配信制御部34は、フェードアウト信号とフェードイン信号とを重ね合わせた変調触感信号を配信する。フェードアウト信号は、遅延時間の前に配信されていた演技者、例えば、演技者Aの触感情報に基づく触感信号を変調した信号である。フェードイン信号は、ステップS5bにて配信される映像に映っている演技者、例えば、演技者Bの触感情報に基づく触感信号を変調した信号である。 Step S7 is a step of distributing, for the predetermined delay time, a modulated tactile signal as the modulated tactile information. Step S7 is performed following step S5b when the determination in step S4 is affirmative. As shown in FIG. 4, the second distribution control unit 34 distributes a modulated tactile signal obtained by superimposing a fade-out signal and a fade-in signal. The fade-out signal is a signal obtained by modulating the tactile signal based on the tactile information of the performer whose content was being distributed before the delay time, for example performer A. The fade-in signal is a signal obtained by modulating a tactile signal based on the tactile information of the performer appearing in the video distributed in step S5b, for example performer B.
 ステップS4の判定が否定判定である場合、ステップS4からステップS5a及びステップS6へと移行する。また、ステップS4の判定が肯定判定である場合、ステップS4からステップS5b及びステップS7へと移行し、その後、ステップS7からステップS6へと移行する。情報処理装置15は、ステップS6の後、ステップS4へと戻り、ステップS4以降の処理を繰り返す。これにより、情報処理装置15は、映像、音情報、及び触感情報を継続的に配信する。 If the determination in step S4 is negative, the process moves from step S4 to steps S5a and S6. If the determination in step S4 is affirmative, the process proceeds from step S4 to step S5b and step S7, and then from step S7 to step S6. After step S6, the information processing device 15 returns to step S4, and repeats the processes after step S4. Accordingly, the information processing device 15 continuously distributes video, sound information, and tactile information.
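The branch structure just described, in which step S4 leads either to steps S5a and S6 or to steps S5b, S7, and S6, can be sketched as one cycle of a dispatch loop. The step labels follow the flowchart of FIG. 4; the function, its arguments, and its return values are hypothetical.

```python
# Hypothetical sketch of one cycle of the repeated flow from step S4.

def run_cycle(next_performer, current_tactile_source):
    """Return the ordered step labels for one cycle, plus the performer
    who becomes the new source of the distributed tactile information."""
    steps = []
    affirmative = (current_tactile_source is not None
                   and next_performer != current_tactile_source)
    if affirmative:
        steps.append("S5b")  # start distributing the new video/sound
        steps.append("S7")   # modulated tactile signal during the delay time
    else:
        steps.append("S5a")  # start distributing video/sound as before
    steps.append("S6")       # tactile signal of the performer on screen
    return steps, next_performer
```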
 図5に示すように、配信開始からしばらくの期間においては、カメラCAによる演技者Aの撮影が継続される。このとき、演技者Aに基づく映像及び音情報がカメラCAから情報処理装置15へ送信される。本期間において、ステップS3にて配信が決定された映像に映る演技者は、演技者Aであり、ステップS6にて、現在、配信されている触感情報の由来となる演技者も演技者Aである。 As shown in FIG. 5, for a while after the start of distribution, the photographing of performer A by camera CA continues. During this time, video and sound information based on performer A is transmitted from camera CA to the information processing device 15. In this period, the performer appearing in the video determined for distribution in step S3 is performer A, and the performer from whom the tactile information currently being distributed in step S6 originates is also performer A.
 そのため、本期間のステップS4における判定部32の判定は、配信開始時を含めて否定判定となる。配信開始時は、現在、配信されている触感情報がないため、判定部32の判定は、否定判定となる。このとき、第1配信制御部33は、ステップS5aとして、ステップS3にて配信を決定した映像、即ち、演技者Aの映る映像、及び対応する音情報の配信を開始する。そして、第2配信制御部34は、ステップS6として、ステップS5aにて映像が配信されている演技者Aの触感情報に基づく触感信号を配信する。 Therefore, the determination by the determination unit 32 in step S4 during this period is negative, including at the start of distribution. At the start of distribution, the determination by the determination unit 32 is negative because there is no tactile information currently being distributed. In this case, in step S5a, the first distribution control unit 33 starts distributing the video determined for distribution in step S3, that is, the video showing performer A, and the corresponding sound information. Then, in step S6, the second distribution control unit 34 distributes a tactile signal based on the tactile information of performer A, whose video is being distributed in step S5a.
 In step S1 after a predetermined time has elapsed from the start of distribution, the cameraman changes the subject being photographed from performer A to performer B. As a result, video and sound information based on performer B is transmitted from camera CA to the information processing device 15.
 In step S4 of the cycle immediately after the subject is changed from performer A to performer B, the performer appearing in the video determined for distribution in step S3 is performer B, while the performer from whom the currently distributed tactile information is derived is performer A. Since the two performers differ, the determination by the determination unit 32 in step S4 is affirmative. In this case, in step S5b, the first distribution control unit 33 starts distributing the video determined for distribution in step S3, i.e., the video showing performer B, together with the corresponding sound information. Then, in step S7, the second distribution control unit 34 distributes the modulated tactile signal for a predetermined delay time from the start of distribution of the video showing performer B, after which, in step S6, it distributes a tactile signal based on the tactile information of performer B, whose video is being distributed in step S5b. During this delay time, the first distribution control unit 33 continues distributing the video showing performer B acquired in step S1.
 Thereafter, camera CA continues to photograph performer B. While the photographing of performer B continues, the performer appearing in the video determined for distribution in step S3 is performer B, and in step S6 the performer from whom the currently distributed tactile information is derived is also performer B. Therefore, the determination by the determination unit 32 in step S4 during this period is negative, and after step S4, distribution is performed in steps S5a and S6.
 The user can view the video and sound information distributed in steps S5a and S5b on the viewing device 13. At the same time, the vibration actuator 23 of the tactile presentation device 14 vibrates based on the tactile signal distributed in step S6 and the modulated tactile signal distributed in step S7. As a result, a tactile sensation based on performer A or performer B is presented to the user touching the presentation unit 21 of the tactile presentation device 14. In the first pattern, the first distribution step consists of steps S3, S5a, and S5b, and the second distribution step consists of steps S6 and S7.
 (Second pattern)
 As shown in the timing chart of FIG. 6, the second pattern of the content distribution method applies when there are two performers, A and B, and the photographing device 11 consists of two cameras: camera CA, which photographs performer A, and camera CB, which photographs performer B.
 The following case will be described as an example based on the flowchart in FIG. 4. As shown in FIG. 6, the content information of performer A is distributed for a while after the start of distribution; thereafter, distribution switches from the content information of performer A to that of performer B.
 In step S1, the acquisition of video and sound information based on performer A by camera CA and the acquisition of video and sound information based on performer B by camera CB are performed simultaneously. The acquired video and sound information based on performer A is transmitted from camera CA to the information processing device 15, and the acquired video and sound information based on performer B is transmitted from camera CB to the information processing device 15.
 Step S2 is the same as in the first pattern.
 Step S3 is the step of determining which of the video and sound information acquired in step S1 to distribute. In this pattern, where the cameras are the two cameras CA and CB, there are two sets of video and sound information available for distribution. Therefore, in step S3, the first distribution control unit 33 determines the video and sound information to distribute from the video and sound information based on performer A acquired by camera CA and the video and sound information based on performer B acquired by camera CB. This determination is made based on the input of a switching operation performed by an operator to switch the content information being distributed.
 Steps S4 to S7 are the same as in the first pattern.
 As shown in FIG. 6, the content information of performer A is distributed for a while after the start of distribution. In step S3 of this period, the first distribution control unit 33 determines the video and sound information based on performer A acquired by camera CA as the video and sound information to distribute. In this period, the performer appearing in the video determined for distribution in step S3 is performer A, and in step S6 the performer from whom the currently distributed tactile information is derived is also performer A. The processing from step S4 onward in this period is the same as in the first pattern: the determination by the determination unit 32 in step S4 is negative; then, in step S5a, distribution of the video determined in step S3, i.e., the video showing performer A, and the corresponding sound information begins; and in step S6, the second distribution control unit 34 distributes a tactile signal based on the tactile information of performer A, whose video is being distributed in step S5a.
 As shown in FIG. 6, the content information of performer B is distributed for a while after a predetermined time has elapsed from the start of distribution. In step S3 of this period, the first distribution control unit 33 determines the video and sound information based on performer B acquired by camera CB as the video and sound information to distribute. That is, in step S3 after the predetermined time has elapsed from the start of distribution, the first distribution control unit 33 determines the video and sound information to distribute so as to switch from the video and sound information based on performer A to that based on performer B.
 In step S4 of the cycle immediately after the video and sound information determined in step S3 changes to that based on performer B, the performer appearing in the video determined for distribution in step S3 is performer B, while the performer from whom the currently distributed tactile information is derived is performer A. Since the two performers differ, the determination by the determination unit 32 in step S4 is affirmative. When the determination in step S4 is affirmative, the process moves from step S4 to steps S5b and S7, and after step S7, to step S6.
 Thereafter, during the period in which the content information of performer B is distributed, the first distribution control unit 33 continues to determine the video and sound information based on performer B acquired by camera CB as the video and sound information to distribute. In this period, the performer appearing in the video determined for distribution in step S3 is performer B, and in step S6 the performer from whom the currently distributed tactile information is derived is also performer B. Therefore, the determination by the determination unit 32 in step S4 during this period is negative, and after step S4, distribution is performed in steps S5a and S6. The specific processing in steps S4 to S7 is the same as in the first pattern.
 (Third pattern)
 As shown in the timing chart of FIG. 7, the third pattern of the content distribution method applies when there are two performers, A and B, and the photographing device 11 consists of two cameras: camera CA, which photographs performer A, and camera CB, which photographs performer B.
 The third pattern of the content distribution method will be described based on the flowchart shown in FIG. 8. The third pattern differs from the second pattern in the configuration of the determination step; description of the parts common to the second pattern is omitted.
 The content distribution method of the third pattern has steps S4a and S4b as determination steps in place of step S4 of the second pattern.
 Step S4a is the step of determining whether each video acquired in step S1 includes the determination item and storing the determination result. In step S4a, the determination unit 32 intermittently determines, at a predetermined cycle after acquisition, whether each video acquired in step S1 includes the determination item. The determination unit 32 then stores the determination result in the storage unit 35 for each video acquired in step S1, or updates the determination result stored in the storage unit 35.
 Step S4b is the step of determining, based on the most recent determination result stored in the storage unit 35, whether the video determined for distribution in step S3 includes the determination item. In step S4b, the determination unit 32 determines that the determination item is included when the most recent stored determination result for the video determined in step S3 is affirmative, and that it is not included when that result is negative. If the determination in step S4b is negative, the process moves from step S4b to steps S5a and S6. If the determination in step S4b is affirmative, the process moves from step S4b to steps S5b and S7, and after step S7, to step S6.
 In the third pattern, for example, even while the video of performer A is being distributed, the determination unit 32 simultaneously processes the video of performer B, which is not currently being distributed. Suppose that, in step S3 after a predetermined time has elapsed from the start of distribution, the first distribution control unit 33 determines the video and sound information to distribute so as to switch from the video and sound information based on performer A to that based on performer B. At this timing, i.e., when the video based on performer B is determined as the video to distribute, an affirmative or negative determination result for that video has already been obtained. Therefore, in step S4b after the video based on performer B is determined for distribution, whether that video includes the determination item can be determined simply by referring to the most recent determination result stored in the storage unit 35. In this case, no substantive determination processing is needed after the video to distribute is determined in step S3, so the time lag provided between the acquisition of video and sound information by the photographing device 11 and its distribution can be reduced or eliminated.
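The caching behavior of steps S4a and S4b can be sketched as follows. This is an illustrative assumption about one possible realization; the class and method names are invented for the sketch, and the "determination" is reduced to a performer comparison.

```python
# Sketch of the third pattern: determination results are computed
# intermittently for every camera feed (step S4a) and cached, so that
# step S4b is a cache lookup rather than fresh video analysis.
# All identifiers are illustrative assumptions.

class Determiner:
    def __init__(self):
        self.results = {}            # camera id -> latest result (storage unit 35)
        self.current_performer = None  # performer the distributed tactile info derives from

    def update(self, camera_id, performer_in_frame):
        """Step S4a: run intermittently for each acquired video."""
        self.results[camera_id] = (
            self.current_performer is not None
            and performer_in_frame != self.current_performer
        )

    def judge_selected(self, camera_id):
        """Step S4b: refer to the most recent stored result only."""
        return self.results.get(camera_id, False)

d = Determiner()
d.current_performer = "A"        # tactile info currently derives from performer A
d.update("camA", "A")            # background check on camera CA's feed
d.update("camB", "B")            # background check on camera CB's feed
# When the operator switches distribution to camera CB, the answer is ready:
print(d.judge_selected("camB"))  # True: the determination item is included
```

Because `judge_selected` only reads the cache, no analysis runs between source selection and distribution, which is the time-lag reduction the text describes.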
 Next, the operation and effects of the above embodiment will be described.
 (1) A content distribution method for live-distributing content information including a performer's video, sound information, and tactile information includes a first acquisition step, a second acquisition step, a determination step, and a distribution step. The first acquisition step is the step of photographing the performer to acquire the video and sound information. The second acquisition step is the step of acquiring the performer's tactile information. The determination step is the step of determining whether the video acquired in the first acquisition step includes a determination item related to switching of the tactile information to be distributed.
 The distribution step is the step of distributing the acquired content information. It includes a first distribution step of determining and distributing the video and sound information to distribute, and a second distribution step of distributing the tactile information corresponding to the video determined in the first distribution step. When the video determined in the first distribution step includes the determination item, the second distribution step starts distributing the tactile information after a predetermined delay time has elapsed from the start of distribution of the video and sound information in the first distribution step.
 According to the above configuration, when the tactile information distributed in the second distribution step switches, a delay time is provided before the post-switch tactile information is distributed; in other words, distribution of the post-switch tactile information is deliberately delayed. By adjusting the tactile sensation presented to the user by the tactile presentation device 14 during this delay time, i.e., by controlling the operation of the tactile presentation device 14 during the delay time, the impression made by the tactile signal distributed after the delay time can be controlled. This widens the range of tactile expression based on the tactile signal distributed after the delay time, and effectively enhances the user's sense of immersion and presence in the live distribution.
 (2) The content distribution method includes distributing, during the delay time, modulated tactile information obtained by modulating the tactile information distributed before and after the delay time, and switching from distribution of the modulated tactile information to distribution of the tactile information after the delay time.
 According to the above configuration, the discomfort caused to the user by an abrupt change in the tactile sensation presented by the tactile presentation device 14 can be reduced.
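One concrete way to realize the modulated tactile information of the delay period is a linear crossfade from the pre-switch waveform to the post-switch waveform. This specific modulation is an assumption for illustration; the embodiment only states that the tactile information before and after the delay is modulated, without fixing a method.

```python
# Illustrative sketch: blend the pre-switch and post-switch tactile
# waveforms over the delay time so the presented sensation changes
# gradually instead of abruptly. The linear weighting is an assumption.

def crossfade(before, after, steps):
    """Interpolate between two equal-length sample lists over `steps` frames."""
    out = []
    for i in range(steps):
        w = i / (steps - 1)  # weight ramps from 0.0 (all `before`) to 1.0 (all `after`)
        out.append([(1 - w) * b + w * a for b, a in zip(before, after)])
    return out

# Strong vibration fading into silence over three frames of the delay time:
frames = crossfade([1.0, 1.0], [0.0, 0.0], 3)
print(frames)  # [[1.0, 1.0], [0.5, 0.5], [0.0, 0.0]]
```

Each intermediate frame lies between the two source waveforms, which is the gradual transition that reduces the user's sense of incongruity.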
 (3) Regarding the second pattern, the content distribution method targets a plurality of performers for distribution. The determination item is that the performer appearing in the video differs from the performer from whom the currently distributed tactile information is derived.
 When the performer from whom the tactile information is derived switches, the tactile sensation presented to the user tends to change greatly, which often makes the user feel uncomfortable. Therefore, using a determination item associated with a switch of performer reduces this discomfort more effectively.
 (4) Regarding the third pattern, the first acquisition step is the step of simultaneously acquiring a plurality of videos using a plurality of photographing devices 11. The content distribution method includes performing the processing of the determination step simultaneously on each of the plurality of videos during the distribution step.
 According to the above configuration, no substantive determination processing is needed after the video and sound information to distribute is determined. Therefore, the time lag provided between the acquisition of video and sound information by the photographing device 11 and its distribution can be reduced or eliminated.
 The present embodiment can be modified and implemented as follows. The present embodiment and the following modifications can be combined with each other to the extent that no technical contradiction arises.
 - The tactile information distributed in the second distribution step is not limited to a tactile signal based on tactile information acquired from the performer. For example, the tactile information itself acquired from the performer may be distributed. Instead of tactile information acquired from the performer, tactile information generated based on the video captured by the photographing device 11 may be distributed.
 Alternatively, a tactile information library containing a plurality of pieces of tactile information may be created in advance and stored in the storage unit 35, and tactile information selected from the library based on the video captured by the photographing device 11 may be distributed. In this case, the tactile acquisition device 12 may be a device having, for example, a function of generating tactile information based on the video captured by the photographing device 11, or a function of selecting tactile information corresponding to the video from a plurality of pieces of prestored tactile information. The tactile acquisition device 12 in this case may be part of the information processing device 15.
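The library-based modification above can be sketched as a simple lookup. The mapping key (a label detected from the video) and the library contents are illustrative assumptions, since the embodiment does not specify how videos are mapped to library entries.

```python
# Illustrative sketch of the tactile information library modification:
# a prebuilt table of waveforms, selected by a label assumed to be
# detected from the captured video. All entries are invented examples.

TACTILE_LIBRARY = {
    "singing": [0.2, 0.4, 0.2],   # assumed waveform samples
    "dancing": [0.8, 0.6, 0.9],
    "default": [0.1, 0.1, 0.1],   # fallback when no label matches
}

def select_tactile(detected_action):
    """Pick the library entry for the detected action, or the fallback."""
    return TACTILE_LIBRARY.get(detected_action, TACTILE_LIBRARY["default"])

print(select_tactile("dancing"))   # [0.8, 0.6, 0.9]
print(select_tactile("unknown"))   # [0.1, 0.1, 0.1]
```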
 - The content information is not limited to the performer's video, sound information, and tactile information. Other content information includes, for example, guidance information for superimposing, on the video displayed on the viewing device 13, a visual effect that conveys changes in the tactile information.
 - The system may be configured so that the tactile information to be distributed can be selected based on a user operation. For example, when a plurality of pieces of tactile information are acquired from a single performer, the information processing device 15 distributes all of them. By operating the tactile presentation device 14, the user selects arbitrary tactile information from the distributed pieces and drives the tactile presentation device 14 based on the selected tactile information.
 - In the second distribution step, when the tactile information that should be distributed cannot be distributed normally, previously created pseudo tactile information may be distributed instead. Cases in which the tactile information cannot be distributed normally include, for example, failure to acquire tactile information from the performer, or noise contained in the tactile information acquired from the performer.
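The fallback described in this modification can be sketched as follows. The noise criterion and all names are illustrative assumptions; the embodiment does not specify how noise is detected.

```python
# Sketch of the pseudo-tactile fallback: when acquisition fails or the
# acquired signal is judged noisy, a pre-created pseudo waveform is
# distributed instead. All values and the noise test are assumptions.

PSEUDO_TACTILE = [0.3, 0.3, 0.3]   # previously created pseudo tactile info

def is_noisy(samples, limit=1.0):
    # Assumed criterion: any sample outside the expected amplitude range.
    return any(abs(s) > limit for s in samples)

def tactile_to_distribute(acquired):
    """Return the acquired tactile info, or the pseudo info on failure/noise."""
    if acquired is None or is_noisy(acquired):
        return PSEUDO_TACTILE
    return acquired

print(tactile_to_distribute(None))          # acquisition failed
print(tactile_to_distribute([0.2, 5.0]))    # noisy signal
print(tactile_to_distribute([0.2, 0.4]))    # normal case: passed through
```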
 - The determination item used in the determination step is not limited to that of the above embodiment, as long as it allows a switch of the distributed tactile information to be determined. There may also be a plurality of determination items.
 - In the third pattern, the processing of the determination step performed during the distribution step may be only part of the processing of the determination step. For example, when the determination step includes a first step and a second step performed in order, only the first step may be performed during the distribution step, with the second step performed after the video to distribute is determined. The first step is, for example, a step of determining based on a first determination item, and the second step is a step of determining based on a second determination item different from the first.
 - The tactile information may be edited before distribution based on the performer's state or actions, or on the content of the distributed video. For example, when a predetermined condition is satisfied, the information processing device 15 distributes tactile information edited so that the tactile sensation presented to the user by the tactile presentation device 14 becomes stronger or weaker. Examples of the predetermined condition include the performer's face taking on a specific expression, such as a smile with raised corners of the mouth; the video showing a close-up of the performer's face; or the performer performing a specific action, such as vibrato.
 - The vibration actuator 23 is not limited to a DEA and may be any known vibration actuator used in tactile presentation devices. Known vibration actuators include, for example, other electroactive polymer actuators (EPAs) such as ionic polymer-metal composites (IPMCs), eccentric motors, linear resonant actuators, voice coil actuators, and piezo actuators.

Claims (5)

  1.  A content distribution method for live-distributing content information including a performer's video, sound information, and tactile information for operating a tactile presentation device, the method comprising:
     a first acquisition step of photographing the performer to acquire the video and the sound information;
     a second acquisition step of acquiring or generating the tactile information of the performer;
     a determination step of determining whether the video acquired in the first acquisition step includes a determination item for determining a switch of the tactile information to be distributed; and
     a distribution step of distributing the acquired content information,
     wherein the distribution step includes:
     a first distribution step of determining and distributing the video and the sound information to be distributed; and
     a second distribution step of distributing the tactile information corresponding to the video determined in the first distribution step, and
     wherein, when the video determined in the first distribution step includes the determination item, the second distribution step starts distributing the tactile information after a predetermined delay time has elapsed from the start of distribution of the video and the sound information in the first distribution step.
  2.  The content distribution method according to claim 1, further comprising:
     distributing, during the delay time, modulated tactile information obtained by modulating the tactile information distributed before the delay time, after the delay time, or both before and after the delay time, or previously created connecting tactile information for bridging; and
     switching, after the delay time, from distribution of the modulated tactile information or the connecting tactile information to distribution of the tactile information.
  3.  The content distribution method according to claim 1 or 2, wherein the first acquisition step is a step of simultaneously acquiring a plurality of videos, and
     the content distribution method comprises performing at least part of the processing of the determination step simultaneously on each of the plurality of videos during the distribution step.
  4.  The content distribution method according to any one of claims 1 to 3, wherein there are a plurality of the performers, and
     the determination item is that the performer appearing in the video differs from the performer from whom the currently distributed tactile information is derived.
  5.  A content distribution system for use in the content distribution method according to any one of claims 1 to 4, comprising:
     a photographing device that acquires the video and the sound information;
     a tactile sense acquisition device that acquires or generates the tactile information of the performer;
     a viewing device configured to be used by a user to view the distributed video and sound information;
     a tactile presentation device configured to present a tactile sensation based on the distributed tactile information to the user; and
     an information processing device that distributes the acquired content information to the viewing device and the tactile presentation device,
     wherein the information processing device comprises:
     a determination unit that determines whether or not the video captured by the photographing device includes the determination item;
     a first distribution control unit that determines and distributes the video and the sound information to be distributed; and
     a second distribution control unit that distributes the tactile information corresponding to the video determined by the first distribution control unit,
     wherein, when the video determined by the first distribution control unit includes the determination item, the second distribution control unit starts distribution of the tactile information after a predetermined delay time has elapsed from the start of distribution of the video and the sound information by the first distribution control unit.
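As a non-limiting illustration (not part of the application), the delayed-start behavior recited in claims 1, 2, and 5 can be sketched in code: when the video selected for distribution contains the determination item (for example, a camera switch to a different performer), the second distribution control unit withholds the tactile information until a predetermined delay time has elapsed from the start of video and sound distribution. All names below (`SecondDistributionControl`, `DELAY_TIME`, and so on) are hypothetical and chosen only for this sketch.

```python
# Illustrative sketch of the claimed delayed-start logic; timestamps are
# plain floats in seconds rather than a real media clock.

DELAY_TIME = 0.5  # predetermined delay time (illustrative value)

class SecondDistributionControl:
    """Decides when tactile information may be distributed."""

    def __init__(self, delay_time):
        self.delay_time = delay_time
        self.tactile_start = None  # earliest time tactile distribution may begin

    def on_video_distribution_start(self, has_determination_item, now):
        # If the selected video contains the determination item, defer the
        # start of tactile distribution by the predetermined delay time;
        # otherwise tactile distribution may begin immediately.
        if has_determination_item:
            self.tactile_start = now + self.delay_time
        else:
            self.tactile_start = now

    def may_distribute_tactile(self, now):
        # During the delay window, modulated or connecting tactile
        # information would be distributed instead (claims 1 and 2).
        return self.tactile_start is not None and now >= self.tactile_start

ctrl = SecondDistributionControl(DELAY_TIME)
ctrl.on_video_distribution_start(has_determination_item=True, now=0.0)
print(ctrl.may_distribute_tactile(0.2))  # → False (still within the delay window)
print(ctrl.may_distribute_tactile(0.6))  # → True (delay elapsed)
```

Under claim 2, the system would distribute the modulated or connecting tactile information while `may_distribute_tactile` returns `False`, then switch to the performer-derived tactile information once the delay has passed.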
PCT/JP2021/038157 2021-10-15 2021-10-15 Content distribution method and content distribution system WO2023062802A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/038157 WO2023062802A1 (en) 2021-10-15 2021-10-15 Content distribution method and content distribution system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/038157 WO2023062802A1 (en) 2021-10-15 2021-10-15 Content distribution method and content distribution system

Publications (1)

Publication Number Publication Date
WO2023062802A1 true WO2023062802A1 (en) 2023-04-20

Family

ID=85988198

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/038157 WO2023062802A1 (en) 2021-10-15 2021-10-15 Content distribution method and content distribution system

Country Status (1)

Country Link
WO (1) WO2023062802A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100835297B1 (en) * 2007-03-02 2008-06-05 광주과학기술원 Node structure for representing tactile information, method and system for transmitting tactile information using the same
KR20140004510A (en) * 2012-07-03 2014-01-13 김인수 Emotional message service editor system and the method
JP2017033536A * 2015-07-29 2017-02-09 Immersion Corporation Crowd-based haptics
JP2018033155A * 2005-10-19 2018-03-01 Immersion Corporation Synchronization of haptic effect data in media transport stream
US20200241643A1 (en) * 2017-10-20 2020-07-30 Ck Materials Lab Co., Ltd. Haptic information providing system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018033155A * 2005-10-19 2018-03-01 Immersion Corporation Synchronization of haptic effect data in media transport stream
KR100835297B1 (en) * 2007-03-02 2008-06-05 광주과학기술원 Node structure for representing tactile information, method and system for transmitting tactile information using the same
KR20140004510A (en) * 2012-07-03 2014-01-13 김인수 Emotional message service editor system and the method
JP2017033536A * 2015-07-29 2017-02-09 Immersion Corporation Crowd-based haptics
US20200241643A1 (en) * 2017-10-20 2020-07-30 Ck Materials Lab Co., Ltd. Haptic information providing system

Similar Documents

Publication Publication Date Title
US10593167B2 (en) Crowd-based haptics
CN107049280B (en) Wearable equipment of mobile internet intelligence
EP3206109B1 (en) Low-frequency effects haptic conversion system
EP3691280B1 (en) Video transmission method, server, vr playback terminal and computer-readable storage medium
US20190267043A1 (en) Automated haptic effect accompaniment
US10692336B2 (en) Method and schemes for perceptually driven encoding of haptic effects
KR20150028730A (en) Haptic warping system that transforms a haptic signal into a collection of vibrotactile haptic effect patterns
JP6241802B1 (en) Video distribution system, user terminal device, and video distribution method
US11482086B2 (en) Drive control device, drive control method, and program
JP7465019B2 (en) Information processing device, information processing method, and information processing program
US11887281B2 (en) Information processing device, head-mounted display, and image displaying method
WO2023062802A1 (en) Content distribution method and content distribution system
WO2017203834A1 (en) Information processing device, information processing method, and program
JP2010199740A (en) Apparatus and method of displaying stereoscopic images
WO2019138867A1 (en) Information processing device, method, and program
US10212533B2 (en) Method and system for synchronizing vibro-kinetic effects to a virtual reality session
JP2016179054A (en) Information processing equipment, stress-relieving system, and stress-relieving method
CN111448805B (en) Apparatus, method, and computer-readable storage medium for providing notification
KR20140006424A (en) Method for embodiment sensible vibration based on sound source
JP5388032B2 (en) Remote communication system, control device, control method and program
WO2022190919A1 (en) Information processing device, information processing method, and program
JP2014179913A (en) Interface device, imaging apparatus provided therewith, and control method for interface device
JP2006075422A (en) Electric sexual appliance control system
JP2002262138A (en) Image pickup system, video conference system, monitoring system, and information terminal with image pickup function
JP2023035787A (en) Meeting assistance system, meeting assistance method, and meeting assistance program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21960659

Country of ref document: EP

Kind code of ref document: A1