WO2021095573A1 - Information Processing System, Information Processing Method, and Program - Google Patents
Information Processing System, Information Processing Method, and Program
- Publication number: WO2021095573A1 (PCT/JP2020/040878)
- Authority: WIPO (PCT)
- Prior art keywords: viewer, line of sight, effect, performer
Classifications
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
- G06F3/013—Eye tracking input arrangements
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- H04N21/2187—Live feed
- H04N21/258—Client or end-user data management, e.g. managing client capabilities, user preferences or demographics
- H04N21/44213—Monitoring of end-user related data
- H04N21/44218—Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
Definitions
- This technology relates to an information processing system, an information processing method, and a program capable of presenting information about viewers to a performer in a system that distributes content capturing the performer's performance to viewers in real time via a network.
- Conventionally, video distribution such as movie content has been realized as a one-way system in which content data is distributed from a distributor to viewers, who simply enjoy the distributed content.
- In such systems, the means of communication from the viewer to the distributor has mainly been text information and voice information.
- By superimposing text information entered by viewers onto the distributed video, communication is realized not only between the distributor and the viewers but also among the viewers themselves.
- Patent Document 1 discloses a means by which a plurality of users carry out equal, text-based communication in the same virtual space.
- Patent Document 2 discloses a means by which users consuming the same content grasp each other's states.
- In live distribution, the distributor acquires image and audio data of the performer in real time and distributes those data.
- Such content is distributed to movie theaters as content displayed on a screen, and to households as content viewable on a TV or on an HMD (Head-Mounted Display).
- In view of the above circumstances, the purpose of this technology is to provide an information processing system, an information processing method, and a program that enable a performer appearing in content delivered in real time to perform according to the reactions of viewers in remote locations.
- In order to achieve the above purpose, an information processing system according to one form of the present technology has a control unit.
- The control unit acquires, from the terminal of a viewer who is playing back content in which the performer's performance is captured in real time via a network, a line-of-sight parameter indicating the viewer's line of sight in the coordinate system of the space in which the viewer exists, together with viewer identification information identifying the viewer. The control unit converts the acquired line-of-sight parameter into a line-of-sight parameter indicating the viewer's virtual line of sight in the coordinate system of the space in which the performer exists. Then, based on the converted line-of-sight parameter, the control unit outputs line-of-sight information indicating the viewer's virtual line of sight to an output device in the space where the performer exists.
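- As a minimal sketch of this conversion, assuming the mapping between the viewer's space and the performer's space is a known rigid transform (a rotation R and a translation t; the function and variable names are illustrative, not from the patent), the gaze ray's origin and direction can be converted as follows:

```python
import numpy as np

def convert_gaze(origin_v, direction_v, R, t):
    """Convert a gaze ray from the viewer-space coordinate system into the
    performer-space (content) coordinate system via a rigid transform."""
    origin_p = R @ origin_v + t        # positions transform affinely
    direction_p = R @ direction_v      # directions only rotate
    return origin_p, direction_p / np.linalg.norm(direction_p)

# Example: a viewer 2 m behind their local origin looking along +Z, with the
# viewer space rotated 180 degrees about Y relative to the performer space.
R = np.diag([-1.0, 1.0, -1.0])
t = np.array([0.0, 0.0, 5.0])
origin, direction = convert_gaze(np.array([0.0, 1.6, -2.0]),
                                 np.array([0.0, 0.0, 1.0]), R, t)
```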
- The line-of-sight information may be image information or audio information, and may include virtual position information of the viewer.
- the output device may be a display.
- In that case, the control unit may calculate the intersection coordinates of the display and the virtual line of sight based on the converted line-of-sight parameter, and output, as the line-of-sight information, an image corresponding to the viewer at the position on the display corresponding to the intersection coordinates.
- This allows the performer, just by looking at the display, to grasp that a viewer in a remote location is looking at him or her, and to react appropriately, for example by returning the gaze or performing toward the viewer.
- the image may be, for example, a viewer's avatar image.
- When intersection coordinates corresponding to a predetermined number or more of viewers exist within a predetermined area of the display, the control unit may output a single predetermined image representing the viewer group instead of the images corresponding to the individual viewers.
- This allows the information processing system to prevent images corresponding to multiple viewers from overlapping and reducing visibility for the performer.
- the control unit may acquire attribute information indicating the attributes of the viewer together with the line-of-sight parameters, and may change the output mode of the image according to the attribute information.
- This allows the information processing system to change the image according to each viewer's attributes, so that the performer can respond to viewers in finer detail.
- the attributes are, for example, age, gender, nationality, place of residence, viewing time, the number of views and purchases of the content in which the same performer appears, the distance to the performer in the coordinate system of the content, and the like.
- Changing the output mode means, for example, adding a frame of a different color to the avatar image, changing the size of the avatar image, or changing its transparency.
- The control unit may determine whether or not the viewer is looking at the performer based on the converted line-of-sight parameter, and may change the output mode of the image according to the determination result.
- This makes it possible to grasp whether or not each viewer is facing the performer, and to perform according to the viewers' lines of sight, for example directing a performance toward viewers who are looking at the performer.
- When the first intersection coordinates corresponding to a first viewer having first viewer identification information, calculated at a first time, differ from the second intersection coordinates corresponding to the same viewer calculated at a second time after the first time, the control unit may display the image corresponding to the viewer while moving it along a trajectory connecting the first intersection coordinates to the second intersection coordinates.
- the information processing system may further have a storage unit that stores information indicating a plurality of types of effects that can be reproduced together with the image in association with the effect identification information that identifies the effect.
- When the control unit receives an effect reproduction request including the viewer identification information and effect identification information from the viewer's terminal, it may output the effect corresponding to the effect identification information from the vicinity of the intersection coordinates corresponding to the viewer identification information.
- the effect that is the target of the effect reproduction request may be associated with an arbitrary input (gesture, button, etc.) on the viewer's terminal.
- When effect reproduction requests corresponding to a predetermined number or more of viewers concentrate in a predetermined area of the display, the control unit may output a single predetermined effect instead of the effects corresponding to the individual viewers.
- This allows the information processing system to prevent effects corresponding to multiple viewers from overlapping and reducing visibility for the performer.
- Likewise, when the control unit receives effect reproduction requests having the same effect identification information from a predetermined number or more of viewers, it may output a single predetermined effect instead of the effect corresponding to each viewer.
- Multiple speakers may be installed at different positions on the display.
- In that case, when the effect includes audio, the control unit may output it from the speaker nearest the intersection coordinates corresponding to the viewer identification information.
- This allows the information processing system to reproduce the effect as if the viewer were speaking to the performer from that direction, and the performer can grasp this.
- The control unit may acquire a line-of-sight parameter indicating the performer's line of sight and, when the absolute value of the inner product of the line-of-sight vector obtained from the performer's parameter and the line-of-sight vector obtained from the parameter indicating a viewer's virtual line of sight falls below a predetermined threshold, output a predetermined effect from the vicinity of the intersection coordinates corresponding to that viewer's identification information.
- This lets the performer know that his or her eyes have met a viewer's, and perform accordingly.
- The control unit may also acquire the performer's line-of-sight parameter and, for each of the plurality of viewers, count the number of times the absolute value of the inner product of the performer's line-of-sight vector and that viewer's virtual line-of-sight vector falls below a predetermined threshold, and display on the display a histogram in which a value corresponding to each viewer's count is associated with the vicinity of the intersection coordinates corresponding to that viewer.
- Based on this histogram, the performer can raise the satisfaction of the audience as a whole by performing toward directions in which eye contact with viewers has so far been infrequent.
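- A minimal sketch of the eye-contact test and per-viewer counting follows, assuming unit gaze vectors. The machine-translated inner-product condition is ambiguous; it is read here as the performer's and viewer's gaze vectors being nearly antiparallel (facing each other), and the threshold and names are illustrative:

```python
import numpy as np
from collections import defaultdict

EPS = 0.05                              # illustrative eye-contact threshold
eye_contact_counts = defaultdict(int)   # viewer_id -> times eyes have met

def update_eye_contact(performer_dir, viewer_dirs):
    """viewer_dirs maps viewer_id to that viewer's virtual gaze direction,
    already converted into the content coordinate system."""
    p = performer_dir / np.linalg.norm(performer_dir)
    for vid, v in viewer_dirs.items():
        v = v / np.linalg.norm(v)
        if abs(1.0 + np.dot(p, v)) < EPS:   # nearly facing each other
            eye_contact_counts[vid] += 1
```

- The accumulated counts can then be rendered as the histogram described above, with each viewer's value drawn near that viewer's intersection coordinates on the display.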
- In an information processing method according to another form of the present technology, a line-of-sight parameter indicating the viewer's line of sight in the coordinate system of the space in which the viewer exists is acquired, together with viewer identification information identifying the viewer, from the terminal of a viewer playing back content in which a performer's performance is captured in real time via a network; the acquired line-of-sight parameter is converted into a line-of-sight parameter indicating the viewer's virtual line of sight in the coordinate system of the space in which the performer exists; and, based on the converted line-of-sight parameter, line-of-sight information indicating the viewer's virtual line of sight is output to an output device in the space where the performer exists.
- A program according to another form of the present technology causes an information processing device to execute the steps of: acquiring, from the terminal of a viewer playing back content in which a performer's performance is captured in real time via a network, a line-of-sight parameter indicating the viewer's line of sight in the coordinate system of the space in which the viewer exists, together with viewer identification information identifying the viewer; converting the acquired line-of-sight parameter into a line-of-sight parameter indicating the viewer's virtual line of sight in the coordinate system of the space in which the performer exists; and outputting, based on the converted line-of-sight parameter, line-of-sight information indicating the viewer's virtual line of sight to an output device in the space where the performer exists.
- FIG. 18 is a diagram showing a case where a performer, a plurality of viewers, and a display have a certain positional relationship in the content distribution system described above.
- FIG. 19 is a diagram showing how information expressing the viewing states of other viewers is added to the content viewed by a certain viewer.
- FIG. 20 is a diagram showing how information expressing viewers' viewing states is added to the content viewed by a certain viewer by virtually moving and enlarging the display.
- FIG. 1 is a diagram showing the overall configuration of a content distribution system according to an embodiment of the present technology.
- FIG. 2 is a diagram showing an example of the equipment installation in the content shooting studio included in the system.
- As shown in FIG. 1, this system has a viewer information management server 100, a performer output system 300, a content creation server 400, and a content distribution server 500 in a content shooting studio, as well as a plurality of viewer output systems 200, all connected via a network 50 such as the Internet.
- The content creation server 400 creates, in the studio dedicated to content creation, content in which the performer's performance is captured in real time.
- the created content is streamed to the viewer via the network 50.
- The content delivered to viewers is VR (Virtual Reality) content composed on the basis of a 3D model and surround sound.
- The studio is equipped with shooting equipment including one or more cameras 51 and microphones 52 for content creation, and the content creation server 400 creates the distribution content based on the captured data.
- the viewer information management server 100 appropriately acquires and manages information on the viewer's viewing state such as the viewer's virtual line of sight and virtual position from the viewer output system 200.
- the performer output system 300 has one or more displays 53 for outputting information on the viewing state of the viewer to the performer who appears in the content.
- The viewer information management server 100 transmits information such as the viewers' viewing states received from the viewer output systems 200 to the content creation server 400, and the content creation server 400 can also change the distributed content according to that information.
- The content created or changed by the content creation server 400 is distributed from the content distribution server 500 to each content viewer (viewer output system 200) via the network 50.
- FIG. 3 is a diagram showing the hardware configuration of the viewer information management server 100.
- The viewer information management server 100 includes a CPU (Central Processing Unit) 11, a ROM (Read Only Memory) 12, and a RAM (Random Access Memory) 13. It may further include a host bus 14, a bridge 15, an external bus 16, an interface 17, an input device 18, an output device 19, a storage device 20, a drive 21, a connection port 22, and a communication device 23, and, as necessary, an imaging device 26 and a sensor 27. The viewer information management server 100 may have a processing circuit such as a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), or an FPGA (Field-Programmable Gate Array) in place of, or in combination with, the CPU 11.
- The CPU 11 functions as an arithmetic processing device and a control device, and controls all or part of the operation of the viewer information management server 100 according to various programs recorded in the ROM 12, the RAM 13, the storage device 20, or the removable recording medium 24.
- the ROM 12 stores programs, calculation parameters, and the like used by the CPU 11.
- The RAM 13 temporarily stores programs used during execution by the CPU 11 and parameters that change as appropriate during that execution.
- The CPU 11, ROM 12, and RAM 13 are interconnected by a host bus 14 composed of an internal bus such as a CPU bus. The host bus 14 is connected via a bridge 15 to an external bus 16 such as a PCI (Peripheral Component Interconnect/Interface) bus.
- the input device 18 is a device operated by a user, such as a touch panel, physical buttons, switches, and levers.
- the input device 18 may be, for example, a remote control device using infrared rays or other radio waves, or an externally connected device 25 such as a smartphone or smart watch that supports the operation of the viewer information management server 100.
- the input device 18 includes an input control circuit that generates an input signal based on the information input by the user and outputs the input signal to the CPU 11. By operating the input device 18, the user inputs various data to the viewer information management server 100 and instructs the viewer information management server 100 to perform processing operations.
- the output device 19 is composed of a device capable of notifying the user of the acquired information using sensations such as sight, hearing, and touch.
- the output device 19 may be, for example, a display device such as an LCD (Liquid Crystal Display) or an organic EL (Electro-Luminescence) display, an audio output device such as a speaker, or the like.
- The output device 19 outputs the results of processing by the viewer information management server 100 as video such as text or images, as audio such as voice or sound effects, or as vibration.
- the storage device 20 is a data storage device configured as an example of a storage unit of the viewer information management server 100.
- The storage device 20 is composed of, for example, a magnetic storage device such as an HDD (Hard Disk Drive), a semiconductor storage device, an optical storage device, or a magneto-optical storage device.
- The storage device 20 stores, for example, programs executed by the CPU 11, various data, various data acquired from the outside, and data acquired from the viewer output systems 200 (such as the line-of-sight parameters described later and each viewer's avatar image).
- the drive 21 is a reader / writer for a removable recording medium 24 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, and is built in or externally attached to the viewer information management server 100.
- The drive 21 reads information recorded on the mounted removable recording medium 24 and outputs it to the RAM 13. The drive 21 also writes records to the mounted removable recording medium 24.
- the connection port 22 is a port for connecting the device to the viewer information management server 100.
- the connection port 22 may be, for example, a USB (Universal Serial Bus) port, an IEEE1394 port, a SCSI (Small Computer System Interface) port, or the like. Further, the connection port 22 may be an RS-232C port, an optical audio terminal, an HDMI (registered trademark) (High-Definition Multimedia Interface) port, or the like.
- the communication device 23 is, for example, a communication interface composed of a communication device for connecting to the communication network 50.
- the communication device 23 may be, for example, a communication card for LAN (Local Area Network), Bluetooth (registered trademark), Wi-Fi, or WUSB (Wireless USB). Further, the communication device 23 may be a router for optical communication, a router for ADSL (Asymmetric Digital Subscriber Line), a modem for various communications, or the like.
- the communication device 23 transmits / receives a signal or the like to / from the Internet or another communication device using a predetermined protocol such as TCP / IP.
- the communication network 50 connected to the communication device 23 is a network connected by wire or wirelessly, and may include, for example, the Internet, a home LAN, infrared communication, radio wave communication, satellite communication, and the like.
- The imaging device 26 is a camera that captures real space and generates a captured image, using an image sensor such as a CMOS (Complementary Metal Oxide Semiconductor) or CCD (Charge Coupled Device) sensor and various members such as a lens for controlling the formation of the subject image on the sensor. The imaging device 26 may capture still images or moving images.
- the sensor 27 is, for example, various sensors such as an acceleration sensor, an angular velocity sensor, a geomagnetic sensor, an illuminance sensor, a temperature sensor, a pressure sensor, a depth sensor, or a sound sensor (microphone).
- Each of the above components may be configured by using general-purpose members, or may be configured by hardware specialized for the function of each component. Such a configuration can be appropriately changed depending on the technical level at the time of implementation.
- The viewer output systems 200, the performer output system 300, the content creation server 400, and the content distribution server 500 each also have hardware for functioning as a computer, similar to that of the viewer information management server 100.
- FIG. 4 is a diagram showing a flowchart of the content distribution process.
- FIG. 5 is a diagram showing a display example of the content when there are viewers having different positions and attitudes with respect to the content.
- Content viewers receive the content and view it through their respective viewer output systems 200.
- the viewer output system 200 is, for example, a head-mounted display having a head tracking function capable of estimating the position and orientation of the viewer's head.
- The viewer output system 200 initializes the position and orientation of the viewer's head in the content coordinate system (the coordinate system of the space in which the performer exists) (step 41), and estimates the position and orientation of the viewer's head in the content coordinate system using the head tracking function (step 42).
- the viewer output system 200 projects the 3D content distributed according to this position / orientation on the virtual image plane (step 43), and outputs the projected content to the display (step 44).
- Head tracking can be realized using, for example, SLAM (Simultaneous Localization and Mapping) or an IMU (Inertial Measurement Unit).
- Binocular stereoscopic vision, which is generally used for viewing VR content, requires the positions and orientations of the viewer's left and right eyes, but these can be calculated by applying per-eye offsets to the estimated head position.
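- As a small sketch of this step, assuming the head pose is given as a position plus a rotation matrix whose first column is the head's local "right" axis, and an illustrative interpupillary distance, the left and right eye positions can be derived as offsets from the head pose:

```python
import numpy as np

IPD = 0.064  # assumed interpupillary distance in metres

def eye_positions(head_pos, head_rot):
    """Derive left/right eye positions from the estimated head pose.
    head_rot is a 3x3 rotation whose first column is the head's +X axis."""
    right = head_rot[:, 0]
    return head_pos - right * IPD / 2, head_pos + right * IPD / 2
```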
- As shown in FIG. 5, viewer 1 looking at the content from the side (FIG. 5A) and viewer 2 looking at it from the front (FIG. 5B) see the content differently depending on the positions and postures of their heads.
- In addition to physically moving, the viewer can also move the position and orientation of the head virtually using an input device such as a controller.
- During the content distribution process, the content distribution system of the present embodiment can present to the performer the viewer's virtual line-of-sight information (including the viewer's virtual position information) and effects showing the viewer's reactions.
- The content distribution system can also add effects indicating viewers' reactions to the content itself during distribution. These processes are described in detail below.
- FIG. 6 is a flowchart showing the flow of the viewer's line-of-sight information and effect presentation processing to the performer.
- the viewer output system 200 first calculates the viewer's line-of-sight parameter in the content coordinate system (step 51).
- The viewer output system 200 may obtain this by converting a line-of-sight parameter defined in advance in the head-mounted display coordinate system (the coordinate system of the space in which the viewer exists) into the content coordinate system, or, if it has a device that estimates the viewer's gaze direction in real time, by converting the estimated parameter into the content coordinate system.
- The line-of-sight parameter may be produced separately for the right eye and the left eye, but here we assume it is reduced to a single parameter in some way, for example by adopting one of the two or by averaging them.
- Alternatively, on the premise that the viewer always faces the performer, the viewer output system 200 may use, instead of the eye positions, for example a straight line connecting the performer's head position and the viewer's head position as the line-of-sight parameter, or may define the line-of-sight direction as a specific direction in the body coordinate system of the head-mounted display.
- the line-of-sight parameter in the content coordinate system may be calculated by the viewer information management server 100 on the studio side instead of the viewer output system 200.
- In that case, the viewer output system 200 transmits the viewer's line-of-sight parameter in the head-mounted display coordinate system to the viewer information management server 100, and the viewer information management server 100 converts it into a line-of-sight parameter in the content coordinate system.
- the viewer output system 200 transmits the viewer's line-of-sight parameter expressed in the content coordinate system to the viewer information management server 100 (step 52).
- The viewer information management server 100 performs, on the line-of-sight parameters sent from each viewer, whatever processing the performer output system 300 requires.
- For example, when the performer output system 300 outputs the viewer's avatar image as the line-of-sight information, the viewer information management server 100 may perform processing to associate each line-of-sight parameter with the avatar image of the viewer who sent it.
- The viewer information management server 100 (CPU 11) holds position and orientation information, in the content coordinate system, of the displays 53 installed in the studio, and calculates the intersection coordinates of each display 53 and the viewer's virtual line of sight based on the viewer's line-of-sight parameter, which is also expressed in the content coordinate system (step 53).
- For example, if each display 53 is expressed by a plane equation and the viewer's line-of-sight parameter by a line equation, the intersection coordinates of the display and the line of sight can be calculated.
- the viewer information management server 100 may obtain the intersection coordinates in each display coordinate system after converting the line-of-sight parameter into each display coordinate system.
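- A minimal sketch of this intersection computation, with the display given as a point on its plane plus a unit normal (parameter names are illustrative):

```python
import numpy as np

def gaze_display_intersection(origin, direction, plane_point, plane_normal):
    """Intersect the viewer's virtual gaze ray with a display plane in
    content coordinates; returns None if the ray is parallel to the plane
    or the display lies behind the gaze origin."""
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:
        return None                  # gaze parallel to the display plane
    s = np.dot(plane_normal, plane_point - origin) / denom
    if s < 0:
        return None                  # display is behind the viewer's gaze
    return origin + s * direction
```

- In practice the resulting point would additionally be checked against the display's rectangular extent before drawing the avatar image there.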
- The viewer information management server 100 (CPU 11) then causes the performer output system 300 to output the viewer's line-of-sight information to the display 53, based on the calculated intersection coordinates, in a form the performer can recognize (step 54).
- In the example shown, the corresponding avatar images 71a, 71b, and 71c are displayed at the intersection coordinates I of the display 53 with the virtual line of sight VL1 of viewer 1, the virtual line of sight VL2 of viewer 2, and the virtual line of sight VL3 of viewer 3, respectively.
- By looking at the avatar images 71 displayed on the display 53, the performer P can recognize in real time the lines of sight of remote viewers V and the directions in which they exist, and can take appropriate actions such as looking toward them or performing toward them.
- The figure also shows that when a virtual line of sight VL moves, the corresponding avatar image 71 moves accordingly.
- This enables the viewer V to have a communication experience (for example, making eye contact) as if the performer P and the viewer were physically close to each other.
- The lines of sight of multiple viewers may concentrate on the same coordinates on the display 53 of the performer output system 300.
- In that case, the overlapping avatar images 71 reduce visibility for the performer.
- The viewer information management server 100 may therefore cause the performer output system 300 to replace the individual viewers' avatar images 71 with another image expressing the concentration of the viewers' lines of sight.
- For example, when the lines of sight of X or more viewers concentrate in an area, the performer output system 300 may replace the group of avatar images 71 with an image A, and when the lines of sight of Y or more viewers (Y being larger than X) concentrate there, it may instead display an image B different from image A.
- Alternatively, the performer output system 300 may display a heat map showing the degree of gaze concentration on the display 53 instead of the avatar images 71.
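- One simple way to detect such concentration, sketched here under the assumption that intersection points are binned into fixed-size grid cells on the display (cell size and threshold are illustrative), is to draw a single group image, or a heat-map entry, for any cell holding at least the threshold number of gazes:

```python
from collections import defaultdict

GROUP_THRESHOLD = 5   # illustrative: group avatars when >= 5 gazes coincide
CELL = 0.2            # illustrative grid cell size on the display, metres

def group_intersections(points):
    """points: viewer_id -> (u, v) intersection in display coordinates.
    Returns avatars to draw individually and cells to draw as one group
    image, each with its member count."""
    cells = defaultdict(list)
    for vid, (u, v) in points.items():
        cells[(int(u // CELL), int(v // CELL))].append(vid)
    individual, groups = {}, {}
    for cell, vids in cells.items():
        if len(vids) >= GROUP_THRESHOLD:
            groups[cell] = len(vids)        # one image for the whole group
        else:
            for vid in vids:
                individual[vid] = points[vid]
    return individual, groups
```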
- The viewer information management server 100 may change or process the viewer's avatar image 71 displayed on the display of the performer output system 300, using the viewer attribute information it manages or the viewer attribute information attached to the line-of-sight parameter information acquired from the viewer output system 200.
- For example, according to the viewer's age, gender, nationality, place of residence, viewing time, number of views and purchases of content in which the same performer appears, distance to the performer in the content coordinate system, and other viewing attributes, the avatar image 71 may be given a frame of a different color, resized, or given a different transparency.
- Even when a projector is used instead of the display 53, the viewer information management server 100 can calculate the position at which the viewer's avatar image 71 and the like should be drawn, in the same way as with the display 53, by expressing the projector's projection target plane in the content coordinate system.
- Alternatively, a display device in which a plurality of cameras 51 and microphones 52 are embedded (for example, in a matrix) on the same plane as the display 53, as shown in FIG. 9, may be used.
- As shown in the figure, the viewer information management server 100 may, for example, change the size of the avatar image 71 or the color of its frame, or not display the avatar image 71 at all, depending on whether or not the viewer's virtual line of sight is directed toward the performer P.
- In the example of the figure, the virtual lines of sight VL1 and VL2 face the performer P, so the corresponding avatar images 71A and 71B are displayed at their usual sizes, whereas the virtual line of sight VL3 does not face the performer P, so the corresponding avatar image 71C is displayed smaller than the avatar images 71A and 71B.
- Whether or not a viewer's line of sight is directed toward the performer P can be determined, for example, by whether the performer is included in a viewing cone of an arbitrary size centered on the viewer's line of sight.
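- A sketch of that viewing-cone test (the half-angle is illustrative):

```python
import numpy as np

def in_viewing_cone(gaze_origin, gaze_dir, target, half_angle_deg=15.0):
    """True if target (e.g. the performer's head position) lies inside a
    cone of the given half-angle centred on the viewer's gaze ray."""
    to_target = target - gaze_origin
    dist = np.linalg.norm(to_target)
    if dist < 1e-9:
        return True
    cos_to_target = np.dot(gaze_dir / np.linalg.norm(gaze_dir),
                           to_target / dist)
    return cos_to_target >= np.cos(np.radians(half_angle_deg))
```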
- the position of the viewer information (avatar image 71) displayed on the performer output system 300 may be updated at arbitrary intervals.
- When the intersection position c(t) of a certain viewer's line of sight with the display differs from the intersection position c(t-1) calculated immediately before for the same viewer, the viewer information management server 100 may move the viewer information along a trajectory connecting the two intersections.
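- The patent only requires motion along a trajectory connecting the two intersections; linear interpolation, sketched below, is one simple choice (the step count is illustrative):

```python
import numpy as np

def avatar_trajectory(c_prev, c_curr, steps=10):
    """Yield intermediate display positions moving the avatar smoothly from
    the previous intersection c(t-1) to the new intersection c(t)."""
    c_prev, c_curr = np.asarray(c_prev, float), np.asarray(c_curr, float)
    for i in range(1, steps + 1):
        yield c_prev + (c_curr - c_prev) * (i / steps)
```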
- The content distributor creates effects whose playback on the performer output system viewers can request, as shown in the effect table of FIG. 12A.
- In the effect table, an effect ID identifying each effect is associated with the content of the effect indicated by that ID.
- the effect table is stored in, for example, the storage device 20 of the viewer information management server 100.
- Each viewer registers an action for issuing a playback request for each effect according to his / her own input device, as shown in the viewer action table of FIGS. 12B1 to B3.
- the action here means the input of a specific command or movement to the device included in the viewer output system 200.
- the viewer output system 200 first acquires the effect ID of the effect to be played back from the action of the viewer (step 61).
- For example, viewer 1 in FIG. 12B repeatedly moves the head up and down to issue a playback request for the effect with effect ID 1000 on the performer output system 300.
- A viewer whose viewing environment has a head tracking function may use head movements for requests, like viewer 1 in FIG. 12B, while a viewer using a motion controller may use specific motions, like viewer 2.
- the viewer output system 200 transmits an effect reproduction request corresponding to the effect ID to the viewer information management server 100 (step 62).
- the effect reproduction request of each viewer is sent to the viewer information management server 100 as data in which the viewer ID that identifies the viewer and the effect ID are associated with each other.
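- A sketch of the tables and request payload follows, using the effect IDs that appear in FIG. 12A; the action names and table layout are illustrative:

```python
# Effect table held on the viewer information management server (FIG. 12A).
EFFECT_TABLE = {
    1000: '"cute" balloon comment',
    1003: "star",
    1004: "rainbow",
}

# Per-viewer action table (FIG. 12B): each viewer binds an input of their
# own device to an effect ID.
viewer_action_table = {
    "nod_head_repeatedly": 1000,   # e.g. viewer 1, head-tracked HMD
    "shake_controller": 1003,      # e.g. viewer 2, motion controller
}

def make_playback_request(viewer_id, action):
    """Build the reproduction request sent to the viewer information
    management server: just a viewer ID paired with the resolved effect ID,
    which keeps the communication payload small."""
    return {"viewer_id": viewer_id,
            "effect_id": viewer_action_table[action]}
```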
- The viewer information management server 100 (CPU 11) causes the effect corresponding to the effect ID to be reproduced at the position on the performer output system 300 corresponding to the intersection coordinates (for example, in the vicinity of the avatar image 71) (step 63).
- In the example shown, the visual effect 72 of effect ID 1004 (rainbow) in FIG. 12A is reproduced in response to the reproduction request from viewer 1, the visual effect 72 of effect ID 1003 (star) in FIG. 12A in response to the request from viewer 2, and the visual effect 72 of effect ID 1000 (a "cute" balloon comment) in FIG. 12A in response to the request from viewer 3.
- Effect reproduction requests may also concentrate near the same coordinates on the display 53 of the performer output system 300. If the effect requested by each viewer were reproduced individually, the overlapping effects would reduce visibility for the performer.
- The viewer information management server 100 may therefore cause the performer output system 300 to replace the effects of the plurality of viewers with another effect expressing the concentration of reproduction requests.
- For example, the performer output system 300 may replace each viewer's effect with a single special effect expressing the concentration and reproduce that instead.
- The viewer information management server 100 may also control the size of the effects reproduced by the performer output system 300 and the types of effects that can be requested, using the viewer attribute information it manages or the viewer attribute information attached to the line-of-sight parameter information acquired from the viewer output system 200.
- For example, the viewer information management server 100 may control the types of effects that can be requested according to the viewer's viewing time, the number of views and purchases of content in which the same performer appears, and other parameters associated with the viewer.
- The viewer information management server 100 may also reproduce an effect that is not tied to any particular line-of-sight position, in order to express the excitement of the audience as a whole.
- For example, when reproduction requests for the same effect are received from many viewers, the performer output system 300 may reproduce a special effect expressing this (for example, a visual effect displayed over the entire display 53).
- The performer output system 300 may also include an audio reproduction device such as a speaker, allowing viewers to request sound effects as well as visual effects.
- Similarly, the content distributor creates effects whose addition to the distributed content viewers can request, and each viewer registers an action for issuing an addition request for each effect according to his or her own input device.
- A table for these effects (for example, in the same format as FIG. 12) is likewise stored, for example, in the storage device 20 of the viewer information management server 100.
- FIG. 15 is a flowchart showing a flow from a viewer's request for adding an effect to the distribution of VR content to which the effect is applied to the viewer. Further, FIG. 16 is a conceptual diagram showing the flow.
- the CPU 11 of the viewer information management server 100 receives the effect addition request of each viewer from the viewer output system 200 of each viewer (step 151).
- the effect addition request is received as data in which the viewer ID and the effect ID are associated with each other.
- the CPU 11 specifies the effect ID from the effect addition request (step 152).
- the CPU 11 transmits an effect grant request including the effect ID to the content creation server 400 (step 153).
- the content to which the effect corresponding to the effect ID is given by the content creation server 400 is distributed from the content distribution server 500 to the viewer output system 200 (step 154).
- the effect addition request may be sent directly to the content creation server 400 without going through the viewer information management server 100.
- In that case as well, the effect is added to the content by the content creation server 400 and delivered to the viewer output system 200 of each viewer.
- each viewer can visually recognize the added effect from different lines of sight L1, L2, and L3.
- the viewers 1 and 3 can know in real time how the viewer 2 reacts to the content.
- Effect addition requests may concentrate near the same position in the content (for example, around the performer). If the effect requested by each viewer were added individually, the overlapping effects would reduce visibility for the viewers.
- The viewer information management server 100 may therefore cause the content creation server 400 to replace the effects of the plurality of viewers with another effect expressing the concentration of addition requests.
- For example, the content creation server 400 may replace each viewer's effect with a single special effect expressing the concentration.
- The viewer information management server 100 may also control the size of the effects added to the content and the types of effects that can be requested, using the viewer attribute information it manages or the viewer attribute information attached to the line-of-sight parameter information acquired from the viewer output system 200.
- For example, the viewer information management server 100 may control the types of effects whose addition can be requested according to the viewer's viewing time, the number of views and purchases of content in which the same performer appears, and other parameters associated with the viewer.
- For example, when addition requests for the same effect are received from many viewers, a special effect expressing this (for example, a visual effect displayed over the entire content) may be added.
- The viewer information management server 100 may also change the VR content without an explicit request from viewers, using the viewer attribute information it manages or the viewer attribute information attached to the line-of-sight parameter information.
- the viewer information management server 100 stores a viewer residential area attribute table showing the number of viewers for each residential area of viewers around the world.
- The content creation server 400 may then change the display size of a 3D model of a landmark representing each region (for example, Tokyo Tower for Japan, the Statue of Liberty for the United States, the Leaning Tower of Pisa for Italy, or the Merlion statue for Singapore) according to the number of viewers residing in that region, and composite it into the background of the performer P when creating the content.
- In the example shown, the number of viewers descends in the order Japan, the United States, Italy, Singapore, so the sizes of the 3D models in the background of the content are set in the order Tokyo Tower, Statue of Liberty, Leaning Tower of Pisa, Merlion statue.
- As a means of sharing reactions among viewers, a method of adding the viewing states of other viewers (such as their positions) to the distributed content can be considered.
- However, depending on where such additional content appears, problems occur: the additional content may appear at a position that hinders viewing of the distributed content, or the distributed content may be buried in, and disappear behind, the additional content.
- FIG. 18 shows a case where the performer P, a plurality of virtual viewers V, and the display 53 have a certain positional relationship.
- Suppose avatar content expressing another viewer's viewing state is added to the content viewed by viewer V1, based on the other viewer's viewing position or the position of the other viewer's avatar image 71.
- The additional content may then appear near the intersection coordinate I within viewer 1's viewing cone and hinder viewer V1's viewing of the distributed content.
- Therefore, as shown in FIG. 20, the content creation server 400 virtually moves and enlarges the display 53 based on the position of viewer V1, and uses the intersections of other viewers' lines of sight with the virtual display as the display positions of the additional content; this makes it possible to add content expressing other viewers' viewing states to the distributed content without interfering with viewer V1's viewing.
- the position and size of the above virtual display may be changed arbitrarily.
- the content creation server 400 may be set so that the virtual display always comes behind the viewing position of the viewer V1 with respect to the content.
- the content creation server 400 may use an arbitrary plane, spherical surface, or a combination thereof to obtain an intersection with the line of sight of another viewer instead of the virtual display, and use it as the display position of the additional content.
- The viewer information management server 100 may also share viewing states only with the members of a group or community to which each viewer belongs (obtained, for example, from an SNS). Further, the content creation server 400 may replace a viewer's avatar content 72 with an image that is easier to draw (lower resolution).
- the effect addition position may be adjusted appropriately.
- Three specific cases are described below, but the present technology is not limited to these.
- The content creation server 400 can keep the quality of the reproduced effect seen by each viewer constant by adjusting the playback position of the effect so that it is reproduced within each viewer's viewing cone.
- FIG. 22 shows how the playback positions of the "random rays" effect requested by another viewer are adjusted according to the viewing cone VC of viewer 1.
- The upper part of the figure shows the state before adjustment and the lower part the state after adjustment.
- The playback positions of ray2 and ray4, which lay outside the viewing cone VC in the upper part, are adjusted in the lower part so that they are visible within the viewing cone VC.
- When setting the viewing cone, the viewer's line-of-sight direction may be used as its center, or the head direction may be used as its center.
- An effect requested by another viewer may also be played back in the space between the viewer and the performer.
- In that case, it is conceivable for the viewer's viewer output system 200 to stop reproducing the target effect; however, this means a viewer may be unable to see effects requested by other viewers with different viewpoints.
- each viewer output system 200 may adjust the center of occurrence of the reproduction effect according to the line-of-sight direction of each viewer and the position of the performer.
- Here, the effect generation center is a coordinate serving as the reference for determining the playback position of effects that have (or do not have) a specific attribute.
- FIG. 23 shows the area A in which the effect generation center, set around the performer P, can be placed.
- In the example of the figure, a circle of radius r [m] horizontal to the ground is set at a height of h [m] with the performer P at its center, but the method of setting the area A is not limited to this.
- FIG. 24 shows how the effect generation center C is set for each viewer using the set area A.
- The effect generation center C is set by mapping each viewer's line of sight L onto the plane in which the effect-generation-center settable area A exists, and taking, of the intersections of the mapped line of sight with the area A, the one farther from the viewer.
- This allows any viewer to view effects requested by other viewers without hindrance to viewing the distributed content.
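- A sketch of this construction, assuming area A is the circle of radius r at height h centred on the performer, and the viewer's gaze is "mapped" onto that plane by dropping its vertical component (h and r are illustrative):

```python
import numpy as np

def effect_center(gaze_origin, gaze_dir, performer_xz, h=2.0, r=3.0):
    """Return the effect generation centre C for one viewer: the farther of
    the two intersections of the projected gaze line with the circle of
    radius r around the performer, on the plane y = h."""
    o = np.array([gaze_origin[0], gaze_origin[2]], float)
    d = np.array([gaze_dir[0], gaze_dir[2]], float)
    n = np.linalg.norm(d)
    if n < 1e-9:
        return None                       # viewer looking straight up/down
    d /= n
    rel = o - np.asarray(performer_xz, float)
    b = np.dot(rel, d)
    disc = b * b - (np.dot(rel, rel) - r * r)
    if disc < 0:
        return None                       # projected gaze misses area A
    s = -b + np.sqrt(disc)                # larger root = farther intersection
    if s < 0:
        return None
    x, z = o + s * d
    return np.array([x, h, z])
```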
- This adjustment may be executed not by the viewer output systems 200 but by the content creation server 400, which receives each viewer's line-of-sight parameter from each viewer output system 200 via the viewer information management server 100 or directly.
- Next, consider the case where the effect to be played back has a text attribute.
- In this case, it is conceivable to play back the effect having the text attribute on a plane of the background content having at least a certain extent.
- In the example shown, background content planes Plane1 and Plane2, which have different plane parameters, lie ahead of the lines of sight (L1 and L2) of viewer 1 and viewer 2.
- This processing too may be executed not by the viewer output systems 200 but by the content creation server 400, which obtains each viewer's line-of-sight parameters from each viewer output system 200 via the viewer information management server 100 or directly.
- The content creation server 400 may also reflect, in the content delivered to a certain viewer, only the effect reproduction requests of other viewers whose line-of-sight parameters are close to that viewer's.
- For example, the content creation server 400 may normally set the number of rays reproduced for one reproduction request to x and, when reproduction requests concentrate, reproduce a larger number y of rays (y > x) as a single combined effect.
- Alternatively, the content creation server 400 may refrain from adding effects whose playback position is determined according to the viewer's line-of-sight information to the content distributed from the content distribution server 500, and instead transmit information about the effect to each viewer's output system 200, which adds the effect itself; this can reduce the load on the content creation server 400 and the content distribution server 500.
- Other conceivable effect attributes include, for example: effects that change (or do not change) their display posture according to the viewer's line-of-sight direction; effects that change (or do not change) their display posture according to the orientation of the performer; effects that are not displayed within the viewing cone between the viewer and the performer; and effects whose playback takes the distance between the viewer and the performer as a parameter.
- As described above, the content distribution system of this embodiment enables the performer to grasp viewers' virtual lines of sight in the same space as himself or herself, and to give remote viewers an appropriate performance according to their reactions.
- As a result, the performer and viewers can communicate as if they were physically close to each other, even from remote locations.
- Since each viewer's action is mapped to an effect ID and sent to the viewer information management server 100, the amount of communication data needed to express viewer actions is significantly reduced.
- the content distribution system can share the experience among viewers who are viewing common content by reflecting the actions of the viewers in the distributed content.
- the content distribution system can differentiate the services provided for each viewer by controlling the effects that can be requested to be played / granted for each viewer.
- It is also conceivable for the viewer information management server 100 or the content creation server 400 to determine that specific communication has been established between the performer and a viewer, and to enhance the communication experience by adding a special effect to the content delivered to the target viewer or to all viewers.
- Establishment of specific communication includes, for example, the case where the performer's and the viewer's lines of sight meet, or the case where a specific effect reproduction request is received from the viewer in response to a specific performance by the performer.
- Whether the performer's and a viewer's lines of sight meet may be determined by the viewer information management server 100 or the content creation server 400, for example, by whether the performer's line of sight is directed toward that viewer's avatar image 71 on the display and the absolute value of the inner product of the performer's and the viewer's line-of-sight vectors is below a predetermined threshold such that the line-of-sight vectors are substantially parallel.
- when the lines of sight of the performer and the viewer match, the viewer information management server 100 may output a special visual effect or sound effect from the vicinity of the avatar image (the intersection coordinates) corresponding to that viewer on the display 53.
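A literal sketch of the match test described above is shown below; the threshold value and the sign convention of the gaze vectors are assumptions, since the text only states that the absolute value of the inner product is compared against a predetermined threshold.

```python
import numpy as np

def gazes_match(performer_dir, viewer_dir, threshold=0.2):
    """Literal form of the criterion described above: the performer's
    gaze and the viewer's virtual gaze are judged to match when
    |dot(a, b)| falls below a predetermined threshold. The vectors are
    normalized first; the threshold value must be chosen consistently
    with whatever gaze-vector sign convention the system uses."""
    a = performer_dir / np.linalg.norm(performer_dir)
    b = viewer_dir / np.linalg.norm(viewer_dir)
    return abs(float(np.dot(a, b))) < threshold

if gazes_match(np.array([0.1, 0.0, -1.0]), np.array([1.0, 0.0, 0.1])):
    print("trigger a special effect near the viewer's avatar image 71")
```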
- since the viewer information management server 100 counts, for each viewer, the number of times their lines of sight have matched, a value indicating how frequently lines of sight have matched in each direction can be displayed as a histogram associated with the corresponding intersection coordinates on the display 53. Based on this information, the performer can increase the satisfaction of the audience as a whole by performing toward directions in which lines of sight have matched only infrequently.
- FIG. 14 shows an example in which a frequency histogram 73 showing the above frequency is displayed on the display 53.
- as the value of the frequency histogram 73, it is conceivable to use a value obtained by dividing the total number of times lines of sight have matched in each direction by the number of viewers existing in that direction.
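A minimal sketch of this normalization, assuming (as an illustration) that the directions are discretized into bins along the display 53:

```python
from collections import Counter

def frequency_values(match_events, viewers_by_bin):
    """match_events: iterable of direction-bin indices, one entry per
    gaze-match occurrence. viewers_by_bin: number of viewers currently
    located in each bin. Returns the per-bin value to plot in the
    frequency histogram 73: matches per viewer in that direction."""
    counts = Counter(match_events)
    return {
        b: counts.get(b, 0) / n
        for b, n in viewers_by_bin.items()
        if n > 0
    }

# e.g. four direction bins across the display 53 (illustrative numbers)
values = frequency_values([0, 0, 1, 3, 3, 3], {0: 2, 1: 1, 2: 4, 3: 3})
# -> {0: 1.0, 1: 1.0, 2: 0.0, 3: 1.0}
```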
- the content distribution system may impose a higher-than-usual viewing fee on a viewer who uses a specific viewing position, on the premise that the performer frequently communicates with that viewing position.
- in the above description the content was shot by the camera 51 fixed in the shooting studio; instead of the fixed camera 51, the content may be shot while moving, for example by a drone.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Databases & Information Systems (AREA)
- General Physics & Mathematics (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Social Psychology (AREA)
- Computer Networks & Wireless Communication (AREA)
- Computer Graphics (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
Description
The method includes: acquiring, from a terminal of a viewer who is playing back, in real time via a network, content in which a performance of a performer is captured, a line-of-sight parameter indicating the line of sight of the viewer in the coordinate system of the space in which the viewer exists, together with viewer identification information identifying the viewer;
converting the acquired line-of-sight parameter into a line-of-sight parameter indicating a virtual line of sight of the viewer in the coordinate system of the space in which the performer exists; and
outputting, based on the converted line-of-sight parameter, line-of-sight information indicating the virtual line of sight of the viewer to an output device in the space in which the performer exists.
The program causes execution of: a step of acquiring, from a terminal of a viewer who is playing back, in real time via a network, content in which a performance of a performer is captured, a line-of-sight parameter indicating the line of sight of the viewer in the coordinate system of the space in which the viewer exists, together with viewer identification information identifying the viewer;
a step of converting the acquired line-of-sight parameter into a line-of-sight parameter indicating a virtual line of sight of the viewer in the coordinate system of the space in which the performer exists; and
a step of outputting, based on the converted line-of-sight parameter, line-of-sight information indicating the virtual line of sight of the viewer to an output device in the space in which the performer exists.
FIG. 1 is a diagram showing the configuration of a content distribution system according to an embodiment of the present technology.
FIG. 3 is a diagram showing the hardware configuration of the viewer information management server 100.
Next, the operation of the content distribution system configured as described above will be described. This operation is executed through cooperation between hardware of the viewer information management server 100, such as the CPU 11 and the communication unit, and software stored in the ROM 12, the RAM 13, the storage device 20, or the removable recording medium 24.
FIG. 6 is a flowchart showing the flow of the process of presenting the viewers' line-of-sight information and effects to the performer.
Modifications concerning the presentation of the viewers' line-of-sight information to the performer are described below.
Next, a means of conveying information such as the viewers' reactions and excitement to the performer, by additionally displaying information other than the viewers' lines of sight in the performer output system 300, will be described.
Modifications of the effect presentation process for the performer are described below.
Next, a method of letting viewers who are watching the same distributed content know in real time how the other viewers are reacting, by adding specific effects to the distributed content in response to the viewers' actions, will be described.
Modifications of the process of applying effects to the content are described below.
By using the viewers' line-of-sight and position information together with the performer's line-of-sight and position information, the sharing of viewing states among viewers and the addition of effects to the content can be performed more effectively. Several examples are described below.
As one method of sharing viewing states, it is conceivable to add the viewing states of other viewers (such as their positions) to the distributed content and to view the content together with them. Here, if the viewing states of other viewers are added to the distributed content unconditionally, problems arise: the added content (effects) may appear at positions that obstruct viewing of the distributed content, or the distributed content may be buried under the added content and become invisible.
When an effect applied in response to the effect application request described above is shared among viewers, the effect application position may be adjusted appropriately. Three concrete cases are described here, but the cases are not limited to these.
Even if an effect is played back in response to an application request from another viewer, the viewer cannot notice it unless the effect is played back within that viewer's viewing cone.
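A minimal sketch of such a visibility test follows, modeling the viewing cone (as an assumption; the embodiment does not spell out the exact geometry) as a circular cone with its apex at the viewer position and its axis along the gaze direction.

```python
import numpy as np

def in_viewing_cone(point, apex, axis, half_angle_deg=30.0):
    """Test whether an effect playback position lies inside a viewer's
    viewing cone. The cone model and the half-angle are illustrative
    assumptions, not values taken from the embodiment."""
    axis = axis / np.linalg.norm(axis)
    v = np.asarray(point, dtype=float) - np.asarray(apex, dtype=float)
    dist = np.linalg.norm(v)
    if dist == 0.0:
        return True                  # effect at the viewer's own position
    cos_angle = np.dot(v / dist, axis)
    return cos_angle >= np.cos(np.radians(half_angle_deg))

# Reposition (or skip) an effect that falls outside the cone.
if not in_viewing_cone([2.0, 0.0, -1.0], [0.0, 0.0, 0.0], [0.0, 0.0, -1.0]):
    print("effect would not be noticed; adjust its playback position")
```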
If an effect requested by one viewer were applied in the same way to the content distributed to other viewers, it could obstruct viewing for viewers who watch the distributed content from viewpoints different from that of the requesting viewer.
For effects having specific attributes, adjusting the playback position by using each viewer's line-of-sight direction and the attributes of the background content enables effect playback appropriate for each viewer.
When a large number of effect reproduction requests occur, problems arise such as distribution delays caused by the increased processing load of adding effects to the distributed content, and an increase in communication data. To avoid these problems, it is conceivable to filter the reproduction requests by using the viewers' line-of-sight information.
In addition to those described above, the following attributes are conceivable as attributes of an effect that change how it is played back.
The present invention is not limited to the above-described embodiment and may be variously modified without departing from the gist of the present invention.
The present technology can also have the following configurations.
(1)
An information processing system comprising a control unit that:
acquires, from a terminal of a viewer who is playing back, in real time via a network, content in which a performance of a performer is captured, a line-of-sight parameter indicating the line of sight of the viewer in the coordinate system of the space in which the viewer exists, together with viewer identification information identifying the viewer;
converts the acquired line-of-sight parameter into a line-of-sight parameter indicating a virtual line of sight of the viewer in the coordinate system of the space in which the performer exists; and
outputs, based on the converted line-of-sight parameter, line-of-sight information indicating the virtual line of sight of the viewer to an output device in the space in which the performer exists.
(2)
The information processing system according to (1) above, wherein
the output device is a display, and
the control unit calculates, based on the converted line-of-sight parameter, intersection coordinates of the display and the virtual line of sight, and outputs, as the line-of-sight information, an image corresponding to the viewer at a position on the display corresponding to the intersection coordinates.
(3)
The information processing system according to (2) above, wherein,
when the intersection coordinates corresponding to a predetermined number or more of viewers exist in a predetermined region of the display, the control unit outputs one predetermined image representing the group of viewers instead of the images corresponding to the respective viewers.
(4)
The information processing system according to (2) or (3) above, wherein
the control unit acquires attribute information indicating an attribute of the viewer together with the line-of-sight parameter, and changes the output mode of the image according to the attribute information.
(5)
The information processing system according to any one of (2) to (4) above, wherein
the control unit determines, based on the converted line-of-sight parameter, whether or not the viewer is directing his or her line of sight toward the performer, and changes the output mode of the image according to the determination result.
(6)
The information processing system according to any one of (2) to (5) above, wherein,
when first intersection coordinates corresponding to a first viewer having first viewer identification information, calculated at a first time, differ from second intersection coordinates corresponding to the first viewer, calculated at a second time later than the first time, the control unit displays the image corresponding to the viewer while moving it along a trajectory connecting the first intersection coordinates and the second intersection coordinates.
(7)
The information processing system according to any one of (2) to (6) above, further comprising
a storage unit that stores information indicating a plurality of types of effects reproducible together with the image, in association with effect identification information identifying each effect, wherein,
upon receiving from the terminal of the viewer an effect reproduction request including the viewer identification information and the effect identification information, the control unit outputs the effect corresponding to the effect identification information from the vicinity of the intersection coordinates corresponding to the viewer identification information.
(8)
The information processing system according to (7) above, wherein,
when effect reproduction requests corresponding to the predetermined number or more of viewers exist for a predetermined region of the display, the control unit outputs one predetermined effect instead of the effects corresponding to the respective viewers.
(9)
The information processing system according to (7) or (8) above, wherein,
upon receiving effect reproduction requests having the same effect identification information from the predetermined number or more of viewers, the control unit outputs one predetermined effect instead of the effects corresponding to the respective viewers.
(10)
The information processing system according to any one of (7) to (9) above, wherein
a plurality of speakers are installed at different positions on the display, and,
when the effect corresponding to the effect identification information included in the effect reproduction request is a sound effect, the control unit outputs the sound effect from a speaker existing in the vicinity of the intersection coordinates corresponding to the viewer identification information (a code sketch of this speaker selection follows the list).
(11)
The information processing system according to any one of (2) to (10) above, wherein
the control unit acquires a line-of-sight parameter indicating the line of sight of the performer, and, upon determining that the absolute value of the inner product of a line-of-sight vector obtained from the performer's line-of-sight parameter and a line-of-sight vector obtained from the line-of-sight parameter indicating the virtual line of sight of the viewer is less than a predetermined threshold value, outputs a predetermined effect from the vicinity of the intersection coordinates corresponding to the viewer identification information.
(12)
The information processing system according to any one of (2) to (10) above, wherein
the control unit acquires a line-of-sight parameter indicating the line of sight of the performer, counts, for each of a plurality of viewers, the number of times the absolute value of the inner product of a line-of-sight vector obtained from the performer's line-of-sight parameter and a line-of-sight vector obtained from the line-of-sight parameter indicating the virtual line of sight of that viewer has become less than a predetermined threshold value, and causes the display to display a histogram in which a value corresponding to each viewer's count is associated with the vicinity of the intersection coordinates corresponding to that viewer.
(13)
An information processing method comprising:
acquiring, from a terminal of a viewer who is playing back, in real time via a network, content in which a performance of a performer is captured, a line-of-sight parameter indicating the line of sight of the viewer in the coordinate system of the space in which the viewer exists, together with viewer identification information identifying the viewer;
converting the acquired line-of-sight parameter into a line-of-sight parameter indicating a virtual line of sight of the viewer in the coordinate system of the space in which the performer exists; and
outputting, based on the converted line-of-sight parameter, line-of-sight information indicating the virtual line of sight of the viewer to an output device in the space in which the performer exists.
(14)
A program that causes an information processing apparatus to execute:
a step of acquiring, from a terminal of a viewer who is playing back, in real time via a network, content in which a performance of a performer is captured, a line-of-sight parameter indicating the line of sight of the viewer in the coordinate system of the space in which the viewer exists, together with viewer identification information identifying the viewer;
a step of converting the acquired line-of-sight parameter into a line-of-sight parameter indicating a virtual line of sight of the viewer in the coordinate system of the space in which the performer exists; and
a step of outputting, based on the converted line-of-sight parameter, line-of-sight information indicating the virtual line of sight of the viewer to an output device in the space in which the performer exists.
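As referenced in configuration (10) above, the following is a minimal sketch of selecting the speaker nearest to a viewer's intersection coordinates on the display; the 2-D display coordinate system and the speaker layout are illustrative assumptions.

```python
import numpy as np

def pick_speaker(intersection_xy, speaker_positions):
    """Choose the speaker nearest to the intersection coordinates of a
    viewer's virtual line of sight on the display, so that a requested
    sound effect seems to come from 'where the viewer is'. Returns the
    index of the chosen speaker in speaker_positions."""
    target = np.asarray(intersection_xy, dtype=float)
    distances = [np.linalg.norm(np.asarray(p) - target)
                 for p in speaker_positions]
    return int(np.argmin(distances))

# Four speakers at the corners of a 4 m x 2 m display (hypothetical layout).
speakers = [(0.0, 0.0), (4.0, 0.0), (0.0, 2.0), (4.0, 2.0)]
idx = pick_speaker((3.2, 1.7), speakers)   # -> 3 (upper-right speaker)
```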
18…Input device
19…Output device
20…Storage device
26…Imaging device
23…Communication device
51…Camera
52…Microphone
53…Display
71…Avatar image
72…Effect
73…Histogram
100…Viewer information management server
200…Viewer output system
300…Performer output system
400…Content creation server
500…Content distribution server
P…Performer
V…Viewer
L…Line of sight
VL…Virtual line of sight
Claims (14)
- An information processing system comprising a control unit that:
acquires, from a terminal of a viewer who is playing back, in real time via a network, content in which a performance of a performer is captured, a line-of-sight parameter indicating the line of sight of the viewer in the coordinate system of the space in which the viewer exists, together with viewer identification information identifying the viewer;
converts the acquired line-of-sight parameter into a line-of-sight parameter indicating a virtual line of sight of the viewer in the coordinate system of the space in which the performer exists; and
outputs, based on the converted line-of-sight parameter, line-of-sight information indicating the virtual line of sight of the viewer to an output device in the space in which the performer exists.
- The information processing system according to claim 1, wherein
the output device is a display, and
the control unit calculates, based on the converted line-of-sight parameter, intersection coordinates of the display and the virtual line of sight, and outputs, as the line-of-sight information, an image corresponding to the viewer at a position on the display corresponding to the intersection coordinates.
- The information processing system according to claim 2, wherein,
when the intersection coordinates corresponding to a predetermined number or more of viewers exist in a predetermined region of the display, the control unit outputs one predetermined image representing the group of viewers instead of the images corresponding to the respective viewers.
- The information processing system according to claim 2, wherein
the control unit acquires attribute information indicating an attribute of the viewer together with the line-of-sight parameter, and changes the output mode of the image according to the attribute information.
- The information processing system according to claim 2, wherein
the control unit determines, based on the converted line-of-sight parameter, whether or not the viewer is directing his or her line of sight toward the performer, and changes the output mode of the image according to the determination result.
- The information processing system according to claim 2, wherein,
when first intersection coordinates corresponding to a first viewer having first viewer identification information, calculated at a first time, differ from second intersection coordinates corresponding to the first viewer, calculated at a second time later than the first time, the control unit displays the image corresponding to the viewer while moving it along a trajectory connecting the first intersection coordinates and the second intersection coordinates.
- The information processing system according to claim 2, further comprising
a storage unit that stores information indicating a plurality of types of effects reproducible together with the image, in association with effect identification information identifying each effect, wherein,
upon receiving from the terminal of the viewer an effect reproduction request including the viewer identification information and the effect identification information, the control unit outputs the effect corresponding to the effect identification information from the vicinity of the intersection coordinates corresponding to the viewer identification information.
- The information processing system according to claim 7, wherein,
when effect reproduction requests corresponding to the predetermined number or more of viewers exist for a predetermined region of the display, the control unit outputs one predetermined effect instead of the effects corresponding to the respective viewers.
- The information processing system according to claim 7, wherein,
upon receiving effect reproduction requests having the same effect identification information from the predetermined number or more of viewers, the control unit outputs one predetermined effect instead of the effects corresponding to the respective viewers.
- The information processing system according to claim 7, wherein
a plurality of speakers are installed at different positions on the display, and,
when the effect corresponding to the effect identification information included in the effect reproduction request is a sound effect, the control unit outputs the sound effect from a speaker existing in the vicinity of the intersection coordinates corresponding to the viewer identification information.
- The information processing system according to claim 2, wherein
the control unit acquires a line-of-sight parameter indicating the line of sight of the performer, and, upon determining that the absolute value of the inner product of a line-of-sight vector obtained from the performer's line-of-sight parameter and a line-of-sight vector obtained from the line-of-sight parameter indicating the virtual line of sight of the viewer is less than a predetermined threshold value, outputs a predetermined effect from the vicinity of the intersection coordinates corresponding to the viewer identification information.
- The information processing system according to claim 2, wherein
the control unit acquires a line-of-sight parameter indicating the line of sight of the performer, counts, for each of a plurality of viewers, the number of times the absolute value of the inner product of a line-of-sight vector obtained from the performer's line-of-sight parameter and a line-of-sight vector obtained from the line-of-sight parameter indicating the virtual line of sight of that viewer has become less than a predetermined threshold value, and causes the display to display a histogram in which a value corresponding to each viewer's count is associated with the vicinity of the intersection coordinates corresponding to that viewer.
- An information processing method comprising:
acquiring, from a terminal of a viewer who is playing back, in real time via a network, content in which a performance of a performer is captured, a line-of-sight parameter indicating the line of sight of the viewer in the coordinate system of the space in which the viewer exists, together with viewer identification information identifying the viewer;
converting the acquired line-of-sight parameter into a line-of-sight parameter indicating a virtual line of sight of the viewer in the coordinate system of the space in which the performer exists; and
outputting, based on the converted line-of-sight parameter, line-of-sight information indicating the virtual line of sight of the viewer to an output device in the space in which the performer exists.
- A program that causes an information processing apparatus to execute:
a step of acquiring, from a terminal of a viewer who is playing back, in real time via a network, content in which a performance of a performer is captured, a line-of-sight parameter indicating the line of sight of the viewer in the coordinate system of the space in which the viewer exists, together with viewer identification information identifying the viewer;
a step of converting the acquired line-of-sight parameter into a line-of-sight parameter indicating a virtual line of sight of the viewer in the coordinate system of the space in which the performer exists; and
a step of outputting, based on the converted line-of-sight parameter, line-of-sight information indicating the virtual line of sight of the viewer to an output device in the space in which the performer exists.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202080077588.4A CN114651448B (zh) | 2019-11-15 | 2020-10-30 | 信息处理系统、信息处理方法和程序 |
US17/767,746 US20240077941A1 (en) | 2019-11-15 | 2020-10-30 | Information processing system, information processing method, and program |
JP2021556021A JPWO2021095573A1 (ja) | 2019-11-15 | 2020-10-30 | 情報処理システム、情報処理方法及びプログラム |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2019207477 | 2019-11-15 | ||
JP2019-207477 | 2019-11-15 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021095573A1 true WO2021095573A1 (ja) | 2021-05-20 |
Family
ID=75912321
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2020/040878 WO2021095573A1 (ja) | 2019-11-15 | 2020-10-30 | 情報処理システム、情報処理方法及びプログラム |
Country Status (4)
Country | Link |
---|---|
US (1) | US20240077941A1 (ja) |
JP (1) | JPWO2021095573A1 (ja) |
CN (1) | CN114651448B (ja) |
WO (1) | WO2021095573A1 (ja) |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8358328B2 (en) * | 2008-11-20 | 2013-01-22 | Cisco Technology, Inc. | Multiple video camera processing for teleconferencing |
JP4775671B2 (ja) * | 2008-12-26 | 2011-09-21 | ソニー株式会社 | 情報処理装置および方法、並びにプログラム |
US8154615B2 (en) * | 2009-06-30 | 2012-04-10 | Eastman Kodak Company | Method and apparatus for image display control according to viewer factors and responses |
JP5783629B2 (ja) * | 2011-07-08 | 2015-09-24 | 株式会社ドワンゴ | 映像表示システム、映像表示方法、映像表示制御プログラム、動作情報送信プログラム |
JP6039915B2 (ja) * | 2011-07-08 | 2016-12-07 | 株式会社ドワンゴ | ステージ演出システム、演出制御サブシステム、ステージ演出システムの動作方法、演出制御サブシステムの動作方法、およびプログラム |
US9538133B2 (en) * | 2011-09-23 | 2017-01-03 | Jie Diao | Conveying gaze information in virtual conference |
KR101751708B1 (ko) * | 2012-08-17 | 2017-07-11 | 한국전자통신연구원 | 시청행태 인식기반의 시청률 및 광고효과 분석 방법 및 시스템 |
CN105323531A (zh) * | 2014-06-30 | 2016-02-10 | 三亚中兴软件有限责任公司 | 视频会议热点场景的检测方法和装置 |
JP2017062598A (ja) * | 2015-09-24 | 2017-03-30 | ソニー株式会社 | 情報処理装置、情報処理方法、およびプログラム |
WO2018074037A1 (ja) * | 2016-10-21 | 2018-04-26 | 株式会社Myth | 情報処理システム |
JP6946600B2 (ja) * | 2017-02-27 | 2021-10-06 | 日本製紙クレシア株式会社 | 吸収性補助パッド及びその使用方法 |
JP2018163460A (ja) * | 2017-03-24 | 2018-10-18 | ソニー株式会社 | 情報処理装置、および情報処理方法、並びにプログラム |
US10269571B2 (en) * | 2017-07-12 | 2019-04-23 | Applied Materials, Inc. | Methods for fabricating nanowire for semiconductor applications |
JP6972789B2 (ja) * | 2017-08-31 | 2021-11-24 | 日本精機株式会社 | ヘッドアップディスプレイ装置 |
SG11202006693SA (en) * | 2018-01-19 | 2020-08-28 | Esb Labs Inc | Virtual interactive audience interface |
CN110244778B (zh) * | 2019-06-20 | 2022-09-06 | 京东方科技集团股份有限公司 | 一种基于人眼追踪的平视随动控制系统和控制方法 |
WO2022031872A1 (en) * | 2020-08-04 | 2022-02-10 | Owl Labs Inc. | Designated view within a multi-view composited webcam signal |
WO2022046810A2 (en) * | 2020-08-24 | 2022-03-03 | Owl Labs Inc. | Merging webcam signals from multiple cameras |
2020
- 2020-10-30 JP JP2021556021A patent/JPWO2021095573A1/ja active Pending
- 2020-10-30 US US17/767,746 patent/US20240077941A1/en active Pending
- 2020-10-30 CN CN202080077588.4A patent/CN114651448B/zh active Active
- 2020-10-30 WO PCT/JP2020/040878 patent/WO2021095573A1/ja active Application Filing
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019126101A (ja) * | 2014-07-18 | 2019-07-25 | ソニー株式会社 | 情報処理装置及び方法、表示制御装置及び方法、プログラム、並びに情報処理システム |
JP2019192178A (ja) * | 2018-04-27 | 2019-10-31 | 株式会社コロプラ | プログラム、情報処理装置、および方法 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023276252A1 (ja) * | 2021-06-30 | 2023-01-05 | ソニーグループ株式会社 | 情報処理装置、情報処理方法及びプログラム |
WO2023047637A1 (ja) * | 2021-09-22 | 2023-03-30 | ソニーグループ株式会社 | 情報処理装置およびプログラム |
WO2023079859A1 (ja) * | 2021-11-08 | 2023-05-11 | ソニーグループ株式会社 | 情報処理装置及び情報処理方法 |
Also Published As
Publication number | Publication date |
---|---|
JPWO2021095573A1 (ja) | 2021-05-20 |
CN114651448B (zh) | 2024-10-18 |
CN114651448A (zh) | 2022-06-21 |
US20240077941A1 (en) | 2024-03-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11700286B2 (en) | Multiuser asymmetric immersive teleconferencing with synthesized audio-visual feed | |
US10645369B2 (en) | Stereo viewing | |
WO2021095573A1 (ja) | 情報処理システム、情報処理方法及びプログラム | |
WO2016009864A1 (ja) | 情報処理装置、表示装置、情報処理方法、プログラム、および情報処理システム | |
Lelyveld | Virtual reality primer with an emphasis on camera-captured VR | |
TWI530157B (zh) | 多視角影像之顯示系統、方法及其非揮發性電腦可讀取紀錄媒體 | |
US10681276B2 (en) | Virtual reality video processing to compensate for movement of a camera during capture | |
JP2016537903A (ja) | バーチャルリアリティコンテンツのつなぎ合わせおよび認識 | |
US11647354B2 (en) | Method and apparatus for providing audio content in immersive reality | |
WO2022209129A1 (ja) | 情報処理装置、情報処理方法、およびプログラム | |
WO2020206647A1 (zh) | 跟随用户运动控制播放视频内容的方法和装置 | |
US20200225467A1 (en) | Method for projecting immersive audiovisual content | |
WO2021161894A1 (ja) | 情報処理システム、情報処理方法及びプログラム | |
WO2020053412A1 (en) | A system for controlling audio-capable connected devices in mixed reality environments | |
CN110910508B (zh) | 一种图像显示方法、装置和系统 | |
WO2019146426A1 (ja) | 画像処理装置、画像処理方法、プログラム、および投影システム | |
US11863902B2 (en) | Techniques for enabling high fidelity magnification of video | |
US20220180664A1 (en) | Frame of reference for motion capture | |
WO2022220306A1 (ja) | 映像表示システム、情報処理装置、情報処理方法、及び、プログラム | |
WO2021179102A1 (zh) | 实境仿真全景系统及其使用方法 | |
CN116941234A (zh) | 用于运动捕捉的参考系 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20888250 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 17767746 Country of ref document: US |
|
ENP | Entry into the national phase |
Ref document number: 2021556021 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20888250 Country of ref document: EP Kind code of ref document: A1 |