WO2020262391A1 - Display control device, display control method, and program - Google Patents

Display control device, display control method, and program

Info

Publication number
WO2020262391A1
Authority
WO
WIPO (PCT)
Prior art keywords
viewpoint
display
image
avatar
information
Application number
PCT/JP2020/024637
Other languages
English (en)
Japanese (ja)
Inventor
Fuminori Irie
Takashi Aoki
Kazuki Tamura
Masahiko Miyata
Original Assignee
FUJIFILM Corporation
Application filed by FUJIFILM Corporation
Priority to JP2021527648A (patent JP7163498B2)
Publication of WO2020262391A1
Priority to US17/558,537 (patent US11909945B2)
Priority to US18/398,988 (publication US20240129449A1)

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 3/00 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G 3/001 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/282 Image signal generators for generating image signals corresponding to three or more geometrical viewpoints, e.g. multi-view systems
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/01 Head-up displays
    • G02B 27/017 Head mounted
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/36 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/36 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G 5/37 Details of the operation on graphic patterns
    • G09G 5/377 Details of the operation on graphic patterns for mixing or overlaying two or more graphic patterns
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/111 Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/204 Image signal generators using stereoscopic image cameras
    • H04N 13/243 Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/275 Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H04N 13/279 Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals the virtual viewpoint locations being selected by the viewers or determined by tracking
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/265 Mixing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/01 Head-up displays
    • G02B 27/0101 Head-up displays characterised by optical features
    • G02B 2027/0138 Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/01 Head-up displays
    • G02B 27/0101 Head-up displays characterised by optical features
    • G02B 2027/014 Head-up displays characterised by optical features comprising information/image processing systems

Definitions

  • The technology of the present disclosure relates to a display control device, a display control method, and a program.
  • Japanese Patent Application Laid-Open No. 2018-190336 discloses a computer-implemented method for providing a virtual space via a head-mounted device. The method includes a step of defining the virtual space, a step of receiving a first audio signal corresponding to an utterance of the user of the head-mounted device, and a step of executing shooting in the virtual space with the first audio signal as a trigger.
  • The method described in JP-A-2018-190336 further includes a step of arranging, in the virtual space, a first avatar object corresponding to the user and a second avatar object corresponding to a user of another computer capable of communicating with the computer. Executing the shooting includes capturing at least a portion of each of the first and second avatar objects based on position information of the first and second avatar objects.
  • Japanese Patent Application Laid-Open No. 2018-106297 discloses a mixed reality presentation system.
  • The mixed reality presentation system described in JP-A-2018-106297 includes a plurality of head-mountable devices, each having imaging means and display means; storage means for storing identification information of the local coordinate space to which each device belongs, CG model information for drawing a CG model to be combined with an image captured by the imaging means, and avatar information for drawing an avatar that represents the wearer of each device; detection means for detecting the position and orientation of each device; and control means for outputting, to the display means of each device, a composite image in which the CG model and the avatars are combined, based on the respective positions and orientations, with the image captured by that device's imaging means.
  • The combining means combines the CG model and the avatars by controlling the output of the avatar corresponding to another device so that the avatar is visible in the image captured by the imaging means of the device of interest, based on the positional relationship among the device of interest, the CG model, and the other device.
  • One embodiment according to the technique of the present disclosure provides a display control device, a display control method, and a program capable of making the presence of a specific person perceivable through a viewpoint image selected from a plurality of viewpoint images, and of changing how that presence is displayed according to the angle of view of the viewpoint image.
  • A first aspect according to the technique of the present disclosure is a display control device including: a first acquisition unit that acquires first viewpoint position information indicating a first viewpoint position of a first person with respect to an imaging region; and a first control unit that performs control to display, on a first display unit capable of displaying an image visually recognized by a second person different from the first person, a first viewpoint image selected from a plurality of viewpoint images generated based on images obtained by imaging the imaging region from a plurality of mutually different viewpoint positions. When the first viewpoint image includes the first viewpoint position indicated by the first viewpoint position information acquired by the first acquisition unit, the first control unit performs control to display, within the first viewpoint image, first specific information capable of specifying that first viewpoint position, and performs control to change the display size of the first specific information according to the angle of view of the first viewpoint image displayed on the first display unit (see the sketch below).
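  • The size rule of the first aspect can be pictured with a short Python sketch. This is a minimal, hypothetical rendering rule, not taken from the patent: the function names and the inverse-proportional scaling law (halving the angle of view doubles the on-screen avatar size) are assumptions for illustration.

```python
# Hedged sketch of the first aspect's size rule. All names and the
# inverse-proportional scaling law are illustrative assumptions.

def avatar_display_size(base_size_px: float,
                        reference_fov_deg: float,
                        current_fov_deg: float) -> float:
    """Return the on-screen avatar size for the current angle of view.

    A narrower angle of view means the viewpoint image is zoomed in,
    so the avatar marking the first viewpoint position is drawn larger;
    a wider angle of view shrinks it proportionally.
    """
    if current_fov_deg <= 0:
        raise ValueError("angle of view must be positive")
    return base_size_px * (reference_fov_deg / current_fov_deg)


# Example: an avatar drawn at 40 px for a 90-degree view grows to 80 px
# when the displayed viewpoint image narrows to a 45-degree angle of view.
assert avatar_display_size(40.0, 90.0, 45.0) == 80.0
```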
  • A second aspect according to the technique of the present disclosure is the display control device according to the first aspect, in which the first control unit performs control on the first display unit to keep the degree of difference between the image quality of the first viewpoint image and the image quality of the first specific information within a first predetermined range (see the sketch below).
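  • One way to read the second aspect is as a clamp on the avatar's rendering quality. The sketch below is an assumption-laden illustration: the scalar "quality" metric and the function name are invented for this example, and real implementations would likely compare resolution, sharpness, or noise instead of a single number.

```python
# Hedged sketch of the second aspect: keep the difference between the
# viewpoint image quality and the avatar (first specific information)
# rendering quality within a first predetermined range. The scalar
# quality metric is an illustrative assumption.

def matched_avatar_quality(image_quality: float,
                           avatar_quality: float,
                           predetermined_range: float) -> float:
    """Clamp avatar quality so |image - avatar| <= predetermined_range."""
    lo = image_quality - predetermined_range
    hi = image_quality + predetermined_range
    return min(max(avatar_quality, lo), hi)


# Example: a crisp avatar (1.0) shown over a softer viewpoint image (0.6)
# is degraded to 0.7 when the allowed difference is 0.1.
assert matched_avatar_quality(0.6, 1.0, 0.1) == 0.7
```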
  • A third aspect according to the technique of the present disclosure is the display control device according to the first or second aspect, in which the first control unit performs control on the first display unit to change the display mode of the first specific information according to the relationship between the display size of the first viewpoint image and the display size of the first specific information.
  • A fourth aspect according to the technique of the present disclosure is the display control device according to the third aspect, in which, when the ratio of the display size of the first specific information to the display size of the first viewpoint image is equal to or greater than a first threshold value, the first control unit changes the display mode of the first specific information on the first display unit by hiding the first specific information, displaying only its outline, or displaying it semi-transparently.
  • A fifth aspect according to the technique of the present disclosure is the display control device according to the third or fourth aspect, in which the first control unit changes the display mode of the first specific information according to the relationship between the display size of the first viewpoint image and the display size of the first specific information, and the relationship between the display position of the first viewpoint image and the display position of the first specific information.
  • A sixth aspect according to the technique of the present disclosure is the display control device according to the third aspect, in which, when the ratio of the display size of the first specific information to the display size of the first viewpoint image is less than a second threshold value, the first control unit causes the first display unit to display the first specific information in a display mode that is emphasized more than other areas in the first viewpoint image (the fourth and sixth aspects are sketched together below).
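  • The fourth and sixth aspects together define a ratio-driven choice of display mode. A hedged Python sketch follows; the threshold values and the choice of semi-transparency among the three permitted suppression modes are illustrative assumptions, not values from the patent.

```python
from enum import Enum


class AvatarMode(Enum):
    HIDDEN = "hidden"
    OUTLINE_ONLY = "outline only"
    SEMI_TRANSPARENT = "semi-transparent"
    NORMAL = "normal"
    EMPHASIZED = "emphasized"


def choose_avatar_mode(avatar_size_px: float,
                       image_size_px: float,
                       first_threshold: float = 0.5,
                       second_threshold: float = 0.01) -> AvatarMode:
    """Pick a display mode from the size ratio, per the 4th/6th aspects.

    At or above the first threshold the avatar would dominate the view,
    so it is suppressed (HIDDEN and OUTLINE_ONLY would equally satisfy
    the fourth aspect). Below the second threshold the avatar would be
    easy to miss, so it is emphasized per the sixth aspect.
    """
    ratio = avatar_size_px / image_size_px
    if ratio >= first_threshold:
        return AvatarMode.SEMI_TRANSPARENT
    if ratio < second_threshold:
        return AvatarMode.EMPHASIZED
    return AvatarMode.NORMAL
```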
  • A seventh aspect according to the technique of the present disclosure is the display control device according to any one of the first to sixth aspects, in which the first display unit is included in a first head-mounted display mounted on the second person.
  • An eighth aspect according to the technique of the present disclosure is the display control device according to any one of the first to seventh aspects, in which the first viewpoint image is a viewpoint image selected from the plurality of viewpoint images according to a first instruction received by a first reception unit capable of receiving the first instruction to select any one of the plurality of viewpoint images.
  • A ninth aspect according to the technique of the present disclosure is the display control device according to any one of the first to eighth aspects, in which the first acquisition unit further acquires first line-of-sight direction information indicating a first line-of-sight direction of the first person with respect to the imaging region, and the first specific information includes information capable of specifying the first line-of-sight direction indicated by the first line-of-sight direction information acquired by the first acquisition unit.
  • A tenth aspect according to the technique of the present disclosure is the display control device according to any one of the first to ninth aspects, in which each of the plurality of viewpoint images has unique viewpoint position information indicating a unique viewpoint position, each of the plurality of viewpoint images is an image showing the imaging region as observed from the corresponding unique viewpoint position, and the first viewpoint position information is the unique viewpoint position information of any one of the plurality of viewpoint images.
  • An eleventh aspect according to the technique of the present disclosure is the display control device according to the tenth aspect, in which the first acquisition unit acquires, as the first viewpoint position information, the unique viewpoint position information corresponding to a second viewpoint image displayed by a second display unit capable of displaying an image visually recognized by the first person and capable of displaying the second viewpoint image selected from the plurality of viewpoint images.
  • A twelfth aspect according to the technique of the present disclosure is the display control device according to any one of the first to eighth aspects, in which the first acquisition unit further acquires first line-of-sight direction information indicating a first line-of-sight direction of the first person with respect to the imaging region; the first specific information includes information capable of specifying the first line-of-sight direction indicated by the first line-of-sight direction information acquired by the first acquisition unit; and the first acquisition unit acquires, as the first line-of-sight direction information, information indicating a direction facing a second viewpoint image displayed by a second display unit capable of displaying an image visually recognized by the first person and capable of displaying the second viewpoint image selected from the plurality of viewpoint images.
  • A thirteenth aspect according to the technique of the present disclosure is the display control device according to the twelfth aspect, in which the second display unit is included in a second head-mounted display mounted on the first person.
  • A fourteenth aspect according to the technique of the present disclosure is the display control device according to any one of the eleventh to thirteenth aspects, in which the second viewpoint image is a viewpoint image selected from the plurality of viewpoint images according to a second instruction received by a second reception unit capable of receiving the second instruction to select any one of the plurality of viewpoint images.
  • A fifteenth aspect according to the technique of the present disclosure is the display control device according to any one of the first to eighth aspects, further including: a second acquisition unit that acquires second viewpoint position information indicating a second viewpoint position of the second person with respect to the imaging region; and a second control unit that performs control to display, on a second display unit capable of displaying an image visually recognized by the first person, a second viewpoint image selected from the plurality of viewpoint images. When the second viewpoint image includes the second viewpoint position indicated by the second viewpoint position information acquired by the second acquisition unit, the second control unit performs control to display, within the second viewpoint image, second specific information capable of specifying that second viewpoint position, and performs control to change the display size of the second specific information according to the angle of view of the second viewpoint image displayed on the second display unit.
  • A sixteenth aspect according to the technique of the present disclosure is the display control device according to the fifteenth aspect, in which the second control unit performs control on the second display unit to keep the degree of difference between the image quality of the second viewpoint image and the image quality of the second specific information within a second predetermined range.
  • A seventeenth aspect according to the technique of the present disclosure is the display control device according to the fifteenth or sixteenth aspect, in which the second control unit changes the display mode of the second specific information according to the relationship between the display size of the second viewpoint image and the display size of the second specific information.
  • An eighteenth aspect according to the technique of the present disclosure is the display control device according to the seventeenth aspect, in which, when the ratio of the display size of the second specific information to the display size of the second viewpoint image is equal to or greater than a third threshold value, the second control unit changes the display mode of the second specific information on the second display unit by hiding the second specific information, displaying only its outline, or displaying it semi-transparently.
  • A nineteenth aspect according to the technique of the present disclosure is the display control device according to the fifteenth or sixteenth aspect, in which the second control unit changes the display mode of the second specific information according to the relationship between the display size of the second viewpoint image and the display size of the second specific information, and the relationship between the display position of the second viewpoint image and the display position of the second specific information.
  • A twentieth aspect according to the technique of the present disclosure is the display control device according to the seventeenth aspect, in which, when the ratio of the display size of the second specific information to the display size of the second viewpoint image is less than a fourth threshold value, the second control unit causes the second display unit to display the second specific information in a display mode that is emphasized more than other areas in the second viewpoint image.
  • A twenty-first aspect according to the technique of the present disclosure is the display control device according to any one of the fifteenth to twentieth aspects, in which the second acquisition unit further acquires second line-of-sight direction information indicating a second line-of-sight direction of the second person with respect to the imaging region, and the second specific information includes information capable of specifying the second line-of-sight direction indicated by the second line-of-sight direction information acquired by the second acquisition unit.
  • A twenty-second aspect according to the technique of the present disclosure is the display control device according to any one of the fifteenth to twenty-first aspects, in which each of the plurality of viewpoint images has unique viewpoint position information indicating a unique viewpoint position, each of the plurality of viewpoint images is an image showing the imaging region as observed from the corresponding unique viewpoint position, and each of the first viewpoint position information and the second viewpoint position information is the unique viewpoint position information of one of the plurality of viewpoint images.
  • A twenty-third aspect according to the technique of the present disclosure is the display control device according to the twenty-second aspect, in which the first acquisition unit acquires, as the first viewpoint position information, the unique viewpoint position information corresponding to the second viewpoint image displayed by the second display unit capable of displaying an image visually recognized by the first person and capable of displaying the second viewpoint image selected from the plurality of viewpoint images, and the second acquisition unit acquires, as the second viewpoint position information, the unique viewpoint position information corresponding to the first viewpoint image displayed by the first display unit.
  • A twenty-fourth aspect according to the technique of the present disclosure is the display control device according to any one of the fifteenth to twenty-third aspects, in which: the first acquisition unit further acquires first line-of-sight direction information indicating the first line-of-sight direction of the first person with respect to the imaging region, the first specific information includes information capable of specifying the first line-of-sight direction indicated by the first line-of-sight direction information acquired by the first acquisition unit, and the first acquisition unit acquires, as the first line-of-sight direction information, information indicating a direction facing the second viewpoint image displayed by the second display unit; and the second acquisition unit further acquires second line-of-sight direction information indicating the second line-of-sight direction of the second person with respect to the imaging region, the second specific information includes information capable of specifying the second line-of-sight direction indicated by the second line-of-sight direction information acquired by the second acquisition unit, and the second acquisition unit acquires, as the second line-of-sight direction information, information indicating a direction facing the first viewpoint image displayed by the first display unit.
  • A twenty-fifth aspect according to the technique of the present disclosure is the display control device according to any one of the fifteenth to twenty-fourth aspects, in which the second display unit is included in a second head-mounted display mounted on the first person.
  • A twenty-sixth aspect according to the technique of the present disclosure is the display control device according to any one of the fifteenth to twenty-fifth aspects, in which the second viewpoint image is a viewpoint image selected from the plurality of viewpoint images according to a second instruction received by a second reception unit capable of receiving the second instruction to select any one of the plurality of viewpoint images.
  • A twenty-seventh aspect according to the technique of the present disclosure is the display control device according to any one of the fifteenth to twenty-sixth aspects, further including a first setting unit that performs setting to hide the second specific information when a first predetermined condition is satisfied.
  • A twenty-eighth aspect according to the technique of the present disclosure is the display control device according to any one of the first to twenty-seventh aspects, in which the viewpoint position of at least one of the first person and the second person with respect to the imaging region is limited to a part of the imaging region.
  • A twenty-ninth aspect according to the technique of the present disclosure is the display control device according to any one of the first to twenty-eighth aspects, further including a second setting unit that performs setting to hide the first specific information when a second predetermined condition is satisfied.
  • A thirtieth aspect according to the technique of the present disclosure is the display control device according to any one of the first to twenty-ninth aspects, in which at least one of the plurality of viewpoint images is a virtual viewpoint image.
  • A thirty-first aspect according to the technique of the present disclosure is a display control method including: acquiring first viewpoint position information indicating a first viewpoint position of a first person with respect to an imaging region; performing control to display, on a first display unit capable of displaying an image visually recognized by a second person different from the first person, a first viewpoint image selected from a plurality of viewpoint images generated based on images obtained by imaging the imaging region from a plurality of mutually different viewpoint positions; performing control, when the first viewpoint image includes the first viewpoint position indicated by the acquired first viewpoint position information, to display, within the first viewpoint image, first specific information capable of specifying that first viewpoint position; and performing control to change the display size of the first specific information according to the angle of view of the first viewpoint image displayed by the first display unit.
  • A thirty-second aspect according to the technique of the present disclosure is a program for causing a computer to execute processing including: acquiring first viewpoint position information indicating a first viewpoint position of a first person with respect to an imaging region; performing control to display, on a first display unit capable of displaying an image visually recognized by a second person different from the first person, a first viewpoint image selected from a plurality of viewpoint images generated based on images obtained by imaging the imaging region from a plurality of mutually different viewpoint positions; performing control, when the first viewpoint image includes the first viewpoint position indicated by the acquired first viewpoint position information, to display, within the first viewpoint image, first specific information capable of specifying that first viewpoint position; and performing control to change the display size of the first specific information according to the angle of view of the first viewpoint image displayed by the first display unit.
  • Brief description of the drawings (excerpt):
  • A conceptual diagram showing an example of a mode in which a first viewpoint line-of-sight instruction is given to the first smartphone according to the embodiment, and an example of a mode in which a second viewpoint line-of-sight instruction is given to the second smartphone according to the embodiment.
  • A conceptual diagram showing an example of a mode in which the first viewpoint line-of-sight instruction is transmitted from the first smartphone according to the embodiment to the display control device, and the second viewpoint line-of-sight instruction is transmitted from the second smartphone according to the embodiment to the display control device.
  • A block diagram showing an example of the specific functions of the first control unit and the second control unit of the display control device according to the embodiment.
  • A conceptual diagram showing an example of a mode in which the viewpoint images generated by the viewpoint image generation process executed by the CPU of the display control device are acquired by the first viewpoint image acquisition unit and the second viewpoint image acquisition unit together with their viewpoint image identifiers and unique viewpoint position information.
  • A conceptual diagram showing an example of a mode in which the first avatar is hidden from the viewpoint video containing the first avatar shown in FIG. 28.
  • A conceptual diagram showing an example of a mode in which the first smartphone according to the embodiment sends an avatar non-display instruction to the setting unit of the display control device, and the second smartphone according to the embodiment sends an avatar non-display instruction to the setting unit of the display control device.
  • Flowcharts showing examples of the flows of the first display control process, the second display control process, and the setting process according to the embodiment (see FIGS. 35 to 41).
  • CPU is an abbreviation for "Central Processing Unit".
  • RAM is an abbreviation for "Random Access Memory".
  • DRAM is an abbreviation for "Dynamic Random Access Memory".
  • SRAM is an abbreviation for "Static Random Access Memory".
  • ROM is an abbreviation for "Read Only Memory".
  • SSD is an abbreviation for "Solid State Drive".
  • HDD is an abbreviation for "Hard Disk Drive".
  • EEPROM is an abbreviation for "Electrically Erasable and Programmable Read Only Memory".
  • I/F is an abbreviation for "Interface".
  • IC is an abbreviation for "Integrated Circuit".
  • ASIC is an abbreviation for "Application Specific Integrated Circuit".
  • PLD is an abbreviation for "Programmable Logic Device".
  • FPGA is an abbreviation for "Field-Programmable Gate Array".
  • SoC is an abbreviation for "System-on-a-Chip".
  • CMOS is an abbreviation for "Complementary Metal Oxide Semiconductor".
  • CCD is an abbreviation for "Charge Coupled Device".
  • EL is an abbreviation for "Electro-Luminescence".
  • GPU is an abbreviation for "Graphics Processing Unit".
  • LAN is an abbreviation for "Local Area Network".
  • 3D is an abbreviation for "Three Dimensions".
  • USB is an abbreviation for "Universal Serial Bus".
  • HMD is an abbreviation for "Head Mounted Display".
  • fps is an abbreviation for "frames per second".
  • GPS is an abbreviation for "Global Positioning System".
  • The information processing system 10 includes a display control device 12, a first smartphone 14A, a second smartphone 14B, a plurality of imaging devices 16, an imaging device 18, a wireless communication base station (hereinafter simply referred to as the "base station") 20, a first HMD 34A, and a second HMD 34B.
  • The imaging devices 16 and 18 are imaging devices having a CMOS image sensor, and are each equipped with an optical zoom function and a digital zoom function. Instead of the CMOS image sensor, another type of image sensor, such as a CCD image sensor, may be adopted.
  • The plurality of imaging devices 16 are installed in the soccer stadium 22.
  • Each of the plurality of imaging devices 16 is arranged so as to surround the soccer field 24, and images the region including the soccer field 24 as an imaging region.
  • Here, an example in which each of the plurality of imaging devices 16 is arranged so as to surround the soccer field 24 is given, but the technique of the present disclosure is not limited to this; the arrangement of the plurality of imaging devices 16 is determined according to the virtual viewpoint image to be generated.
  • The plurality of imaging devices 16 may be arranged so as to surround the entire soccer field 24, or may be arranged so as to surround a specific part of it.
  • The imaging device 18 is installed in an unmanned aerial vehicle (for example, a drone), and captures a bird's-eye view of the region including the soccer field 24 from the sky as an imaging region.
  • Here, the imaging region in a state in which the region including the soccer field 24 is viewed from the sky corresponds to the imaging surface of the soccer field 24 as captured by the imaging device 18.
  • The display control device 12 is installed in the control room 32.
  • The plurality of imaging devices 16 and the display control device 12 are connected via LAN cables 30; the display control device 12 controls the plurality of imaging devices 16 and acquires the images obtained by imaging by each of the plurality of imaging devices 16.
  • Although a connection using a wired communication method via the LAN cables 30 is illustrated here, the connection is not limited to this, and a wireless communication method may be used.
  • The base station 20 transmits and receives various information to and from the display control device 12, the first smartphone 14A, the second smartphone 14B, the first HMD 34A, the second HMD 34B, and the unmanned aerial vehicle 27 via radio waves. That is, the display control device 12 is wirelessly connected to the first smartphone 14A, the second smartphone 14B, the first HMD 34A, the second HMD 34B, and the unmanned aerial vehicle 27 via the base station 20.
  • The display control device 12 controls the unmanned aerial vehicle 27 by wirelessly communicating with it via the base station 20, and acquires from the unmanned aerial vehicle 27 the images obtained by imaging by the imaging device 18.
  • The display control device 12 is a device corresponding to a server, and the first smartphone 14A, the second smartphone 14B, the first HMD 34A, and the second HMD 34B are devices corresponding to client terminals for the display control device 12.
  • In the following, when it is not necessary to distinguish among the first smartphone 14A, the second smartphone 14B, the first HMD 34A, and the second HMD 34B, they are referred to as "terminal devices" without reference numerals.
  • The display control device 12 and each terminal device wirelessly communicate with each other via the base station 20; the terminal device requests the display control device 12 to provide various services, and the display control device 12 provides services to the terminal device in response to the request.
  • The display control device 12 acquires a plurality of images from the plurality of imaging devices, and transmits video generated based on the acquired images to the terminal device via the base station 20 (see the sketch below).
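  • The request/response exchange between a terminal device and the display control device 12 can be pictured with a minimal sketch. Nothing below comes from the patent: the message shapes, field names, and handler are hypothetical stand-ins for whatever protocol an implementation would actually use.

```python
# Hypothetical request/response exchange between a terminal device and
# the display control device 12, relayed by the base station 20. The
# message format and field names are invented for illustration.
from dataclasses import dataclass


@dataclass
class ServiceRequest:
    terminal_id: str       # e.g. "first_smartphone_14A" (illustrative)
    service: str           # e.g. "distributed_video" (illustrative)
    viewpoint_id: int      # which of the plural viewpoint images is wanted


@dataclass
class ServiceResponse:
    terminal_id: str
    payload: bytes         # encoded frames of the selected video


def handle_request(req: ServiceRequest,
                   videos: dict[int, bytes]) -> ServiceResponse:
    """Serve the requested viewpoint video back to the requesting terminal."""
    return ServiceResponse(terminal_id=req.terminal_id,
                           payload=videos[req.viewpoint_id])
```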
  • The viewer 28A possesses the first smartphone 14A, and the first HMD 34A is mounted on the head of the viewer 28A.
  • The viewer 28B is a different person from the viewer 28A.
  • The viewer 28B possesses the second smartphone 14B, and the second HMD 34B is mounted on the head of the viewer 28B.
  • The video transmitted from the display control device 12 (hereinafter also referred to as the "distributed video") is received by the terminal devices, and the distributed video received by the terminal devices is visually recognized by the viewers 28A and 28B through the terminal devices.
  • The viewer 28A is an example of the "second person" according to the technique of the present disclosure, and the viewer 28B is an example of the "first person" according to the technique of the present disclosure.
  • The distributed video is an example of a "video" according to the technique of the present disclosure.
  • The first HMD 34A includes a main body portion 11A and a mounting portion 13A.
  • When the first HMD 34A is worn, the main body portion 11A is located in front of the viewer 28A, and the mounting portion 13A is located on the upper half of the head of the viewer 28A.
  • The mounting portion 13A is a band-shaped member having a width of about several centimeters, and includes an inner ring 13A1 and an outer ring 15A1.
  • The inner ring 13A1 is formed in an annular shape and is fixed in close contact with the upper half of the head of the viewer 28A.
  • The outer ring 15A1 is formed in a shape in which the occipital side of the viewer 28A is cut out. The outer ring 15A1 bends outward from its initial position, or contracts inward from the bent state toward the initial position, according to the adjustment of the size of the inner ring 13A1.
  • The main body portion 11A includes a protective frame 11A1, a computer 150, and a display 156.
  • The computer 150 controls the entire first HMD 34A.
  • The protective frame 11A1 is a transparent plate curved so as to cover the entire eyes of the viewer 28A, and is formed of, for example, transparent colored plastic.
  • The display 156 includes a screen 156A and a projection unit 156B, and the projection unit 156B is controlled by the computer 150.
  • The screen 156A is arranged inside the protective frame 11A1, and is assigned to each of the eyes of the viewer 28A.
  • The screen 156A is made of a transparent material, like the protective frame 11A1. The viewer 28A visually recognizes the real space through the screen 156A and the protective frame 11A1; that is, the first HMD 34A is a transmissive HMD.
  • The screen 156A is located at a position facing the eyes of the viewer 28A, and the distributed video is projected onto the inner surface of the screen 156A (the surface on the viewer 28A side) by the projection unit 156B under the control of the computer 150. Since the projection unit 156B is a well-known device, detailed description thereof is omitted; it is a device having a display element, such as a liquid crystal panel, that displays the distributed video, and a projection optical system that projects the distributed video displayed on the display element toward the inner surface of the screen 156A.
  • The screen 156A is realized by a half mirror that reflects the distributed video projected by the projection unit 156B and transmits light from the real space.
  • The projection unit 156B projects the distributed video onto the inner surface of the screen 156A at a predetermined frame rate (for example, 60 fps).
  • The distributed video is reflected by the inner surface of the screen 156A and is incident on the eyes of the viewer 28A. As a result, the viewer 28A visually recognizes the distributed video.
  • Although a half mirror is illustrated here as the screen 156A, the screen is not limited to this; the screen 156A itself may be a display element such as a liquid crystal panel.
  • The second HMD 34B has the same configuration as the first HMD 34A; whereas the first HMD 34A is worn by the viewer 28A, the second HMD 34B is worn by the viewer 28B.
  • The second HMD 34B includes a main body portion 11B and a mounting portion 13B.
  • The mounting portion 13B corresponds to the mounting portion 13A of the first HMD 34A; the inner ring 13B1 corresponds to the inner ring 13A1, and the outer ring 15B1 corresponds to the outer ring 15A1.
  • The main body portion 11B corresponds to the main body portion 11A of the first HMD 34A; the protective frame 11B1 corresponds to the protective frame 11A1, the display 206 corresponds to the display 156, and the computer 200 corresponds to the computer 150.
  • The screen 206A corresponds to the screen 156A, and the projection unit 206B corresponds to the projection unit 156B.
  • The display control device 12 acquires, from the unmanned aerial vehicle 27, a bird's-eye view image 46A showing the region including the soccer field 24 as observed from the sky.
  • The bird's-eye view image 46A is a moving image obtained by the imaging device 18 of the unmanned aerial vehicle 27 capturing, in a bird's-eye view from the sky, the region including the soccer field 24 as an imaging region (hereinafter also simply referred to as the "imaging region").
  • Although a moving image is illustrated here as the bird's-eye view image 46A, the image is not limited to this, and may be a still image showing the region including the soccer field 24 as observed from the sky.
  • The display control device 12 acquires, from each of the plurality of imaging devices 16, a captured image 46B showing the imaging region as observed from the position of that imaging device 16.
  • The captured image 46B is a moving image obtained by each of the plurality of imaging devices 16 capturing the imaging region.
  • Although a moving image is illustrated here as the captured image 46B, the image is not limited to this, and may be a still image showing the imaging region as observed from each position of the plurality of imaging devices 16.
  • The bird's-eye view image 46A and the captured images 46B are images obtained by imaging the region including the soccer field 24 from a plurality of mutually different viewpoint positions, and are examples of "images" according to the technique of the present disclosure.
  • The display control device 12 generates a virtual viewpoint image 46C based on the bird's-eye view image 46A and the captured images 46B.
  • The virtual viewpoint image 46C is an image showing the imaging region as observed from a viewpoint position and line-of-sight direction different from the viewpoint position and line-of-sight direction of each of the plurality of imaging devices.
  • In the example shown here, the virtual viewpoint image 46C refers to a virtual viewpoint image showing the imaging region as observed from a viewpoint position 42 and a line-of-sight direction 44 in the spectator seats 26.
  • An example of the virtual viewpoint image 46C is a moving image using 3D polygons.
  • A moving image is illustrated here as the virtual viewpoint image 46C, but the image is not limited to this, and a still image using 3D polygons may be used.
  • Here, an example is shown in which the bird's-eye view image 46A obtained by imaging by the imaging device 18 is also used to generate the virtual viewpoint image 46C, but the technique of the present disclosure is not limited to this.
  • The bird's-eye view image 46A need not be used to generate the virtual viewpoint image 46C; only the plurality of captured images 46B obtained by imaging by each of the plurality of imaging devices 16 may be used to generate the virtual viewpoint image 46C.
  • That is, the virtual viewpoint image 46C may be generated only from the images obtained by the plurality of imaging devices 16, without using the image obtained from the imaging device 18 (for example, a drone). If the image obtained from the imaging device 18 (for example, a drone) is also used, a more accurate virtual viewpoint image can be generated (a high-level sketch of this flow follows).
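  • At a very high level, the generation flow described above can be summarized as follows. This is a structural sketch only, not the patent's method: the reconstruction and rendering steps are deliberate placeholders (real systems would use multi-view stereo, visual hulls, or similar), and every name is invented.

```python
# Structural sketch only: how the bird's-eye view image 46A and the
# captured images 46B feed the generation of a virtual viewpoint image
# 46C. All names are illustrative placeholders.

def reconstruct_scene(frames):
    """Placeholder for multi-view 3D reconstruction (visual hull, MVS, ...)."""
    raise NotImplementedError


def render(scene, viewpoint_position, line_of_sight):
    """Placeholder for rendering the 3D scene (e.g. 3D polygons)."""
    raise NotImplementedError


def generate_virtual_viewpoint_image(birds_eye_frame,     # image 46A
                                     captured_frames,     # images 46B
                                     viewpoint_position,  # e.g. position 42
                                     line_of_sight):      # e.g. direction 44
    # Combine all available views; the drone frame is optional but,
    # per the passage above, improves accuracy when present.
    scene = reconstruct_scene([birds_eye_frame, *captured_frames])
    # Render from a viewpoint no physical imaging device occupies.
    return render(scene, viewpoint_position, line_of_sight)
```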
  • The display control device 12 selectively transmits the bird's-eye view image 46A, the captured image 46B, and the virtual viewpoint image 46C to the terminal devices as the distributed video.
  • The display control device 12 includes a computer 50, a reception device 52, a display 53, a first communication I/F 54, and a second communication I/F 56.
  • The computer 50 includes a CPU 58, a storage 60, and a memory 62, and the CPU 58, the storage 60, and the memory 62 are connected to each other via a bus line 64.
  • In the example shown in FIG. 4, one bus line is shown as the bus line 64 for convenience of illustration, but the bus line 64 includes a data bus, an address bus, a control bus, and the like.
  • The CPU 58 controls the entire display control device 12.
  • The storage 60 stores various parameters and various programs.
  • The storage 60 is a non-volatile storage device.
  • Here, an EEPROM is adopted as an example of the storage 60, but the storage is not limited to this, and a mask ROM, an HDD, an SSD, or the like may be used.
  • The memory 62 is a volatile storage device. Various information is temporarily stored in the memory 62.
  • The memory 62 is used as a work memory by the CPU 58.
  • Here, a DRAM is adopted as an example of the memory 62, but the memory is not limited to this, and another type of volatile storage device, such as an SRAM, may be used.
  • The reception device 52 receives instructions from a user or the like of the display control device 12. Examples of the reception device 52 include a touch panel, hard keys, and a mouse.
  • The reception device 52 is connected to the bus line 64, and instructions received by the reception device 52 are acquired by the CPU 58.
  • The display 53 is connected to the bus line 64 and displays various information under the control of the CPU 58.
  • An example of the display 53 is a liquid crystal display. The display 53 is not limited to a liquid crystal display, and another type of display, such as an organic EL display, may be adopted.
  • The first communication I/F 54 is connected to the LAN cables 30.
  • The first communication I/F 54 is realized, for example, by a device having an FPGA.
  • The first communication I/F 54 is connected to the bus line 64 and controls the exchange of various information between the CPU 58 and the plurality of imaging devices 16.
  • The first communication I/F 54 controls the plurality of imaging devices 16 according to requests from the CPU 58.
  • The first communication I/F 54 acquires the captured images 46B (see FIG. 3) obtained by imaging by each of the plurality of imaging devices 16, and outputs the acquired captured images 46B to the CPU 58.
  • The second communication I/F 56 is wirelessly connected to the base station 20.
  • The second communication I/F 56 is realized, for example, by a device having an FPGA.
  • The second communication I/F 56 is connected to the bus line 64.
  • The second communication I/F 56 manages, via the base station 20 in a wireless communication system, the exchange of various information between the CPU 58 and the unmanned aerial vehicle 27, between the CPU 58 and the first smartphone 14A, and between the CPU 58 and the first HMD 34A.
  • The second communication I/F 56 likewise manages, via the base station 20, the exchange of various information between the CPU 58 and the second smartphone 14B, and between the CPU 58 and the second HMD 34B.
  • The first smartphone 14A includes a computer 70, a GPS receiver 72, a gyro sensor 74, a reception device 76, a display 78, a microphone 80, a speaker 82, an imaging device 84, and a communication I/F 86.
  • The computer 70 includes a CPU 88, a storage 90, and a memory 92, and the CPU 88, the storage 90, and the memory 92 are connected to each other via a bus line 94.
  • For convenience of illustration, one bus line is shown as the bus line 94, but the bus line 94 includes a data bus, an address bus, a control bus, and the like.
  • The CPU 88 controls the entire first smartphone 14A.
  • The storage 90 stores various parameters and various programs.
  • The storage 90 is a non-volatile storage device.
  • Here, an EEPROM is adopted as an example of the storage 90, but the storage is not limited to this, and a mask ROM, an HDD, an SSD, or the like may be used.
  • The memory 92 is a volatile storage device. Various information is temporarily stored in the memory 92, and the memory 92 is used as a work memory by the CPU 88.
  • Here, a DRAM is adopted as an example of the memory 92, but the memory is not limited to this, and another type of volatile storage device, such as an SRAM, may be used.
  • The GPS receiver 72 receives radio waves from a plurality of GPS satellites (not shown) in response to an instruction from the CPU 88, and outputs reception result information indicating the reception result to the CPU 88.
  • The CPU 88 calculates current position information indicating the current position of the first smartphone 14A as three-dimensional coordinates based on the reception result information input from the GPS receiver 72.
  • The gyro sensor 74 measures the angle around the yaw axis of the first smartphone 14A (hereinafter also referred to as the "yaw angle"), the angle around the roll axis of the first smartphone 14A (hereinafter also referred to as the "roll angle"), and the angle around the pitch axis of the first smartphone 14A (hereinafter also referred to as the "pitch angle").
  • The gyro sensor 74 is connected to the bus line 94, and angle information indicating the yaw angle, roll angle, and pitch angle measured by the gyro sensor 74 is acquired by the CPU 88 via the bus line 94 (a sketch of deriving a line-of-sight direction from these angles follows).
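  • As a concrete but purely illustrative use of these angles, a line-of-sight direction can be derived from yaw and pitch; roll does not change where the device is pointing. The axis convention below is an assumption, not something the patent specifies.

```python
import math


def line_of_sight_vector(yaw_deg: float,
                         pitch_deg: float) -> tuple[float, float, float]:
    """Unit direction vector from yaw/pitch (x east, y north, z up).

    Assumed convention: yaw is measured clockwise from north about the
    vertical axis, pitch is elevation above the horizon. Roll is omitted
    because it does not alter the pointing direction.
    """
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    return (math.cos(pitch) * math.sin(yaw),
            math.cos(pitch) * math.cos(yaw),
            math.sin(pitch))


# Example: yaw 90 degrees (east), pitch 0 -> points along +x.
x, y, z = line_of_sight_vector(90.0, 0.0)
assert abs(x - 1.0) < 1e-9 and abs(y) < 1e-9 and abs(z) < 1e-9
```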
  • The first smartphone 14A also includes an acceleration sensor (not shown).
  • The acceleration sensor and the gyro sensor 74 may be mounted as an integrated multi-axis (for example, 6-axis) sensor.
  • The reception device 76 receives instructions from the viewer 28A.
  • Examples of the reception device 76 include a touch panel 76A and hard keys.
  • The reception device 76 is connected to the bus line 94, and instructions received by the reception device 76 are acquired by the CPU 88.
  • The display 78 is connected to the bus line 94 and displays various information under the control of the CPU 88.
  • An example of the display 78 is a liquid crystal display.
  • The display 78 is not limited to a liquid crystal display, and another type of display, such as an organic EL display, may be adopted.
  • The first smartphone 14A is provided with a touch panel display, which is realized by the touch panel 76A and the display 78. That is, the touch panel display is formed by superimposing the touch panel 76A on the display area of the display 78.
  • The microphone 80 converts collected sound into an electric signal.
  • The microphone 80 is connected to the bus line 94.
  • The electric signal obtained by the microphone 80 converting the collected sound is acquired by the CPU 88 via the bus line 94.
  • The speaker 82 converts an electric signal into sound.
  • The speaker 82 is connected to the bus line 94.
  • The speaker 82 receives the electric signal output from the CPU 88 via the bus line 94, converts the received electric signal into sound, and outputs the resulting sound to the outside of the first smartphone 14A.
  • The imaging device 84 acquires an image showing a subject by imaging the subject.
  • The imaging device 84 is connected to the bus line 94.
  • The image obtained by the imaging device 84 imaging the subject is acquired by the CPU 88 via the bus line 94.
  • The communication I/F 86 is wirelessly connected to the base station 20.
  • The communication I/F 86 is realized, for example, by a device having an FPGA.
  • The communication I/F 86 is connected to the bus line 94.
  • The communication I/F 86 manages, via the base station 20 in a wireless communication system, the exchange of various information between the CPU 88 and external devices.
  • Examples of the "external devices" include the display control device 12, the unmanned aerial vehicle 27, the second smartphone 14B, the first HMD 34A, and the second HMD 34B.
  • The second smartphone 14B has the same configuration as the first smartphone 14A. That is, the second smartphone 14B includes a computer 100, a GPS receiver 102, a gyro sensor 104, a reception device 106, a touch panel 106A, a display 108, a microphone 110, a speaker 112, an imaging device 114, a communication I/F 116, a CPU 118, a storage 120, a memory 122, and a bus line 124.
  • The computer 100 corresponds to the computer 70, the GPS receiver 102 corresponds to the GPS receiver 72, the gyro sensor 104 corresponds to the gyro sensor 74, the reception device 106 corresponds to the reception device 76, the touch panel 106A corresponds to the touch panel 76A, and the display 108 corresponds to the display 78.
  • The microphone 110 corresponds to the microphone 80, the speaker 112 corresponds to the speaker 82, the imaging device 114 corresponds to the imaging device 84, and the communication I/F 116 corresponds to the communication I/F 86.
  • The CPU 118 corresponds to the CPU 88, the storage 120 corresponds to the storage 90, the memory 122 corresponds to the memory 92, and the bus line 124 corresponds to the bus line 94. Like the bus lines 64 and 94, the bus line 124 also includes a data bus, an address bus, a control bus, and the like.
  • The first HMD 34A includes a computer 150, a reception device 152, a display 154, a microphone 157, a speaker 158, an eye tracker 166, and a communication I/F 168.
  • The computer 150 includes a CPU 160, a storage 162, and a memory 164, and the CPU 160, the storage 162, and the memory 164 are connected via a bus line 170.
  • For convenience of illustration, one bus line is shown as the bus line 170, but the bus line 170 includes a data bus, an address bus, a control bus, and the like.
  • The CPU 160 controls the entire first HMD 34A.
  • The storage 162 stores various parameters and various programs.
  • The storage 162 is a non-volatile storage device.
  • Here, an EEPROM is adopted as an example of the storage 162, but the storage is not limited to this, and a mask ROM, an HDD, an SSD, or the like may be used.
  • The memory 164 is a volatile storage device. Various information is temporarily stored in the memory 164, and the memory 164 is used as a work memory by the CPU 160.
  • Here, a DRAM is adopted as an example of the memory 164, but the memory is not limited to this, and another type of volatile storage device, such as an SRAM, may be used.
  • The reception device 152 receives instructions from the viewer 28A. Examples of the reception device 152 include a remote controller and/or hard keys.
  • The reception device 152 is connected to the bus line 170, and instructions received by the reception device 152 are acquired by the CPU 160.
  • The display 154 is a display capable of displaying the distributed video visually recognized by the viewer 28A, and is capable of displaying the first viewpoint image selected from the plurality of viewpoint images 46 (see FIG. 8) described later.
  • The display 154 is connected to the bus line 170 and displays various information under the control of the CPU 160.
  • An example of the display 154 is a liquid crystal display. The display 154 is not limited to a liquid crystal display, and another type of display, such as an organic EL display, may be adopted.
  • The display 154 is an example of the "first display unit (first display)" according to the technique of the present disclosure.
  • The eye tracker 166 has an imaging device (not shown). Using this imaging device, the eyes of the viewer 28A are imaged at a predetermined frame rate (for example, 60 fps), and the viewpoint position and the line-of-sight direction of the viewer 28A are detected based on the obtained images. The eye tracker 166 then identifies, based on the detected viewpoint position and line-of-sight direction, the gazing point that the viewer 28A is gazing at in the distributed video displayed on the display 154 (see the sketch below).
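  • Gazing-point identification can be pictured as intersecting the detected line of sight with the display plane. The sketch below assumes a flat screen at a known distance in the eye's coordinate frame; the patent does not specify this geometry, so every parameter here is hypothetical.

```python
def gazing_point_on_display(eye_pos: tuple[float, float, float],
                            gaze_dir: tuple[float, float, float],
                            screen_distance: float) -> tuple[float, float]:
    """Intersect the gaze ray with a screen plane at z = screen_distance.

    Returns the (x, y) point on the display the viewer is gazing at, in
    the same units as the inputs. Assumes the screen plane is
    perpendicular to the z axis of the eye's coordinate frame.
    """
    ex, ey, ez = eye_pos
    dx, dy, dz = gaze_dir
    if dz == 0:
        raise ValueError("gaze is parallel to the screen plane")
    t = (screen_distance - ez) / dz
    if t < 0:
        raise ValueError("screen is behind the viewer")
    return (ex + t * dx, ey + t * dy)


# Example: eye at the origin, looking slightly right and down, screen 0.05 m
# away -> gazing point (0.005, -0.0025) on the screen plane.
print(gazing_point_on_display((0.0, 0.0, 0.0), (0.1, -0.05, 1.0), 0.05))
```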
  • The communication I/F 168 is wirelessly connected to the base station 20.
  • The communication I/F 168 is realized, for example, by a device having an FPGA.
  • The communication I/F 168 is connected to the bus line 170.
  • The communication I/F 168 controls, via the base station 20 in a wireless communication system, the exchange of various information between the CPU 160 and external devices.
  • Examples of the "external devices" include the display control device 12, the unmanned aerial vehicle 27, the first smartphone 14A, the second smartphone 14B, and the second HMD 34B.
  • The second HMD 34B has the same configuration as the first HMD 34A. That is, the second HMD 34B includes a computer 200, a reception device 202, a display 204, a microphone 207, a speaker 208, a CPU 210, a storage 212, a memory 214, an eye tracker 216, a communication I/F 218, and a bus line 220.
  • The computer 200 corresponds to the computer 150, the reception device 202 corresponds to the reception device 152, the display 204 corresponds to the display 154, the microphone 207 corresponds to the microphone 157, and the speaker 208 corresponds to the speaker 158.
  • The CPU 210 corresponds to the CPU 160, the storage 212 corresponds to the storage 162, the memory 214 corresponds to the memory 164, the eye tracker 216 corresponds to the eye tracker 166, and the communication I/F 218 corresponds to the communication I/F 168.
  • The bus line 220 corresponds to the bus line 170. Like the bus lines 64, 94, and 170, the bus line 220 includes a data bus, an address bus, a control bus, and the like.
  • the display 204 is a display capable of displaying the distribution video visually recognized by the viewer 28B, and is a display capable of displaying the second viewpoint video selected from the plurality of viewpoint videos 46 (see FIG. 8) described later.
  • the display 204 is an example of a "second display unit (second display)" according to the technique of the present disclosure.
  • the storage 60 stores the first display control program 60A, the second display control program 60B, and the setting program 60C.
  • Hereinafter, when it is not necessary to distinguish between the first display control program 60A, the second display control program 60B, and the setting program 60C, they are referred to as "display control device programs" without reference numerals.
  • the CPU 58 reads the display control device program from the storage 60, and expands the read display control device program into the memory 62.
  • the CPU 58 controls the entire display control device 12 according to the display control device program expanded in the memory 62, and exchanges various information with the plurality of image pickup devices, the unmanned aerial vehicle 27, and the terminal device.
  • the CPU 58 is an example of the "processor” according to the technology of the present disclosure
  • the memory 62 is an example of the “memory” according to the technology of the present disclosure.
  • the CPU 58 reads the first display control program 60A from the storage 60, and expands the read first display control program 60A into the memory 62.
  • the CPU 58 operates as the first acquisition unit 58A and the first control unit 58B according to the first display control program 60A expanded in the memory 62.
  • the CPU 58 operates as the first acquisition unit 58A and the first control unit 58B to execute the first display control process (see FIGS. 35 to 37) described later.
  • the CPU 58 reads the second display control program 60B from the storage 60, and expands the read second display control program 60B into the memory 62.
  • the CPU 58 operates as the second acquisition unit 58C and the second control unit 58D according to the second display control program 60B expanded in the memory 62.
  • the CPU 58 operates as the second acquisition unit 58C and the second control unit 58D to execute the second display control process (see FIGS. 38 to 40) described later.
  • the CPU 58 reads the setting program 60C from the storage 60 and expands the read setting program 60C into the memory 62.
  • the CPU 58 operates as the setting unit 58E according to the setting program 60C expanded in the memory 62.
  • the CPU 58 operates as the setting unit 58E to execute the setting process (see FIG. 41) described later.
  • the viewpoint image generation process is a process for generating a plurality of viewpoint images 46.
  • the distribution video described above includes a plurality of viewpoint videos 46.
  • Each of the plurality of viewpoint images 46 is an image showing an imaging region observed from a corresponding unique viewpoint.
  • the plurality of viewpoint images 46 include a bird's-eye view image 46A, a captured image 46B, and a virtual viewpoint image 46C.
  • the virtual viewpoint image 46C is generated based on the bird's-eye view image 46A acquired from the imaging device 18 and the plurality of captured images 46B acquired from the plurality of imaging devices 16.
  • However, the technique of the present disclosure is not limited to this; the virtual viewpoint image 46C may be generated based on at least two images out of the bird's-eye view image 46A and the plurality of captured images 46B. Further, although a form example in which the plurality of viewpoint images 46 include the bird's-eye view image 46A, the captured image 46B, and the virtual viewpoint image 46C is shown here, the technique of the present disclosure is not limited to this; the plurality of viewpoint images 46 need not include the virtual viewpoint image 46C, and need not include the bird's-eye view image 46A. Further, the CPU 58 need not acquire the captured images 46B from all of the plurality of imaging devices 16, and may leave a part of the captured images 46B unacquired.
  • the viewpoint image 46 is displayed on each display such as the display 78 of the first smartphone 14A (see FIG. 5), the display 108 of the second smartphone 14B, the display 154 of the first HMD34A, and the display 204 of the second HMD34B.
  • the size of the viewpoint image 46 generated by the viewpoint image generation process and the display size of the viewpoint image 46 displayed on each display have a similar relationship.
  • changing the size of the viewpoint image 46 means changing the display size of the viewpoint image 46.
  • changing the size of the avatar means changing the display size of the avatar.
  • each of the plurality of viewpoint images 46 obtained by the CPU 58 executing the viewpoint image generation process is associated with a viewpoint video identifier, unique viewpoint position information, unique line-of-sight direction information, and unique angle-of-view information.
  • the viewpoint video identifier is an identifier that can uniquely identify the corresponding viewpoint video 46.
  • the unique viewpoint position information is information indicating a unique viewpoint position.
  • the unique viewpoint position is the viewpoint position of the corresponding viewpoint image 46.
  • the unique viewpoint position refers to the viewpoint position from which the imaging region indicated by the corresponding viewpoint image 46 is observed.
  • An example of the unique viewpoint position information is three-dimensional coordinates with which the unique viewpoint position of each of the plurality of viewpoint images 46 can be relatively specified.
  • the unique viewpoint position is limited to a partial area of the imaging region.
  • the partial area refers to, for example, the spectator seats 26 (see FIGS. 1 and 3).
  • the unique line-of-sight direction information is information indicating a unique line-of-sight direction.
  • the unique line-of-sight direction is the line-of-sight direction of the corresponding viewpoint image 46.
  • the unique line-of-sight direction of the corresponding viewpoint image 46 refers to the line-of-sight direction in which the imaging region indicated by the corresponding viewpoint image 46 is observed.
  • As the unique line-of-sight direction, for example, a direction facing the corresponding viewpoint image 46, that is, a direction perpendicular to the viewpoint image 46 and passing through its center, is adopted.
  • Unique angle of view information is information indicating a unique angle of view.
  • the unique angle of view is the angle of view of the corresponding viewpoint image 46. That is, the unique angle of view refers to the angle of view with respect to the imaging region indicated by the corresponding viewpoint image 46.
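  • For concreteness, the association just described can be pictured as the following data structure; this is a minimal illustrative sketch, and the class name, field names, and sample values are assumptions rather than part of the disclosure.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ViewpointVideoInfo:
    """Metadata associated with one viewpoint video 46 (names are illustrative)."""
    viewpoint_video_id: str                                # uniquely identifies the viewpoint video
    unique_viewpoint_position: Tuple[float, float, float]  # relative 3D coordinates of the viewpoint
    unique_gaze_direction: Tuple[float, float, float]      # direction in which the region is observed
    unique_angle_of_view_deg: float                        # angle of view of the viewpoint video

# Hypothetical examples: a bird's-eye view video and one captured video.
viewpoint_videos = [
    ViewpointVideoInfo("birds_eye_46A", (0.0, 80.0, 50.0), (0.0, -1.0, -0.5), 90.0),
    ViewpointVideoInfo("captured_46B_01", (10.0, 5.0, 30.0), (0.0, 0.0, -1.0), 60.0),
]
```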
  • the CPU 58 executes a bird's-eye view video transmission process.
  • the bird's-eye view image transmission process is a process of transmitting the bird's-eye view image 46A of the plurality of viewpoint images 46 generated by the viewpoint image generation process to the first smartphone 14A and the second smartphone 14B.
  • the bird's-eye view image 46A is received by the first smartphone 14A, and the received bird's-eye view image 46A is displayed on the display 78 of the first smartphone 14A. While the bird's-eye view image 46A is displayed on the display 78, the viewer 28A gives a first viewpoint line-of-sight instruction to the first smartphone 14A.
  • the touch panel 76A of the first smartphone 14A is a device capable of receiving a first viewpoint line-of-sight instruction, and is an example of a “first reception unit (first reception device)” according to the technology of the present disclosure.
  • the first viewpoint line-of-sight instruction is an instruction of the viewpoint position and the line-of-sight direction with respect to the imaging region, and is used as an instruction to select any of the plurality of viewpoint images 46.
  • Examples of the first viewpoint line-of-sight instruction include a touch operation and a slide operation on the touch panel 76A. In this case, the touch panel 76A is touched to indicate the viewpoint position, and the touch panel 76A is slid to indicate the line-of-sight direction.
  • the position in the touch panel 76A where the touch operation is performed corresponds to the viewpoint position with respect to the imaging region, and the direction in which the slide operation is performed in the touch panel 76A corresponds to the line-of-sight direction with respect to the imaging region.
  • the first viewpoint line-of-sight instruction is an example of the "first instruction" according to the technique of the present disclosure.
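  • A hedged sketch of how such an instruction might be resolved to one of the viewpoint videos 46: the touched position is matched against the unique viewpoint positions, and the slide direction against the unique line-of-sight directions. The scoring and weighting below are assumptions, and `ViewpointVideoInfo` refers to the illustrative structure sketched earlier.

```python
import math

def select_viewpoint_video(videos, touched_position, slide_direction):
    """Pick the viewpoint video 46 whose unique viewpoint position is nearest
    the indicated viewpoint position and whose unique line-of-sight direction
    best matches the indicated direction (illustrative scoring only)."""
    def score(video):
        dist = math.dist(video.unique_viewpoint_position, touched_position)
        dot = sum(a * b for a, b in zip(video.unique_gaze_direction, slide_direction))
        norm = math.hypot(*video.unique_gaze_direction) * math.hypot(*slide_direction)
        mismatch = 1.0 - dot / norm  # 0 when directions agree, 2 when opposite
        return dist + 10.0 * mismatch  # the weight 10.0 is an arbitrary assumption
    return min(videos, key=score)

# Example: a tap near the spectator seats with a slide toward the field.
chosen = select_viewpoint_video(viewpoint_videos, (8.0, 5.0, 28.0), (0.0, 0.0, -1.0))
print(chosen.viewpoint_video_id)
```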
  • the bird's-eye view image 46A is received by the second smartphone 14B, and the received bird's-eye view image 46A is displayed on the display 108 of the second smartphone 14B. While the bird's-eye view image 46A is displayed on the display 108, the viewer 28B gives a second viewpoint line-of-sight instruction to the second smartphone 14B.
  • the touch panel 106A of the second smartphone 14B is a device capable of receiving a second viewpoint line-of-sight instruction, and is an example of a “second reception unit (second reception device)” according to the technology of the present disclosure.
  • the second viewpoint line-of-sight instruction is an instruction of the viewpoint position and the line-of-sight direction with respect to the imaging region, and is used as an instruction to select any of the plurality of viewpoint images 46.
  • Examples of the second viewpoint line-of-sight instruction include a touch operation and a slide operation on the touch panel 106A. In this case, the touch panel 106A is touched to indicate the viewpoint position, and the touch panel 106A is slid to indicate the line-of-sight direction.
  • the position in the touch panel 106A where the touch operation is performed corresponds to the viewpoint position with respect to the imaging region, and the direction in which the slide operation is performed in the touch panel 106A corresponds to the line-of-sight direction with respect to the imaging region.
  • the second viewpoint line-of-sight instruction is an example of the "second instruction" according to the technique of the present disclosure.
  • the first viewpoint line-of-sight instruction received by the touch panel 76A is transmitted by the CPU 88 to the CPU 58 of the display control device 12, and is received by the CPU 58.
  • the second viewpoint line-of-sight instruction received by the touch panel 106A is transmitted by the CPU 118 to the CPU 58 of the display control device 12, and is received by the CPU 58.
  • the first control unit 58B includes the first viewpoint image acquisition unit 58B1, the first determination unit 58B2, the first synthesis unit 58B3, the first avatar display size change unit 58B4, the first image quality control unit 58B5, the first display mode changing unit 58B6, and the first viewpoint video output unit 58B7.
  • the first control unit 58B controls to display the first viewpoint image selected from the plurality of viewpoint images 46 on the display 154 of the first HMD 34A.
  • the "first viewpoint image” refers to one viewpoint image selected from a plurality of viewpoint images 46 by the first control unit 58B.
  • the CPU 58 has a first viewpoint image acquisition unit 58B1, a first determination unit 58B2, a first synthesis unit 58B3, a first avatar display size change unit 58B4, and a first image quality control unit. It operates as 58B5, a first display mode changing unit 58B6, and a first viewpoint video output unit 58B7.
  • the second control unit 58D includes the second viewpoint image acquisition unit 58D1, the second determination unit 58D2, the second synthesis unit 58D3, the second avatar display size change unit 58D4, the second image quality control unit 58D5, the second display mode changing unit 58D6, and the second viewpoint video output unit 58D7.
  • the second control unit 58D controls to display the second viewpoint image selected from the plurality of viewpoint images 46 on the display 204 of the second HMD 34B.
  • the "second viewpoint image” refers to one viewpoint image selected from a plurality of viewpoint images 46 by the second control unit 58D.
  • the CPU 58 has a second viewpoint image acquisition unit 58D1, a second determination unit 58D2, a second synthesis unit 58D3, a second avatar display size change unit 58D4, and a second image quality control unit. It operates as 58D5, a second display mode changing unit 58D6, and a second viewpoint video output unit 58D7.
  • the first viewpoint video acquisition unit 58B1 receives the first viewpoint line-of-sight instruction transmitted from the first smartphone 14A. Upon receiving the first viewpoint line-of-sight instruction, the first viewpoint image acquisition unit 58B1 selects one viewpoint image 46 from the plurality of viewpoint images 46 as the first viewpoint image according to the first viewpoint line-of-sight instruction, and acquires the selected first viewpoint image.
  • the first viewpoint image is a viewpoint image 46 having the unique viewpoint position information and the unique line-of-sight direction information corresponding to the viewpoint position and line-of-sight direction instructed by the first viewpoint line-of-sight instruction.
  • the first viewpoint video acquisition unit 58B1 acquires the viewpoint video identifier associated with the acquired first viewpoint video, and outputs the acquired viewpoint video identifier to the first acquisition unit 58A.
  • the first acquisition unit 58A obtains the unique viewpoint position information, the unique line-of-sight direction information, and the unique angle of view information associated with the viewpoint image 46 specified by the viewpoint image identifier input from the first viewpoint image acquisition unit 58B1. get.
  • the memory 62 has a first storage area 62A and a second storage area 62B.
  • the first acquisition unit 58A stores the acquired unique viewpoint position information as the second viewpoint position information in the first storage area 62A.
  • the second viewpoint position information is the unique viewpoint position information of any one of the plurality of viewpoint images 46.
  • the second viewpoint position information refers to information indicating the second viewpoint position.
  • the second viewpoint position refers to the viewpoint position of the viewer 28A with respect to the imaging region.
  • the first acquisition unit 58A stores the acquired unique line-of-sight direction information in the first storage area 62A as the second line-of-sight direction information.
  • the second line-of-sight direction information is the unique line-of-sight direction information of any one of the plurality of viewpoint images 46.
  • the second line-of-sight direction information refers to information indicating the second line-of-sight direction.
  • the second line-of-sight direction refers to the line-of-sight direction of the viewer 28A with respect to the imaging region.
  • the first acquisition unit 58A stores the acquired unique angle of view information in the first storage area 62A. Further, the first acquisition unit 58A stores the viewpoint image identifier input from the first viewpoint image acquisition unit 58B1 in the first storage area 62A.
  • When the first acquisition unit 58A newly acquires second viewpoint position information, second line-of-sight direction information, and unique angle-of-view information, the first acquisition unit 58A overwrites and saves the newly acquired second viewpoint position information, second line-of-sight direction information, and unique angle-of-view information in the first storage area 62A.
  • When a viewpoint video identifier is newly input to the first acquisition unit 58A from the first viewpoint video acquisition unit 58B1, the new viewpoint video identifier is likewise overwritten and saved in the first storage area 62A by the first acquisition unit 58A.
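  • The overwrite-save behavior of a storage area can be sketched as follows; the class and attribute names are hypothetical, and only the latest information survives, exactly as described above.

```python
class ViewpointInfoStore:
    """Models one storage area (e.g., the first storage area 62A):
    newly acquired information simply replaces the previous contents."""

    def __init__(self):
        self.viewpoint_position = None
        self.gaze_direction = None
        self.angle_of_view = None
        self.viewpoint_video_id = None

    def overwrite(self, position, direction, angle_of_view, video_id):
        # overwrite-save: no history is kept
        self.viewpoint_position = position
        self.gaze_direction = direction
        self.angle_of_view = angle_of_view
        self.viewpoint_video_id = video_id

first_storage_area = ViewpointInfoStore()   # latest info for the viewer 28A side
second_storage_area = ViewpointInfoStore()  # latest info for the viewer 28B side
first_storage_area.overwrite((0.0, 80.0, 50.0), (0.0, -1.0, -0.5), 90.0, "birds_eye_46A")
```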
  • the second viewpoint video acquisition unit 58D1 receives the second viewpoint line-of-sight instruction transmitted from the second smartphone 14B. Upon receiving the second viewpoint line-of-sight instruction, the second viewpoint image acquisition unit 58D1 selects one viewpoint image 46 from the plurality of viewpoint images 46 as the second viewpoint image according to the second viewpoint line-of-sight instruction, and acquires the selected second viewpoint image.
  • the second viewpoint image is a viewpoint image 46 having the unique viewpoint position information and the unique line-of-sight direction information corresponding to the viewpoint position and line-of-sight direction instructed by the second viewpoint line-of-sight instruction.
  • the second viewpoint video acquisition unit 58D1 acquires the viewpoint video identifier associated with the acquired second viewpoint video, and outputs the acquired viewpoint video identifier to the second acquisition unit 58C.
  • the second acquisition unit 58C obtains the unique viewpoint position information, the unique line-of-sight direction information, and the unique angle of view information associated with the viewpoint image 46 specified by the viewpoint image identifier input from the second viewpoint image acquisition unit 58D1. get.
  • the second acquisition unit 58C stores the acquired unique viewpoint position information as the first viewpoint position information in the second storage area 62B.
  • the first viewpoint position information is the unique viewpoint position information of any one of the plurality of viewpoint images 46.
  • the first viewpoint position information refers to information indicating the first viewpoint position.
  • the first viewpoint position refers to the viewpoint position of the viewer 28B with respect to the imaging region.
  • the second acquisition unit 58C stores the acquired unique line-of-sight direction information as the first line-of-sight direction information in the second storage area 62B.
  • the first line-of-sight direction information is the unique line-of-sight direction information of any one of the plurality of viewpoint images 46.
  • the first line-of-sight direction information refers to information indicating the first line-of-sight direction.
  • the first line-of-sight direction refers to the line-of-sight direction of the viewer 28B with respect to the imaging region.
  • the second acquisition unit 58C stores the acquired unique angle of view information in the second storage area 62B. Further, the second acquisition unit 58C stores the viewpoint image identifier input from the second viewpoint image acquisition unit 58D1 in the second storage area 62B.
  • When the second acquisition unit 58C newly acquires first viewpoint position information, first line-of-sight direction information, and unique angle-of-view information, the second acquisition unit 58C overwrites and saves the newly acquired first viewpoint position information, first line-of-sight direction information, and unique angle-of-view information in the second storage area 62B.
  • When a viewpoint video identifier is newly input to the second acquisition unit 58C from the second viewpoint video acquisition unit 58D1, the new viewpoint video identifier is likewise overwritten and saved in the second storage area 62B by the second acquisition unit 58C.
  • the first acquisition unit 58A acquires the first viewpoint position information. Specifically, the unique viewpoint position information corresponding to the second viewpoint image displayed on the display 204 of the second HMD34B is stored in the second storage area 62B as the first viewpoint position information, and the first acquisition unit 58A acquires the first viewpoint position information from the second storage area 62B.
  • when the first viewpoint image includes the first viewpoint position indicated by the first viewpoint position information acquired by the first acquisition unit 58A, the first control unit 58B performs control to display, in the first viewpoint image, the first avatar (see FIG. 25) with which the first viewpoint position indicated by the acquired first viewpoint position information can be specified. Further, the first control unit 58B controls to change the display size of the first avatar according to the angle of view of the viewpoint image with the first avatar (see FIG. 31) displayed by the display 154.
  • the first control unit 58B controls the display 154 so that the degree of difference between the image quality of the first viewpoint image and the image quality of the first avatar falls within the first predetermined range (see FIGS. 24 and 25). Further, the first control unit 58B controls the display 154 to change the display mode of the first avatar according to the relationship between the display size of the first viewpoint image and the display size of the first avatar. Specifically, when the ratio of the display size of the first avatar to the display size of the first viewpoint image is equal to or greater than the first default value (for example, 5%), the first control unit 58B controls the display 154 to change the display mode of the first avatar (see FIGS. 27 to 29).
  • the first default value is an example of the "first threshold value" of the technique of the present disclosure. Hereinafter, a specific description will be given.
  • the first viewpoint image acquisition unit 58B1 outputs the acquired first viewpoint image to the first determination unit 58B2.
  • the first determination unit 58B2 requests the first acquisition unit 58A to acquire the first viewpoint position information.
  • the first acquisition unit 58A acquires the first viewpoint position information from the second storage area 62B in response to the request from the first determination unit 58B2, and outputs the acquired first viewpoint position information to the first determination unit 58B2.
  • the first determination unit 58B2 determines whether or not the first viewpoint image includes the first viewpoint position indicated by the first viewpoint position information input from the first acquisition unit 58A. Whether or not the first viewpoint position is included in the first viewpoint image is determined by referring to, for example, the unique angle-of-view information, the unique viewpoint position information, and the unique line-of-sight direction information corresponding to the first viewpoint image.
  • the fact that the first viewpoint position is included in the first viewpoint image means that, as shown in FIG. 15 as an example, the viewpoint position of the viewer 28B, who is currently viewing the second viewpoint image, is included in the first viewpoint image visually recognized by the viewer 28A.
  • the viewers 28A and 28B shown in FIG. 15 are virtual images shown for convenience; the example of FIG. 15 merely makes the virtual existence positions of the viewers 28A and 28B with respect to the soccer field 24 identifiable.
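  • One plausible reading of this inclusion determination is a field-of-view test: the other viewer's viewpoint position is "included" when it lies within the viewing cone defined by the unique viewpoint position, unique line-of-sight direction, and unique angle of view. The cone model below is an assumption for illustration; the disclosure only states that the three pieces of information are referred to.

```python
import math

def viewpoint_in_view(unique_position, unique_direction, angle_of_view_deg, other_position):
    """Return True if other_position falls within the viewing cone of the
    viewpoint video (an illustrative interpretation of the determination)."""
    to_other = [o - p for o, p in zip(other_position, unique_position)]
    norm_other = math.sqrt(sum(c * c for c in to_other))
    norm_dir = math.sqrt(sum(c * c for c in unique_direction))
    if norm_other == 0.0:
        return True  # the two viewpoint positions coincide
    cos_angle = sum(a * b for a, b in zip(to_other, unique_direction)) / (norm_other * norm_dir)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle <= angle_of_view_deg / 2.0

# Example: the other viewer's position just inside a 60-degree field of view.
print(viewpoint_in_view((0.0, 0.0, 0.0), (0.0, 0.0, -1.0), 60.0, (2.0, 0.0, -10.0)))  # True
```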
  • the second acquisition unit 58C acquires the second viewpoint position information. Specifically, the unique viewpoint position information corresponding to the first viewpoint image displayed on the display 154 of the first HMD34A is stored in the first storage area 62A as the second viewpoint position information, and the second acquisition unit 58C acquires the second viewpoint position information from the first storage area 62A.
  • when the second viewpoint image includes the second viewpoint position indicated by the second viewpoint position information acquired by the second acquisition unit 58C, the second control unit 58D performs control to display, in the second viewpoint image, the second avatar with which the second viewpoint position indicated by the acquired second viewpoint position information can be specified. Further, the second control unit 58D controls to change the display size of the second avatar according to the angle of view of the viewpoint image with the second avatar (see FIG. 32) displayed by the display 204.
  • the second control unit 58D controls the display 204 so that the degree of difference between the image quality of the second viewpoint image and the image quality of the second avatar falls within the second default range. Further, the second control unit 58D controls the display 204 to change the display mode of the second avatar according to the relationship between the display size of the second viewpoint image and the display size of the second avatar. Specifically, when the ratio of the display size of the second avatar to the display size of the second viewpoint image is equal to or greater than the second default value (for example, 5%), the second control unit 58D controls the display 204 to change the display mode of the second avatar (see FIGS. 29 and 30). The second default value is an example of the "third threshold" of the technique of the present disclosure. Hereinafter, a specific description will be given.
  • the second viewpoint image acquisition unit 58D1 outputs the acquired second viewpoint image to the second determination unit 58D2.
  • the second determination unit 58D2 requests the second acquisition unit 58C to acquire the second viewpoint position information.
  • the second acquisition unit 58C acquires the second viewpoint position information from the first storage area 62A in response to the request from the second determination unit 58D2, and outputs the acquired second viewpoint position information to the second determination unit 58D2.
  • the second determination unit 58D2 determines whether or not the second viewpoint image includes the second viewpoint position indicated by the second viewpoint position information input from the second acquisition unit 58C. Whether or not the second viewpoint position is included in the second viewpoint image is determined by referring to, for example, the unique angle-of-view information, the unique viewpoint position information, and the unique line-of-sight direction information corresponding to the second viewpoint image.
  • the fact that the second viewpoint position is included in the second viewpoint image means that, as shown in FIG. 17, the viewpoint position of the viewer 28A, who is currently viewing the first viewpoint image, is included in the second viewpoint image visually recognized by the viewer 28B.
  • the viewers 28A and 28B shown in FIG. 17 are virtual images shown for convenience; the example of FIG. 17 merely makes the virtual existence positions of the viewers 28A and 28B with respect to the soccer field 24 identifiable.
  • when the first determination unit 58B2 determines that the first viewpoint position is included in the first viewpoint image, it outputs, to the first synthesis unit 58B3, first person existence information indicating that the viewer 28B is present in the field of view of the viewer 28A, that is, that the first viewpoint position exists.
  • when the first determination unit 58B2 determines that the first viewpoint position is not included in the first viewpoint image, it outputs, to the first viewpoint video acquisition unit 58B1, first person nonexistence information indicating that the viewer 28B is not present in the field of view of the viewer 28A, that is, that the first viewpoint position does not exist.
  • When the first person nonexistence information is input from the first determination unit 58B2, the first viewpoint video acquisition unit 58B1 outputs the first viewpoint video to the first viewpoint video output unit 58B7.
  • the first viewpoint video output unit 58B7 outputs the first viewpoint video input from the first viewpoint video acquisition unit 58B1 to the first HMD34A.
  • the first viewpoint image output unit 58B7 outputs the first viewpoint image to the first HMD 34A, so that the first viewpoint image is displayed on the display 154.
  • the first reference avatar group is stored in the storage 60.
  • the first reference avatar group is a set of a plurality of first reference avatars.
  • the first reference avatar refers to a virtual image imitating the viewer 28B.
  • the first reference avatar group includes a plurality of first reference avatars indicating the viewer 28B when the viewer 28B is observed from a plurality of directions.
  • the first synthesis unit 58B3 requests the first acquisition unit 58A to acquire the first viewpoint position information and the first line-of-sight direction information.
  • the first acquisition unit 58A acquires the first viewpoint position information and the first line-of-sight direction information from the second storage area 62B in response to the request from the first synthesis unit 58B3, and outputs the acquired first viewpoint position information and first line-of-sight direction information to the first synthesis unit 58B3.
  • the first synthesis unit 58B3 acquires the first viewpoint image from the first viewpoint image acquisition unit 58B1 and acquires the first reference avatar group from the storage 60.
  • the first synthesis unit 58B3 generates the first avatar based on the first line-of-sight direction information input from the first acquisition unit 58A.
  • the first avatar refers to a virtual image that imitates the viewer 28B.
  • the first synthesis unit 58B3 uses the first reference avatar group to generate a first avatar capable of specifying the first line-of-sight direction indicated by the first line-of-sight direction information. That is, the first synthesis unit 58B3 synthesizes the first reference avatar group to generate the first avatar indicating the viewer 28B facing the first line-of-sight direction.
  • the first avatar is generated by the first synthesis unit 58B3 as an avatar capable of specifying the first line-of-sight direction indicated by the first line-of-sight direction information acquired by the first acquisition unit 58A.
  • the first avatar is an example of "first specific information" related to the technique of the present disclosure.
  • the first synthesis unit 58B3 superimposes the first avatar at the first viewpoint position indicated by the first viewpoint position information input from the first acquisition unit 58A on the first viewpoint image, thereby generating a viewpoint image with the first avatar, as shown in FIG. 19 as an example. The first synthesis unit 58B3 outputs the generated viewpoint image with the first avatar to the first avatar display size changing unit 58B4. Although a form example in which the first avatar is superimposed on the first viewpoint image is given here, the technique of the present disclosure is not limited to this; for example, the first avatar may be embedded in the first viewpoint image, as long as the first avatar is ultimately displayed in the first viewpoint image displayed on the display 154 of the first HMD34A.
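  • A sketch of this synthesis step, assuming the first reference avatar group is a set of pre-rendered images keyed by observation direction and that superimposition is a simple overlay at the projected first viewpoint position; the dictionary layout and helper names are assumptions, not the disclosed implementation.

```python
def generate_avatar(reference_avatars, gaze_direction):
    """Choose the reference avatar whose observation direction is closest to
    the indicated line-of-sight direction (illustrative stand-in for the
    synthesis performed by the first synthesis unit 58B3)."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb)
    return max(reference_avatars, key=lambda ref: cosine(ref["direction"], gaze_direction))

def superimpose(viewpoint_image, avatar_image, position_xy):
    """Overlay the avatar at the (projected) viewpoint position, yielding a
    'viewpoint image with avatar' composite description."""
    return {"base": viewpoint_image, "overlay": avatar_image, "at": position_xy}

reference_avatar_group = [
    {"direction": (0.0, 0.0, -1.0), "image": "avatar_front.png"},
    {"direction": (1.0, 0.0, 0.0), "image": "avatar_side.png"},
]
avatar = generate_avatar(reference_avatar_group, (0.2, 0.0, -1.0))
composite = superimpose("first_viewpoint_frame", avatar["image"], (640, 360))
```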
  • when the second determination unit 58D2 determines that the second viewpoint position is included in the second viewpoint image, it outputs, to the second synthesis unit 58D3, second person existence information indicating that the viewer 28A is present in the field of view of the viewer 28B, that is, that the second viewpoint position exists.
  • when the second determination unit 58D2 determines that the second viewpoint position is not included in the second viewpoint image, it outputs, to the second viewpoint video acquisition unit 58D1, second person nonexistence information indicating that the viewer 28A is not present in the field of view of the viewer 28B, that is, that the second viewpoint position does not exist.
  • When the second person nonexistence information is input from the second determination unit 58D2, the second viewpoint image acquisition unit 58D1 outputs the second viewpoint image to the second viewpoint image output unit 58D7.
  • the second viewpoint video output unit 58D7 outputs the second viewpoint video input from the second viewpoint video acquisition unit 58D1 to the second HMD34B.
  • the second viewpoint image output unit 58D7 outputs the second viewpoint image to the second HMD34B, so that the second viewpoint image is displayed on the display 204.
  • the second reference avatar group is stored in the storage 60.
  • the second reference avatar group is a set of a plurality of second reference avatars.
  • the second reference avatar refers to a virtual image that imitates the viewer 28A.
  • the second reference avatar group includes a plurality of second reference avatars indicating the viewer 28A when the viewer 28A is observed from a plurality of directions.
  • the second synthesis unit 58D3 requests the second acquisition unit 58C to acquire the second viewpoint position information and the second line-of-sight direction information.
  • the second acquisition unit 58C acquires the second viewpoint position information and the second line-of-sight direction information from the first storage area 62A in response to the request from the second synthesis unit 58D3, and outputs the acquired second viewpoint position information and second line-of-sight direction information to the second synthesis unit 58D3.
  • the second synthesis unit 58D3 acquires the second viewpoint image from the second viewpoint image acquisition unit 58D1 and acquires the second reference avatar group from the storage 60.
  • the second synthesis unit 58D3 generates the second avatar based on the second line-of-sight direction information input from the second acquisition unit 58C.
  • the second avatar refers to a virtual image that imitates the viewer 28A.
  • the second synthesis unit 58D3 generates a second avatar capable of specifying the second line-of-sight direction indicated by the second line-of-sight direction information by using the second reference avatar group. That is, the second synthesis unit 58D3 generates the second avatar indicating the viewer 28A facing the second line-of-sight direction by synthesizing the second reference avatar group.
  • the second avatar is generated by the second synthesis unit 58D3 as an avatar capable of specifying the second line-of-sight direction indicated by the second line-of-sight direction information acquired by the second acquisition unit 58C.
  • the second avatar is an example of "second specific information" related to the technique of the present disclosure.
  • the second synthesis unit 58D3 superimposes the second avatar at the second viewpoint position indicated by the second viewpoint position information input from the second acquisition unit 58C on the second viewpoint image, thereby generating a viewpoint image with the second avatar, similar to the example shown in FIG. 19. The second synthesis unit 58D3 outputs the generated viewpoint image with the second avatar to the second avatar display size changing unit 58D4. Although a form example in which the second avatar is superimposed on the second viewpoint image is given here, the technique of the present disclosure is not limited to this; for example, the second avatar may be embedded in the second viewpoint image, as long as the second avatar is ultimately displayed in the second viewpoint image displayed on the display 204 of the second HMD34B. In the following, for convenience of explanation, when it is not necessary to distinguish between the first avatar and the second avatar, they are simply referred to as "avatars".
  • When the viewpoint image with the first avatar is input from the first synthesis unit 58B3, the first avatar display size changing unit 58B4 requests the first acquisition unit 58A to acquire the unique angle-of-view information.
  • the first acquisition unit 58A acquires the unique angle-of-view information from the second storage area 62B in response to the request from the first avatar display size change unit 58B4, and outputs the acquired unique angle-of-view information to the first avatar display size change unit 58B4.
  • the storage 60 stores a size derivation table for avatars.
  • the size derivation table for the avatar is a table in which the angle of view of the viewpoint image 46 and the size of the avatar are associated with each other.
  • the size of the avatar refers to, for example, the area of the avatar.
  • the relationship between the angle of view of the viewpoint image 46 and the size of the avatar in the avatar size derivation table can be changed according to the instruction received by the reception device 152.
  • Here, the size derivation table is illustrated, but the present invention is not limited to this; a calculation formula with the angle of view of the viewpoint image 46 as the independent variable and the size of the avatar as the dependent variable may also be applied.
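  • A sketch of the avatar size derivation table as a piecewise-linear lookup from angle of view to avatar size (area); the table entries and the interpolation are assumptions. The formula-based alternative mentioned above would simply replace the table with a function of the angle of view.

```python
# Hypothetical table rows: (angle of view in degrees, avatar size as an area in pixels).
AVATAR_SIZE_TABLE = [(30.0, 40_000.0), (60.0, 10_000.0), (90.0, 2_500.0)]

def derive_avatar_size(angle_of_view_deg, table=AVATAR_SIZE_TABLE):
    """Linearly interpolate the avatar size for the given angle of view.
    A wider angle of view shows more of the imaging region, so the avatar
    is made smaller to stay in proportion with the scene."""
    if angle_of_view_deg <= table[0][0]:
        return table[0][1]
    for (a0, s0), (a1, s1) in zip(table, table[1:]):
        if angle_of_view_deg <= a1:
            t = (angle_of_view_deg - a0) / (a1 - a0)
            return s0 + t * (s1 - s0)
    return table[-1][1]

print(derive_avatar_size(45.0))  # 25000.0 with the hypothetical table above
```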
  • the first avatar display size changing unit 58B4 changes the size of the first avatar in the viewpoint video containing the first avatar to a size corresponding to the angle of view indicated by the unique angle of view information.
  • That is, the first avatar display size changing unit 58B4 derives, from the avatar size derivation table, the size corresponding to the angle of view indicated by the unique angle-of-view information, and changes the first avatar in the viewpoint image with the first avatar to the derived size.
  • For example, the first avatar display size changing unit 58B4 changes the size of the first avatar in the viewpoint image with the first avatar shown in FIG. 19 to the size shown in FIG. 22. In the example shown in FIG. 19, the size of the first avatar is several times or more larger than the size of the audience in the viewpoint image with the first avatar, whereas in the example shown in FIG. 22, the size of the first avatar has been changed to almost the same size as the audience in the viewpoint image with the first avatar. In this way, the display size of the first avatar is changed according to the angle of view of the viewpoint image with the first avatar displayed on the display 154.
  • When the viewpoint image with the second avatar is input from the second synthesis unit 58D3, the second avatar display size changing unit 58D4 requests the second acquisition unit 58C to acquire the unique angle-of-view information.
  • the second acquisition unit 58C acquires the unique angle-of-view information from the first storage area 62A in response to the request from the second avatar display size change unit 58D4, and outputs the acquired unique angle-of-view information to the second avatar display size change unit 58D4.
  • the second avatar display size changing unit 58D4 changes the size of the second avatar in the viewpoint image with the second avatar to the size corresponding to the angle of view indicated by the unique angle-of-view information, in the same manner as the method of changing the size of the first avatar in the viewpoint image with the first avatar. That is, the second avatar display size changing unit 58D4 derives, from the avatar size derivation table, the size of the second avatar corresponding to the angle of view indicated by the unique angle-of-view information, and changes the second avatar in the viewpoint image with the second avatar to the derived size. As a result, when the viewpoint image with the second avatar is displayed on the display 204 of the second HMD34B, the display size of the second avatar is changed according to the angle of view of the viewpoint image with the second avatar displayed on the display 204.
  • the first avatar display size changing unit 58B4 outputs the viewpoint image with the first avatar, obtained by changing the size of the first avatar to the size corresponding to the angle of view, to the first image quality control unit 58B5.
  • the first image quality control unit 58B5 controls the image quality of the viewpoint image with the first avatar so that, within the viewpoint image with the first avatar, the degree of difference between the image quality of the first viewpoint image and the image quality of the first avatar falls within the first predetermined range.
  • Here, the "degree of difference" may be the difference between the image quality of the first viewpoint image and the image quality of the first avatar, or may be the ratio of one to the other.
  • the first image quality control unit 58B5 determines whether or not the degree of difference between the image quality of the first viewpoint image and the image quality of the first avatar falls within the first predetermined range in the viewpoint image with the first avatar. Then, when the degree of difference between the image quality of the first viewpoint image and the image quality of the first avatar does not fall within the first predetermined range, the first image quality control unit 58B5 controls the image quality of the viewpoint image with the first avatar so that the degree of difference falls within the first predetermined range.
  • Here, image quality refers to resolution, contrast, and degree of lightness and darkness.
  • The degree of difference falling within the first predetermined range means that the degree of difference in resolution falls within the default resolution range, the degree of difference in contrast falls within the default contrast range, and the degree of difference in degree of lightness and darkness falls within the default lightness-and-darkness range.
  • the default resolution range, the default contrast range, and the default lightness-and-darkness range may be fixed values or variable values. Examples of the fixed value include a value derived in advance by a sensory test and/or a computer simulation or the like as a value that does not cause visual discomfort when the avatar enters the viewpoint image 46. Examples of the variable value include values that can be changed according to instructions received by the reception device 52, 76, 106, 152, or 202.
  • By controlling the image quality of the viewpoint image with the first avatar so that the degree of difference between the image quality of the first viewpoint image and the image quality of the first avatar falls within the first predetermined range, the image quality of the first avatar shown in FIG. 24 is changed as shown in FIG. 25 as an example. That is, the image quality of the first avatar is changed so that no visual discomfort arises from the first avatar floating out of the first viewpoint image or sinking into the first viewpoint image.
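  • A sketch of this image quality control, assuming the "degree of difference" is the per-factor absolute difference (resolution, contrast, lightness/darkness) and that the avatar's factors are clamped toward those of the viewpoint image when a difference exceeds its default range; the dict-based representation and range values are assumptions.

```python
def control_avatar_quality(frame_quality, avatar_quality, default_ranges):
    """Adjust the avatar's image quality so that each factor differs from the
    viewpoint image by no more than the default range for that factor."""
    adjusted = dict(avatar_quality)
    for factor, allowed in default_ranges.items():
        diff = avatar_quality[factor] - frame_quality[factor]
        if abs(diff) > allowed:
            # clamp the avatar's factor to the edge of the allowed band
            adjusted[factor] = frame_quality[factor] + (allowed if diff > 0 else -allowed)
    return adjusted

frame = {"resolution": 1080, "contrast": 0.80, "lightness": 0.50}
avatar = {"resolution": 2160, "contrast": 0.95, "lightness": 0.52}
ranges = {"resolution": 200, "contrast": 0.10, "lightness": 0.10}  # assumed values
print(control_avatar_quality(frame, avatar, ranges))  # each factor clamped into its range
```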
  • the second avatar display size changing unit 58D4 outputs the viewpoint image with the second avatar, obtained by changing the size of the second avatar to the size corresponding to the angle of view, to the second image quality control unit 58D5.
  • Similar to the first image quality control unit 58B5, the second image quality control unit 58D5 controls the image quality of the viewpoint image with the second avatar so that, within the viewpoint image with the second avatar, the degree of difference between the image quality of the second viewpoint image and the image quality of the second avatar falls within the second default range.
  • the "degree of difference" may be the difference between the image quality of the second viewpoint image and the image quality of the second avatar, or the image quality of the second viewpoint image and the image quality of the second avatar. It may be the other ratio.
  • the second image quality control unit 58D5 determines whether or not the degree of difference between the image quality of the second viewpoint image and the image quality of the second avatar falls within the second default range in the viewpoint image with the second avatar. Then, when the degree of difference between the image quality of the second viewpoint image and the image quality of the second avatar does not fall within the second default range, the second image quality control unit 58D5 controls the image quality of the viewpoint image with the second avatar so that the degree of difference falls within the second default range.
  • The degree of difference between the image quality of the second viewpoint image and the image quality of the second avatar falling within the second default range means, similar to the relationship between the first predetermined range and the degree of difference between the image quality of the first viewpoint image and the image quality of the first avatar, that the degree of difference in resolution falls within the default resolution range, the degree of difference in contrast falls within the default contrast range, and the degree of difference in degree of lightness and darkness falls within the default lightness-and-darkness range.
  • By controlling the image quality of the viewpoint image with the second avatar so that the degree of difference between the image quality of the second viewpoint image and the image quality of the second avatar falls within the second default range, the image quality of the second avatar is changed so that no visual discomfort arises from the second avatar floating out of the second viewpoint image or sinking into the second viewpoint image.
  • Although resolution, contrast, and degree of lightness and darkness are illustrated here as the image quality, the technique of the present disclosure is not limited to this; at least one or two of the resolution, the contrast, and the degree of lightness and darkness may be used, or a factor affecting the image quality other than the resolution, the contrast, and the degree of lightness and darkness may be used.
  • the first image quality control unit 58B5 outputs the viewpoint image with the first avatar to the first display mode changing unit 58B6.
  • the first display mode changing unit 58B6 changes the display mode of the first avatar in the viewpoint image with the first avatar input from the first image quality control unit 58B5 according to the relationship between the size of the first viewpoint image and the size of the first avatar.
  • the first display mode changing unit 58B6 determines whether or not the ratio of the size of the first avatar to the size of the first viewpoint image is equal to or greater than the first default value for the viewpoint image containing the first avatar.
  • When the ratio of the size of the first avatar to the size of the first viewpoint image is less than the first default value, the first display mode changing unit 58B6 transfers the viewpoint image with the first avatar input from the first image quality control unit 58B5 to the first viewpoint video output unit 58B7 as it is.
  • the first display mode changing unit 58B6 changes the display mode of the first avatar when the ratio of the size of the first avatar to the size of the first viewpoint image is equal to or greater than the first default value.
  • the first display mode changing unit 58B6 outputs the viewpoint video with the first avatar obtained by changing the display mode to the first viewpoint video output unit 58B7.
  • the first default value may be a fixed value or a variable value.
  • Examples of the fixed value include a value derived in advance, by a sensory test and/or computer simulation or the like, as a lower limit of the avatar size that causes visual discomfort when the avatar enters the viewpoint image 46.
  • the variable value includes a value that can be changed according to an instruction received by any of the receiving devices 52, 76, 106, 152 or 202.
  • An example of changing the display mode of the first avatar is hiding the first avatar. That is, when the ratio of the size of the first avatar to the size of the first viewpoint image is equal to or greater than the first default value, the first display mode changing unit 58B6 deletes the first avatar from the viewpoint image with the first avatar, thereby changing the display mode of the viewpoint image with the first avatar. For example, as shown in FIG. 28, when the ratio of the size of the first avatar to the size of the first viewpoint image in the viewpoint image with the first avatar is equal to or greater than the first default value, the first avatar is deleted from the viewpoint image with the first avatar, as shown in FIG. 29 as an example. As a result, the display 154 of the first HMD34A displays the first viewpoint image shown in FIG. 29, that is, the first viewpoint image with the first avatar hidden.
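  • A sketch of this display mode change, assuming the sizes are areas in pixels and that "changing the display mode" means deleting the avatar overlay once the ratio reaches the first default value (5% in the example above); the composite representation follows the earlier superimpose sketch.

```python
def apply_display_mode(composite, frame_area, avatar_area, default_ratio=0.05):
    """Hide the avatar when its display size is at least `default_ratio`
    of the viewpoint image's display size; otherwise pass the composite on."""
    if avatar_area / frame_area >= default_ratio:
        hidden = dict(composite)
        hidden.pop("overlay", None)  # delete the avatar from the composite
        hidden.pop("at", None)
        return hidden
    return composite

frame_area = 1920 * 1080
print(apply_display_mode({"base": "frame", "overlay": "avatar", "at": (640, 360)},
                         frame_area, avatar_area=0.06 * frame_area))  # avatar removed
```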
  • the second image quality control unit 58D5 outputs the viewpoint image with the second avatar to the second display mode changing unit 58D6.
  • the second display mode changing unit 58D6 changes the display mode of the second avatar in the viewpoint image with the second avatar input from the second image quality control unit 58D5 according to the relationship between the size of the second viewpoint image and the size of the second avatar.
  • the second display mode changing unit 58D6 determines whether or not the ratio of the size of the second avatar to the size of the second viewpoint image is equal to or greater than the second default value.
  • When the ratio of the size of the second avatar to the size of the second viewpoint image is less than the second default value, the second display mode changing unit 58D6 transfers the viewpoint image with the second avatar input from the second image quality control unit 58D5 to the second viewpoint video output unit 58D7 as it is.
  • When the ratio of the size of the second avatar to the size of the second viewpoint image is equal to or greater than the second default value, the second display mode changing unit 58D6 changes the display mode of the second avatar.
  • the second display mode changing unit 58D6 outputs the viewpoint video with the second avatar obtained by changing the display mode to the second viewpoint video output unit 58D7.
  • the second default value may be the same value as the first default value or may be a different value. Further, the second default value may be a fixed value or a variable value as in the case of the first default value.
  • An example of changing the display mode of the second avatar is hiding the second avatar. That is, when the ratio of the size of the second avatar to the size of the second viewpoint image is equal to or greater than the second default value, the second display mode changing unit 58D6 deletes the second avatar from the viewpoint image with the second avatar, thereby changing the display mode of the viewpoint image with the second avatar. As a result, the display 204 of the second HMD34B displays the second viewpoint image with the second avatar hidden.
  • A viewpoint video with the first avatar or a first viewpoint video is input from the first display mode changing unit 58B6 to the first viewpoint video output unit 58B7.
  • the first viewpoint video output unit 58B7 outputs the input viewpoint video with the first avatar to the first HMD34A.
  • the first viewpoint image output unit 58B7 outputs the viewpoint image containing the first avatar to the first HMD34A, so that the viewpoint image containing the first avatar is displayed on the display 154 of the first HMD34A.
  • When the first viewpoint video is input from the first display mode changing unit 58B6, the first viewpoint video output unit 58B7 outputs the input first viewpoint video to the first HMD34A.
  • the first viewpoint video output unit 58B7 outputs the first viewpoint video to the first HMD 34A to display the first viewpoint video on the display 154 of the first HMD 34A.
  • A viewpoint video with the second avatar or a second viewpoint video is input from the second display mode changing unit 58D6 to the second viewpoint video output unit 58D7.
  • the second viewpoint video output unit 58D7 outputs the input second avatar-containing viewpoint video to the second HMD34B.
  • the second viewpoint video output unit 58D7 outputs the viewpoint video with the second avatar to the second HMD34B, so that the viewpoint video with the second avatar is displayed on the display 204 of the second HMD34B.
  • When the second viewpoint video is input from the second display mode changing unit 58D6, the second viewpoint video output unit 58D7 outputs the input second viewpoint video to the second HMD34B.
  • the second viewpoint image output unit 58D7 outputs the second viewpoint image to the second HMD34B so that the second viewpoint image is displayed on the display 204 of the second HMD34B.
  • the viewer 28A gives the first smartphone 14A an avatar non-display instruction instructing the avatar non-display.
  • the avatar non-display instruction is received by the touch panel 76A of the first smartphone 14A.
  • the first smartphone 14A transmits the avatar non-display instruction received by the touch panel 76A to the setting unit 58E of the display control device 12.
  • the viewer 28B gives an avatar non-display instruction to the second smartphone 14B.
  • the avatar non-display instruction is received by the touch panel 106A of the second smartphone 14B.
  • the second smartphone 14B transmits the avatar non-display instruction received by the touch panel 106A to the setting unit 58E of the display control device 12.
  • the setting unit 58E sets to hide the second avatar when receiving the avatar hiding instruction sent from the second smartphone 14B. Further, the setting unit 58E sets to hide the first avatar when receiving the avatar non-display instruction transmitted from the first smartphone 14A.
  • the setting unit 58E is an example of the "first setting unit” and the "second setting unit” according to the technique of the present disclosure.
  • When the setting unit 58E receives the avatar non-display instruction transmitted from the first smartphone 14A, the setting unit 58E outputs flag setting instruction information instructing the turning-on of the avatar non-display flag to the first control unit 58B.
  • the avatar hiding flag refers to a flag instructing the avatar to be hidden.
  • When the setting unit 58E receives the avatar non-display instruction transmitted from the second smartphone 14B, the setting unit 58E outputs the flag setting instruction information to the second control unit 58D.
  • the reception by the setting unit 58E of the avatar non-display instruction transmitted from the first smartphone 14A is an example of the "second default condition" according to the technology of the present disclosure, and the avatar non-display transmitted from the second smartphone 14B is displayed.
  • the reception by the instruction setting unit 58E is an example of the "first default condition" according to the technique of the present disclosure.
  • the “second default condition" related to the technology of the present disclosure reception by the setting unit 58E of the avatar non-display instruction transmitted from the first smartphone 14A is illustrated, but the technology of the present disclosure is this. Not limited to.
  • the avatar non-display instruction received by another reception device such as the reception device 152 of the first HMD 34A is transmitted to the display control device 12, and the avatar non-display instruction is received by the setting unit 58E of the display control device 12.
  • the condition may be set to hide the first avatar.
  • the “first default condition" related to the technology of the present disclosure reception by the setting unit 58E of the avatar non-display instruction transmitted from the second smartphone 14B is illustrated, but the technology of the present disclosure Is not limited to this.
  • the avatar non-display instruction received by another reception device such as the reception device 202 of the second HMD34B is transmitted to the display control device 12, and the avatar non-display instruction is received by the setting unit 58E of the display control device 12.
  • the condition may be set to hide the second avatar.
  • The first control unit 58B turns on the avatar non-display flag when the flag setting instruction information is input from the setting unit 58E.
  • While the avatar non-display flag is on, the viewpoint video with the first avatar is not generated, and the first viewpoint video output unit 58B7 outputs the first viewpoint video to the first HMD 34A.
  • The second control unit 58D turns on the avatar non-display flag when the flag setting instruction information is input from the setting unit 58E.
  • While the avatar non-display flag is on, the viewpoint video with the second avatar is not generated, and the second viewpoint video output unit 58D7 outputs the second viewpoint video to the second HMD 34B.
  • Next, an example of the flow of the first display control process executed by the CPU 58 of the display control device 12 according to the first display control program 60A will be described with reference to FIGS. 35 to 37.
  • Here, the description will be made on the assumption that a plurality of viewpoint videos have been generated by the CPU 58 executing the viewpoint video generation process, and that a viewpoint video identifier, unique viewpoint position information, unique line-of-sight direction information, and unique angle-of-view information are associated with each viewpoint video. Further, it is assumed that the first viewpoint line-of-sight instruction is transmitted from the first smartphone 14A to the display control device 12.
  • In step ST10, the first viewpoint video acquisition unit 58B1 determines whether or not the first viewpoint line-of-sight instruction transmitted from the first smartphone 14A has been received. If the first viewpoint line-of-sight instruction transmitted from the first smartphone 14A is not received in step ST10, the determination is denied and the determination in step ST10 is performed again. When the first viewpoint line-of-sight instruction transmitted from the first smartphone 14A is received in step ST10, the determination is affirmed and the first display control process proceeds to step ST12.
  • In step ST12, the first viewpoint video acquisition unit 58B1 acquires, as the first viewpoint video, the viewpoint video 46 associated with the unique viewpoint position information and the unique line-of-sight direction information corresponding to the first viewpoint line-of-sight instruction, and acquires the viewpoint video identifier associated with the first viewpoint video.
  • In step ST14, the first acquisition unit 58A acquires, as the second viewpoint position information, the unique viewpoint position information associated with the viewpoint video 46 specified from the viewpoint video identifier acquired in step ST12, and then the first display control process proceeds to step ST16.
  • In step ST16, the first acquisition unit 58A acquires, as the second line-of-sight direction information, the unique line-of-sight direction information associated with the viewpoint video 46 specified from the viewpoint video identifier acquired in step ST12, and then the first display control process proceeds to step ST17.
  • In step ST17, the first acquisition unit 58A acquires the unique angle-of-view information associated with the viewpoint video 46 specified from the viewpoint video identifier acquired in step ST12, and then the first display control process proceeds to step ST18.
  • In step ST18, the first acquisition unit 58A stores the second viewpoint position information and the like in the first storage area 62A, and then the first display control process proceeds to step ST20.
  • Here, the second viewpoint position information and the like refer to the viewpoint video identifier acquired in step ST12, the second viewpoint position information acquired in step ST14, the second line-of-sight direction information acquired in step ST16, and the unique angle-of-view information acquired in step ST17.
  • In step ST20, the first acquisition unit 58A determines whether or not the first viewpoint position information and the like are stored in the second storage area 62B.
  • the first viewpoint position information and the like refer to the viewpoint video identifier, the first viewpoint position information, the first line-of-sight direction information, and the unique angle of view information.
  • If the first viewpoint position information and the like are not stored in the second storage area 62B in step ST20, the determination is denied and the determination in step ST20 is performed again.
  • When the first viewpoint position information and the like are stored in the second storage area 62B in step ST20, the determination is affirmed, and the first display control process proceeds to step ST22.
  • In step ST22, the first acquisition unit 58A acquires the first viewpoint position information and the like from the second storage area 62B, and then the first display control process proceeds to step ST24.
  • In step ST24, the first determination unit 58B2 acquires the first viewpoint video from the first viewpoint video acquisition unit 58B1, and then the first display control process proceeds to step ST26.
  • In step ST26, the first determination unit 58B2 determines whether or not the first viewpoint video acquired in step ST24 includes the first viewpoint position indicated by the first viewpoint position information acquired in step ST22 (a minimal sketch of such a visibility test follows below). If the first viewpoint position is not included in the first viewpoint video in step ST26, the determination is denied and the process proceeds to step ST44 shown in FIG. 37. When the first viewpoint position is included in the first viewpoint video in step ST26, the determination is affirmed, and the first display control process proceeds to step ST28 shown in FIG. 36.
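The patent does not give the geometry behind this inclusion test, so the following is only a plausible sketch under a flat, two-dimensional pinhole assumption: a viewpoint video is characterized by a camera position, a bearing, and an angle of view, and the other viewer's viewpoint position counts as included when its bearing falls inside the horizontal field of view. All names here are invented for this example.

```python
import math

def viewpoint_in_view(cam_pos: tuple[float, float],
                      cam_bearing_deg: float,
                      angle_of_view_deg: float,
                      other_pos: tuple[float, float]) -> bool:
    """Return True if `other_pos` lies within the horizontal field of
    view of a camera at `cam_pos` looking along `cam_bearing_deg`."""
    dx = other_pos[0] - cam_pos[0]
    dy = other_pos[1] - cam_pos[1]
    bearing_to_other = math.degrees(math.atan2(dy, dx))
    # Smallest signed angular difference between the two bearings.
    diff = (bearing_to_other - cam_bearing_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= angle_of_view_deg / 2.0
```

A production implementation would work in three dimensions and consider occlusion, but the structure of the test (angular offset against half the angle of view) would be similar.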
  • In step ST28 shown in FIG. 36, the first determination unit 58B2 outputs the first person existence information to the first synthesis unit 58B3, and then the first display control process proceeds to step ST30.
  • The first synthesis unit 58B3 acquires, via the first acquisition unit 58A, the first viewpoint position information and the first line-of-sight direction information from the second storage area 62B. Further, the first synthesis unit 58B3 acquires the first reference avatar group from the storage 60, and acquires the first viewpoint video from the first viewpoint video acquisition unit 58B1. Then, the first synthesis unit 58B3 generates the first avatar by using the first line-of-sight direction information and the first reference avatar group. The first synthesis unit 58B3 specifies, from the first viewpoint video, the first viewpoint position indicated by the first viewpoint position information, and superimposes the first avatar on the specified first viewpoint position in the first viewpoint video to generate the viewpoint video with the first avatar (a compositing sketch follows below); then the first display control process proceeds to step ST34.
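As an illustration of the superimposition itself, the sketch below pastes a pre-rendered avatar sprite into a frame with Pillow, using the sprite's alpha channel as the mask. The projection of the first viewpoint position to the pixel coordinate `screen_xy` is assumed to be done elsewhere; this is not the patent's implementation.

```python
from PIL import Image

def composite_avatar(viewpoint_frame: Image.Image,
                     avatar: Image.Image,
                     screen_xy: tuple[int, int]) -> Image.Image:
    """Superimpose `avatar` on a copy of `viewpoint_frame` at
    `screen_xy` (top-left corner of the avatar in frame pixels)."""
    out = viewpoint_frame.copy()
    avatar_rgba = avatar.convert("RGBA")
    # The RGBA sprite doubles as the paste mask, so its transparent
    # parts let the viewpoint video show through.
    out.paste(avatar_rgba, screen_xy, avatar_rgba)
    return out
```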
  • In step ST34, the first avatar display size changing unit 58B4 changes the size of the first avatar in the viewpoint video with the first avatar to a size corresponding to the angle of view indicated by the unique angle-of-view information, in accordance with the avatar size derivation table in the storage 60, and then the first display control process proceeds to step ST36.
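The contents of the avatar size derivation table are not reproduced in the text. The sketch below assumes one plausible shape, in which a wider angle of view yields a smaller drawn avatar; the breakpoints are invented for illustration.

```python
# (maximum angle of view in degrees, scale factor for the avatar)
AVATAR_SIZE_DERIVATION_TABLE = [
    (30.0, 1.00),   # telephoto: avatar at full base size
    (60.0, 0.60),
    (90.0, 0.40),
    (120.0, 0.25),  # wide angle: avatar drawn small
]

def avatar_scale_for_angle(angle_of_view_deg: float) -> float:
    """Look up the scale factor for the first row whose maximum angle
    covers the given angle of view; clamp beyond the last row."""
    for max_angle, scale in AVATAR_SIZE_DERIVATION_TABLE:
        if angle_of_view_deg <= max_angle:
            return scale
    return AVATAR_SIZE_DERIVATION_TABLE[-1][1]
```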
  • In step ST36, the first image quality control unit 58B5 determines whether or not the image quality of the first avatar, whose size was changed in step ST34 to the size corresponding to the angle of view, mismatches the image quality of the first viewpoint video in the viewpoint video with the first avatar.
  • Here, a mismatch means that the degree of difference between the image quality of the first viewpoint video and the image quality of the first avatar is outside the first predetermined range.
  • If the image quality of the first avatar matches the image quality of the first viewpoint video in step ST36, the determination is denied and the first display control process proceeds to step ST40.
  • If the image quality of the first avatar mismatches the image quality of the first viewpoint video in step ST36, the determination is affirmed, and the first display control process proceeds to step ST38.
  • In step ST38, the first image quality control unit 58B5 controls the image quality of the viewpoint video with the first avatar so that the degree of difference between the image quality of the first viewpoint video and the image quality of the first avatar falls within the first predetermined range (a sketch follows below), and then the first display control process proceeds to step ST40.
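The patent leaves the image-quality metric open. Assuming each quality is summarized as a single scalar (for example a sharpness or noise score), the mismatch test of step ST36 and the adjustment of step ST38 might look as follows; the names and the scalar model are assumptions of this sketch.

```python
def quality_mismatch(video_quality: float,
                     avatar_quality: float,
                     predetermined_range: float) -> bool:
    """Mismatch: the quality difference is outside the range."""
    return abs(video_quality - avatar_quality) > predetermined_range

def control_avatar_quality(video_quality: float,
                           avatar_quality: float,
                           predetermined_range: float) -> float:
    """Pull the avatar's quality toward the video's quality just far
    enough that the difference falls within the range."""
    if not quality_mismatch(video_quality, avatar_quality,
                            predetermined_range):
        return avatar_quality  # already matching; nothing to do
    if avatar_quality < video_quality:
        return video_quality - predetermined_range
    return video_quality + predetermined_range
```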
  • In step ST40, the first display mode changing unit 58B6 determines whether or not a condition for changing the display mode of the first avatar (first display mode change condition) is satisfied, the first avatar being included in the viewpoint video with the first avatar obtained through the image quality control by the first image quality control unit 58B5.
  • The first display mode change condition refers to, for example, a condition that, in the viewpoint video with the first avatar obtained through the image quality control by the first image quality control unit 58B5, the ratio of the size of the first avatar to the size of the first viewpoint video is equal to or more than the first default value.
  • If the first display mode change condition is not satisfied in step ST40, the determination is denied, and the first display control process proceeds to step ST46 shown in FIG. 37. If the first display mode change condition is satisfied in step ST40, the determination is affirmed, and the first display control process proceeds to step ST42.
  • In step ST42, the first display mode changing unit 58B6 changes the display mode of the viewpoint video with the first avatar by erasing the first avatar from the viewpoint video with the first avatar. By erasing the first avatar, the viewpoint video with the first avatar is changed to the first viewpoint video. (Steps ST40 and ST42 are sketched below.)
  • the first display control process shifts to step ST46 shown in FIG. 37.
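Steps ST40 and ST42 reduce to a ratio test followed by an erase. The sketch below expresses that decision; the sizes are assumed to be display areas in pixels, and the function name and return values are invented.

```python
def apply_first_display_mode_change(avatar_area_px: int,
                                    video_area_px: int,
                                    first_default_value: float) -> str:
    """Decide which frame should go to the HMD: the composited frame,
    or the plain first viewpoint video with the avatar erased."""
    ratio = avatar_area_px / video_area_px
    if ratio >= first_default_value:  # first display mode change condition
        return "erase_avatar"         # ST42: revert to the plain video
    return "keep_avatar"              # ST46: output the composited video
```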
  • In step ST44 shown in FIG. 37, the first viewpoint video output unit 58B7 outputs the first viewpoint video to the first HMD 34A.
  • The first HMD 34A displays the first viewpoint video input from the first viewpoint video output unit 58B7 on the display 154.
  • After that, the first display control process shifts to step ST48.
  • In step ST46 shown in FIG. 37, the first viewpoint video output unit 58B7 outputs the viewpoint video with the first avatar or the first viewpoint video to the first HMD 34A. That is, the first viewpoint video output unit 58B7 outputs the viewpoint video with the first avatar to the first HMD 34A when the determination is denied in step ST40, and outputs the first viewpoint video to the first HMD 34A when the determination is affirmed in step ST40.
  • The first HMD 34A displays, on the display 154, the first viewpoint video or the viewpoint video with the first avatar input from the first viewpoint video output unit 58B7.
  • the first display control process shifts to step ST48.
  • In step ST48, the CPU 58 determines whether or not a condition for ending the first display control process (first display control process end condition) is satisfied.
  • An example of the first display control process end condition is a condition that the reception device 52, 76, 106, 152, or 202 has received an instruction to end the first display control process.
  • If the first display control process end condition is not satisfied in step ST48, the determination is denied and the first display control process proceeds to step ST10 shown in FIG. 35. If the first display control process end condition is satisfied in step ST48, the determination is affirmed and the first display control process ends. (A condensed sketch of the whole flow follows below.)
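Putting the steps together, the whole of FIGS. 35 to 37 can be summarized as one loop. The `units` object below is an invented container bundling the parts 58A and 58B1 to 58B7; every method on it is assumed for this example, so this is a structural sketch rather than the patent's implementation.

```python
def first_display_control_process(units, first_default_value: float) -> None:
    """Condensed sketch of steps ST10-ST48 of the first display
    control process; each comment maps a call to its step number."""
    while not units.end_requested():                                  # ST48
        instruction = units.receive_viewpoint_instruction()           # ST10
        video, vid_id = units.acquire_first_viewpoint_video(instruction)  # ST12
        units.store_second_viewpoint_info(vid_id)                     # ST14-ST18
        info = units.read_first_viewpoint_info()                      # ST20-ST22
        if not units.viewpoint_visible(video, info):                  # ST24-ST26
            units.output(video)                                       # ST44
            continue
        frame = units.synthesize_avatar(video, info)                  # ST28-ST32
        frame = units.resize_avatar_by_angle_of_view(frame)           # ST34
        frame = units.control_avatar_quality(frame)                   # ST36-ST38
        if units.avatar_size_ratio(frame) >= first_default_value:     # ST40
            frame = units.erase_avatar(frame)                         # ST42
        units.output(frame)                                           # ST46
```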
  • Next, an example of the flow of the second display control process executed by the CPU 58 of the display control device 12 according to the second display control program 60B will be described with reference to FIGS. 38 to 40.
  • Here, as above, the description will be made on the assumption that a plurality of viewpoint videos have been generated by the CPU 58 executing the viewpoint video generation process, and that a viewpoint video identifier, unique viewpoint position information, unique line-of-sight direction information, and unique angle-of-view information are associated with each viewpoint video. Further, it is assumed that the second viewpoint line-of-sight instruction is transmitted from the second smartphone 14B to the display control device 12.
  • In step ST100, the second viewpoint video acquisition unit 58D1 determines whether or not the second viewpoint line-of-sight instruction transmitted from the second smartphone 14B has been received. If the second viewpoint line-of-sight instruction transmitted from the second smartphone 14B is not received in step ST100, the determination is denied and the determination in step ST100 is performed again. When the second viewpoint line-of-sight instruction transmitted from the second smartphone 14B is received in step ST100, the determination is affirmed, and the second display control process proceeds to step ST102.
  • In step ST102, the second viewpoint video acquisition unit 58D1 acquires, as the second viewpoint video, the viewpoint video 46 associated with the unique viewpoint position information and the unique line-of-sight direction information corresponding to the second viewpoint line-of-sight instruction, and acquires the viewpoint video identifier associated with the second viewpoint video.
  • In step ST104, the second acquisition unit 58C acquires, as the first viewpoint position information, the unique viewpoint position information associated with the viewpoint video 46 specified from the viewpoint video identifier acquired in step ST102, and then the second display control process proceeds to step ST106.
  • In step ST106, the second acquisition unit 58C acquires, as the first line-of-sight direction information, the unique line-of-sight direction information associated with the viewpoint video 46 specified from the viewpoint video identifier acquired in step ST102, and then the second display control process proceeds to step ST107.
  • In step ST107, the second acquisition unit 58C acquires the unique angle-of-view information associated with the viewpoint video 46 specified from the viewpoint video identifier acquired in step ST102, and then the second display control process proceeds to step ST108.
  • In step ST108, the second acquisition unit 58C stores the first viewpoint position information and the like in the second storage area 62B, and then the second display control process proceeds to step ST110.
  • Here, the first viewpoint position information and the like refer to the viewpoint video identifier acquired in step ST102, the first viewpoint position information acquired in step ST104, the first line-of-sight direction information acquired in step ST106, and the unique angle-of-view information acquired in step ST107.
  • In step ST110, the second acquisition unit 58C determines whether or not the second viewpoint position information and the like are stored in the first storage area 62A.
  • Here, the second viewpoint position information and the like refer to the viewpoint video identifier, the second viewpoint position information, the second line-of-sight direction information, and the unique angle-of-view information.
  • If the second viewpoint position information and the like are not stored in the first storage area 62A in step ST110, the determination is denied and the determination in step ST110 is performed again.
  • When the second viewpoint position information and the like are stored in the first storage area 62A in step ST110, the determination is affirmed, and the second display control process proceeds to step ST112.
  • In step ST112, the second acquisition unit 58C acquires the second viewpoint position information and the like from the first storage area 62A, and then the second display control process proceeds to step ST114.
  • In step ST114, the second determination unit 58D2 acquires the second viewpoint video from the second viewpoint video acquisition unit 58D1, and then the second display control process proceeds to step ST116.
  • In step ST116, the second determination unit 58D2 determines whether or not the second viewpoint video acquired in step ST114 includes the second viewpoint position indicated by the second viewpoint position information acquired in step ST112. If the second viewpoint position is not included in the second viewpoint video in step ST116, the determination is denied and the process proceeds to step ST134 shown in FIG. 40. When the second viewpoint position is included in the second viewpoint video in step ST116, the determination is affirmed, and the second display control process proceeds to step ST118 shown in FIG. 39.
  • In step ST118 shown in FIG. 39, the second determination unit 58D2 outputs the second person existence information to the second synthesis unit 58D3, and then the second display control process proceeds to step ST120.
  • The second synthesis unit 58D3 acquires, via the second acquisition unit 58C, the second viewpoint position information and the second line-of-sight direction information from the first storage area 62A. Further, the second synthesis unit 58D3 acquires the second reference avatar group from the storage 60, and acquires the second viewpoint video from the second viewpoint video acquisition unit 58D1. Then, the second synthesis unit 58D3 generates the second avatar by using the second line-of-sight direction information and the second reference avatar group. The second synthesis unit 58D3 specifies, from the second viewpoint video, the second viewpoint position indicated by the second viewpoint position information, and superimposes the second avatar on the specified second viewpoint position in the second viewpoint video to generate the viewpoint video with the second avatar; then the second display control process proceeds to step ST124.
  • In step ST124, the second avatar display size changing unit 58D4 changes the size of the second avatar in the viewpoint video with the second avatar to a size corresponding to the angle of view indicated by the unique angle-of-view information, in accordance with the avatar size derivation table in the storage 60, and then the second display control process proceeds to step ST126.
  • In step ST126, the second image quality control unit 58D5 determines whether or not the image quality of the second avatar, whose size was changed in step ST124 to the size corresponding to the angle of view, mismatches the image quality of the second viewpoint video in the viewpoint video with the second avatar.
  • Here, a mismatch means that the degree of difference between the image quality of the second viewpoint video and the image quality of the second avatar is outside the second predetermined range.
  • If the image quality of the second avatar matches the image quality of the second viewpoint video in step ST126, the determination is denied and the second display control process proceeds to step ST130.
  • If the image quality of the second avatar mismatches the image quality of the second viewpoint video in step ST126, the determination is affirmed, and the second display control process proceeds to step ST128.
  • In step ST128, the second image quality control unit 58D5 controls the image quality of the viewpoint video with the second avatar so that the degree of difference between the image quality of the second viewpoint video and the image quality of the second avatar falls within the second predetermined range, and then the second display control process proceeds to step ST130.
  • In step ST130, the second display mode changing unit 58D6 determines whether or not a condition for changing the display mode of the second avatar (second display mode change condition) is satisfied, the second avatar being included in the viewpoint video with the second avatar obtained through the image quality control by the second image quality control unit 58D5.
  • The second display mode change condition refers to, for example, a condition that, in the viewpoint video with the second avatar obtained through the image quality control by the second image quality control unit 58D5, the ratio of the size of the second avatar to the overall size, that is, the size of the second viewpoint video, is equal to or more than the second default value.
  • If the second display mode change condition is not satisfied in step ST130, the determination is denied and the second display control process proceeds to step ST136 shown in FIG. 40. If the second display mode change condition is satisfied in step ST130, the determination is affirmed, and the second display control process proceeds to step ST132.
  • In step ST132, the second display mode changing unit 58D6 changes the display mode of the viewpoint video with the second avatar by erasing the second avatar from the viewpoint video with the second avatar. By erasing the second avatar, the viewpoint video with the second avatar is changed to the second viewpoint video.
  • the second display control process shifts to step ST136 shown in FIG. 40.
  • In step ST134 shown in FIG. 40, the second viewpoint video output unit 58D7 outputs the second viewpoint video to the second HMD 34B.
  • The second HMD 34B displays the second viewpoint video input from the second viewpoint video output unit 58D7 on the display 204.
  • After that, the second display control process shifts to step ST138.
  • In step ST136 shown in FIG. 40, the second viewpoint video output unit 58D7 outputs the viewpoint video with the second avatar or the second viewpoint video to the second HMD 34B. That is, the second viewpoint video output unit 58D7 outputs the viewpoint video with the second avatar to the second HMD 34B when the determination is denied in step ST130, and outputs the second viewpoint video to the second HMD 34B when the determination is affirmed in step ST130.
  • The second HMD 34B displays, on the display 204, the second viewpoint video or the viewpoint video with the second avatar input from the second viewpoint video output unit 58D7.
  • the second display control process shifts to step ST138.
  • In step ST138, the CPU 58 determines whether or not a condition for ending the second display control process (second display control process end condition) is satisfied.
  • An example of the second display control process end condition is a condition that the reception device 52, 76, 106, 152, or 202 has received an instruction to end the second display control process.
  • If the second display control process end condition is not satisfied in step ST138, the determination is denied and the second display control process proceeds to step ST100 shown in FIG. 38. If the second display control process end condition is satisfied in step ST138, the determination is affirmed and the second display control process ends.
  • The setting process in the case where the avatar non-display instruction is transmitted from the second smartphone 14B differs from the setting process in the case where the avatar non-display instruction is transmitted from the first smartphone 14A only in that the setting unit 58E operates on the second control unit 58D instead of the first control unit 58B. Therefore, the description of the setting process in the case where the avatar non-display instruction is transmitted from the second smartphone 14B will be omitted.
  • In step ST200, the setting unit 58E determines whether or not the avatar non-display instruction transmitted from the first smartphone 14A has been received. If the avatar non-display instruction transmitted from the first smartphone 14A is not received in step ST200, the determination is denied and the determination in step ST200 is performed again. When the avatar non-display instruction transmitted from the first smartphone 14A is received in step ST200, the determination is affirmed, and the setting process proceeds to step ST202.
  • In step ST202, the setting unit 58E outputs the flag setting instruction information to the first control unit 58B, and then the setting process proceeds to step ST204.
  • In step ST204, the first control unit 58B determines whether or not the avatar non-display flag is off. If the avatar non-display flag is on in step ST204, the determination is denied and the setting process proceeds to step ST208. If the avatar non-display flag is off in step ST204, the determination is affirmed and the setting process proceeds to step ST206.
  • In step ST206, the first control unit 58B changes the avatar non-display flag from off to on, and then the setting process proceeds to step ST208.
  • While the avatar non-display flag is on, the viewpoint video with the first avatar is not generated, and the first viewpoint video is displayed on the display 154 of the first HMD 34A.
  • In step ST208, the setting unit 58E determines whether or not a condition for ending the setting process (setting process end condition) is satisfied.
  • An example of the setting process end condition is a condition that an instruction to end the setting process has been received by any of the reception devices 52, 76, 106, 152, or 202.
  • If the setting process end condition is not satisfied in step ST208, the determination is denied and the setting process proceeds to step ST200. If the setting process end condition is satisfied in step ST208, the determination is affirmed and the setting process ends. (A sketch of this setting process follows below.)
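The setting process and the flag it drives can be sketched as two small classes. The class names and the `source` string are inventions of this example; the patent only fixes the behavior (an instruction from a smartphone turns the corresponding avatar non-display flag on).

```python
class ControlUnit:
    """Stand-in for the first/second control units 58B and 58D."""

    def __init__(self):
        self.avatar_non_display_flag = False

    def set_flag_on(self) -> None:
        # ST204/ST206: turn the flag on only if it is currently off.
        if not self.avatar_non_display_flag:
            self.avatar_non_display_flag = True

    def should_composite_avatar(self) -> bool:
        # While the flag is on, the avatar-containing video is never
        # generated and the plain viewpoint video is output instead.
        return not self.avatar_non_display_flag

class SettingUnit:
    """Stand-in for setting unit 58E (steps ST200-ST206)."""

    def __init__(self, first_cu: ControlUnit, second_cu: ControlUnit):
        self.first_cu = first_cu
        self.second_cu = second_cu

    def on_avatar_non_display_instruction(self, source: str) -> None:
        # An instruction from the first smartphone hides the first
        # avatar; one from the second smartphone hides the second.
        target = (self.first_cu if source == "first_smartphone"
                  else self.second_cu)
        target.set_flag_on()
```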
  • As described above, in the display control device 12, when the first viewpoint position is included in the first viewpoint video selected from the plurality of viewpoint videos 46, the first avatar capable of specifying the first viewpoint position is displayed in the first viewpoint video. Then, the size of the first avatar is changed according to the angle of view of the first viewpoint video, whereby the display size of the first avatar is changed according to the angle of view of the first viewpoint video. Therefore, in a state where the presence of the viewer 28B can be perceived through the first viewpoint video selected from the plurality of viewpoint videos 46, the manner in which the presence of the viewer 28B is perceived can be changed according to the angle of view of the viewpoint video viewed by the viewer 28A.
  • Further, the display control device 12 controls the degree of difference between the image quality of the first viewpoint video and the image quality of the first avatar to fall within the first predetermined range. Therefore, compared with the case where the degree of difference between the image quality of the first viewpoint video and the image quality of the first avatar is outside the first predetermined range, the visual discomfort caused by the difference in image quality between the first viewpoint video and the first avatar can be reduced.
  • Further, the display mode of the first avatar is changed according to the relationship between the size of the first viewpoint video and the size of the first avatar, that is, the relationship between the display size of the first viewpoint video and the display size of the first avatar. Therefore, compared with the case where the display mode of the first avatar is constant regardless of the relationship between the display size of the first viewpoint video and the display size of the first avatar, it is possible to reduce the visual discomfort caused by the difference between the display size of the first viewpoint video and the display size of the first avatar.
  • Further, the display mode of the first avatar is changed when the ratio of the display size of the first avatar to the display size of the first viewpoint video is equal to or more than the first default value; specifically, the first avatar is hidden. Therefore, compared with the case where the display mode of the first avatar is constant regardless of whether or not the ratio of the display size of the first avatar to the display size of the first viewpoint video is equal to or more than the first default value, it is possible to reduce the visual discomfort caused by the first avatar obstructing visual recognition of the first viewpoint video.
  • the first HMD34A having the display 154 is attached to the head of the viewer 28A, and the viewpoint image with the first avatar is displayed on the display 154. Therefore, the first viewpoint image and the first avatar can be visually perceived by the viewer 28A through the first HMD34A.
  • the first viewpoint image is selected from the plurality of viewpoint images 46 according to the first viewpoint line-of-sight instruction received by the touch panel 76A of the first smartphone 14A. Therefore, it is possible to provide the viewer 28A with the viewpoint video intended by the viewer 28A from the plurality of viewpoint images 46.
  • the first line-of-sight direction information is acquired by the first acquisition unit 58A. Then, an avatar capable of specifying the first line-of-sight direction indicated by the first line-of-sight direction information acquired by the first acquisition unit 58A is generated as the first avatar. Therefore, the line-of-sight direction of the viewer 28B can be perceived by the viewer 28A through the first viewpoint video selected from the plurality of viewpoint images 46.
  • Further, each of the plurality of viewpoint videos 46 has unique viewpoint position information, and each of the plurality of viewpoint videos 46 is a video showing the imaging region observed from the viewpoint position indicated by the corresponding unique viewpoint position information. The unique viewpoint position information of any one of the plurality of viewpoint videos 46 is acquired by the first acquisition unit 58A as the first viewpoint position information. Therefore, compared with the case where the viewpoint position of the viewer 28B is determined independently of the viewpoint videos 46, a viewpoint position having a strong connection with the viewpoint videos 46 can be determined as the viewpoint position of the viewer 28B.
  • Further, the unique viewpoint position information corresponding to the second viewpoint video displayed on the display 204 of the second HMD 34B is acquired by the first acquisition unit 58A as the first viewpoint position information. Therefore, compared with the case where the viewpoint position of the viewer 28B is determined independently of the second viewpoint video displayed on the display 204 of the second HMD 34B, a viewpoint position having a strong connection with the second viewpoint video displayed on the display 204 of the second HMD 34B can be determined as the viewpoint position of the viewer 28B.
  • Further, the line-of-sight direction of the viewer 28B can be determined more easily than in the case where the line-of-sight direction of the viewer 28B is determined by detecting it with a detection device.
  • the second HMD34B having the display 204 is attached to the head of the viewer 28B, and the viewpoint image with the second avatar is displayed on the display 204. Therefore, the second viewpoint image and the second avatar can be visually perceived by the viewer 28B through the second HMD34B.
  • the second viewpoint image is selected from the plurality of viewpoint images 46 according to the second viewpoint line-of-sight instruction received by the touch panel 106A of the second smartphone 14B. Therefore, it is possible to provide the viewer 28B with the viewpoint video intended by the viewer 28B from the plurality of viewpoint videos 46.
  • Further, in the display control device 12, when the second viewpoint position is included in the second viewpoint video selected from the plurality of viewpoint videos 46, the second avatar capable of specifying the second viewpoint position is displayed in the second viewpoint video. Then, the size of the second avatar is changed according to the angle of view of the second viewpoint video, whereby the display size of the second avatar is changed according to the angle of view of the second viewpoint video. Therefore, in a state where the viewers 28A and 28B can perceive each other's presence through the viewpoint videos 46 selected from the plurality of viewpoint videos 46, the manner in which the viewers 28A and 28B perceive each other's presence can be changed according to the angle of view of the viewpoint videos 46.
  • Further, the display control device 12 controls the degree of difference between the image quality of the second viewpoint video and the image quality of the second avatar to fall within the second predetermined range. Therefore, compared with the case where the degree of difference between the image quality of the second viewpoint video and the image quality of the second avatar is outside the second predetermined range, the visual discomfort caused by the difference in image quality between the second viewpoint video and the second avatar can be reduced.
  • Further, the display mode of the second avatar is changed according to the relationship between the size of the second viewpoint video and the size of the second avatar, that is, the relationship between the display size of the second viewpoint video and the display size of the second avatar. Therefore, compared with the case where the display mode of the second avatar is constant regardless of the relationship between the display size of the second viewpoint video and the display size of the second avatar, it is possible to reduce the visual discomfort caused by the difference between the display size of the second viewpoint video and the display size of the second avatar.
  • Further, the display mode of the second avatar is changed when the ratio of the display size of the second avatar to the display size of the second viewpoint video is equal to or more than the second default value; specifically, the second avatar is hidden. Therefore, compared with the case where the display mode of the second avatar is constant regardless of whether or not the ratio of the display size of the second avatar to the display size of the second viewpoint video is equal to or more than the second default value, it is possible to reduce the visual discomfort caused by the second avatar obstructing visual recognition of the second viewpoint video.
  • the second line-of-sight direction information is acquired by the second acquisition unit 58C. Then, an avatar capable of specifying the second line-of-sight direction indicated by the second line-of-sight direction information acquired by the second acquisition unit 58C is generated as the second avatar. Therefore, the line-of-sight direction of the viewer 28A can be perceived by the viewer 28B through the second viewpoint video selected from the plurality of viewpoint images 46.
  • Further, each of the plurality of viewpoint videos 46 has unique viewpoint position information, and each of the plurality of viewpoint videos 46 is a video showing the imaging region observed from the viewpoint position indicated by the corresponding unique viewpoint position information. The first acquisition unit 58A acquires the unique viewpoint position information of any one of the plurality of viewpoint videos 46 as the first viewpoint position information, and the second acquisition unit 58C acquires the unique viewpoint position information of any one of the plurality of viewpoint videos 46 as the second viewpoint position information. Therefore, compared with the case where the viewpoint positions of the viewers 28A and 28B are determined independently of the viewpoint videos 46, viewpoint positions having a strong connection with the viewpoint videos 46 can be determined as the viewpoint positions of the viewers 28A and 28B.
  • Further, the unique viewpoint position information corresponding to the second viewpoint video displayed on the display 204 of the second HMD 34B is acquired by the first acquisition unit 58A as the first viewpoint position information, and the unique viewpoint position information corresponding to the first viewpoint video displayed on the display 154 of the first HMD 34A is acquired by the second acquisition unit 58C as the second viewpoint position information. Therefore, compared with the case where the viewpoint positions of the viewers 28A and 28B are determined independently of the displayed viewpoint videos 46, viewpoint positions strongly connected with the displayed viewpoint videos 46 can be determined as the respective viewpoint positions of the viewers 28A and 28B.
  • information indicating the direction facing the second viewpoint image displayed on the display 204 of the second HMD34B is acquired by the first acquisition unit 58A as the first line-of-sight direction information.
  • Further, information indicating the direction facing the first viewpoint video displayed on the display 154 of the first HMD 34A is acquired by the second acquisition unit 58C as the second line-of-sight direction information. Therefore, compared with the case where the line-of-sight direction of the viewer 28B is determined by detecting it with a first detection device and the line-of-sight direction of the viewer 28A is determined by detecting it with a second detection device, the line-of-sight directions of the viewers 28A and 28B can be determined easily.
  • Further, the setting unit 58E makes a setting to hide the second avatar when the avatar non-display instruction transmitted from the second smartphone 14B is received by the setting unit 58E. Therefore, it is possible to prevent the second avatar from being displayed in the viewpoint videos 46, in accordance with the intention of the viewer 28B.
  • Further, the first viewpoint position and the second viewpoint position are limited to a part of the imaging region (the spectator seat 26 in the example shown in FIGS. 1 and 3). Therefore, compared with the case where the first viewpoint position and the second viewpoint position are not limited to a part of the imaging region, the viewers 28A and 28B can more easily perceive each other's presence through the viewpoint videos selected from the plurality of viewpoint videos 46.
  • Further, when the avatar non-display instruction transmitted from the first smartphone 14A is received by the setting unit 58E, the setting unit 58E makes a setting to hide the first avatar. Therefore, it is possible to prevent the first avatar from being displayed in the viewpoint videos 46, in accordance with the intention of the viewer 28A.
  • the virtual viewpoint image 46C is included in the plurality of viewpoint images 46. Therefore, it is possible to grasp the aspect of the imaging region observed from a position where the actual imaging device does not exist.
  • In the above embodiment, the first viewpoint position and the second viewpoint position are limited to a part of the imaging region (the spectator seat 26 in the examples shown in FIGS. 1 and 3), but the technology of the present disclosure is not limited to this; only one of the first viewpoint position and the second viewpoint position may be limited to a part of the imaging region.
  • Further, in the above embodiment, the non-display of the avatar is mentioned as an example of changing the display mode of the avatar, but the technology of the present disclosure is not limited to this. Other examples of changing the display mode include displaying only the outline of the avatar, making the avatar translucent, and the like.
  • Further, in the above embodiment, the display mode of the first avatar is changed only according to the relationship between the display size of the first viewpoint video and the display size of the first avatar, but the technology of the present disclosure is not limited to this. For example, the first display mode changing unit 58B6 may change the display mode of the first avatar according to the relationship between the display size of the first viewpoint video and the display size of the first avatar, and the relationship between the display position of the first viewpoint video and the display position of the first avatar.
  • For example, when the ratio of the display size of the first avatar to the display size of the viewpoint video with the first avatar is equal to or more than the first default value, and the first avatar overlaps a specific region (for example, the central region 180) of the viewpoint video with the first avatar, the display mode of the first avatar is changed by the first display mode changing unit 58B6. Examples of the change in the display mode of the first avatar include erasing the first avatar from the viewpoint video with the first avatar, displaying only the outline of the first avatar, and making the first avatar translucent.
  • As a result, compared with the case where the display mode of the first avatar is constant regardless of the relationship between the display size of the first viewpoint video and the display size of the first avatar and the relationship between the display position of the first viewpoint video and the display position of the first avatar, it is possible to reduce the visual discomfort caused by the difference between the display size of the first viewpoint video and the display size of the first avatar and the difference between the display position of the first viewpoint video and the display position of the first avatar.
  • Here, the display mode of the first avatar is changed when the first avatar overlaps the central region 180, but the technology of the present disclosure is not limited to this; the display mode of the first avatar may be changed when the first avatar overlaps a region different from the central region 180 in the viewpoint video with the first avatar. (A sketch of this size-and-position condition follows below.)
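Under the assumption that the avatar and the regions are tracked as axis-aligned boxes `(left, top, right, bottom)` in frame pixels, the combined size-and-position condition might be tested as below; all names are invented for this sketch.

```python
def area(box: tuple[int, int, int, int]) -> int:
    return max(0, box[2] - box[0]) * max(0, box[3] - box[1])

def overlaps(a: tuple[int, int, int, int],
             b: tuple[int, int, int, int]) -> bool:
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def display_mode_change_needed(avatar_box, frame_box, region_box,
                               first_default_value: float) -> bool:
    """Change the display mode only when the avatar is both large
    relative to the frame AND overlapping the watched region (for
    example the central region 180)."""
    size_condition = area(avatar_box) / area(frame_box) >= first_default_value
    return size_condition and overlaps(avatar_box, region_box)
```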
  • Here, the relationship of size and position between the viewpoint video with the first avatar and the first avatar has been illustrated, but if the same conditions are satisfied for the relationship of size and position between the viewpoint video with the second avatar and the second avatar, the display mode of the second avatar may be changed by the second display mode changing unit 58D6 in the same manner as the change of the display mode of the first avatar. As a result, it is possible to reduce the visual discomfort, compared with the case where the display mode of the second avatar is constant regardless of the relationship between the display size of the second viewpoint video and the display size of the second avatar and the relationship between the display position of the second viewpoint video and the display position of the second avatar.
  • Further, in the above embodiment, the first display mode changing unit 58B6 hides the first avatar when the first display mode change condition is satisfied, but the technology of the present disclosure is not limited to this.
  • For example, when the ratio of the display size of the first avatar to the display size of the first viewpoint video is less than the third default value, the first display mode changing unit 58B6 may display the first avatar on the display 154 of the first HMD 34A in a display mode in which the first avatar is emphasized more than the other regions in the viewpoint video with the first avatar.
  • Examples of the emphasized display mode include highlighting the outline of the first avatar, displaying the first avatar with a mark such as an arrow indicating the line-of-sight direction of the first avatar, and displaying the first avatar in a pop-up.
  • the third default value is an example of the "second threshold value" according to the technique of the present disclosure. As the third default value, for example, the above-mentioned first default value can be mentioned. Further, the third default value may be a value smaller than the first default value.
  • Here, the example in which the first avatar is displayed in a display mode emphasized more than the other regions in the viewpoint video with the first avatar has been given, but the technology of the present disclosure is not limited to this. For example, the second display mode changing unit 58D6 may change the display mode of the second avatar in the same manner as the change of the display mode of the first avatar.
  • As a result, compared with the case where the display mode of the second avatar is constant regardless of whether or not the ratio of the display size of the second avatar to the display size of the second viewpoint video is less than the third default value, the presence of the viewer 28A can be easily perceived through the viewpoint video selected from the plurality of viewpoint videos 46.
  • the "third default value” is an example of the "fourth threshold value” according to the technique of the present disclosure.
  • the display mode is changed when the first avatar overlaps the central region 180, but the technique of the present disclosure is not limited to this.
  • For example, when at least a part of the first avatar overlaps the attention region of the viewer 28A in the viewpoint video with the first avatar, the display mode may be changed by the first display mode changing unit 58B6.
  • For example, while the first avatar does not overlap the attention region, the first avatar is displayed as it is; when the first avatar overlaps the attention region, the first avatar is made translucent, and the translucent first avatar is superimposed and displayed on the first viewpoint video.
  • The attention region may be defined according to an instruction received by the reception device 76 of the first smartphone 14A and/or the reception device 152 of the first HMD 34A. Further, the eye movement of the viewer 28A may be detected by the eye tracker 166 (see FIG. 6) of the first HMD 34A, and the attention region in the viewpoint video with the first avatar may be determined according to the detection result. (A sketch of this attention-region handling follows below.)
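A minimal sketch of the attention-region variant follows, assuming the attention region is available as a box (fixed by an instruction or derived from the eye tracker's gaze samples) and that the renderer exposes an alpha parameter. The `draw` call and the box form are inventions of this example.

```python
def render_avatar(frame, avatar, avatar_box, attention_box) -> None:
    """Superimpose the avatar, making it translucent only while any
    part of it overlaps the viewer's attention region."""
    overlap = (avatar_box[0] < attention_box[2] and
               attention_box[0] < avatar_box[2] and
               avatar_box[1] < attention_box[3] and
               attention_box[1] < avatar_box[3])
    alpha = 0.5 if overlap else 1.0  # translucent only during overlap
    frame.draw(avatar, avatar_box, alpha=alpha)
```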
  • Similarly, the display mode may be changed by the second display mode changing unit 58D6 when at least a part of the second avatar overlaps the attention region of the viewer 28B in the viewpoint video with the second avatar.
  • The attention region may be defined according to an instruction received by the reception device 106 of the second smartphone 14B and/or the reception device 202 of the second HMD 34B.
  • Further, the eye movement of the viewer 28B may be detected by the eye tracker 216 (see FIG. 6) of the second HMD 34B, and the attention region in the viewpoint video with the second avatar may be determined according to the detection result.
  • Further, in the above embodiment, the unique line-of-sight direction information is used as the first line-of-sight direction information and the second line-of-sight direction information, but the technology of the present disclosure is not limited to this.
  • the first line-of-sight direction information may be determined based on the detection result of the eye tracker 166, or the second line-of-sight direction information may be determined based on the detection result of the eye tracker 216.
  • Further, the positions of the viewers 28A and 28B may be specified based on the reception results of the GPS receivers 72 and 102, and the specified positions of the viewers 28A and 28B may be used as the first viewpoint position and the second viewpoint position. However, the technology of the present disclosure is not limited to this; for example, the viewer 28A may be made to select any one of the viewpoint videos 46 as the first viewpoint video via the touch panel 76A. The same applies to the selection of the second viewpoint video.
  • Further, in the above embodiment, the avatar has been illustrated, but the technology of the present disclosure is not limited to this; any information may be used as long as one of the viewers 28A and 28B can specify the viewpoint position and the line-of-sight direction of the other.
  • Information that can specify the viewpoint position and the line-of-sight direction includes, for example, a mark such as an arrow, a combination of a mark such as an arrow and an avatar, or an arrow indicating the position of an avatar.
  • Further, in the above embodiment, the CPU 58 of the display control device 12 executes the first display control process, the second display control process, and the setting process (hereinafter referred to as "display control device side processing" when it is not necessary to distinguish between them), but the technology of the present disclosure is not limited to this. The display control device side processing may be distributed and executed by the first smartphone 14A, the second smartphone 14B, the first HMD 34A, and the second HMD 34B.
  • For example, the first HMD 34A may be made to execute the first display control process and the setting process, and the second HMD 34B may be made to execute the second display control process and the setting process.
  • the first display control program 60A and the setting program 60C are stored in the storage 162 of the first HMD34A.
  • the CPU 160 executes the first display control process by operating as the first acquisition unit 58A and the first control unit 58B according to the first display control program 60A. Further, the CPU 160 executes the setting process by operating as the setting unit 58E according to the setting program 60C.
  • the second display control program 60B and the setting program 60C are stored in the storage 212 of the second HMD34B.
  • The CPU 210 executes the second display control process by operating as the second acquisition unit 58C and the second control unit 58D according to the second display control program 60B. Further, the CPU 210 executes the setting process by operating as the setting unit 58E according to the setting program 60C.
  • In the above embodiment, the first HMD 34A and the second HMD 34B have been exemplified, but the technology of the present disclosure is not limited to this; at least one of the first HMD 34A and the second HMD 34B can be substituted by any of various devices equipped with an arithmetic unit, such as a smartphone, a tablet terminal, a head-up display, or a personal computer.
  • Further, in the above embodiment, the soccer field 22 has been illustrated, but this is merely an example; any place, such as a baseball field, a curling ground, or a swimming pool, may be used as long as a plurality of imaging devices can be installed.
  • Further, in the above embodiment, the wireless communication method using the base station 20 is illustrated, but this is merely an example; the technology of the present disclosure also holds with a wired communication method using a cable.
  • Further, in the above embodiment, the unmanned aerial vehicle 27 is illustrated, but the technology of the present disclosure is not limited to this, and the imaging region may be imaged by the imaging device 18 suspended by a wire (for example, a self-propelled imaging device that can move along the wire).
  • In the above embodiment, the computers 50, 70, 100, 150, and 200 have been exemplified, but the technology of the present disclosure is not limited thereto. For example, instead of the computers 50, 70, 100, 150, and/or 200, a device including an ASIC, an FPGA, and/or a PLD may be applied. Further, instead of the computers 50, 70, 100, 150, and/or 200, a combination of a hardware configuration and a software configuration may be used.
  • Further, in the above embodiment, the display control device program is stored in the storage 60, but the technology of the present disclosure is not limited to this; as shown in FIG. 47 as an example, the display control device program may be stored in an arbitrary portable storage medium 400, such as an SSD or a USB memory, which is a non-temporary storage medium.
  • the display control device program stored in the storage medium 400 is installed in the computer 50, and the CPU 58 executes the display control device side processing according to the display control device program.
  • Further, the display control device program may be stored in a storage unit of another computer or server device connected to the computer 50 via a communication network (not shown), and the display control device program may be downloaded to the display control device 12 in response to a request from the display control device 12.
  • the display control device side processing based on the downloaded display control device program is executed by the CPU 58 of the computer 50.
  • the CPU 58 is illustrated, but the technique of the present disclosure is not limited to this, and a GPU may be adopted. Further, a plurality of CPUs may be adopted instead of the CPU 58. That is, the display control device side processing may be executed by one processor or a plurality of physically separated processors.
  • The following various processors can be used as hardware resources for executing the display control device side processing.
  • Examples of the processor include a CPU, which, as described above, is a general-purpose processor that functions as a hardware resource for executing the display control device side processing according to software, that is, a program.
  • Other examples include a dedicated electric circuit, which is a processor having a circuit configuration designed exclusively for executing specific processing, such as an FPGA, a PLD, or an ASIC.
  • a memory is built in or connected to each processor, and each processor executes display control device side processing by using the memory.
  • The hardware resource that executes the display control device side processing may be composed of one of these various processors, or may be composed of a combination of two or more processors of the same type or different types (for example, a combination of a plurality of FPGAs, or a combination of a CPU and an FPGA). Further, the hardware resource that executes the display control device side processing may be one processor.
  • As an example of configuring the hardware resource with one processor, first, there is a form in which one processor is configured by a combination of one or more CPUs and software, and this processor functions as the hardware resource that executes the display control device side processing.
  • Second, as represented by an SoC, there is a form in which a processor that realizes, with one IC chip, the functions of the entire system including the plurality of hardware resources for executing the display control device side processing is used.
  • the display control device side processing is realized by using one or more of the above-mentioned various processors as a hardware resource.
  • In the present specification, "A and/or B" is synonymous with "at least one of A and B". That is, "A and/or B" means that it may be only A, only B, or a combination of A and B. Further, in the present specification, when three or more matters are connected and expressed by "and/or", the same concept as "A and/or B" is applied.
  • The above processor acquires first viewpoint position information indicating a first viewpoint position of a first person with respect to an imaging region. It then controls a first display unit, which is capable of displaying an image visually recognized by a second person different from the first person, so as to display a first viewpoint image selected from a plurality of viewpoint images generated on the basis of images obtained by imaging the imaging region from a plurality of mutually different viewpoint positions. When the first viewpoint position indicated by the acquired first viewpoint position information is included in the first viewpoint image, the display control device controls to display, in the first viewpoint image, first specific information capable of specifying that first viewpoint position, and controls to change the display size of the first specific information according to the angle of view of the first viewpoint image displayed by the first display unit (a minimal code sketch of this behavior follows this list).
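To make the viewpoint selection, the containment check, and the angle-of-view-dependent sizing concrete, here is a minimal sketch of the behavior described above. It is an illustration under stated assumptions, not the implementation of this publication: the Viewpoint model, the field-coordinate containment test, the inverse-proportional scaling rule in marker_size_px, and all numeric values (base size 32 px, reference angle 100°) are hypothetical choices introduced for clarity.

    from dataclasses import dataclass

    @dataclass
    class Viewpoint:
        """One candidate viewpoint image of the imaging region (hypothetical model)."""
        image_id: str
        view_angle_deg: float   # current angle of view of this viewpoint image
        visible_region: tuple   # (x_min, y_min, x_max, y_max) in field coordinates

    def contains(viewpoint: Viewpoint, position: tuple) -> bool:
        """True when the first viewpoint position falls inside this viewpoint image."""
        x, y = position
        x_min, y_min, x_max, y_max = viewpoint.visible_region
        return x_min <= x <= x_max and y_min <= y <= y_max

    def marker_size_px(view_angle_deg: float,
                       base_size_px: float = 32.0,
                       reference_angle_deg: float = 100.0) -> float:
        """Assumed rule: the narrower the angle of view (the more zoomed in the
        image), the larger the first specific information is drawn."""
        return base_size_px * (reference_angle_deg / view_angle_deg)

    def render_first_specific_information(selected: Viewpoint, first_pos: tuple) -> None:
        """Draw the first specific information only when the first viewpoint
        position is contained in the displayed first viewpoint image."""
        if contains(selected, first_pos):
            size = marker_size_px(selected.view_angle_deg)
            print(f"draw marker at {first_pos} in {selected.image_id}, size {size:.0f} px")

    # Example: the same first viewpoint position rendered in a wide and a zoomed view.
    wide = Viewpoint("cam-wide", view_angle_deg=100.0, visible_region=(0, 0, 100, 70))
    tele = Viewpoint("cam-tele", view_angle_deg=25.0, visible_region=(30, 20, 70, 50))
    for vp in (wide, tele):
        render_first_specific_information(vp, first_pos=(50.0, 30.0))

Under this assumed rule the marker is drawn at 32 px in the 100° view and at 128 px in the 25° view; the publication only states that the display size changes according to the angle of view, so other monotone scaling rules would be equally consistent with it.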

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Computer Graphics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Controls And Circuits For Display Device (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a display control device comprising: a first acquisition unit that acquires first viewpoint position information; and a first control unit that performs control to display, on a first display unit, a first viewpoint image selected from a plurality of viewpoint images generated on the basis of an image obtained by imaging an imaging region from a plurality of viewpoint positions. When the first viewpoint image includes the first viewpoint position indicated by the acquired first viewpoint position information, the first control unit performs control to display, in the first viewpoint image, first specific information capable of specifying the first viewpoint position, and also performs control to change the display size of the first specific information according to the angle of view of the first viewpoint image displayed by the first display unit.
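As a worked illustration of the abstract's size-changing rule (using the same assumed inverse proportionality as the sketch above; the publication itself does not specify the functional form): if the displayed size is s = s_ref * (θ_ref / θ) for angle of view θ, then narrowing the angle of view from θ = 100° to θ = 50°, i.e. zooming in, doubles the displayed size of the first specific information.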
PCT/JP2020/024637 2019-06-28 2020-06-23 Display control device, display control method, and program WO2020262391A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2021527648A JP7163498B2 (ja) 2019-06-28 2020-06-23 Display control device, display control method, and program
US17/558,537 US11909945B2 (en) 2019-06-28 2021-12-21 Display control device, display control method, and program
US18/398,988 US20240129449A1 (en) 2019-06-28 2023-12-28 Display control device, display control method, and program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019122033 2019-06-28
JP2019-122033 2019-06-28

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/558,537 Continuation US11909945B2 (en) 2019-06-28 2021-12-21 Display control device, display control method, and program

Publications (1)

Publication Number Publication Date
WO2020262391A1 true WO2020262391A1 (fr) 2020-12-30

Family

ID=74060164

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/024637 WO2020262391A1 (fr) 2019-06-28 2020-06-23 Display control device, display control method, and program

Country Status (3)

Country Link
US (2) US11909945B2 (fr)
JP (1) JP7163498B2 (fr)
WO (1) WO2020262391A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4404031A1 * 2023-01-17 2024-07-24 Deutsche Telekom AG Improving the usefulness of a virtual space

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002032788A * 2000-07-14 2002-01-31 Nippon Telegr & Teleph Corp <Ntt> Virtual reality providing method, virtual reality providing apparatus, and recording medium recording a virtual reality providing program
JP2015116336A * 2013-12-18 2015-06-25 Microsoft Corporation Mixed reality arena
WO2017068824A1 * 2015-10-21 2017-04-27 Sharp Corporation Image generation device, method for controlling image generation device, display system, image generation control program, and computer-readable recording medium

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050046698A1 (en) * 2003-09-02 2005-03-03 Knight Andrew Frederick System and method for producing a selectable view of an object space
US9167289B2 (en) * 2010-09-02 2015-10-20 Verizon Patent And Licensing Inc. Perspective display systems and methods
US9363441B2 (en) * 2011-12-06 2016-06-07 Musco Corporation Apparatus, system and method for tracking subject with still or video camera
KR101586651B1 * 2015-02-25 2016-01-20 Laon Entertainment Co., Ltd. Multiplayer robot game system using augmented reality
JP6674247B2 * 2015-12-14 2020-04-01 Canon Inc. Information processing apparatus, information processing method, and computer program
JP6938123B2 * 2016-09-01 2021-09-22 Canon Inc. Display control device, display control method, and program
JP6407225B2 * 2016-09-30 2018-10-17 Canon Inc. Image processing apparatus, image processing method, image processing system, and program
US10413803B2 (en) * 2016-12-20 2019-09-17 Canon Kabushiki Kaisha Method, system and apparatus for displaying a video sequence
JP2018106297A 2016-12-22 2018-07-05 Canon Marketing Japan Inc. Mixed reality presentation system, information processing apparatus and control method thereof, and program
JP7013139B2 * 2017-04-04 2022-01-31 Canon Inc. Image processing apparatus, image generation method, and program
JP6298558B1 2017-05-11 2018-03-20 Colopl, Inc. Method for providing a virtual space, program for causing a computer to execute the method, and information processing apparatus for executing the program
JP6957215B2 * 2017-06-06 2021-11-02 Canon Inc. Information processing apparatus, information processing method, and program
US10687119B2 (en) * 2017-06-27 2020-06-16 Samsung Electronics Co., Ltd System for providing multiple virtual reality views
US10610303B2 (en) * 2017-06-29 2020-04-07 Verb Surgical Inc. Virtual reality laparoscopic tools
US10284882B2 (en) * 2017-08-31 2019-05-07 NBA Properties, Inc. Production of alternative views for display on different types of devices
JP7006912B2 * 2017-09-25 2022-01-24 University of Tsukuba Image processing device, image display device, and image processing program
US20190197789A1 (en) * 2017-12-23 2019-06-27 Lifeprint Llc Systems & Methods for Variant Payloads in Augmented Reality Displays

Also Published As

Publication number Publication date
US11909945B2 (en) 2024-02-20
JPWO2020262391A1 (fr) 2020-12-30
JP7163498B2 (ja) 2022-10-31
US20240129449A1 (en) 2024-04-18
US20220116582A1 (en) 2022-04-14

Similar Documents

Publication Publication Date Title
JP4553362B2 (ja) System, image processing apparatus, and information processing method
US9858643B2 (en) Image generating device, image generating method, and program
JP7042644B2 (ja) Information processing apparatus, image generation method, and computer program
JP6732716B2 (ja) Image generation device, image generation system, image generation method, and program
JP2001008232A (ja) Omnidirectional video output method and apparatus
US7782320B2 (en) Information processing method and information processing apparatus
US20190045125A1 (en) Virtual reality video processing
JP2017204674A (ja) Imaging device, head-mounted display, information processing system, and information processing method
JP2001195601A (ja) Mixed reality presentation apparatus, mixed reality presentation method, and storage medium
JP7182920B2 (ja) Image processing apparatus, image processing method, and program
JP6523493B1 (ja) Program, information processing apparatus, and information processing method
JP6126271B1 (ja) Method, program, and recording medium for providing a virtual space
JP7234021B2 (ja) Image generation device, image generation system, image generation method, and program
US20200118341A1 (en) Image generating apparatus, image generating system, image generating method, and program
JP2021034885A (ja) Image generation device, image display device, and image processing method
US20240129449A1 (en) Display control device, display control method, and program
US11430178B2 (en) Three-dimensional video processing
JP2005339377A (ja) Image processing method and image processing apparatus
JP7317119B2 (ja) Information processing apparatus, information processing method, and program
JP2017208808A (ja) Method, program, and recording medium for providing a virtual space
JP2004310686A (ja) Image processing method and apparatus
US11287658B2 (en) Picture processing device, picture distribution system, and picture processing method
JP6685814B2 (ja) Imaging apparatus and control method thereof
JP6442619B2 (ja) Information processing apparatus
EP4436159A1 (fr) Information processing device, image processing method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20832878

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021527648

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20832878

Country of ref document: EP

Kind code of ref document: A1