WO2019176035A1 - Image generation device, image generation system, and image generation method - Google Patents

Image generation device, image generation system, and image generation method

Info

Publication number
WO2019176035A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
posture
image
posture information
hmd
Prior art date
Application number
PCT/JP2018/010078
Other languages
French (fr)
Japanese (ja)
Inventor
孝範 南野
武 中川
貴一 池田
Original Assignee
Sony Interactive Entertainment Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Interactive Entertainment Inc.
Priority to PCT/JP2018/010078 priority Critical patent/WO2019176035A1/en
Priority to JP2020506041A priority patent/JP7122372B2/en
Publication of WO2019176035A1 publication Critical patent/WO2019176035A1/en

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00: Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/20: Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes, for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G5/00: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20: Image signal generators
    • H04N13/204: Image signal generators using stereoscopic image cameras
    • H04N13/239: Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H04N13/30: Image reproducers
    • H04N13/332: Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/344: Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • H04N13/366: Image reproducers using viewer tracking
    • H04N5/00: Details of television systems
    • H04N5/66: Transforming electric information into light information
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present invention relates to an apparatus, system, and method for generating an image.
  • Head-mounted displays (HMDs) are used in various fields. By providing an HMD with a head tracking function and updating the display screen in conjunction with the posture of the user's head, the sense of immersion in the video world can be enhanced.
  • A system is used in which a user wearing an HMD remotely operates a robot: a command is transmitted from the HMD to a control device such as a robot located at a remote place to operate the robot, and an image captured by a camera mounted on the robot is received from the robot and displayed on the display panel of the HMD.
  • In such a remote operation system, the posture of the robot's camera is controlled in conjunction with the posture of the user wearing the HMD, and the user can view the image captured by the robot's camera in real time, so remote operation can be performed with a sense of presence, as if the user were at the remote location. However, because of delays and errors in the robot's motor control, as well as fluctuating network delays, the camera posture does not always match the HMD posture, and the user ends up viewing a camera image that deviates from the HMD posture, which can cause a feeling like motion sickness.
  • the present invention has been made in view of these problems, and an object of the present invention is to provide an image generation technique capable of adapting the camera image displayed on the head mounted display to the attitude of the head mounted display.
  • To solve the above problem, an image generation apparatus according to one aspect of the present invention includes: a camera image acquisition unit that acquires an image captured by a camera whose posture can change in conjunction with posture changes of a head-mounted display; a camera posture information acquisition unit that acquires the camera's posture information at the time of shooting; a difference calculation unit that calculates the difference between the posture information of the head-mounted display and the camera's posture information at the time of shooting; and an image correction unit that corrects the captured image, based on the calculated difference, so as to match the posture of the head-mounted display.
  • Another aspect of the present invention is an image generation method.
  • This method includes: a camera image acquisition step of acquiring an image captured by a camera whose posture can change in conjunction with posture changes of the head-mounted display; a camera posture information acquisition step of acquiring the camera's posture information at the time of shooting; a difference calculation step of calculating the difference between the posture information of the head-mounted display and the camera's posture information at the time of shooting; and an image correction step of correcting the captured image, based on the calculated difference, so as to match the posture of the head-mounted display.
  • Any combination of the above components, and any conversion of the expression of the present invention between a method, an apparatus, a system, a computer program, a recording medium on which the computer program is readably recorded, a data structure, and the like, are also effective as aspects of the present invention.
  • According to the present invention, the camera image displayed on a head-mounted display can be adapted to the posture of the head-mounted display.
  • FIG. 1 shows a configuration example of an information processing system 1 according to the present embodiment.
  • the information processing system 1 includes a robot 10 and a head mounted display device (HMD) 100 that a user A wears on the head.
  • the HMD 100 is connected to the network 4 via an access point (AP) 2.
  • the AP 2 has functions of a wireless access point and a router, and the HMD 100 is connected to the AP 2 by a known wireless communication protocol, but may be connected by a cable.
  • the robot 10 includes an actuator device 12 and a housing 20 that is driven by the actuator device 12 so that the posture can be changed.
  • the housing 20 is equipped with a right camera 14a and a left camera 14b.
  • the right camera 14a and the left camera 14b are referred to as “camera 14” unless otherwise distinguished.
  • the camera 14 is provided in a housing 20 that is driven by the actuator device 12.
  • the robot 10 is connected to the network 4 via an access point (AP) 3.
  • the robot 10 is connected to the AP 3 by a known wireless communication protocol, but may be connected by a cable.
  • the HMD 100 and the robot 10 are communicably connected via the network 4.
  • the HMD 100 and the robot 10 may be connected to each other so as to be able to communicate directly or wirelessly without using an AP.
  • In the information processing system 1, the robot 10 operates as a so-called alter ego of user A.
  • the movement of the HMD 100 worn by the user A is transmitted to the robot 10, and the actuator device 12 moves the housing 20 in conjunction with the movement of the HMD 100.
  • For example, when user A nods the head back and forth, the actuator device 12 swings the housing 20 back and forth; when user A shakes the head left and right, the actuator device 12 swings the housing 20 left and right.
  • This allows people around the robot 10 to communicate with user A with the sense that user A is actually there.
  • an information processing device such as a game machine or a personal computer having a communication function may be provided as a client device, and the HMD 100 may be connected to the client device wirelessly or by wire.
  • an information processing device such as a game machine or a personal computer may be provided as a server device instead of AP3, and the robot 10 may be connected to the server device wirelessly or by wire.
  • the right camera 14a and the left camera 14b are arranged at a predetermined interval in the horizontal direction on the front surface of the housing 20.
  • the right camera 14a and the left camera 14b constitute a stereo camera.
  • the right camera 14a captures a right-eye image at a predetermined cycle
  • the left camera 14b captures a left-eye image at a predetermined cycle.
  • The right camera 14a and the left camera 14b are configured so that their posture can change in conjunction with the posture change of the HMD 100.
  • the captured right-eye image and left-eye image are transmitted to the user A's HMD 100 in real time.
  • the HMD 100 displays the received right-eye image on the right-eye display panel, and displays the received left-eye image on the left-eye display panel.
  • the user A can view the video in the direction in which the housing 20 of the robot 10 is facing in real time.
  • the robot 10 is remotely operated by the user A to reproduce the movement of the face of the user A, and the user A can view the image around the robot through the HMD 100.
  • the HMD 100 is not limited to an immersive (non-transmissive) display device that completely covers both eyes, and may be a transmissive display device.
  • the shape may be a hat shape as shown in the figure, but may also be a glasses shape.
  • The HMD 100 need not be a dedicated head-mounted display device; it may instead be composed of a terminal device having a display panel, a microphone, and a speaker, together with a casing that fixes the terminal device's display panel at a position in front of the user's eyes.
  • the terminal device may have a relatively small display panel such as a smartphone or a portable game machine.
  • FIG. 2 shows functional blocks of the HMD 100.
  • the control unit 120 is a main processor that processes and outputs various signals and data such as image signals, audio signals, sensor information, and commands.
  • the storage unit 122 temporarily stores data, commands, and the like that are processed by the control unit 120.
  • The posture sensor 124 detects posture information such as the rotation angle and inclination of the HMD 100 at a predetermined cycle.
  • the posture sensor 124 includes at least a triaxial acceleration sensor and a triaxial gyro sensor.
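The text does not specify how these sensor readings are fused into posture information. As an illustrative sketch only, a complementary filter is one common way to combine gyro integration (responsive but drifting) with the accelerometer's gravity reference (noisy but drift-free); all names, axis conventions, and the blend factor below are assumptions, not part of the disclosure.

```python
import math

def fuse_orientation(pitch, roll, gyro_rate, accel, dt, alpha=0.98):
    """One complementary-filter step. pitch/roll are the previous
    estimates in radians; gyro_rate = (pitch_rate, roll_rate) in rad/s;
    accel = (ax, ay, az) in m/s^2. Axis conventions depend on sensor
    mounting and are assumed here for illustration."""
    # Gyro integration: responsive, but drifts over time.
    pitch_g = pitch + gyro_rate[0] * dt
    roll_g = roll + gyro_rate[1] * dt
    ax, ay, az = accel
    # Gravity direction from the accelerometer: noisy, but drift-free.
    pitch_a = math.atan2(-ax, math.hypot(ay, az))
    roll_a = math.atan2(ay, az)
    # Blend the two estimates; yaw would need the gyro alone
    # (or a magnetometer), since gravity carries no yaw information.
    return (alpha * pitch_g + (1 - alpha) * pitch_a,
            alpha * roll_g + (1 - alpha) * roll_a)
```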
  • the communication control unit 126 transmits and receives signals and data to and from the robot 10 by wired or wireless communication via a network adapter or an antenna.
  • the communication control unit 126 receives posture information detected by the posture sensor 124 from the control unit 120 and transmits the posture information to the robot 10.
  • the communication control unit 126 receives image data from the robot 10 and supplies the image data to the control unit 120.
  • the control unit 120 supplies the image data to the display panel 102 for display.
  • the reprojection unit 130 performs a reprojection process on the image data received from the robot 10. The detailed configuration of the reprojection unit 130 will be described later.
  • FIG. 3 shows an external configuration of the robot 10.
  • the housing 20 accommodates the camera 14.
  • the camera 14 is provided on the front surface of the housing.
  • the camera 14 operates by being supplied with electric power from a power supply device accommodated in the housing 36 via a power line (not shown).
  • The housing 20 has a protective cover 19. When the robot 10 is not in use, that is, when the power of the robot 10 is turned off, the protective cover 19 is placed in a closed position covering the front of the housing to protect the camera 14.
  • the housing 20 is supported by the actuator device 12 so that the posture can be changed.
  • the actuator device 12 includes a leg portion 40, a hemispherical housing 36 supported on the upper portion of the leg portion 40, and a drive mechanism 50 for driving the housing 20.
  • The drive mechanism 50 includes a first arc-shaped arm 32 having a first elongated through-hole 32a formed along its longitudinal direction, a second arc-shaped arm 34 having a second elongated through-hole 34a formed along its longitudinal direction, and a pedestal 30 that rotatably supports the first arc-shaped arm 32 and the second arc-shaped arm 34 in a state where the two arms intersect each other.
  • the upper side of the pedestal 30 is covered with a cover 38, and motors for rotating the first arcuate arm 32 and the second arcuate arm 34 are arranged in the space covered with the cover 38, respectively.
  • the pedestal 30 is rotatably supported with respect to the housing 36, and a motor for rotating the pedestal 30 is disposed in the housing 36.
  • the first arc-shaped arm 32 and the second arc-shaped arm 34 are formed in a semicircular shape, and both ends are supported by the pedestal 30 so as to have the same center of rotation.
  • The diameter of the semicircular first arc-shaped arm 32 is slightly larger than that of the semicircular second arc-shaped arm 34, and the first arc-shaped arm 32 is arranged on the outer peripheral side of the second arc-shaped arm 34.
  • the first arc-shaped arm 32 and the second arc-shaped arm 34 may be arranged so as to be orthogonal to each other on the pedestal 30.
  • a line connecting both ends of the first arcuate arm 32 supported by the pedestal 30 and a line connecting both ends of the second arcuate arm 34 supported by the pedestal 30 are orthogonal to each other.
  • The insertion member 42 is inserted through the first elongated through-hole 32a and the second elongated through-hole 34a and is positioned at the intersection of the two holes.
  • As the first arc-shaped arm 32 and the second arc-shaped arm 34 rotate, the insertion member 42 slides within the first elongated through-hole 32a and the second elongated through-hole 34a.
  • The first motor is provided to rotate the first arc-shaped arm 32, and the second motor is provided to rotate the second arc-shaped arm 34.
  • The first motor and the second motor are arranged on the pedestal 30, so that when the pedestal 30 rotates, the first and second motors rotate together with it.
  • The third motor is provided to rotate the pedestal 30 and is disposed in the housing 36. The first motor, the second motor, and the third motor are driven by power supplied from a power supply device (not shown).
  • As the first motor rotates the first arc-shaped arm 32, the second motor rotates the second arc-shaped arm 34, and the third motor rotates the pedestal 30, the actuator device 12 can change the orientation and posture of the housing 20 attached to the insertion member 42.
  • FIGS. 4A and 4B show an example in which the housing 20 is tilted in the left-right direction.
  • FIGS. 5A and 5B show an example in which the housing 20 is tilted in the front-rear direction.
  • the drive mechanism 50 of the robot 10 can cause the housing 20 to take an arbitrary posture.
  • the attitude of the housing 20 is controlled by adjusting the driving amounts of the first motor and the second motor, and the orientation of the housing 20 is controlled by adjusting the driving amount of the third motor.
  • FIG. 6 shows functional blocks of the robot 10.
  • the robot 10 includes an input system 22 that receives and processes input from the outside, and an output system 24 that processes output to the outside.
  • the input system 22 includes a reception unit 60, a sensor information acquisition unit 62, a motion detection unit 64, a line-of-sight direction determination unit 66, and an actuator control unit 68.
  • the output system 24 includes an image processing unit 80 and a transmission unit 90.
  • Each element described as a functional block that performs various processes can be configured, in hardware terms, by circuit blocks, memories, and other LSIs, and is realized, in software terms, by programs loaded into memory. Therefore, it is understood by those skilled in the art that these functional blocks can be realized in various forms by hardware only, by software only, or by a combination thereof, and they are not limited to any one of these.
  • the HMD 100 transmits the sensor information detected by the attitude sensor 124 to the robot 10, and the receiving unit 60 receives the sensor information.
  • the sensor information acquisition unit 62 acquires the posture information detected by the posture sensor 124 of the HMD 100.
  • the motion detection unit 64 detects the posture of the HMD 100 attached to the user A's head.
  • the line-of-sight direction determination unit 66 determines the line-of-sight direction of the camera 14 of the housing 20 according to the attitude of the HMD 100 detected by the motion detection unit 64.
  • the motion detection unit 64 performs a head tracking process for detecting the posture of the head of the user wearing the HMD 100.
  • the head tracking process is performed in order to link the visual field displayed on the display panel 102 of the HMD 100 to the posture of the user's head.
  • In the head tracking process of the embodiment, the rotation angle of the HMD 100 with respect to the horizontal reference direction and the tilt angle with respect to the horizontal plane are detected.
  • The horizontal reference direction may be set, for example, as the direction the HMD 100 faces when its power is turned on.
  • the line-of-sight direction determination unit 66 determines the line-of-sight direction according to the attitude of the HMD 100 detected by the motion detection unit 64.
  • This line-of-sight direction is the line-of-sight direction of user A and, by extension, the line-of-sight direction (optical axis direction) of the camera 14 of the robot 10, which acts as the user's alter ego.
  • FIG. 3 shows a state in which the first arc-shaped arm 32 and the second arc-shaped arm 34 stand at 90 degrees with respect to the pedestal 30; this state defines the horizontal posture, and the direction the front surface of the housing 20 faces when the power of the robot 10 is turned on may be set as the horizontal reference direction.
  • the robot 10 may have a posture sensor as in the HMD 100 so that the horizontal direction can be set autonomously.
  • The line-of-sight direction determination unit 66 may adopt the rotation angle and tilt angle detected by the motion detection unit 64 directly as the line-of-sight direction (optical axis direction) of the camera 14.
  • For example, the line-of-sight direction determination unit 66 expresses the line-of-sight direction of the HMD 100 as a three-dimensional vector (x, y, z).
  • The line-of-sight direction of the camera 14 may then be set to the same (x, y, z), or to (x′, y′, z′) with some correction applied, as sketched below.
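As a sketch of this mapping, under the assumption that the detected posture is reduced to yaw and pitch angles (the text leaves the exact representation and the correction open), the gaze vector and an optional correction could look like:

```python
import math

def gaze_vector(yaw, pitch):
    """Convert the detected rotation angle (yaw) and tilt angle (pitch),
    both in radians, into a unit line-of-sight vector (x, y, z)."""
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            math.cos(pitch) * math.cos(yaw))

def corrected_gaze(v, yaw_offset=0.0):
    """Apply an illustrative correction: a fixed yaw offset between the
    HMD's horizontal reference direction and the robot's, yielding the
    corrected vector (x', y', z')."""
    x, y, z = v
    c, s = math.cos(yaw_offset), math.sin(yaw_offset)
    return (c * x + s * z, y, -s * x + c * z)
```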
  • the actuator control unit 68 controls the orientation of the camera 14 so that the line-of-sight direction determined by the line-of-sight direction determination unit 66 is obtained. Specifically, the actuator control unit 68 adjusts the power supplied to the first motor 52, the second motor 54, and the third motor 56 so that the movement of the housing 20 follows the movement of the HMD 100.
  • The motor drive control by the actuator control unit 68 is performed in real time, so the orientation of the housing 20 moves to follow the direction of user A's line of sight.
  • The housing 20 is driven about the common rotation center of the first arc-shaped arm 32 and the second arc-shaped arm 34, and this movement mimics the movement of a human neck.
  • the actuator device 12 reproduces the movement of the neck of the user A with a simple structure in which two semicircular arms are crossed.
  • the right camera 14a and the left camera 14b are directed in the directions controlled by the actuator device 12 and shoot the respective angles of view.
  • The right camera 14a and the left camera 14b may be spaced apart by, for example, the average interocular distance of an adult.
  • The right-eye image data captured by the right camera 14a and the left-eye image data captured by the left camera 14b are transmitted from the transmission unit 90 to the HMD 100 and displayed on the right half and the left half of the display panel 102, respectively. These images form parallax images as seen from the right eye and the left eye, and stereoscopic display is achieved by presenting them in the two divided areas of the display panel 102.
  • the image processing unit 80 may generate image data in which optical distortion caused by the lens is corrected in advance and supply the image data to the HMD 100.
  • the right camera 14a and the left camera 14b perform shooting at a predetermined cycle (for example, 1/60 seconds), and the transmission unit 90 transmits image data to the HMD 100 without delay.
  • the camera posture information acquisition unit 70 acquires posture information including the posture and orientation of the camera 14 and supplies the posture information to the transmission unit 90.
  • the transmission unit 90 transmits the posture information of the camera 14 to the HMD 100.
  • The posture information of the camera 14 may be obtained, for example, from posture information detected by a motion sensor, or from the position information of the motors mounted on the actuator device 12 that change the posture and orientation of the housing 20 of the robot 10.
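One way to realize the second option is to read back the motor positions and compose them into an orientation. The sketch below assumes, purely for illustration, that the pedestal motor contributes yaw while the two arc arms contribute pitch and roll; the actual kinematics of the mechanism are not spelled out in the text.

```python
# A sketch assuming SciPy is available; the axis assignments are guesses.
from scipy.spatial.transform import Rotation as R

def camera_posture_from_motors(arm1_angle, arm2_angle, pedestal_angle):
    """Compose motor encoder angles (radians) into a camera posture,
    returned as a quaternion (x, y, z, w)."""
    rot = (R.from_euler("y", pedestal_angle)   # pedestal: yaw
           * R.from_euler("x", arm1_angle)     # first arc arm: pitch
           * R.from_euler("z", arm2_angle))    # second arc arm: roll
    return rot.as_quat()
```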
  • FIG. 7 is a functional configuration diagram of the reprojection unit 130 of FIG. 2. This figure is a block diagram focusing on functions, and these functional blocks can be realized in various forms by hardware only, by software only, or by a combination thereof.
  • the reprojection unit 130 includes an HMD posture information acquisition unit 132, a camera posture information acquisition unit 134, a camera image acquisition unit 135, a difference calculation unit 136, a warning unit 137, and an image correction unit 138.
  • the HMD posture information acquisition unit 132 acquires posture information (referred to as “HMD posture information”) including the posture and orientation of the HMD 100 detected by the posture sensor 124 and supplies the posture information to the difference calculation unit 136.
  • The camera posture information acquisition unit 134 acquires the posture information received by the communication control unit 126 from the robot 10 (referred to as "camera posture information"), which includes the posture and orientation of the camera 14 at the time of shooting, and supplies it to the difference calculation unit 136.
  • the camera image acquisition unit 135 acquires the captured images (referred to as “camera images”) of the left and right cameras 14 received by the communication control unit 126 from the robot 10 and supplies them to the image correction unit 138.
  • the camera posture information may be embedded as metadata in the camera image.
  • As a method for transmitting camera posture information as metadata embedded in the camera image, the HDMI (registered trademark) 2.1 standard may be used.
  • In HDMI 2.1, inserting dynamic metadata applied to each frame into the VBlank signal of each frame's data and transmitting it there is under study.
  • the camera posture information at the time of imaging by the camera 14 of the robot 10 can be transmitted to the HMD 100 in synchronization with the frame data of the camera image.
  • Here, the dynamic metadata is inserted into the HDMI 2.1-standard VBlank signal, but the dynamic metadata may instead be inserted into any synchronization signal synchronized with each frame and transmitted.
  • the camera posture information acquisition unit 134 extracts the camera posture information from the metadata inserted in the synchronization signal of each frame of the camera image received by the communication control unit 126 from the robot 10.
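The text only requires that the posture metadata arrive synchronized with each frame; the field layout is not specified. A minimal sketch of such per-frame packing and extraction (the format below is an assumption for illustration, not the HDMI 2.1 wire format) might be:

```python
import struct

# frame id (uint32), timestamp (double), quaternion x, y, z, w (floats)
POSE_FMT = "<Id4f"

def pack_frame_metadata(frame_id, timestamp, quat):
    """Robot side: pack the camera posture at shooting time into a
    per-frame blob carried in a per-frame synchronization slot."""
    return struct.pack(POSE_FMT, frame_id, timestamp, *quat)

def unpack_frame_metadata(blob):
    """HMD side: the inverse, as used by the camera posture
    information acquisition unit 134."""
    frame_id, timestamp, *quat = struct.unpack(POSE_FMT, blob)
    return frame_id, timestamp, tuple(quat)
```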
  • the difference calculation unit 136 calculates the difference between the HMD posture information and the camera posture information and supplies the difference to the image correction unit 138.
  • the image correction unit 138 performs reprojection processing by correcting the camera image so as to match the latest posture of the HMD 100 based on the calculated posture difference.
  • the reprojected camera image is supplied to the control unit 120 and displayed on the display panel 102.
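A minimal sketch of this correction, assuming quaternion posture representations (world-from-camera convention) and a pinhole camera model with intrinsic matrix K: for a pure rotation difference, the image can be re-rendered for the new viewpoint with the homography H = K R_diff K^-1. The text does not prescribe a specific warping method; OpenCV is used here for illustration.

```python
import cv2
import numpy as np
from scipy.spatial.transform import Rotation as R

def reproject(camera_image, q_camera, q_hmd, K):
    """Warp the camera image so it matches the latest HMD posture.
    q_camera, q_hmd are (x, y, z, w) quaternions giving world-from-
    camera orientations; K is the 3x3 intrinsic matrix."""
    # Difference between the HMD posture and the camera posture at
    # the time of shooting (the quantity computed by the difference
    # calculation unit 136).
    r_diff = R.from_quat(q_hmd).inv() * R.from_quat(q_camera)
    # Rotation-only view change applied as a homography on pixels.
    H = K @ r_diff.as_matrix() @ np.linalg.inv(K)
    h, w = camera_image.shape[:2]
    return cv2.warpPerspective(camera_image, H, (w, h))
```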
  • When the posture of the HMD 100 goes outside the movable range of the camera 14 of the robot 10, the difference calculation unit 136 interrupts, or continues, its instruction to the image correction unit 138 to perform the reprojection process, and notifies the warning unit 137 accordingly.
  • the warning unit 137 displays a message on the display panel 102 that the movable range is exceeded or that the movable limit is approached. At that time, the direction in which the user wearing the HMD 100 should return the head may be displayed together with the warning message.
  • The difference calculation unit 136 may determine whether the posture of the HMD 100 is within the movable range of the robot 10. For example, when the camera 14 of the robot 10 cannot rotate 360 degrees and can only rotate within a certain range, that range is set in the HMD 100 in advance as the movable range, and the HMD posture information is compared with the movable range of the camera 14. Alternatively, without setting the movable range in advance, it may be determined that the movable range has been exceeded when the posture of the HMD 100 and the posture of the camera 14 deviate greatly from each other.
  • Likewise, when the posture change of the HMD 100 exceeds the speed at which the camera 14 can follow, the difference calculation unit 136 interrupts, or continues, its instruction to the image correction unit 138 to perform the reprojection process, and notifies the warning unit 137.
  • the warning unit 137 displays on the display panel 102 a message that prompts the user wearing the HMD 100 to move his / her head slowly.
  • The difference calculation unit 136 determines that the followable speed of the camera 14 has been exceeded when the difference between the HMD posture information and the camera posture information exceeds a threshold, or when the rate of change of the HMD posture information exceeds a threshold.
  • When the user moves the head quickly, the difference between the HMD posture information and the camera posture information grows, so this can also serve as the basis for determining that the followable speed of the camera 14 has been exceeded.
  • By interrupting the reprojection process in such a case and having the warning unit 137 display a warning on the display panel 102, the head movement of the user wearing the HMD 100 can be appropriately restrained. However, even in such a case, the warning may be displayed while the reprojection process continues without interruption.
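A sketch of these two checks, with placeholder thresholds (the text sets no specific values, and the single yaw angle used here is a simplification):

```python
def check_limits(hmd_yaw, cam_yaw, hmd_rate,
                 movable_range=(-60.0, 60.0),
                 max_diff_deg=20.0, max_rate_deg_s=90.0):
    """Return warning messages for the two conditions above.
    Angles in degrees; hmd_rate is the HMD's angular speed in deg/s.
    All thresholds are illustrative placeholders."""
    warnings = []
    lo, hi = movable_range
    if not lo <= hmd_yaw <= hi:
        warnings.append("Outside the robot's movable range; "
                        "please turn your head back.")
    if abs(hmd_yaw - cam_yaw) > max_diff_deg or hmd_rate > max_rate_deg_s:
        warnings.append("Moving faster than the camera can follow; "
                        "please move your head slowly.")
    return warnings
```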
  • FIG. 8 is a diagram for explaining a conventional reprojection process.
  • the rendering apparatus 300 is connected to the HMD 100 via a network instead of the robot 10.
  • Since there is a delay from rendering to display, a deviation arises between the orientation of the user's head assumed at rendering time and the orientation of the user's head when the image is actually displayed on the HMD 100, and the user may feel a kind of motion sickness.
  • When connected via a network, fluctuations in network delay introduce further deviation.
  • Therefore, reprojection is performed: a correction that compensates for the deviation between the HMD 100 posture used at rendering time and the latest HMD 100 posture is applied to the rendered image, and the reprojected image is displayed on the HMD 100 so that the user does not perceive the deviation.
  • the HMD 100 performs motion prediction based on the current posture information, and transmits the predicted HMD posture information 202 to the rendering device 300.
  • the rendering apparatus 300 renders an image to be displayed on the HMD 100 using the predicted HMD posture information 202.
  • the rendering apparatus 300 transmits the rendered image 310 to the HMD 100 together with the posture information 204 used for rendering.
  • the rendering apparatus 300 may perform motion prediction of the HMD 100 and generate the predicted HMD posture information 202.
  • The difference calculation unit 302 calculates the difference between the latest HMD posture information 200 and the posture information 204 used for rendering, and the reprojection unit 304 performs the reprojection process on the rendered image 310 based on the calculated posture difference and displays the result on the HMD 100.
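The motion-prediction step used in this pipeline can be sketched as constant-angular-velocity extrapolation; the text leaves the prediction model open, so the form below is an assumption.

```python
from scipy.spatial.transform import Rotation as R

def predict_pose(q_now, angular_velocity, latency_s):
    """Extrapolate an HMD pose by latency_s seconds, assuming a
    constant body-frame angular velocity in rad/s.
    q_now is a quaternion (x, y, z, w)."""
    delta = R.from_rotvec([w * latency_s for w in angular_velocity])
    # Right-multiplication applies the increment in the body frame.
    return (R.from_quat(q_now) * delta).as_quat()
```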
  • FIG. 9 is a diagram for explaining the reprojection processing performed by the reprojection unit 130 of FIG. 7.
  • In the present embodiment as well, motion is predicted based on the current posture information of the HMD 100, and the predicted HMD posture information 202 is transmitted to the robot 10.
  • the robot 10 controls the posture and orientation of the camera 14 using the predicted HMD posture information 202 and takes an image with the camera 14.
  • the robot 10 transmits the camera image 210 to the HMD 100 together with the camera posture information 205 at the time of shooting.
  • In general, the camera posture information 205 at the time of shooting differs from the predicted HMD posture information 202. This is because motor errors, operation delays, and the like occur when the motors control the posture of the camera 14 in the robot 10, so the posture of the camera 14 cannot be made to match the predicted posture of the HMD 100 exactly.
  • the difference calculation unit 136 calculates the difference between the latest HMD posture information 200 and the camera posture information 205 at the time of shooting.
  • the image correction unit 138 performs reprojection processing on the camera image 210 so as to match the latest attitude of the HMD 100 based on the calculated attitude difference, and displays the image on the HMD 100.
  • As the latest HMD posture information 200, instead of the current posture of the HMD 100, a predicted posture that takes into account the time until the reprojected image displayed on the HMD 100 actually reaches the user's eyes may be used.
  • the reprojection unit 130 according to the present embodiment is different from the conventional reprojection processing in that reprojection is performed using the camera posture information 205 at the time of shooting instead of the predicted HMD posture information 202.
  • The reprojection process using the camera posture information 205 at the time of shooting eliminates the deviation caused by the posture of the camera 14 not matching the posture of the HMD 100 due to motor errors or operation delays, and can reduce the discomfort felt by the user wearing the HMD 100.
  • When the motors do not operate smoothly, the moving image shot by the camera 14 does not move smoothly either; the reprojection process therefore also has the effect of smoothing the motion of the moving image shot by the camera 14.
  • When the predicted HMD posture information 202 is transmitted to the robot 10, packet loss may occur depending on the congestion state of the network, and the predicted HMD posture information 202 may be lost without reaching the robot 10. Even in such a case, since the posture information transmitted to the HMD 100 together with the camera image 210 is the camera posture information 205 at the time of shooting, the reprojection process can still be performed reliably based on the camera posture information 205 even if the predicted HMD posture information 202 was lost. This is another advantage of this configuration.
  • Alternatively, the system may be configured so that the camera posture information 205 at the time of shooting is not transmitted together with the camera image 210, and the posture of the camera 14 at the time of shooting is instead estimated from the camera image 210 by self-position estimation using a technique such as SLAM (Simultaneous Localization and Mapping).
  • FIG. 10 is a flowchart for explaining the procedure of the reprojection process performed by the reprojection unit 130 shown in FIG. 7.
  • the camera image acquisition unit 135 receives a camera image from the robot 10, and the camera posture information acquisition unit 134 acquires camera posture information added as metadata to the camera image (S10).
  • the HMD posture information acquisition unit 132 acquires the latest HMD posture information detected by the posture sensor 124 (S12).
  • the difference calculation unit 136 calculates the difference between the latest HMD posture information and the camera posture information (S14).
  • the difference calculation unit 136 determines whether or not the latest HMD posture information is within the movable range of the camera 14 of the robot 10 (S16), and if within the movable range (Y of S16), the process proceeds to step S20.
  • Otherwise (N of S16), the warning unit 137 displays on the display panel 102 a warning message guiding the user to keep the movement of the HMD 100 within the movable range of the robot 10 (S18).
  • the difference calculation unit 136 determines whether the posture change of the HMD 100 is within the followable speed of the camera 14 of the robot 10 (S20), and if within the followable speed (Y of S20), the process proceeds to step S24.
  • Otherwise (N of S20), the warning unit 137 displays on the display panel 102 a warning message guiding the user to keep the posture change of the HMD 100 within the followable speed of the robot 10 (S22).
  • the image correcting unit 138 executes the reprojection process by correcting the camera image based on the difference between the latest HMD posture information and the camera posture information (S24).
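Putting steps S10 to S24 together, the per-frame procedure on the HMD side might be sketched as follows. The helpers `unpack_frame_metadata`, `check_limits`, and `reproject` are the hypothetical ones sketched earlier; `robot_link`, `posture_sensor`, `display`, and `yaw_of` are likewise illustrative stand-ins, not names from the disclosure.

```python
def reprojection_loop(robot_link, posture_sensor, display, K, yaw_of):
    """Per-frame reprojection procedure (S10-S24), as a sketch."""
    while True:
        # S10: receive a camera image and its per-frame posture metadata.
        blob, camera_image = robot_link.receive_frame()
        _, _, q_camera = unpack_frame_metadata(blob)
        # S12: sample the latest HMD posture.
        q_hmd = posture_sensor.latest_quaternion()
        # S14: the posture difference is computed inside reproject().
        # S16/S18 and S20/S22: movable-range and follow-speed checks.
        for msg in check_limits(yaw_of(q_hmd), yaw_of(q_camera),
                                posture_sensor.angular_speed()):
            display.show_warning(msg)
        # S24: correct the image to match the latest HMD posture.
        display.show(reproject(camera_image, q_camera, q_hmd, K))
```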
  • the reprojection unit 130 is provided in the HMD 100, but the reprojection unit 130 may be provided in a client device to which the HMD 100 is connected.
  • The embodiment has described the case where the HMD 100 and the robot 10 are connected via a network and the robot 10 is remotely operated, but the same reprojection process can also be applied when the HMD 100 and the robot 10 are connected directly without going through a network. Because of motor delays and errors, a deviation between the posture of the camera 14 and the posture of the HMD 100 arises even when there is no delay fluctuation due to the network, so the reprojection process that corrects the camera image according to the posture of the HMD 100 is effective there as well. Furthermore, the same reprojection process can be applied not only to the robot 10 but to any control device equipped with the camera 14.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Studio Devices (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

A camera image acquisition unit 135 acquires an image captured by a camera in which it is possible to change orientation in coordination with changes in the orientation of a head-mounted display. A camera orientation information acquisition unit 134 acquires orientation information of a camera during photographing. A difference computation unit 136 calculates the difference between the orientation information of the head-mounted display and the orientation information of the camera during photographing. An image correction unit 138 corrects a photographic image on the basis of the calculated difference so as to match the orientation of the head-mounted display.

Description

Image generation device, image generation system, and image generation method
The present invention relates to an apparatus, a system, and a method for generating an image.
Head-mounted displays (HMDs) are used in various fields. By providing an HMD with a head tracking function and updating the display screen in conjunction with the posture of the user's head, the sense of immersion in the video world can be enhanced.
JP-A-2015-95045
A system is used in which a user wearing an HMD remotely operates a robot: a command is transmitted from the HMD to a control device such as a robot located at a remote place to operate the robot, and an image captured by a camera mounted on the robot is received from the robot and displayed on the display panel of the HMD.
In such a remote operation system, the posture of the robot's camera is controlled in conjunction with the posture of the user wearing the HMD, and the user can view the image captured by the robot's camera in real time, so remote operation can be performed with a sense of presence, as if the user were at the remote location. However, because of delays and errors in the robot's motor control, as well as fluctuating network delays, the camera posture does not always match the HMD posture, and the user ends up viewing a camera image that deviates from the HMD posture, which can cause a feeling like motion sickness.
The present invention has been made in view of these problems, and an object of the present invention is to provide an image generation technique capable of adapting the camera image displayed on a head-mounted display to the posture of the head-mounted display.
To solve the above problem, an image generation apparatus according to one aspect of the present invention includes: a camera image acquisition unit that acquires an image captured by a camera whose posture can change in conjunction with posture changes of a head-mounted display; a camera posture information acquisition unit that acquires the camera's posture information at the time of shooting; a difference calculation unit that calculates the difference between the posture information of the head-mounted display and the camera's posture information at the time of shooting; and an image correction unit that corrects the captured image, based on the calculated difference, so as to match the posture of the head-mounted display.
Another aspect of the present invention is an image generation method. This method includes: a camera image acquisition step of acquiring an image captured by a camera whose posture can change in conjunction with posture changes of the head-mounted display; a camera posture information acquisition step of acquiring the camera's posture information at the time of shooting; a difference calculation step of calculating the difference between the posture information of the head-mounted display and the camera's posture information at the time of shooting; and an image correction step of correcting the captured image, based on the calculated difference, so as to match the posture of the head-mounted display.
Any combination of the above components, and any conversion of the expression of the present invention between a method, an apparatus, a system, a computer program, a recording medium on which the computer program is readably recorded, a data structure, and the like, are also effective as aspects of the present invention.
According to the present invention, the camera image displayed on a head-mounted display can be adapted to the posture of the head-mounted display.
Brief description of the drawings: FIG. 1 shows a configuration example of the information processing system according to the present embodiment. FIG. 2 shows the functional blocks of the HMD of FIG. 1. FIG. 3 shows the external configuration of the robot of FIG. 1. FIGS. 4 and 5 show examples of the posture of the housing of the robot of FIG. 3. FIG. 6 shows the functional blocks of the robot of FIG. 1. FIG. 7 is a functional configuration diagram of the reprojection unit of FIG. 2. FIG. 8 illustrates conventional reprojection processing. FIG. 9 illustrates reprojection processing by the reprojection unit of FIG. 7. FIG. 10 is a flowchart illustrating the procedure of reprojection processing by the reprojection unit of FIG. 7.
FIG. 1 shows a configuration example of an information processing system 1 according to the present embodiment. The information processing system 1 includes a robot 10 and a head-mounted display (HMD) 100 that a user A wears on the head. The HMD 100 is connected to the network 4 via an access point (AP) 2. The AP 2 has the functions of a wireless access point and a router; the HMD 100 connects to the AP 2 by a known wireless communication protocol, but may instead be connected by a cable.
The robot 10 includes an actuator device 12 and a housing 20 driven by the actuator device 12 so that its posture can be changed. The housing 20 is equipped with a right camera 14a and a left camera 14b. Hereinafter, the right camera 14a and the left camera 14b are referred to as "camera 14" when not specifically distinguished. In the embodiment, the camera 14 is provided in the housing 20 driven by the actuator device 12. The robot 10 is connected to the network 4 via an access point (AP) 3. The robot 10 connects to the AP 3 by a known wireless communication protocol, but may instead be connected by a cable.
In the information processing system 1, the HMD 100 and the robot 10 are communicably connected via the network 4. When the HMD 100 and the robot 10 are near each other, the two may instead be connected directly, wirelessly or by wire, without going through an AP. In the information processing system 1, the robot 10 operates as a so-called alter ego of user A. The movement of the HMD 100 worn by user A is transmitted to the robot 10, and the actuator device 12 moves the housing 20 in conjunction with the movement of the HMD 100. For example, when user A nods the head back and forth, the actuator device 12 swings the housing 20 back and forth; when user A shakes the head left and right, the actuator device 12 swings the housing 20 left and right. This allows people around the robot 10 to communicate with user A with the sense that user A is actually there.
Instead of the AP 2, an information processing device such as a game machine or a personal computer having a communication function may be provided as a client device, and the HMD 100 may be connected to the client device wirelessly or by wire. Likewise, instead of the AP 3, an information processing device such as a game machine or a personal computer may be provided as a server device, and the robot 10 may be connected to the server device wirelessly or by wire.
The right camera 14a and the left camera 14b are arranged at a predetermined horizontal interval on the front surface of the housing 20. The right camera 14a and the left camera 14b constitute a stereo camera; the right camera 14a captures a right-eye image at a predetermined cycle, and the left camera 14b captures a left-eye image at a predetermined cycle. The right camera 14a and the left camera 14b are configured so that their posture can change in conjunction with the posture change of the HMD 100. The captured right-eye and left-eye images are transmitted to user A's HMD 100 in real time. The HMD 100 displays the received right-eye image on the right-eye display panel and the received left-eye image on the left-eye display panel. This allows user A to view, in real time, the video in the direction the housing 20 of the robot 10 is facing.
In this way, in the information processing system 1, the robot 10 is remotely operated by user A to reproduce the movement of user A's face, and user A can view the surroundings of the robot through the HMD 100.
The HMD 100 is not limited to an immersive (non-transmissive) display device that completely covers both eyes; it may be a transmissive display device. Its shape may be the hat type shown in the figure, or a glasses type. The HMD 100 need not be a dedicated head-mounted display device; it may instead be composed of a terminal device having a display panel, a microphone, and a speaker, together with a casing that fixes the terminal device's display panel at a position in front of the user's eyes. The terminal device may be one with a relatively small display panel, such as a smartphone or a portable game machine.
FIG. 2 shows the functional blocks of the HMD 100. The control unit 120 is a main processor that processes and outputs various signals and data, such as image signals, audio signals, and sensor information, as well as commands. The storage unit 122 temporarily stores data, commands, and the like processed by the control unit 120. The posture sensor 124 detects posture information such as the rotation angle and inclination of the HMD 100 at a predetermined cycle. The posture sensor 124 includes at least a triaxial acceleration sensor and a triaxial gyro sensor.
The communication control unit 126 transmits and receives signals and data to and from the robot 10 by wired or wireless communication via a network adapter or an antenna. The communication control unit 126 receives from the control unit 120 the posture information detected by the posture sensor 124 and transmits it to the robot 10. The communication control unit 126 also receives image data from the robot 10 and supplies it to the control unit 120. When the control unit 120 receives image data from the robot 10, it supplies the image data to the display panel 102 for display.
The reprojection unit 130 performs reprojection processing on the image data received from the robot 10. The detailed configuration of the reprojection unit 130 is described later.
FIG. 3 shows the external configuration of the robot 10. The housing 20 accommodates the camera 14. The camera 14 is provided on the front surface of the housing and operates on power supplied from a power supply device accommodated in the housing 36 via a power line (not shown).
The housing 20 has a protective cover 19. When the robot 10 is not in use, that is, when the power of the robot 10 is turned off, the protective cover 19 is placed in a closed position covering the front of the housing to protect the camera 14.
The housing 20 is supported by the actuator device 12 so that its posture can be changed. The actuator device 12 includes a leg portion 40, a hemispherical housing 36 supported on top of the leg portion 40, and a drive mechanism 50 for driving the housing 20. The drive mechanism 50 includes a first arc-shaped arm 32 having a first elongated through-hole 32a formed along its longitudinal direction, a second arc-shaped arm 34 having a second elongated through-hole 34a formed along its longitudinal direction, and a pedestal 30 that rotatably supports the first arc-shaped arm 32 and the second arc-shaped arm 34 in a state where the two arms intersect each other. The upper side of the pedestal 30 is covered with a cover 38, and motors for rotating the first arc-shaped arm 32 and the second arc-shaped arm 34 are arranged in the space covered by the cover 38. The pedestal 30 is rotatably supported with respect to the housing 36, and a motor for rotating the pedestal 30 is disposed in the housing 36.
The first arc-shaped arm 32 and the second arc-shaped arm 34 are formed in a semicircular shape, and both ends of each are supported by the pedestal 30 so that the two arms share the same center of rotation. The diameter of the semicircular first arc-shaped arm 32 is slightly larger than that of the semicircular second arc-shaped arm 34, and the first arc-shaped arm 32 is arranged on the outer peripheral side of the second arc-shaped arm 34. The first arc-shaped arm 32 and the second arc-shaped arm 34 may be arranged so as to be orthogonal to each other on the pedestal 30. In the embodiment, a line connecting the two ends of the first arc-shaped arm 32 supported by the pedestal 30 and a line connecting the two ends of the second arc-shaped arm 34 supported by the pedestal 30 are orthogonal. The insertion member 42 is inserted through the first elongated through-hole 32a and the second elongated through-hole 34a and is positioned at their intersection. As the first arc-shaped arm 32 and the second arc-shaped arm 34 rotate, the insertion member 42 slides within the first elongated through-hole 32a and the second elongated through-hole 34a.
The first motor is provided to rotate the first arc-shaped arm 32, and the second motor is provided to rotate the second arc-shaped arm 34. The first motor and the second motor are arranged on the pedestal 30, so that when the pedestal 30 rotates, the first and second motors rotate together with it. The third motor is provided to rotate the pedestal 30 and is disposed in the housing 36. The first motor, the second motor, and the third motor are driven by power supplied from a power supply device (not shown).
As the first motor rotates the first arc-shaped arm 32, the second motor rotates the second arc-shaped arm 34, and the third motor rotates the pedestal 30, the actuator device 12 can change the orientation and posture of the housing 20 attached to the insertion member 42.
FIGS. 4 and 5 show examples of the posture of the housing 20 of the robot 10. FIGS. 4(a) and 4(b) show an example in which the housing 20 is tilted in the left-right direction, and FIGS. 5(a) and 5(b) show an example in which the housing 20 is tilted in the front-rear direction. In this way, the drive mechanism 50 of the robot 10 can cause the housing 20 to take an arbitrary posture. The posture of the housing 20 is controlled by adjusting the drive amounts of the first motor and the second motor, and the orientation of the housing 20 is controlled by adjusting the drive amount of the third motor.
 FIG. 6 shows the functional blocks of the robot 10. The robot 10 includes an input system 22 that receives and processes input from the outside, and an output system 24 that processes output to the outside. The input system 22 includes a reception unit 60, a sensor information acquisition unit 62, a motion detection unit 64, a gaze direction determination unit 66, and an actuator control unit 68. The output system 24 includes an image processing unit 80 and a transmission unit 90.
 In FIG. 6, each element depicted as a functional block performing various processes can be implemented in hardware as a circuit block, memory, or other LSI, and in software as a program loaded into memory. Those skilled in the art will therefore understand that these functional blocks can be realized in various forms by hardware alone, software alone, or a combination thereof, and they are not limited to any one of these.
 As described above, the HMD 100 transmits the sensor information detected by the posture sensor 124 to the robot 10, and the reception unit 60 receives that sensor information.
 The sensor information acquisition unit 62 acquires the posture information detected by the posture sensor 124 of the HMD 100. The motion detection unit 64 detects the posture of the HMD 100 worn on the head of user A. The gaze direction determination unit 66 determines the gaze direction of the camera 14 of the housing 20 according to the posture of the HMD 100 detected by the motion detection unit 64.
 The motion detection unit 64 performs head tracking processing to detect the posture of the head of the user wearing the HMD 100. Head tracking is performed so that the field of view displayed on the display panel 102 of the HMD 100 follows the posture of the user's head. In the head tracking processing of the embodiment, the rotation angle of the HMD 100 with respect to a horizontal reference direction and its tilt angle with respect to the horizontal plane are detected. The horizontal reference direction may be set, for example, as the direction the HMD 100 faces when its power is turned on.
 The gaze direction determination unit 66 determines the gaze direction according to the posture of the HMD 100 detected by the motion detection unit 64. This gaze direction is the gaze direction of user A and, by extension, the gaze direction (optical-axis direction) of the camera 14 of the robot 10 acting as the user's surrogate.
 To link the gaze direction (optical-axis direction) of the camera 14 to the gaze direction of user A, the reference posture of the robot 10 must be set in advance. FIG. 3 shows a state in which the first arc-shaped arm 32 and the second arc-shaped arm 34 stand at 90 degrees with respect to the pedestal 30; this state may be set as horizontal, and the direction the front of the housing 20 faces when the robot 10 is powered on may be set as the horizontal reference direction. Alternatively, the robot 10 may have a posture sensor, like the HMD 100, so that it can set the horizontal direction autonomously.
 With the reference postures of the HMD 100 and the robot 10 set, the gaze direction determination unit 66 may adopt the rotation angle and tilt angle detected by the motion detection unit 64 directly as the gaze direction (optical-axis direction) of the camera 14. When the motion detection unit 64 detects the rotation angle and tilt angle of the HMD 100, the gaze direction determination unit 66 determines the gaze direction of the HMD 100 as a three-dimensional vector (x, y, z); it may then set the gaze direction of the camera 14 of the robot 10 to the same (x, y, z), or apply some correction and use (x', y', z').
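 As a minimal sketch of this mapping, assuming the detected rotation angle is a yaw about the vertical axis and the tilt angle is a pitch from the horizontal plane (the function name and axis conventions are illustrative, not prescribed by the embodiment):

```python
import math

def gaze_vector(yaw_deg: float, pitch_deg: float) -> tuple[float, float, float]:
    # yaw: rotation from the horizontal reference direction (about the vertical axis)
    # pitch: tilt from the horizontal plane (positive = looking up)
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    x = math.cos(pitch) * math.cos(yaw)   # forward along the reference direction
    y = math.cos(pitch) * math.sin(yaw)   # lateral component
    z = math.sin(pitch)                   # vertical component
    return (x, y, z)                      # unit line-of-sight vector
```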
 The actuator control unit 68 controls the orientation of the camera 14 so that it matches the gaze direction determined by the gaze direction determination unit 66. Specifically, the actuator control unit 68 adjusts the power supplied to the first motor 52, the second motor 54, and the third motor 56 so that the movement of the housing 20 follows the movement of the HMD 100. This motor drive control is performed in real time, so the housing 20 is turned in the same way as user A's gaze.
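 The tracking itself can be pictured as a simple proportional loop; the gain value and the assignment of axes to the three motors below are assumptions for illustration, not details of the embodiment:

```python
def follow_step(current: dict, target: dict, gain: float = 0.5) -> dict:
    # one real-time control step: drive each motor angle toward its target,
    # proportionally to the remaining error
    return {m: current[m] + gain * (target[m] - current[m]) for m in current}

# hypothetical axis-to-motor mapping: the two arc arms set pitch and roll,
# the pedestal motor sets yaw
target = {"motor1": 10.0, "motor2": -5.0, "motor3": 30.0}
state = {"motor1": 0.0, "motor2": 0.0, "motor3": 0.0}
state = follow_step(state, target)  # repeated every control cycle
```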
 With the actuator device 12 of the embodiment, the housing 20 is driven about the common rotation center of the first arc-shaped arm 32 and the second arc-shaped arm 34, a motion that matches the movement of a human neck. The actuator device 12 thus reproduces the movement of user A's neck with the simple structure of two crossed semicircular arms.
 Next, the output system 24 will be described.
 In the output system 24, the right camera 14a and the left camera 14b are pointed in the direction controlled by the actuator device 12 and each captures its own angle of view. The right camera 14a and the left camera 14b may be spaced apart by, for example, the average interpupillary distance of an adult. The right-eye image data captured by the right camera 14a and the left-eye image data captured by the left camera 14b are transmitted from the transmission unit 90 to the HMD 100 and displayed on the right half and the left half of the display panel 102, respectively. These images form parallax images as seen from the right eye and the left eye, and displaying them in the two halves of the display panel 102 allows the image to be viewed stereoscopically. Since user A views the display panel 102 through optical lenses, the image processing unit 80 may generate image data in which the optical distortion of the lenses has been corrected in advance and supply it to the HMD 100.
 The right camera 14a and the left camera 14b capture images at a predetermined cycle (for example, 1/60 second), and the transmission unit 90 transmits the image data to the HMD 100 without delay. This allows user A to see the surroundings of the robot 10 in real time and, by turning the head, to look in any desired direction.
 The camera posture information acquisition unit 70 acquires posture information including the posture and orientation of the camera 14 and supplies it to the transmission unit 90. The transmission unit 90 transmits the posture information of the camera 14 to the HMD 100. The posture information of the camera 14 may be obtained, for example, from posture information detected by a motion sensor, or it may be derived from the position information of the motors of the actuator device 12 that change the posture and orientation of the housing 20 of the robot 10.
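 As a rough sketch of deriving the camera posture from motor position information, assuming the two arc-arm angles correspond to pitch and roll and the pedestal angle to yaw (this mapping and the ZYX rotation order are illustrative assumptions, not the embodiment's kinematics):

```python
import math

def rot_matrix(yaw: float, pitch: float, roll: float) -> list[list[float]]:
    # compose Rz(yaw) * Ry(pitch) * Rx(roll), angles in radians
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    return [
        [cy*cp, cy*sp*sr - sy*cr, cy*sp*cr + sy*sr],
        [sy*cp, sy*sp*sr + cy*cr, sy*sp*cr - cy*sr],
        [-sp,   cp*sr,            cp*cr],
    ]

def camera_pose_from_motors(arc1_deg: float, arc2_deg: float,
                            pedestal_deg: float) -> list[list[float]]:
    # assumed forward kinematics: each motor encoder reading maps directly
    # to one rotational degree of freedom of the camera
    r = math.radians
    return rot_matrix(r(pedestal_deg), r(arc1_deg), r(arc2_deg))
```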
 FIG. 7 is a functional configuration diagram of the reprojection unit 130 of FIG. 2. The figure is a block diagram focusing on functions, and these functional blocks can be realized in various forms by hardware alone, software alone, or a combination thereof.
 The reprojection unit 130 includes an HMD posture information acquisition unit 132, a camera posture information acquisition unit 134, a camera image acquisition unit 135, a difference calculation unit 136, a warning unit 137, and an image correction unit 138.
 The HMD posture information acquisition unit 132 acquires posture information including the posture and orientation of the HMD 100 detected by the posture sensor 124 (referred to as "HMD posture information") and supplies it to the difference calculation unit 136. The camera posture information acquisition unit 134 acquires posture information including the posture and orientation of the camera 14 at the time of capture (referred to as "camera posture information"), which the communication control unit 126 received from the robot 10, and supplies it to the difference calculation unit 136. The camera image acquisition unit 135 acquires the images captured by the left and right cameras 14 (referred to as "camera images"), which the communication control unit 126 received from the robot 10, and supplies them to the image correction unit 138.
 Here, the camera posture information may be embedded in the camera image as metadata. The HDMI (registered trademark) 2.1 standard may be used as a method of transmitting camera posture information embedded in a camera image as metadata. In the HDMI 2.1 standard, inserting dynamic metadata to be applied to each frame into the VBlank signal of each frame's data is under consideration. By embedding the camera posture information in this dynamic metadata, the camera posture information at the time of capture by the camera 14 of the robot 10 can be transmitted to the HMD 100 in synchronization with the frame data of the camera image. Inserting dynamic metadata into the HDMI 2.1 VBlank signal is described here as one example; the dynamic metadata may be inserted into any synchronization signal that is synchronized with each frame. The camera posture information acquisition unit 134 extracts the camera posture information from the metadata inserted into the synchronization signal of each frame of the camera image that the communication control unit 126 received from the robot 10.
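 The pairing of a frame with its capture-time pose can be sketched as a small per-frame binary payload; the layout below (a frame counter plus an orientation quaternion) is an illustrative assumption, not the HDMI 2.1 dynamic-metadata format:

```python
import struct

POSE_FMT = "<I4f"  # frame id + orientation quaternion (w, x, y, z)

def pack_pose(frame_id: int, quat: tuple[float, float, float, float]) -> bytes:
    # serialize the capture-time pose so it travels with its own frame
    return struct.pack(POSE_FMT, frame_id, *quat)

def unpack_pose(payload: bytes):
    frame_id, w, x, y, z = struct.unpack(POSE_FMT, payload)
    return frame_id, (w, x, y, z)
```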
 The difference calculation unit 136 calculates the difference between the HMD posture information and the camera posture information and supplies it to the image correction unit 138. The image correction unit 138 executes reprojection processing by correcting the camera image, based on the calculated posture difference, so that it matches the latest posture of the HMD 100. The reprojected camera image is supplied to the control unit 120 and displayed on the display panel 102.
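 One way to realize this difference-and-correct step is sketched below: represent both postures as unit quaternions, form the relative rotation, and, for a pure rotation, warp the image with the planar homography K R K^-1. The quaternion representation, and the direction of the relative rotation (which depends on frame conventions), are assumptions for illustration:

```python
import numpy as np

def quat_conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def quat_mul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def quat_to_matrix(q):
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def reprojection_homography(K, q_hmd, q_cam):
    # relative rotation taking the capture-time camera pose to the
    # latest HMD pose (direction assumes matching frame conventions)
    R = quat_to_matrix(quat_mul(q_hmd, quat_conj(q_cam)))
    # for a pure rotation, the image correction is the homography K R K^-1
    return K @ R @ np.linalg.inv(K)
```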
 When it is determined from the HMD posture information that the HMD 100 has been moved beyond the movable range of the robot 10, or that the posture of the HMD 100 has come within a predetermined threshold of the limit of the robot's movable range, the difference calculation unit 136 suspends (or continues) its instruction to the image correction unit 138 to perform reprojection processing, and notifies the warning unit 137. The warning unit 137 displays a message on the display panel 102 indicating that the movable range has been exceeded or its limit is being approached. At that time, the direction in which the user wearing the HMD 100 should turn the head back may be displayed together with the warning message.
 Specifically, the difference calculation unit 136 may determine whether the posture of the HMD 100 is within the movable range of the robot 10. For example, if the camera 14 of the robot 10 cannot rotate a full 360 degrees but only within a certain range, that range is set in the HMD 100 in advance as the movable range, and the HMD posture information is compared against it. Alternatively, without setting a movable range in advance, it may be determined that the movable range has been exceeded when the posture of the HMD 100 and the posture of the camera 14 diverge significantly.
 Also, because the posture of the camera 14 of the robot 10 is controlled by motors, it cannot change its posture as quickly as a user wearing the HMD 100 can. Therefore, when it is determined from the HMD posture information that the HMD 100 has been moved faster than the camera 14 can follow, the difference calculation unit 136 suspends (or continues) its instruction for reprojection processing by the image correction unit 138 and notifies the warning unit 137. The warning unit 137 displays on the display panel 102 a message prompting the user wearing the HMD 100 to move the head more slowly.
 Specifically, the difference calculation unit 136 determines that the camera's followable speed has been exceeded when the difference between the HMD posture information and the camera posture information exceeds a threshold, or when the rate of change of the HMD posture information exceeds a threshold. Since the difference between the HMD posture information and the camera posture information grows when the HMD 100 moves faster than a predetermined threshold, that growing difference may itself be used to determine that the followable speed of the camera 14 has been exceeded.
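 Both checks reduce to threshold comparisons; a sketch follows, where the poses are unit quaternions and the threshold names are illustrative assumptions:

```python
import math

def angular_difference(q_hmd, q_cam) -> float:
    # angle (radians) of the relative rotation between the two poses
    w = abs(sum(a * b for a, b in zip(q_hmd, q_cam)))  # |dot| of unit quaternions
    return 2.0 * math.acos(min(1.0, w))

def check_limits(q_hmd, q_cam, hmd_rate, range_limit, rate_limit):
    warnings = []
    if angular_difference(q_hmd, q_cam) > range_limit:
        warnings.append("approaching or beyond the camera's movable range")
    if hmd_rate > rate_limit:
        warnings.append("moving faster than the camera can follow")
    return warnings
```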
 In this way, when the movement of the HMD 100 approaches or exceeds the limits of the camera's movable range or followable speed, simply compensating with reprojection can mislead the user into believing the head can keep moving beyond those limits. In the present embodiment, the reprojection processing is suspended in such cases and the warning unit 137 displays a warning on the display panel 102, which appropriately restrains the head movement of the user wearing the HMD 100. That said, even in such cases it is also possible to display the warning while continuing the reprojection processing without interruption.
 The operation of the reprojection unit 130 of FIG. 7 will now be described. For comparison, conventional reprojection processing is first explained with reference to FIG. 8, and then the reprojection processing of the present embodiment is explained with reference to FIG. 9.
 FIG. 8 illustrates conventional reprojection processing. Here, instead of the robot 10, a rendering device 300 is connected to the HMD 100 via a network.
 When the HMD 100 has a head tracking function and images are rendered with the viewpoint and gaze direction changing in step with the movement of the user's head, there is a delay between rendering an image and displaying it on the HMD 100. As a result, a discrepancy arises between the orientation of the user's head assumed at rendering time and its orientation at the moment the image is displayed, and the user may experience a feeling like motion sickness. When a network is involved, delay variation in the network introduces further discrepancy. Reprojection is therefore performed by applying to the rendered image a correction that compensates for the deviation between the HMD posture used at rendering time and the latest HMD posture; displaying the reprojected image on the HMD 100 reduces the user's motion sickness.
 The HMD 100 performs motion prediction based on its current posture information and transmits predicted HMD posture information 202 to the rendering device 300. The rendering device 300 uses the predicted HMD posture information 202 to render the image to be displayed on the HMD 100, and transmits the rendered image 310 to the HMD 100 together with the posture information 204 used for rendering.
 Note that here the posture information 204 used for rendering is identical to the predicted HMD posture information 202. The rendering device 300 may itself perform the motion prediction of the HMD 100 and generate the predicted HMD posture information 202.
 The difference calculation unit 302 calculates the difference between the latest HMD posture information 200 and the posture information 204 used for rendering, and the reprojection unit 304 applies reprojection processing to the rendered image 310 based on the calculated posture difference and displays the result on the HMD 100.
 FIG. 9 illustrates the reprojection processing performed by the reprojection unit 130 of FIG. 7.
 Motion prediction is performed based on the current posture information of the HMD 100, and the predicted HMD posture information 202 is transmitted to the robot 10. The robot 10 controls the posture and orientation of the camera 14 using the predicted HMD posture information 202 and captures an image with the camera 14. The robot 10 then transmits the camera image 210 to the HMD 100 together with the camera posture information 205 at the time of capture.
 Note that the camera posture information 205 at the time of capture differs from the predicted HMD posture information 202. When the motors of the robot 10 control the posture of the camera 14, motor errors, operation delays, and the like occur, so the posture of the camera 14 cannot be made to match the predicted posture of the HMD 100 exactly.
 The difference calculation unit 136 calculates the difference between the latest HMD posture information 200 and the capture-time camera posture information 205. The image correction unit 138 applies reprojection processing to the camera image 210, based on the calculated posture difference, so that it matches the latest posture of the HMD 100, and the result is displayed on the HMD 100. Strictly speaking, the latest HMD posture information 200 need not be the posture of the HMD 100 at the present instant; a posture of the HMD 100 predicted to account for the time until the reprojected image displayed on the HMD 100 reaches the user's eyes may be used instead.
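 That display-time prediction can be sketched as a first-order extrapolation under an assumed constant angular velocity over the short display latency; this is one common approximation, not something the embodiment prescribes (quat_mul as in the earlier sketch):

```python
import math

def predict_pose(q, omega, dt):
    # rotate q by the small rotation omega * dt (world-frame angular
    # velocity in rad/s, latency dt in seconds)
    angle = math.sqrt(sum(w * w for w in omega)) * dt
    if angle == 0.0:
        return q
    axis = [w * dt / angle for w in omega]  # unit rotation axis
    half = angle / 2.0
    dq = (math.cos(half), *(math.sin(half) * a for a in axis))
    return quat_mul(dq, q)
```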
 The reprojection unit 130 of the present embodiment differs from conventional reprojection processing in that reprojection is performed using the capture-time camera posture information 205 rather than the predicted HMD posture information 202.
 Reprojection processing using the capture-time camera posture information 205 eliminates the misalignment that arises when motor errors and operation delays keep the posture of the camera 14 from matching the posture of the HMD 100, reducing the discomfort of the user wearing the HMD 100.
 Because the movement of the motor-controlled camera 14 is coarse compared with the movement of a human head, video captured by the camera 14 does not move smoothly. Applying reprojection processing also has the effect of smoothing the motion of the video captured by the camera 14.
 Furthermore, when the predicted HMD posture information 202 is transmitted to the robot 10, packet loss may occur depending on the congestion state of the network, and the predicted HMD posture information 202 may be lost without reaching the robot 10. Even in such a case, the posture information transmitted to the HMD 100 together with the camera image 210 is the capture-time camera posture information 205, so reprojection processing can still be performed reliably even if the predicted HMD posture information 202 is lost.
 Instead of transmitting the capture-time camera posture information 205 together with the camera image 210, the system may be configured to estimate the posture of the camera 14 at capture time by self-localization from the camera image 210, using a technique such as SLAM (Simultaneous Localization and Mapping).
 FIG. 10 is a flowchart illustrating the procedure of the reprojection processing performed by the reprojection unit 130 of FIG. 7.
 The camera image acquisition unit 135 receives a camera image from the robot 10, and the camera posture information acquisition unit 134 acquires the camera posture information attached to the camera image as metadata (S10).
 The HMD posture information acquisition unit 132 acquires the latest HMD posture information detected by the posture sensor 124 (S12).
 The difference calculation unit 136 calculates the difference between the latest HMD posture information and the camera posture information (S14).
 The difference calculation unit 136 determines whether the latest HMD posture information is within the movable range of the camera 14 of the robot 10 (S16); if it is (Y in S16), the procedure proceeds to step S20.
 If the latest HMD posture information is approaching the limit of the movable range of the camera 14 of the robot 10 or exceeds the movable range (N in S16), the warning unit 137 displays on the display panel 102 a warning message guiding the user to keep the movement of the HMD 100 within the movable range of the robot 10 (S18).
 The difference calculation unit 136 determines whether the posture change of the HMD 100 is within the followable speed of the camera 14 of the robot 10 (S20); if it is (Y in S20), the procedure proceeds to step S24.
 If the posture change of the HMD 100 is approaching the limit of the followable speed of the robot 10 or exceeds it (N in S20), the warning unit 137 displays on the display panel 102 a warning message guiding the user to keep the posture change of the HMD 100 within the followable speed of the robot 10 (S22).
 The image correction unit 138 executes the reprojection processing by correcting the camera image based on the difference between the latest HMD posture information and the camera posture information (S24).
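 Tying the flowchart together, one pass from S10 to S24 can be sketched as follows, reusing check_limits and reprojection_homography from the earlier sketches; show_warning and warp_image are hypothetical helpers standing in for the warning unit 137 and the image correction unit 138:

```python
def reprojection_step(frame, q_cam, q_hmd, hmd_rate, K,
                      range_limit, rate_limit):
    # S16/S18 and S20/S22: range and speed checks, each raising a warning
    for message in check_limits(q_hmd, q_cam, hmd_rate,
                                range_limit, rate_limit):
        show_warning(message)  # hypothetical stand-in for warning unit 137
    # S14 and S24: compute the pose difference and warp the camera image
    # toward the latest HMD pose
    H = reprojection_homography(K, q_hmd, q_cam)
    return warp_image(frame, H)  # hypothetical stand-in for image correction
```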
 The present invention has been described above based on an embodiment. The embodiment is illustrative, and those skilled in the art will understand that various modifications are possible in the combinations of its components and processes, and that such modifications also fall within the scope of the present invention. Such modifications are described below.
 In the embodiment above, the reprojection unit 130 is provided in the HMD 100, but the reprojection unit 130 may instead be provided in a client device to which the HMD 100 is connected.
 The embodiment above describes the case where the HMD 100 and the robot 10 are connected via a network and the robot 10 is operated remotely, but the same reprojection processing can also be applied when the HMD 100 and the robot 10 are connected without going through a network. Because of motor delays and errors, a deviation between the posture of the camera 14 and the posture of the HMD 100 arises even without network delay variation, so reprojection processing that corrects the camera image to match the posture of the HMD 100 remains effective. Moreover, the same reprojection processing can be applied not only to the robot 10 but to any control device equipped with the camera 14.
 The embodiment above describes the case where the robot 10 itself does not move and only the camera 14 mounted on it changes its posture, but the same reprojection processing can be applied even when the robot 10 itself changes position. In that case, not only the camera posture from the motion sensor but also the absolute position information of the camera is obtained, for example from the position information of a servo mechanism, and transmitted together with the camera image. When the camera 14 moves, a self-localization technique such as SLAM, which estimates the position and posture of the camera 14 on an environment map, can also be used. Note that servo-mechanism position information and self-localization techniques such as SLAM can also be used to acquire the posture information of the camera 14 when the robot 10 does not change position.
 1 information processing system, 10 robot, 12 actuator device, 14a right camera, 14b left camera, 20 housing, 22 input system, 24 output system, 30 pedestal, 32 first arc-shaped arm, 34 second arc-shaped arm, 36 housing, 38 cover, 40 leg portion, 42 insertion member, 50 drive mechanism, 52 first motor, 54 second motor, 56 third motor, 60 reception unit, 62 sensor information acquisition unit, 64 motion detection unit, 66 gaze direction determination unit, 68 actuator control unit, 70 camera posture information acquisition unit, 80 image processing unit, 90 transmission unit, 100 HMD, 102 display panel, 120 control unit, 122 storage unit, 124 posture sensor, 126 communication control unit, 130 reprojection unit, 132 HMD posture information acquisition unit, 134 camera posture information acquisition unit, 135 camera image acquisition unit, 136 difference calculation unit, 137 warning unit, 138 image correction unit.
 The present invention is applicable to techniques for generating images.

Claims (6)

  1.  An image generation device comprising:
     a camera image acquisition unit that acquires an image captured by a camera whose posture can change in conjunction with a change in posture of a head mounted display;
     a camera posture information acquisition unit that acquires posture information of the camera at the time of capture;
     a difference calculation unit that calculates a difference between posture information of the head mounted display and the posture information of the camera at the time of capture; and
     an image correction unit that corrects the captured image, based on the calculated difference, so as to match the posture of the head mounted display.
  2.  The image generation device according to claim 1, wherein the camera posture information acquisition unit extracts the posture information of the camera at the time of capture from metadata inserted into a synchronization signal of each frame of the captured image.
  3.  The image generation device according to claim 1 or 2, further comprising a warning unit that, when it is determined from the posture information of the head mounted display that the posture of the head mounted display has come within a predetermined threshold of a limit of the camera's movable range or has exceeded the camera's movable range, suspends or continues the correction of the captured image by the image correction unit and displays a warning.
  4.  The image generation device according to claim 1 or 2, further comprising a warning unit that, when it is determined from the posture information of the head mounted display that the change in posture of the head mounted display has exceeded the speed at which the camera can follow, suspends or continues the correction of the captured image by the image correction unit and displays a warning.
  5.  An image generation method comprising:
     a camera image acquisition step of acquiring an image captured by a camera whose posture can change in conjunction with a change in posture of a head mounted display;
     a camera posture information acquisition step of acquiring posture information of the camera at the time of capture;
     a difference calculation step of calculating a difference between posture information of the head mounted display and the posture information of the camera at the time of capture; and
     an image correction step of correcting the captured image, based on the calculated difference, so as to match the posture of the head mounted display.
  6.  A program causing a computer to realize:
     a camera image acquisition function of acquiring an image captured by a camera whose posture can change in conjunction with a change in posture of a head mounted display;
     a camera posture information acquisition function of acquiring posture information of the camera at the time of capture;
     a difference calculation function of calculating a difference between posture information of the head mounted display and the posture information of the camera at the time of capture; and
     an image correction function of correcting the captured image, based on the calculated difference, so as to match the posture of the head mounted display.
PCT/JP2018/010078 2018-03-14 2018-03-14 Image generation device, image generation system, and image generation method WO2019176035A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2018/010078 WO2019176035A1 (en) 2018-03-14 2018-03-14 Image generation device, image generation system, and image generation method
JP2020506041A JP7122372B2 (en) 2018-03-14 2018-03-14 Image generation device, image generation system, and image generation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/010078 WO2019176035A1 (en) 2018-03-14 2018-03-14 Image generation device, image generation system, and image generation method

Publications (1)

Publication Number Publication Date
WO2019176035A1 true WO2019176035A1 (en) 2019-09-19

Family

ID=67907053

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/010078 WO2019176035A1 (en) 2018-03-14 2018-03-14 Image generation device, image generation system, and image generation method

Country Status (2)

Country Link
JP (1) JP7122372B2 (en)
WO (1) WO2019176035A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021180425A (en) * 2020-05-14 2021-11-18 エヌ・ティ・ティ・コミュニケーションズ株式会社 Remote control system, remote work device thereof, video processing device and program
WO2022149497A1 (en) * 2021-01-05 2022-07-14 ソニーグループ株式会社 Information processing device, information processing method, and computer program

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05228855A (en) * 1992-02-19 1993-09-07 Yaskawa Electric Corp Tele-existance visual device
JP2002135641A (en) * 2000-10-27 2002-05-10 Nippon Telegr & Teleph Corp <Ntt> Camera system of freely moving view point and visual line
WO2016152572A1 (en) * 2015-03-20 2016-09-29 日産自動車株式会社 Indirect-view presentation device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05228855A (en) * 1992-02-19 1993-09-07 Yaskawa Electric Corp Tele-existance visual device
JP2002135641A (en) * 2000-10-27 2002-05-10 Nippon Telegr & Teleph Corp <Ntt> Camera system of freely moving view point and visual line
WO2016152572A1 (en) * 2015-03-20 2016-09-29 日産自動車株式会社 Indirect-view presentation device

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021180425A (en) * 2020-05-14 2021-11-18 エヌ・ティ・ティ・コミュニケーションズ株式会社 Remote control system, remote work device thereof, video processing device and program
WO2021230364A1 (en) * 2020-05-14 2021-11-18 エヌ・ティ・ティ・コミュニケーションズ株式会社 Remote control system, remote operation device therefor, video processing device, and program
CN115552891A (en) * 2020-05-14 2022-12-30 Ntt通信公司 Remote control system, remote operation device, image processing device, and program
EP4152310A4 (en) * 2020-05-14 2023-11-08 NTT Communications Corporation Remote control system, remote operation device therefor, video processing device, and program
WO2022149497A1 (en) * 2021-01-05 2022-07-14 ソニーグループ株式会社 Information processing device, information processing method, and computer program
US12001018B2 (en) 2021-01-05 2024-06-04 Sony Group Corporation Device, method and program for improving cooperation between tele-existence and head-mounted display

Also Published As

Publication number Publication date
JPWO2019176035A1 (en) 2021-03-11
JP7122372B2 (en) 2022-08-19

Similar Documents

Publication Publication Date Title
EP3379525B1 (en) Image processing device and image generation method
AU2019282933B2 (en) Smart glasses, method and device for tracking eyeball trajectory, and storage medium
EP3287837B1 (en) Head-mountable display system
JP6378781B2 (en) Head-mounted display device and video display system
JP6540691B2 (en) Head position detection device and head position detection method, image processing device and image processing method, display device, and computer program
WO2016113951A1 (en) Head-mounted display device and video display system
JP2022530012A (en) Head-mounted display with pass-through image processing
US11277603B2 (en) Head-mountable display system
US20190045125A1 (en) Virtual reality video processing
WO2016013272A1 (en) Information processing device, information processing method and image display system
WO2017183319A1 (en) Robot and housing
US20140361987A1 (en) Eye controls
US11647292B2 (en) Image adjustment system, image adjustment device, and image adjustment
CN112272817B (en) Method and apparatus for providing audio content in immersive reality
US20180299948A1 (en) Method for communicating via virtual space and system for executing the method
GB2523554A (en) Head-mountable apparatus and systems
JP2021060627A (en) Information processing apparatus, information processing method, and program
WO2019176035A1 (en) Image generation device, image generation system, and image generation method
US20240036327A1 (en) Head-mounted display and image displaying method
CN105828021A (en) Specialized robot image acquisition control method and system based on augmented reality technology
JP2023099494A (en) Data processing apparatus for virtual reality, data processing method, and computer software
WO2017183292A1 (en) Processing device and image determination method
JP6867566B2 (en) Image display device and image display system
JP6615716B2 (en) Robot and enclosure
JP2019216344A (en) Whole-sky stereoscopic image display device and program of the same, whole-sky stereoscopic image capturing device, and whole-sky stereoscopic video system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18909580

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020506041

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18909580

Country of ref document: EP

Kind code of ref document: A1