WO2020045141A1 - Information processing device, information processing method, and program - Google Patents

Information processing device, information processing method, and program

Info

Publication number
WO2020045141A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
area
user
illuminance
information processing
Prior art date
Application number
PCT/JP2019/032260
Other languages
English (en)
Japanese (ja)
Inventor
京二郎 永野
富士夫 荒井
秀憲 青木
靖子 石原
新太郎 筒井
Original Assignee
ソニー株式会社
Priority date
Filing date
Publication date
Application filed by ソニー株式会社
Publication of WO2020045141A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics

Definitions

  • the present technology relates to an information processing apparatus, an information processing method, and a program for processing related to a virtual object displayed on a transmissive head-mounted display or the like.
  • Patent Document 1 describes a system that shares a virtual reality space between a non-transmissive head-mounted display used by a first user and a non-transmissive head-mounted display used by a second user.
  • a virtual reality space image displayed on a head-mounted display used by the first user is generated based on the line-of-sight information of the second user transmitted via the network.
  • a transmissive head-mounted display is mounted on the head of the user U and superimposes an image such as a virtual object on the field of view of the user U while allowing the user U to visually recognize the outside world.
  • an object of the present technology is to provide an information processing device, an information processing method, and a program capable of reducing a processing load.
  • an information processing device includes a control unit.
  • the control unit acquires, from a sensor, first peripheral information of a first area and second peripheral information of a second area of the real space, which are different from visual field information corresponding to the visual field of the user, determines a degree of attraction for each of the first peripheral information and the second peripheral information, and, based on the degree of attraction, determines the priority of a first process related to a first virtual object arranged in the first area and a second process related to a second virtual object arranged in the second area.
  • the control unit may determine the priority such that the first process is performed with higher priority than the second process when the degree of attraction determined for the second peripheral information is lower than that determined for the first peripheral information.
  • the control unit may determine the priority such that the first process relating to the rendering of the first virtual object has a higher processing load than the second process relating to the rendering of the second virtual object.
  • the sensor may include a first illuminance sensor that detects the illuminance of the first area as the first peripheral information and a second illuminance sensor that detects the illuminance of the second area as the second peripheral information.
  • the control unit may determine the degree of attraction in consideration of the environment of the real space.
  • the environment of the real space may be outdoors under sunlight.
  • the control unit may determine the degree of attraction of the peripheral information acquired from an illuminance sensor that detects the illuminance of a real-space area irradiated with sunlight so that the degree of attraction is lower when the illuminance difference between the illuminance of the area corresponding to the user's field of view and the illuminance of the sunlit area detected by that illuminance sensor is equal to or larger than a threshold than when the difference is smaller than the threshold.
  • the real space environment may be indoors.
  • the control unit may determine the degree of attraction of the peripheral information acquired from the illuminance sensor that detects the illuminance of a real-space area irradiated with sunlight so that the priority of the corresponding process is the lowest.
  • the control unit may determine the degree of attraction of the peripheral information acquired from an illuminance sensor that detects the illuminance of a real-space area illuminated by light other than sunlight so that the degree of attraction is lower when the illuminance difference between the illuminance of the area corresponding to the user's field of view and the illuminance of the area illuminated by that light is less than a threshold than when the difference is equal to or greater than the threshold.
  • the sensor may include a first sound sensor that detects the loudness of sound in the first area as the first peripheral information and a second sound sensor that detects the loudness of sound in the second area as the second peripheral information.
  • the control unit may determine the degree of attraction for the second peripheral information to be lower than the degree of attraction for the first peripheral information when the loudness of the sound in the second area is smaller than the loudness of the sound in the first area.
  • the sensor may include a first odor sensor that detects the odor intensity of the first area as the first peripheral information and a second odor sensor that detects the odor intensity of the second area as the second peripheral information.
  • the control unit may determine the degree of attraction for the second peripheral information to be lower than the degree of attraction for the first peripheral information when the odor intensity of the second area is weaker than that of the first area.
  • the sensor may be a camera that acquires image information around the user as the surrounding information.
  • the image information may include positional relationship information between the user and the first and second areas, and the control unit may determine the degree of attraction using the positional relationship information.
  • the information processing device may be a head-mounted display configured to be mounted on the head of the user and to present the first virtual object and the second virtual object in the field of view of the user while allowing the user to visually recognize the outside world.
  • in an information processing method, first peripheral information of a first area and second peripheral information of a second area of the real space, which are different from visual field information corresponding to the visual field of the user, are acquired from the sensor, the degree of attraction of each of the first peripheral information and the second peripheral information is determined, and, based on the degree of attraction, the priority of a first process related to a first virtual object arranged in the first area and a second process related to a second virtual object arranged in the second area is determined.
  • a program causes the information processing apparatus to execute a process including the steps of acquiring, from the sensor, first peripheral information of a first area and second peripheral information of a second area of the real space, which are different from visual field information corresponding to the visual field of the user, determining the degree of attraction of each piece of peripheral information, and determining, based on the degree of attraction, the priority of a first process related to a first virtual object arranged in the first area and a second process related to a second virtual object arranged in the second area.
  • FIG. 1 is a block diagram of an information processing device according to the first to fourth embodiments of the present technology.
  • FIGS. 2 to 4 are flowcharts illustrating examples of processing regarding a virtual object in the information processing apparatuses according to the first to third embodiments.
  • FIG. 5 is a diagram showing the hardware configuration of the information processing device.
  • FIG. 6 is a schematic diagram for explaining the relationship between the areas around the user U and the priority processing points when the situation around the user U does not induce movement of the line of sight of the user U.
  • FIG. 7 is a schematic diagram, for explaining the first embodiment, showing the relationship between the areas around the user U and the priority processing points when the situation around the user U induces movement of the line of sight of the user U.
  • a further schematic diagram, for explaining the second embodiment, likewise shows the relationship between the areas around the user U and the priority processing points when the situation around the user U induces movement of the line of sight of the user U.
  • the information processing device is a head-mounted display (HMD), which is a display device mounted on the head of the user U and is a type of wearable computer.
  • the shape of the HMD is typically an eyeglass type or a hat type.
  • the HMD of the present embodiment is a transmissive HMD that, for example, reproduces game content and presents a virtual object (display image) corresponding to the content to the user U superimposed on the external world, which is the real space.
  • the transmission type HMD includes an optical transmission type, a video transmission type, and the like.
  • the transmissive HMD has a display arranged in front of the user U when worn on the head of the user U.
  • the display includes an image display element and an optical element.
  • a display image displayed on the image display element is presented to the user U via an optical element such as a holographic optical element or a half mirror disposed in front of the user U.
  • the user U can see the state of the outside world through the optical element in a see-through manner, and a display image such as a virtual object displayed on the image display element is superimposed on the external world, which is the real space, and presented to the user U.
  • the image includes a still image and a moving image.
  • in a video transmission type HMD, the user cannot directly see the external world when the HMD is worn; instead, an image in which the virtual object is superimposed on an external image captured by a camera is displayed on the display, so that the virtual object can be presented while the user U visually recognizes the external world.
  • the HMD according to the present embodiment predicts a region in the real space that is likely to enter the user's field of view based on the peripheral information of the user, and changes the processing content regarding the drawing of the virtual objects arranged in the real space based on the result of the prediction. Specifically, the processing load for a virtual object placed in an area that is unlikely to enter the field of view of the user U is made lower than the processing load for a virtual object placed in an area that is likely to enter the field of view of the user U.
  • the peripheral information includes the peripheral situation information and the space information, but may be only the peripheral situation information.
  • the peripheral situation information is information on the peripheral situation of the user, for example, illuminance information, sound information, odor information, image information, and the like.
  • the spatial information is the orientation, position information, and the like of the user.
  • the real space around the user is divided into a plurality of regions, and the user's degree of attraction is determined for each region based on the peripheral information acquired for each region.
  • the degree of attraction indicates a possibility of entering the field of view of the user U.
  • the area that is highly likely to enter the user's field of view is an area that is likely to catch the eye of the user U, that is, an area where the degree of attraction is high.
  • the processing loads are made different so that the processing for the virtual objects arranged in a region with a high degree of attraction is prioritized over the processing for the virtual objects arranged in a region with a low degree of attraction, which enables efficient processing. The details will be described below.
  • FIG. 1 is a block diagram of a head mounted display (HMD) 1 as an information processing device.
  • the HMD 1 includes a control unit 10, an input unit 20, an output unit 30, a storage unit 46, and a support (not shown).
  • the support can be mounted on the head of the user U.
  • the support holds the displays 31R and 31L, which constitute the output unit 30, in front of the eyes of the user U when worn, and also holds the sensors of the input unit 20, which are described later.
  • the shape of the support is not particularly limited, and may be, for example, a hat shape as a whole.
  • the input unit 20 is a sensor unit group including a plurality of sensor units.
  • the detection result detected by each sensor unit is input to the control unit 10.
  • the detection result detected by each sensor unit includes the surrounding situation information and the space information.
  • the peripheral situation information is information around the user U in the real space where the user U wearing the HMD 1 is located.
  • the surrounding situation information is illuminance information, sound information, odor information, image information, and the like in the real space. These pieces of information can be acquired from an illuminance sensor 22, a sound sensor 23, an odor sensor 24, and a camera 25 described later.
  • the space information includes image information captured by a camera 25 described later mounted on the HMD 1, and acceleration information, angular velocity information, azimuth information, and the like of the HMD 1 detected by a nine-axis sensor 21 described later. The position, orientation, movement, and posture (walking, running, stopping, etc.) of the user U can be detected from this spatial information.
  • the input unit 20 includes a nine-axis sensor 21, an illuminance sensor 22, a sound sensor 23, an odor sensor 24, and a camera 25. These sensors are mounted on the HMD1.
  • the 9-axis sensor 21 includes a 3-axis acceleration sensor, a 3-axis gyro sensor, and a 3-axis compass sensor.
  • the nine-axis sensor 21 can detect the acceleration, angular velocity, and orientation of the HMD 1 in three axes, and can detect the position, orientation, movement, and posture (walking, running, stopping, and the like) of the user U. it can.
  • the detection result detected by the 9-axis sensor 21 is output to the control unit 10 as spatial information.
  • the illuminance sensor 22 has a light receiving element, and converts the light incident on the light receiving element into a current to detect brightness (illuminance).
  • four illuminance sensors 22 are provided. The four illuminance sensors 22 detect illuminance in each area when the real space where the user U is located is divided into four areas.
  • FIG. 6 is a diagram illustrating a positional relationship between each area and the user U when the real space where the user U is located is divided into four areas.
  • FIG. 6 and FIGS. 7 to 11 described below correspond to views of the real space 60 or 70 where the user U moves while wearing the HMD 1 as viewed from above the user U.
  • the real space 60 is divided, with the user U as the center, into four regions: a front area 60F on the front side of the user U, a rear area 60B on the back side, a right area 60R on the right side, and a left area 60L on the left side.
  • the four illuminance sensors 22 include an illuminance sensor 22F that detects illuminance in the front area 60F, an illuminance sensor 22R that detects illuminance in the right area 60R, an illuminance sensor 22L that detects illuminance in the left area 60L, and an illuminance sensor 22B that detects illuminance in the rear area 60B.
  • the detection results (illuminance information) detected by the illuminance sensors 22R, 22B, and 22L are the surrounding situation information of the respective regions and are different from the visual field information corresponding to the field of view of the user U.
  • the detection results detected by the four illuminance sensors 22 are output to the control unit 10 as peripheral situation information.
  • the sound sensor 23 detects the loudness of the sound around the user U.
  • four sound sensors 23 are provided. Like the illuminance sensor 22, the four sound sensors 23 respectively detect the sound volume in each area when the real space where the user U is located is divided into four areas. Using the detection results of the four sound sensors 23, the direction in which the sound source as viewed from the user U exists can be specified.
  • the four sound sensors 23 include a sound sensor 23F that detects the volume of the front area 60F, a sound sensor 23R that detects the volume of the right area 60R, a sound sensor 23L that detects the volume of the left area 60L, and a sound sensor 23B that detects the volume of the rear area 60B.
  • the detection results (sound information) detected by the sound sensors 23R, 23B, and 23L are the surrounding situation information of the respective areas and are different from the visual field information corresponding to the field of view of the user U.
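  • as a rough illustration of how such per-area sound readings might feed into the attraction determination, the sketch below ranks the areas by detected loudness; the function names, the decibel values, and the simple ranking scheme are illustrative assumptions, not code from the patent.

```python
# Hypothetical sketch: locate the loudest direction and rank the four areas,
# mirroring the rule that a quieter area is given a lower degree of attraction.

def sound_source_direction(levels_db: dict) -> str:
    """Return the name of the area with the loudest detected sound, i.e. the
    direction in which a sound source is assumed to exist as seen from the user."""
    return max(levels_db, key=levels_db.get)

def attraction_from_sound(levels_db: dict) -> dict:
    """Assign a higher (assumed) attraction rank to louder areas."""
    ranked = sorted(levels_db, key=levels_db.get)        # quietest first
    return {area: rank + 1 for rank, area in enumerate(ranked)}

levels = {"front": 55.0, "right": 48.0, "left": 62.0, "back": 40.0}  # example dB values
print(sound_source_direction(levels))   # -> "left"
print(attraction_from_sound(levels))    # "back" gets the lowest rank, "left" the highest
```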
  • the odor sensor 24 detects the intensity of the odor around the user U.
  • four odor sensors 24 are provided. Using the detection results of the four odor sensors 24, the direction in which the source of the odor as viewed from the user U exists can be specified.
  • the four odor sensors 24 include an odor sensor 24F that detects the intensity of the odor in the front area 60F, an odor sensor 24R that detects the odor in the right area 60R, an odor sensor 24L that detects the odor in the left area 60L, and an odor sensor 24B that detects the odor in the rear area 60B. The detection results detected by the four odor sensors 24 are output to the control unit 10 as peripheral situation information.
  • the detection results (odor information) detected by the odor sensors 24R, 24B, and 24L are the surrounding situation information of the respective regions and are different from the visual field information corresponding to the field of view of the user U.
  • as the odor sensor 24, for example, an oxide semiconductor sensor or a quartz crystal microbalance (QCM) sensor having a molecularly selective film formed on the oscillator surface can be used.
  • the camera 25 includes a right-eye camera 251 and a left-eye camera 252.
  • the right-eye camera 251 and the left-eye camera 252 capture an image of the front area 60F of the user U corresponding to the visual field area of the user U, and acquire a captured image as image information.
  • the right-eye camera 251 and the left-eye camera 252 are arranged at a predetermined interval in the horizontal direction on the front surface of the head mounted display 1.
  • the right-eye camera 251 captures a right-eye image
  • the left-eye camera 252 captures a left-eye image.
  • the right-eye image and the left-eye image captured by the right-eye camera 251 and the left-eye camera 252 include spatial information such as the position and orientation of the user U wearing the HMD 1.
  • the right-eye image and the left-eye image are output to the control unit 10 as spatial information.
  • the output unit 30 has a display 31R for the right eye and a display 31L for the left eye, and these displays are mounted on the HMD1.
  • the right-eye display 31R and the left-eye display 31L are arranged in front of the right and left eyes of the user U, respectively.
  • the right-eye display 31R (the left-eye display 31L) has a right-eye image display element 311R (a left-eye image display element 311L) and a right-eye optical element 312R (a left-eye optical element 312L).
  • in some cases, the right-eye display 31R and the left-eye display 31L are collectively referred to as the display 31, the right-eye image display element 311R and the left-eye image display element 311L as the image display element 311, and the right-eye optical element 312R and the left-eye optical element 312L as the optical element 312.
  • the image display element 311 includes an organic EL display element, a liquid crystal display element as a light modulation element, and the like.
  • the image display element 311 forms a display image such as a virtual object based on the image signal output from the control unit 10 and emits display light.
  • when the display light enters the eyes of the user U via the optical element 312, the virtual object can be presented to the user U in the viewing area of the user U.
  • the optical element 312 is a holographic optical element, a half mirror, or the like, and is arranged in front of the user U.
  • the optical element 312 is configured to diffract light emitted from the image display element 311 and guide the light to the left and right eyes of the user.
  • the optical element 312 is configured to transmit light from the outside world. Therefore, in the HMD 1, it is possible to present the display image formed by the image display element 311 to the user U by superimposing the display image on the light from the outside.
  • the control unit 10 includes a communication control unit 11, a surrounding situation information management unit 12, a spatial information acquisition unit 13, a spatial information management unit 14, a priority processing point determination unit 15, a drawing processing load determination unit 16, an output image generation unit 17, an output image control unit 18, and a drawing processing load management unit 19.
  • the communication control unit 11 communicates with the various sensors 21 to 25 and the display 31 mounted on the HMD 1 to transmit and receive various information. Specifically, the communication control unit 11 receives the detection results of the various sensors 21 to 25 as the surrounding situation information and the space information, and transmits an image signal or the like to the display 31.
  • the communication control unit 11 communicates with an HMD worn by another user or an external peripheral device, and transmits and receives various information.
  • the communication control unit 11 can acquire image information and the like captured by a camera mounted on the HMD worn by another user.
  • Image information captured by a camera mounted on another user's HMD includes positional relationship information between another user U wearing the HMD and a priority processing point determination target area described later.
  • the peripheral situation information management unit 12 stores, as peripheral situation information, the detection results obtained by the illuminance sensor 22, the sound sensor 23, and the odor sensor 24 via the communication control unit 11 and the image information (detection results) obtained from the HMDs of other users in a peripheral situation database (not shown) in time series, and updates and manages the data so that it is always the latest peripheral situation information.
  • the space information acquisition unit 13 acquires, as space information, the detection results acquired by the 9-axis sensor 21 and the camera 25 acquired via the communication control unit 11.
  • the spatial information management unit 14 stores, as spatial information, the detection results of the 9-axis sensor 21 and the camera 25 and the position and orientation information of the camera 25 obtained based on those results in a spatial information database (not shown), and updates and manages the data so that it is always the latest spatial information.
  • the priority processing point determination unit 15 determines the priority processing points (degree of attraction) of each area based on the surrounding situation information managed by the surrounding situation information management unit 12 and the position and orientation information of the camera 25 managed by the spatial information management unit 14.
  • the priority processing score is obtained by converting into a score the priority of the processing relating to the drawing of the virtual objects (display images) currently arranged in the non-viewing area of the user U, and is the degree of attraction determined for the peripheral information.
  • An area that is likely to be viewed by the user U is assigned a high priority processing score as an area with a high degree of attraction. The higher the number of assigned priority processing points, the higher the priority of processing relating to drawing of a virtual object.
  • the priority processing points are obtained for each of the front area 60F, the rear area 60B, the right area 60R, and the left area 60L obtained by dividing the 360-degree surroundings of the user U wearing the HMD 1 in the real space into four parts, as shown in FIG. 6.
  • the present invention is not limited to this.
  • the ranges of the front region 60F, the rear region 60B, the right region 60R, and the left region 60L can be determined in consideration of a general normal human visual field region.
  • the visual field of a normal person is said to be about 60 degrees on the nose side with one eye and about 90 to 100 degrees on the ear side, and the range that can be seen simultaneously by both eyes is about 120 degrees left and right.
  • for example, in the 360-degree area around the user U, the angle θF defining the range of the front area 60F in the left-right direction along the direction connecting the right eye and the left eye of the user U may be set to 120 degrees, the angles θR and θL defining the ranges of the right area 60R and the left area 60L to 40 degrees each, and the angle θB defining the range of the rear area 60B to 160 degrees.
  • an imaginary line bisecting the angle θF defining the range of the front area 60F is located at the center of the front of the user U.
  • in FIG. 6, the angle θF defining the range of the front area 60F and the angle θB defining the range of the rear area 60B are drawn the same for ease of viewing; the drawn angles do not match the numerical ranges given above.
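  • as a small illustration of the division above, the sketch below classifies a horizontal direction, given relative to the user's facing direction, into one of the four areas using the example widths θF = 120°, θR = θL = 40°, and θB = 160°; the function name and the angle convention (0° straight ahead, positive to the right) are assumptions made for illustration.

```python
def classify_area(relative_angle_deg: float) -> str:
    """Classify a horizontal direction (0 deg = straight ahead, positive = to the
    user's right) into the front/right/left/back area using the example widths:
    front 120 deg, right 40 deg, left 40 deg, back 160 deg."""
    a = (relative_angle_deg + 180.0) % 360.0 - 180.0   # normalize to [-180, 180)
    if -60.0 <= a <= 60.0:            # front area 60F spans +/- 60 degrees
        return "front"
    if 60.0 < a <= 100.0:             # right area 60R spans the next 40 degrees
        return "right"
    if -100.0 <= a < -60.0:           # left area 60L spans 40 degrees on the other side
        return "left"
    return "back"                     # the remaining 160 degrees form the rear area 60B

print(classify_area(30))    # front
print(classify_area(80))    # right
print(classify_area(-90))   # left
print(classify_area(170))   # back
```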
  • FIG. 6 shows an example of the number of priority processing points for each area when the surrounding situation is not a situation that induces the movement of the user's line of sight.
  • in this case, the priority processing score P1 of the front area 60F, which is the visual field area of the user U, is 10 points, the highest among the areas.
  • the priority processing scores P2 and P4 of the right area 60R and the left area 60L are five points each, and the priority processing score P3 of the rear area 60B is three points.
  • that is, the peripheral information (second peripheral information) of the rear area 60B of the real space is less likely to catch the eye of the user U than the peripheral information (first peripheral information) of the right area 60R and the left area 60L, and thus has a lower degree of attraction.
  • when the surrounding situation is a situation that induces movement of the line of sight of the user U, the priority processing score indicating the degree of attraction in each of the areas 60F, 60B, 60R, and 60L differs from the case shown in FIG. 6.
  • the priority processing score is obtained based on the peripheral information in each of the areas 60F, 60B, 60R, and 60L.
  • the peripheral information includes the peripheral situation information and the space information. If the peripheral situation of the user U is a situation that induces movement of the user U's line of sight, the priority processing points change from those shown in FIG. 6.
  • the surrounding situation information includes at least one of illuminance information detected by the four illuminance sensors 22F, 22B, 22R, and 22L mounted on the HMD 1 so as to capture the surroundings of the areas 60F, 60B, 60R, and 60L, sound information detected by the four sound sensors 23F, 23B, 23R, and 23L, odor information detected by the four odor sensors 24F, 24B, 24R, and 24L, and image information captured by the camera 25 mounted on the HMD 1 worn by the user U or by the HMD worn by another user.
  • the drawing processing load determination unit 16 determines the processing load related to the drawing of the virtual object placed in each of the areas 60F, 60B, 60R, and 60L according to the priority processing points determined by the priority processing point determination unit 15.
  • for an area with a low priority processing score, the drawing processing load determination unit 16 determines to perform processing X with a low processing load, such as moving the virtual object once every 100 msec.
  • for an area with a medium priority processing score, the drawing processing load determination unit 16 determines to perform processing Y with a medium processing load, such as moving the virtual object once every 50 msec.
  • for an area with a high priority processing score, the drawing processing load determination unit 16 determines to perform processing Z with a high processing load, such as moving the virtual object once every 16 msec.
  • the processing load is not limited to this.
  • in the example of FIG. 6, the priority processing score of the front area 60F is 10, and therefore the drawing processing load determination unit 16 determines that the drawing processing of the virtual object arranged in the front area 60F is performed as processing Z. Since the priority processing scores of the right area 60R and the left area 60L are 5, the drawing processing load determination unit 16 determines that the drawing processing of the virtual objects arranged in the right area 60R and the left area 60L is performed as processing Y. Since the priority processing score of the rear area 60B is 3, the drawing processing load determination unit 16 determines that the drawing processing of the virtual object arranged in the rear area 60B is performed as processing X.
  • the load of the rendering processing of the virtual object arranged in the front area 60F is the highest.
  • the load of the rendering processing of the virtual objects arranged in the right area 60R and the left area 60L is medium.
  • the load of the rendering processing of the virtual object arranged in the rear area 60B is the lowest.
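  • the mapping from priority processing score to processing content can be sketched as follows; the update intervals follow the example in the text (X: 100 msec, Y: 50 msec, Z: 16 msec), while the score bands are assumptions chosen only to be consistent with the worked examples (a score of 3 or less selects processing X, and the highest scores select processing Z), since no exact thresholds are stated.

```python
# Hypothetical sketch of the drawing processing load determination.
# Update intervals follow the example in the text; score bands are assumed.

def determine_processing(priority_score: int) -> dict:
    if priority_score <= 3:
        return {"name": "X", "update_interval_ms": 100}   # low processing load
    if priority_score <= 9:
        return {"name": "Y", "update_interval_ms": 50}    # medium processing load
    return {"name": "Z", "update_interval_ms": 16}        # high processing load

# FIG. 6 example: front 10 points, right/left 5 points each, back 3 points.
scores = {"front": 10, "right": 5, "left": 5, "back": 3}
for area, score in scores.items():
    proc = determine_processing(score)
    print(f"{area}: processing {proc['name']}, one update every {proc['update_interval_ms']} msec")
```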
  • as a result, the processing load on the information processing device can be reduced compared with the case where the drawing processing is executed with the same processing load for all the virtual objects arranged in the non-viewing area, and efficient processing can be performed. For example, if the HMD 1 is powered by a battery rather than by wire, a reduction in processing load leads to a longer battery life through reduced power consumption.
  • the processing Y on the virtual objects (first virtual objects) arranged in the right area 60R and the left area 60L (first area) corresponds to the first processing.
  • the processing X for the virtual object (second virtual object) arranged in the rear area 60B (second area) corresponds to the second processing.
  • the processing priority is changed by changing the processing content so that the processing load is different according to the priority processing score (attraction level).
  • the output image generation unit 17 changes the processing content of the CPU and the GPU based on the processing content determined by the drawing processing load determination unit 16, and generates a virtual object (display image).
  • if the processing content determined by the drawing processing load determination unit 16 is processing X, the output image generation unit 17 generates the virtual object so that the virtual object moves once every 100 msec. In the case of processing Y, the output image generation unit 17 generates the virtual object so that the virtual object moves once every 50 msec. In the case of processing Z, the output image generation unit 17 generates the virtual object so that the virtual object moves once every 16 msec.
  • the output image control unit 18 outputs the virtual object generated by the output image generation unit 17 as an image signal so that the virtual object can be displayed on the right-eye display 31R and the left-eye display 31L of the HMD 1.
  • the drawing processing load management unit 19 stores the processing content determined by the drawing processing load determination unit 16 in a time series in a drawing processing load database (not shown) in association with the surrounding situation information, the space information, and the number of priority processing points. Update and manage data so that it is always up-to-date.
  • the storage unit 46 stores a program for causing the HMD 1 as an information processing device to execute a series of information processing on the virtual object performed by the control unit 10.
  • FIG. 2 is a flowchart for explaining processing relating to a virtual object in the HMD 1.
  • first, the surrounding situation information is acquired by the communication control unit 11 (S1).
  • the peripheral situation information is stored in the peripheral situation database.
  • the surrounding situation information is a detection value detected by the illuminance sensor 22, a detection value detected by the sound sensor 23, a detection value detected by the odor sensor 24, and the like, and at least one of them is acquired.
  • An information processing method in a case where image information captured by the camera 25 is acquired as the peripheral situation information will be described in a fourth embodiment described later.
  • next, the detection value detected by the 9-axis sensor 21 and the image information (captured image) captured by the camera 25 are acquired by the spatial information acquisition unit 13 via the communication control unit 11, and information on the position and orientation of the camera 25 is obtained based on this information (S2).
  • Spatial information such as a detection value detected by the 9-axis sensor 21, image information captured by the camera 25, and information on the position and orientation of the camera 25 obtained based on these information is stored in a spatial information database.
  • next, the priority processing point determination unit 15 obtains a priority processing score P for each of the front area 60F, the rear area 60B, the right area 60R, and the left area 60L of the user U based on the surrounding situation information and the position and orientation information of the camera 25 (S3).
  • next, the processing content is determined by the drawing processing load determination unit 16 based on the priority processing score obtained in S3, and the output image generation unit 17 executes a process of generating a virtual object (display image) based on the determined processing content (S4).
  • the output image control unit 18 converts the virtual object generated by the output image generation unit 17 into an image signal so that it can be displayed on the display 31, and outputs the image signal to the output unit 30 (S5).
  • the virtual object is displayed on the display 31 based on the image signal output from the control unit 10 and presented to the user U.
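  • the per-cycle flow of S1 to S5 can be summarized by the sketch below; the helper functions are hypothetical stand-ins for the corresponding units of the control unit 10, and the returned values are example placeholders rather than real sensor data.

```python
# Hypothetical outline of one processing cycle (S1-S5).

def acquire_peripheral_info():
    # S1: per-area illuminance / sound / odor readings (example values only)
    return {"front": {"lux": 800}, "right": {"lux": 900},
            "left": {"lux": 750}, "back": {"lux": 700}}

def acquire_spatial_info():
    # S2: position and orientation of the camera 25 from the 9-axis sensor and captured images
    return {"position": (0.0, 0.0, 0.0), "yaw_deg": 0.0}

def determine_priority_scores(peripheral, spatial):
    # S3: placeholder scoring; the actual unit applies the rules of the embodiments
    return {"front": 10, "right": 5, "left": 5, "back": 3}

def generate_and_output(scores):
    # S4 + S5: choose the processing content per area and emit the display image
    for area, score in scores.items():
        processing = "X" if score <= 3 else ("Y" if score <= 9 else "Z")  # assumed bands
        print(f"render virtual objects in {area} area with processing {processing}")

def run_drawing_cycle():
    peripheral = acquire_peripheral_info()
    spatial = acquire_spatial_info()
    scores = determine_priority_scores(peripheral, spatial)
    generate_and_output(scores)

run_drawing_cycle()
```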
  • a series of processes related to the drawing process described above may be performed at regular intervals, or may be performed each time the direction of the user U changes and the direction of the HMD 1 changes.
  • an example in which the series of processes related to the drawing process is executed at regular time intervals will be described with reference to FIG. 3, and an example in which the series of processes is executed when the orientation of the HMD 1 changes will be described with reference to FIG. 4.
  • the communication control unit 11 acquires the surrounding situation information.
  • the acquired surrounding situation information is stored in the surrounding situation database.
  • next, the detection value detected by the 9-axis sensor 21 and the image information (captured image) captured by the camera 25 are acquired by the spatial information acquisition unit 13 via the communication control unit 11, and information on the position and orientation of the camera 25 is obtained based on this information (S13).
  • Spatial information such as a detection value detected by the 9-axis sensor 21, image information captured by the camera 25, and information on the position and orientation of the camera 25 obtained based on these information is stored in a spatial information database.
  • next, the priority processing point determination unit 15 obtains a priority processing score P for each of the front area 60F, the rear area 60B, the right area 60R, and the left area 60L of the user U based on the surrounding situation information and the position and orientation information of the camera 25 (S14).
  • the processing contents are determined by the drawing processing load determination unit 16 based on the priority processing points obtained in S14 (S15).
  • a process of generating a virtual object (display image) by the output image generation unit 17 is performed based on the determined processing content (S16).
  • when the process proceeds to S16 without the processing content being newly determined, the virtual object generation process in S16 is executed based on the processing content determined in the previous cycle.
  • the output image control unit 18 converts the virtual object generated by the output image generation unit 17 into an image signal so that it can be displayed on the display 31, and outputs the image signal to the output unit 30 (S17).
  • the virtual object is displayed on the display 31 based on the image signal output from the control unit 10 and presented to the user U.
  • the rotation angle of the HMD 1 is acquired from the detection value detected by the 9-axis sensor 21 by the spatial information acquisition unit 13 via the communication control unit 11 (S21).
  • next, the rotation amount of the HMD 1 is calculated by the control unit 10 from the rotation angle of the HMD 1 acquired in the previous drawing process and the rotation angle of the HMD 1 acquired in S21, and it is determined whether or not the rotation amount is equal to or larger than a threshold value (S22).
  • the threshold is set in advance.
  • when the rotation amount is equal to or larger than the threshold, the surrounding situation information is acquired by the communication control unit 11, and the surrounding situation information is stored in the surrounding situation database.
  • next, the detection value detected by the 9-axis sensor 21 and the image information (captured image) captured by the camera 25 are acquired by the spatial information acquisition unit 13 via the communication control unit 11, and information on the position and orientation of the camera 25 is obtained based on this information (S24).
  • Spatial information such as a detection value detected by the 9-axis sensor 21, image information captured by the camera 25, and information on the position and orientation of the camera 25 obtained based on these information is stored in a spatial information database.
  • next, the priority processing point determination unit 15 obtains a priority processing score P for each of the front area 60F, the rear area 60B, the right area 60R, and the left area 60L of the user U based on the surrounding situation information and the position and orientation information of the camera 25 (S25).
  • the content of the processing is determined by the drawing processing load determination unit 16 based on the priority processing score obtained in S25 (S26).
  • a process of generating a virtual object (display image) by the output image generation unit 17 is performed based on the determined processing content (S27). If the rotation amount is determined to be less than the threshold value in S22 and the process proceeds to S27, a virtual object generation process is performed in S27 based on the processing content determined in the previous process.
  • the output image control unit 18 converts the virtual object generated by the output image generation unit 17 into an image signal so that it can be displayed on the display 31, and outputs the image signal to the output unit 30 (S28).
  • the virtual object is displayed on the display 31 based on the image signal output from the control unit 10 and presented to the user U.
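  • the orientation-triggered variant hinges on the rotation-amount check in S21 and S22; a minimal sketch of that gate follows, in which the threshold value and the function names are assumptions (the patent only states that a threshold is set in advance).

```python
# Hypothetical sketch of the S21/S22 gate: redetermine the priority scores only
# when the HMD has rotated by at least a threshold amount since the previous cycle.

ROTATION_THRESHOLD_DEG = 15.0   # assumed value

def rotation_amount(previous_angle_deg: float, current_angle_deg: float) -> float:
    """Smallest absolute angular difference between the two headings."""
    diff = (current_angle_deg - previous_angle_deg + 180.0) % 360.0 - 180.0
    return abs(diff)

def should_redetermine(previous_angle_deg: float, current_angle_deg: float) -> bool:
    return rotation_amount(previous_angle_deg, current_angle_deg) >= ROTATION_THRESHOLD_DEG

print(should_redetermine(10.0, 20.0))   # False: reuse the previously determined processing content
print(should_redetermine(10.0, 40.0))   # True: acquire information again and redetermine the scores
```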
  • control unit 10 controls the processing related to the virtual object currently arranged in the non-visual area of the user U based on the surrounding situation information and the space information acquired from the various sensors of the input unit 20.
  • the control unit 10 predicts an area that is highly likely to enter the field of view of the user U using the surrounding situation information and the spatial information, and then performs control such that the rendering process for the virtual objects in the predicted area, that is, the area with a high degree of attraction for the user U, is given higher priority than the rendering process in other areas with a low degree of attraction.
  • FIG. 5 is a diagram for explaining a hardware configuration of the HMD 1.
  • the information processing in the HMD 1 as the information processing apparatus described above is realized by cooperation between software and hardware of the HMD 1 described below.
  • the HMD 1 includes a CPU (Central Processing Unit) 51, a RAM (Random Access Memory) 52, a ROM (Read Only Memory) 53, a GPU (Graphics Processing Unit) 54, a communication device 55, a sensor 56, an output device 57, a storage device 58, and an imaging device 59, which are connected via a bus 61.
  • the CPU 51 controls the overall operation of the HMD 1 according to various programs.
  • the ROM 53 stores programs used by the CPU 51, operation parameters, and the like.
  • the RAM 52 temporarily stores a program used in the execution of the CPU 51, a parameter appropriately changed in the execution, and the like.
  • the GPU 54 performs various processes related to generation of a display image (virtual object).
  • the communication device 55 is a communication interface configured with a communication device or the like for connecting to the communication network 62.
  • the communication device 55 may include a communication device compatible with a wireless LAN (Local Area Network), a communication device compatible with LTE (Long Term Evolution), a wire communication device that performs wired communication, or a Bluetooth (registered trademark) communication device.
  • the sensor 56 detects various data related to the surrounding situation information and the space information.
  • the sensor 56 corresponds to the nine-axis sensor 21, the illuminance sensor 22, the sound sensor 23, and the odor sensor 24 described with reference to FIG.
  • the output device 57 includes a display device such as a liquid crystal display device and an organic EL (Electroluminescence) display device. Further, the output device 57 includes a sound output device such as a speaker or headphones. The display device displays a captured image, a generated image, and the like. On the other hand, the sound output device converts a sound signal into sound and outputs the sound.
  • the output device 57 corresponds to the display 31 described with reference to FIG.
  • the storage device 58 is a device for storing data.
  • the storage device 58 may include a recording medium, a recording device that records data on the recording medium, a reading device that reads data from the recording medium, a deletion device that deletes data recorded on the recording medium, and the like.
  • the storage device 58 stores programs executed by the CPU 51 and the GPU 54 and various data.
  • the storage device 58 corresponds to the storage unit 46 described with reference to FIG.
  • the imaging device 59 includes an imaging optical system such as a photographing lens and a zoom lens that collects light, and a signal conversion element such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor).
  • the imaging optical system collects light emitted from the subject and forms a subject image on the signal conversion unit.
  • the signal conversion element converts the formed subject image into an electric image signal.
  • the imaging device 59 corresponds to the camera 25 described with reference to FIG.
  • the program constituting the software causes the information processing apparatus to execute a process including a step of acquiring, from the sensor, first peripheral information and second peripheral information of the real space that are different from visual field information corresponding to the visual field of the user; a step of determining the degree of attraction of each of the first peripheral information and the second peripheral information; and a step of determining, based on the degree of attraction, the priority of a first process related to the first virtual object arranged in the first area of the real space corresponding to the first peripheral information and a second process related to the second virtual object arranged in the second area of the real space corresponding to the second peripheral information.
  • in the first embodiment described below, the detection result of the illuminance sensor 22 is used as the surrounding situation information when the priority processing points are determined. In the embodiments that follow, examples using the detection result of the sound sensor 23, the detection result of the odor sensor 24, and a captured image (detection result) captured by the camera 25 are described in turn.
  • the illuminance sensor 22, the sound sensor 23, and the odor sensor 24 for acquiring the surrounding situation information are all mounted on the HMD 1, but the invention is not limited to this. At least one of an illuminance sensor 22, a sound sensor 23, an odor sensor 24, and a camera capable of acquiring image information may be used to acquire the surrounding situation information. Further, the sensor for acquiring the surrounding situation information may be an external device without being mounted on the HMD 1.
  • a detection result (illuminance information) acquired from the illuminance sensor 22 as surrounding situation information is used to determine a priority processing point (attraction level).
  • the control of the processing related to the virtual object based on the illuminance information as the peripheral situation information acquired by the illuminance sensor 22 can be applied to, for example, the HMD 1 that reproduces the game content.
  • the HMD 1 can present the user U with a virtual object (display image) corresponding to the content superimposed on the external world that is a real space.
  • the user U can enjoy the game while freely moving in the real space while wearing the HMD.
  • playing a game while moving in the real space it is assumed that the game is enjoyed outdoors or indoors.
  • when outdoors in the sunshine, it is predicted that the user U will behave so as to avoid moving the line of sight toward the direction of the sun, where the illuminance is higher than the illuminance of the area corresponding to the current field of view. Further, when the user U is currently facing the direction of the dazzling sunlight, it is predicted that the user U will move his or her gaze to an area with lower illuminance so as not to be dazzled.
  • when indoors, as in the case of outdoors, it is predicted that the user U will avoid moving the line of sight to an area irradiated with sunlight entering the room, which is expected to be difficult to look at.
  • for light other than sunlight, the predicted action is the opposite of that for sunlight: the user U tends to move his or her line of sight toward the direction emitting the brighter light.
  • the priority processing points are determined by taking into account the real space environment such as the outdoor or indoor environment described above in addition to the illuminance information as the peripheral situation information acquired from the illuminance sensor 22.
  • the environment of the real space includes, for example, whether the user is outdoors or indoors, whether the sun is out (sunshine) if outdoors, and whether there is a window through which sunlight enters if indoors.
  • the illuminance information as the peripheral situation information arises incidentally from sources such as sunlight, outdoor lights, and indoor lights, and differs from information preset in the game content.
  • the details will be described.
  • FIG. 7 shows a determination example of the priority processing score for each area in a situation where the user U is located outdoors in the sunshine and the sun 65 is located in the right area 60R of the user U.
  • the situation shown in FIG. 7 is a situation in which the sunlight is dazzling and induces the movement of the line of sight of the user U.
  • the time of sunshine refers to a state in which the direct sunlight of the sun illuminates the ground surface to the extent that a shadow of an object is formed.
  • the illuminance sensor 22 mainly detects the illuminance in a real space where sunlight is irradiated. In addition to sunlight, it is assumed that artificial light such as an outdoor light is detected.
  • an area whose illuminance is much higher than that of the current field of view is too dazzling, so the user U does not dare to look in that direction. An area whose illuminance is not much higher is not too dazzling for the user U, so it is not specifically avoided and may be viewed in the normal course of action.
  • therefore, the degree of attraction of an area in the non-visual region of the user U whose illuminance is higher than that of the front area 60F corresponding to the current field of view is determined as follows.
  • the illuminance difference between the illuminance of the front area 60F corresponding to the current field of view of the user U detected by the illuminance sensor 22F and the illuminance of the non-visual area detected by the illuminance sensor 22R (22B, 22L) is obtained, and the degree of attraction of the right area 60R (rear area 60B, left area 60L) is determined to be lower when the illuminance difference is equal to or larger than the threshold than when it is smaller than the threshold.
  • that is, an area whose illuminance differs greatly from that of the front area 60F is too dazzling for the user U and is assumed to be unlikely to be looked at, so its priority processing score (degree of attraction) is determined to be low. An area whose illuminance difference from the front area 60F is small is not too dazzling for the user U and is assumed to be more likely to be looked at, so its priority processing score (degree of attraction) is determined to be higher than when the illuminance difference is large (equal to or larger than the threshold).
  • in the example of FIG. 7, the sun is located on the right side of the user U, so the illuminance value detected by the illuminance sensor 22R for the right area 60R is the highest among the four illuminance sensors 22, and the difference (illuminance difference) between the illuminance value of the front area 60F and the illuminance value of the right area 60R is equal to or larger than the threshold value. The user U is therefore predicted to be least likely to direct his or her gaze toward the dazzling right area 60R where the sun is located, and the priority processing score P2 is one point.
  • the front area 60F, the left area 60L, and the rear area 60B have lower illuminance than the right area 60R and are predicted to be more likely to be looked at by the user U, so their respective priority processing scores P1, P4, and P3 are eight points.
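  • a compact sketch of this outdoor rule is given below: each non-visual area's illuminance is compared with that of the front area, and the score is lowered when the difference reaches the threshold. The score values (8 and 1) follow the FIG. 7 example, while the threshold and the lux values are assumptions, since the patent does not state concrete numbers.

```python
# Hypothetical sketch of the outdoor (sunshine) scoring rule for brighter areas.

ILLUMINANCE_DIFF_THRESHOLD_LUX = 20000.0   # assumed threshold value

def outdoor_scores(illuminance_lux: dict, default_score: int = 8, dazzling_score: int = 1) -> dict:
    """illuminance_lux maps 'front'/'right'/'left'/'back' to detected lux values.
    A non-visual area brighter than the front area by at least the threshold is
    assumed to be too dazzling to look at and is given the low score."""
    front = illuminance_lux["front"]
    scores = {"front": default_score}
    for area in ("right", "left", "back"):
        too_dazzling = illuminance_lux[area] - front >= ILLUMINANCE_DIFF_THRESHOLD_LUX
        scores[area] = dazzling_score if too_dazzling else default_score
    return scores

# FIG. 7-like situation: the sun is to the user's right.
print(outdoor_scores({"front": 30000, "right": 65000, "left": 28000, "back": 25000}))
# -> {'front': 8, 'right': 1, 'left': 8, 'back': 8}
```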
  • that is, the peripheral information (second peripheral information) of the right area 60R (second area) of the real space is less likely to catch the eye of the user U and has a lower degree of attraction than the peripheral information (first peripheral information) of the rear area 60B and the left area 60L (first areas).
  • accordingly, drawing of the virtual object (second virtual object) arranged in the right area 60R (second area) is performed based on the processing X (second process), and drawing of the virtual objects (first virtual objects) arranged in the rear area 60B and the left area 60L (first areas) is performed based on the processing Y (first process).
  • on the other hand, when the illuminance difference between the front area 60F and the right area 60R is less than the threshold value, a priority processing score different from that in the case where the difference is equal to or greater than the threshold is determined. The user U may look at a non-viewing area whose illuminance difference is less than the threshold in the normal course of action, without particularly avoiding it. In this case, the priority processing score is determined as in the case where the surrounding situation does not induce movement of the user U's line of sight, and the priority processing score P2 of the right area 60R is determined to be five points, which is higher than in the case where the illuminance difference is equal to or larger than the threshold shown in FIG. 7.
  • in this example, the left area 60L and the rear area 60B have the same illuminance; if their illuminances differ, the priority processing scores are determined according to the illuminance values. Then, according to the priority processing scores determined for the left area 60L, the rear area 60B, and the right area 60R, which correspond to the non-viewing region, the priority of the processing regarding the virtual objects arranged in each of those areas is determined.
  • Examples of situations in which the illuminance difference is less than the threshold value include the evening, a case where an outdoor light is turned on outdoors in sunny weather, and a case where the user U is in the shadow of a building.
  • In such situations, the priority processing score is determined on the basis that the surrounding situation is not a situation that induces movement of the user U's line of sight, as in the case described above.
  • For example, the illuminance of sunlight at about 10:00 a.m. in fine weather is about 65,000 lux, whereas the illuminance of sunlight one hour before sunset in fine weather is about 1,000 lux.
  • In the evening, therefore, the illuminance difference tends to be smaller than in the daytime.
  • In the evening, the glare of the sun is less likely to be felt than in the daytime, and the user U may look in the direction in which the sun is located without particularly avoiding the area of high illuminance.
  • In other words, the illuminance detected in the direction in which the outside light is located does not differ much from the illuminance of the situation in which the user U is currently placed, that is, the illuminance of the front area 60F corresponding to the field of view of the user U. Such a situation is not considered to be one that induces the user U to move his or her line of sight.
  • The degree of attraction of an area of the non-visual area of the user U whose illuminance is lower than that of the front area 60F corresponding to the current field of view of the user U is determined as follows.
  • Specifically, the illuminance difference between the illuminance of the front area 60F corresponding to the current field of view of the user U, detected by the illuminance sensor 22F, and the illuminance of the non-visual area of the user U, detected by the illuminance sensors 22R (22B, 22L), is obtained, and the degree of attraction of the right area 60R (60B, 60L) is determined to be lower when this illuminance difference is less than the threshold value than when it is equal to or greater than the threshold value.
  • An area whose illuminance is much lower than that of the front area 60F (a large illuminance difference) is an area that is not dazzling for the user U, and it is assumed that the user U is highly likely to look at it in order to avoid the current glare; its priority processing score (attraction degree) is therefore determined to be high.
  • An area whose illuminance difference from the front area 60F is small has a glare not much different from that of the front area 60F for the user U, and it is assumed that the user U may look at it only as part of the normal flow of action; its priority processing score is therefore determined to be lower than when the illuminance difference is large (equal to or greater than the threshold value).
  • Next, a case where the user U is indoors will be described. Indoors, the user U is very unlikely to look at a real-space area irradiated with sunlight, so the lowest priority is given to such an area; that is, its priority processing score is determined to be low so that the processing load becomes the lowest.
  • Specifically, the priority processing score (attraction degree) for the real-space area irradiated with sunlight is determined so that the process relating to the virtual objects arranged in that area has the lowest priority, that is, the lowest processing load. In the example of this embodiment, the processing load is divided into three stages, and the process X has the lowest processing load; therefore, indoors, the priority processing score for the real-space area irradiated with sunlight is determined to be three points or less so that the process X is selected.
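  • As a reference, a minimal Python sketch of mapping a priority processing score to one of the three processing stages is shown below; the score boundaries are assumptions chosen to be consistent with the examples in this description (three points or less selects the process X), not values defined by the specification.

```python
# Illustrative sketch (assumed score boundaries, consistent with the examples in
# this description): mapping a priority processing score to one of the three
# processing stages X (lowest load), Y, and Z (highest load).

def select_process(score: int) -> str:
    if score <= 3:        # e.g. an indoor area irradiated with sunlight
        return "X"        # lowest drawing load (e.g. coarse update interval)
    elif score < 10:      # moderately attractive areas
        return "Y"
    else:                 # areas the user is assumed very likely to look at
        return "Z"        # highest drawing load

print(select_process(1), select_process(4), select_process(10))  # X Y Z
```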
  • When the illuminance detected by the illuminance sensor 22 is not caused by sunlight, the light detected indoors is recognized by the user U as being caused by artificially generated light such as interior lighting or a spotlight.
  • In this case, the illuminance does not differ much from the illuminance of the front area 60F corresponding to the current field of view of the user U, and the user U is unlikely to pay particular attention to the light. Even in an area where a high illuminance value is detected, the user U does not pay particular attention to that area and may look at it only as part of the normal flow of action.
  • Accordingly, the priority processing score (attraction degree) is determined as follows.
  • Specifically, the illuminance difference between the illuminance of the front area 60F corresponding to the current field of view of the user U, detected by the illuminance sensor 22F, and the illuminance of the non-visual area of the user U, that is, the right area 60R, the rear area 60B, and the left area 60L, detected by the illuminance sensors 22R (22B, 22L), is obtained.
  • The degree of attraction of the right area 60R (60B, 60L) is determined to be lower when this illuminance difference is less than the threshold value than when it is equal to or greater than the threshold value; that is, the priority processing score (attraction degree) is determined to be low when the illuminance difference is less than the threshold value and to be higher when it is equal to or greater than the threshold value.
  • When the illuminance difference is less than the threshold value, the user U may look at the real-space area irradiated with light other than sunlight as part of the normal flow of action, so its priority processing score is relatively high, but it is determined to be lower than when the illuminance difference is equal to or greater than the threshold value.
  • A real-space area irradiated with light other than sunlight whose illuminance difference is equal to or larger than the threshold value is determined to be an area that the user U is considerably likely to look at, and its priority processing score is set considerably high.
  • An area whose illuminance difference is less than the threshold value is not an area that induces gaze movement of the user U, but if it is likely to be looked at as part of the normal flow of action, a medium priority processing score is determined.
  • As an example of a situation where the illuminance difference is equal to or larger than the threshold value, there is a case where a bright indoor space is locally illuminated with high-illuminance light such as a spotlight.
  • An area irradiated with such high-illuminance light easily catches the eyes of the user U even when the whole room is bright, so an area with such light is an area with a high degree of attraction.
  • Another example of a situation where the illuminance difference is equal to or larger than the threshold value is a case where a dark indoor space is locally illuminated with a spotlight, brighter illumination light, a downlight, or the like. Even if the light irradiating such an area is relatively weak, the area easily catches the eyes of the user U in a dark indoor space.
  • Examples of situations where the illuminance difference is less than the threshold value include a case where a bright indoor space is locally illuminated with ordinary room lighting or a downlight, and a shadow area of an object such as a table.
  • An area irradiated with such light and such a shadow area do not differ much from the average brightness of the real space, and therefore do not particularly attract the eyes of the user U.
  • Another example of a situation where the illuminance difference is less than the threshold value is a dark area, with no particular light, inside a dark room. Such an area does not differ much from the average brightness of the real space, and therefore does not particularly attract the eyes of the user U.
  • Here, a bright indoor space refers to, for example, an indoor space with a brightness of 30 lux or more, and a dark indoor space refers to, for example, an indoor space with a brightness of less than 30 lux.
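  • As a reference, the following Python sketch illustrates one way such a decision could be made; the 30 lux boundary follows the description above, while the two illuminance-difference thresholds are assumed values, since the specification does not fix them.

```python
# Illustrative sketch (assumed numbers except the 30 lux boundary given above):
# deciding whether a locally lit indoor area is likely to attract the user's eyes.

BRIGHT_ROOM_LUX = 30.0           # from the description: "bright indoor" >= 30 lux
DIFF_THRESHOLD_BRIGHT = 500.0    # assumed threshold for a bright room
DIFF_THRESHOLD_DARK = 50.0       # assumed (smaller) threshold for a dark room

def attracts_attention(front_lux: float, area_lux: float) -> bool:
    """True if the illuminance difference to the front area is at or above the
    threshold chosen for the current room brightness (spotlight-like light)."""
    threshold = DIFF_THRESHOLD_BRIGHT if front_lux >= BRIGHT_ROOM_LUX else DIFF_THRESHOLD_DARK
    return abs(area_lux - front_lux) >= threshold

print(attracts_attention(200.0, 900.0))  # bright room, local spotlight -> True
print(attracts_attention(10.0, 25.0))    # dark room, faint downlight -> False
```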
  • Whether or not the illuminance detected by the illuminance sensor 22 is due to sunlight can be determined using the position information of the user U, date and time / weather information, indoor information acquired in advance, and the like.
  • the position information of the user U can be acquired by a GPS (Global Positioning System) mounted on the HMD 1 or the like.
  • The date/time and weather information is, for example, the sun altitude (elevation angle) and azimuth angle for each place and date/time, and weather information such as whether the weather is fine or rainy, which can be acquired by the HMD 1 communicating with an application server that provides date/time and weather information on an external network.
  • the indoor information includes window position information such as the presence / absence of an indoor window, the direction in which the window is located indoors, and the position of the window with respect to the wall.
  • Whether the user U is indoors or outdoors can be detected from the position information of the user U. Further, when the user U is indoors, whether or not the light detected by the illuminance sensor is caused by sunlight can be determined based on the window presence/absence information included in the indoor information. When there is no window, it is determined that the detected light is not caused by sunlight, and the above-described processing for the case where there is no sun can be performed. On the other hand, when there is a window, the irradiation position of the sunlight entering the room through the window can be obtained from the position of the sun given by the date/time and weather information and the position of the window given by the indoor information, so that it can be determined whether or not the detected light is caused by sunlight.
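  • As a reference, the following Python sketch illustrates one possible form of this determination; the data structure, the angle-based matching, and the tolerance are assumptions, since the specification only states which pieces of information are used.

```python
# Illustrative sketch (all names and the simple geometry are assumptions): deciding
# whether light detected indoors can be attributed to sunlight, using the window
# information and the sun position obtained from date/time and weather information.

from dataclasses import dataclass

@dataclass
class IndoorInfo:
    has_window: bool
    window_azimuth_deg: float  # direction of the window as seen from the room

def angular_diff(a_deg: float, b_deg: float) -> float:
    """Smallest absolute difference between two azimuth angles, in degrees."""
    return abs((a_deg - b_deg + 180.0) % 360.0 - 180.0)

def light_is_sunlight(indoor: IndoorInfo, sun_azimuth_deg: float,
                      sun_elevation_deg: float, detected_azimuth_deg: float,
                      tolerance_deg: float = 30.0) -> bool:
    if not indoor.has_window or sun_elevation_deg <= 0.0:
        # No window, or the sun is below the horizon: treat as artificial light.
        return False
    # Sunlight can enter only roughly from the window direction, which should
    # also roughly match the direction of the sun.
    window_matches_sun = angular_diff(sun_azimuth_deg, indoor.window_azimuth_deg) <= tolerance_deg
    detected_matches_window = angular_diff(detected_azimuth_deg, indoor.window_azimuth_deg) <= tolerance_deg
    return window_matches_sun and detected_matches_window

print(light_is_sunlight(IndoorInfo(True, 90.0), sun_azimuth_deg=100.0,
                        sun_elevation_deg=35.0, detected_azimuth_deg=85.0))  # True
```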
  • In the above description, the position information of the user U and the date/time and weather information are used to determine whether the light detected indoors is due to sunlight, but the present technology is not limited to this.
  • The position information of the user U and the date/time and weather information can also be used in other ways; for example, the weather at the place where the user U is currently located can be grasped from the date/time and weather information and the position information of the user U, and by making use of such information, the power consumption can be reduced.
  • In the above description, a case where the user U plays a game while moving indoors or outdoors has been described as an example.
  • In such a case, the user U may move from indoors to outdoors and from outdoors to indoors according to the content of the game.
  • For example, when the user U goes from indoors to outdoors in the flow of the game, the user U may move while directing his or her gaze toward a dazzling area irradiated with sunlight. Since the action of the user U going from indoors to outdoors can be assumed in advance according to the game content, the priority of the virtual object drawing process may be determined in consideration of the user's action predicted according to the content, in addition to the above-described peripheral situation information.
  • the threshold value of the illuminance difference is set in advance for each different environment such as outdoors or indoors. Further, the threshold is appropriately set for each brightness of the environment.
  • Data linking the average brightness of the environment in which the user U is placed, the illuminance value detected by each illuminance sensor, the illuminance difference between the front area 60F and each of the non-visual areas, and the behavior pattern of the user U, such as whether or not the user U looked at an area with high illuminance, may be accumulated as needed. Then, based on the accumulated data, the relationship among the average brightness of the environment in which the user U is placed, the behavior pattern of the user U, and the threshold value may be obtained statistically, and a more appropriate threshold value may be set.
  • As described above, in the present embodiment, the peripheral situation information is obtained by the illuminance sensors, the behavior of the user U is predicted using the peripheral situation information, and the degree of attraction is determined accordingly. Based on the degree of attraction, the priority of the process relating to the virtual object arranged in each of the non-visual areas of the user U is determined. Then, based on the determined priorities, the processing is executed so as to reduce the drawing processing load of the virtual objects arranged in areas where the degree of attraction for the user U is low, so that efficient processing can be performed.
  • FIG. 8 shows an example of determining the priority processing score for each area in a situation where the explosion sound 66 has occurred in the right area 60R of the user U.
  • the situation illustrated in FIG. 8 is a situation that induces the movement of the line of sight of the user U.
  • the priority processing score of each of the regions 60F, 60B, 60R, and 60L is obtained based on the sound volume that is the sound information detected by each of the sound sensors 23F, 23R, 23L, and 23B.
  • the priority processing point is determined according to the volume so that the priority processing point of the region where the detected volume is the largest is higher.
  • the four sound sensors 23F, 23R, 23L, 23B detect the volume of sounds around the user U.
  • In this example, the volume value detected by the sound sensor 23R, which detects the sound information of the right area 60R, is the largest.
  • In this case, it is assumed that the line of sight of the user U moves from the front area 60F, which the user U faces, to the right area 60R, where the sound source is located.
  • The assumed line-of-sight movement area of the user U is an area that the user U is likely to look at. Therefore, in the example shown in FIG. 8, the priority processing scores P1 and P2 of the front area 60F and the right area 60R are determined to be 10 points, higher than those of the other areas.
  • The left area 60L is the area least likely to enter the field of view of the user U, so its priority processing score P4 is determined to be three points.
  • The rear area 60B is an area adjacent to the right area 60R and is more likely to enter the field of view of the user U than the left area 60L; therefore, its priority processing score P3 is determined to be four points.
  • That is, the peripheral information (second peripheral information) of the rear area 60B and the left area 60L (second areas) of the real space has a lower degree of attraction than the peripheral information (first peripheral information) of the right area 60R (first area).
  • Based on the determined priority processing scores, the virtual objects arranged in the right area 60R and the front area 60F are drawn based on the process Z.
  • The virtual object arranged in the left area 60L is drawn based on the process X.
  • The virtual object arranged in the rear area 60B is drawn based on the process Y.
  • In this case, when the right area 60R is compared with the left area 60L and the rear area 60B, the process Z relating to the virtual object (first virtual object) arranged in the right area 60R (first area) corresponds to the first process, while the process X relating to the virtual object (second virtual object) arranged in the left area 60L (second area) and the process Y relating to the virtual object (second virtual object) arranged in the rear area 60B (second area) correspond to the second process.
  • Likewise, when the rear area 60B is compared with the left area 60L, the process Y relating to the virtual object (first virtual object) arranged in the rear area 60B (first area) corresponds to the first process, and the process X relating to the virtual object (second virtual object) arranged in the left area 60L (second area) corresponds to the second process.
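  • As a reference, the following Python sketch illustrates this kind of directional scoring; the score values follow the sound example above, the function is a simplification rather than the patent's actual procedure, and the same function could be fed odor intensities instead of sound volumes.

```python
# Illustrative sketch (scores taken from the sound example above; the same idea
# applies to odor intensity): raising the scores of the front area and of the
# area where the stimulus source lies, since the gaze is assumed to sweep there.

AREAS = ["front", "right", "rear", "left"]  # clockwise order around the user

def score_directional_stimulus(levels: dict) -> dict:
    """levels maps each area to the detected volume (or odor intensity)."""
    source = max(levels, key=levels.get)          # area with the strongest reading
    scores = {area: 3 for area in AREAS}          # baseline for areas off the path
    scores["front"] = 10
    scores[source] = 10
    # An area adjacent to the source is more likely to enter the field of view.
    idx = AREAS.index(source)
    for neighbor in (AREAS[(idx - 1) % 4], AREAS[(idx + 1) % 4]):
        if scores[neighbor] < 10:
            scores[neighbor] = 4
    return scores

volumes = {"front": 40.0, "right": 85.0, "rear": 55.0, "left": 35.0}
print(score_directional_stimulus(volumes))
# {'front': 10, 'right': 10, 'rear': 4, 'left': 3}
```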
  • Note that the sound information used as the peripheral situation information referred to here consists of environmental sounds that occur accidentally, such as a nearby human voice, music flowing in the surroundings, construction sound, or an explosion sound; sounds generated in advance are excluded.
  • Further, an environmental sound that a person would not pay attention to even when it occurs, such as the noise of a crowd, may exceptionally be treated as noise and appropriately canceled from the sound information acquired by the sound sensor 23.
  • the control of the process regarding the virtual object using the sound information acquired from the sound sensor 23 can be applied to, for example, an HMD that reproduces the game content.
  • the user U can enjoy the game while wearing the HMD and freely moving in the real space.
  • environmental sounds other than sounds generated in advance in the game content may be generated in the real space.
  • In such a case, the direction in which the sound source is located can be specified by mounting sound sensors on the HMD and detecting the sound information of the surrounding situation. Since the user U is assumed to turn toward the direction of the sound source, the HMD 1 performs the process of drawing the virtual objects arranged in the areas from the area in front of the user U at the time the sound occurs to the area where the sound source is located with higher priority than the process of drawing the virtual objects arranged in the other areas.
  • the user U can experience a realistic game. Further, in the HMD 1, since the processing is executed while reducing the drawing processing load of the virtual object in the area where the degree of attraction of the user U is low, efficient processing is possible.
  • FIG. 9 shows an example of determining the priority processing score for each area in a situation where the source of the smell 67 is in the right area 60R of the user U.
  • the situation shown in FIG. 9 is a situation that induces the movement of the line of sight of the user U.
  • the priority processing points of the areas 60F, 60B, 60R, and 60L are obtained based on the odor intensities detected by the odor sensors 24F, 24R, 24L, and 24B, respectively.
  • the priority processing point is determined such that the priority processing point (attraction degree) of the region having the highest detected odor intensity is high.
  • the four odor sensors 24F, 24R, 24L, 24B detect the odor intensity around the user U.
  • the value of the odor intensity detected by the odor sensor 24R that detects the odor intensity in the right region 60R is the largest.
  • In this case, it is assumed that the line of sight of the user U moves from the front area 60F, which the user U faces when the odor is detected, to the right area 60R, where the odor source is located. Therefore, in the example shown in FIG. 9, the priority processing scores P1 and P2 of the front area 60F and the right area 60R are determined to be 10 points, higher than those of the other areas.
  • The left area 60L is less likely to enter the field of view of the user U, so its priority processing score P4 is determined to be four points.
  • The rear area 60B is an area adjacent to the right area 60R and is more likely to enter the field of view of the user U than the left area 60L; therefore, its priority processing score P3 is determined to be five points.
  • That is, the peripheral information (second peripheral information) of the rear area 60B and the left area 60L (second areas) of the real space has a lower degree of attraction than the peripheral information (first peripheral information) of the right area 60R (first area).
  • the control of the processing related to the virtual object based on the odor information acquired by the odor sensor 24 can be applied to, for example, an HMD for a simulated disaster experience.
  • the user U wearing the HMD can have a simulated experience of a fire by presenting a virtual object (display image) such as fire or smoke in a superimposed manner on the external world that is a real space.
  • an odor is generated in the real space, and the direction in which the source of the odor is located is specified based on the detection result of the odor sensor mounted on the HMD.
  • As the odor generated in the real space, a gas that is harmless to the body and detectable by the odor sensor can be used.
  • Then, the process of drawing the virtual objects arranged in the areas from the front area 60F of the user U at the time the odor is detected to the area where the odor source is located is performed with higher priority than the process of drawing the virtual objects arranged in the other areas.
  • image information detected by the camera 25 mounted on the HMD 1 is acquired as peripheral situation information, and the priority processing score is determined using the image information.
  • In the present embodiment, the HMD worn by each of the users U1 to U3 is the HMD 1 having the same structure as described above.
  • the HMDs 1 worn by the users U1 to U3 are configured to be able to communicate with each other, and can transmit / receive various information including image information acquired by each HMD to / from another HMD.
  • FIG. 10 is a schematic diagram for explaining the relationship between the area around the user U1 and the priority processing score when the surrounding situation of the user U1 is not a surrounding situation that induces the movement of the user U1's gaze.
  • FIG. 11 is a schematic diagram for explaining the determination of the priority processing score using the peripheral situation information.
  • FIG. 12 is a flowchart illustrating an example of a process related to a virtual object in the information processing apparatus according to the present embodiment.
  • the real space 70 where the user U actually moves in the game content is divided into a plurality of areas A1 to A16, and the user U1 is located in the area A11.
  • The priority processing score P10 of the area A10, located directly in front of the user U1, is the highest and is determined to be 10 points. For the areas A1, A2, A5, A6, A9, A10, A13, and A14 in front of the user U1, the priority processing scores are determined such that the greater the distance from the user U1, the lower the score.
  • The areas A7 and A15, immediately to the right and left of the user U1, have lower priority processing scores than the front area A10, and their priority processing scores P7 and P15 are determined to be six points. Further, in the rightward and leftward directions of the user U1, the priority processing scores are determined so as to become lower as the distance from the user U1 increases; for example, the priority processing score P3 of the area A3 is determined to be three points.
  • The area A12, immediately behind the user U1, is determined to be the area that the user U1 is least likely to look at, and its priority processing score P12 is determined to be zero. The priority processing scores of the other areas A4, A8, and A16 behind the user U1 are also determined to be relatively low, since the possibility that the user U1 looks at them is low.
  • the peripheral situation information used when determining the priority processing point is positional relationship information between each of the users U1 to U3 and the area to be subjected to the priority processing point determination.
  • the positional relationship information between the user U and the priority processing point determination target area includes the line-of-sight information of the user U and distance information between the user U and the priority processing point determination target area.
  • the line-of-sight information of the user U is information on the direction of the user U with respect to the priority processing point determination target area.
  • the line-of-sight information and distance information of each of the users U1 to U3 are detected based on images (image information) captured by the camera 25 mounted on the HMD 1 worn by each of the users U1 to U3.
  • For the user U1, the captured images (image information) acquired from the cameras 25 of the HMDs 1 of the other users U2 and U3 are image information in which the non-visual area of the user U1 is captured, and are peripheral information different from the visual field information corresponding to the visual field of the user U1.
  • the image captured by the camera 25 mounted on the HMD worn by the user U1 is visual field information corresponding to the visual field of the user U1.
  • the user U1 is located in the area A11
  • the user U2 is located in the area A4
  • the user U3 is located in the area A2.
  • the virtual object A is an object arranged in the area A7
  • the virtual object B is an object arranged in the area A15.
  • the users U2 and U3 pay attention to the virtual object A.
  • the virtual object A and the virtual object B are objects located around the user U1 and arranged in the non-viewing area of the user U1.
  • the positional relationship information is information on the user's orientation (the line-of-sight information of the user U) with respect to the priority processing point determination target area and distance information between the user U and the priority processing point determination target area.
  • Specifically, a higher priority processing score is determined for an area at which a larger number of users are looking and for an area that is closer to the users.
  • In this example, the virtual object A around the user U1 is located in the area A7, at which the users U2 and U3 other than the user U1 are looking, and is positioned closer to the users U2 and U3 than the virtual object B.
  • the virtual object B around the user U1 is not noticed by any user, and is located in an area A15 farther from the positions of the users U2 and U3 than the virtual object A.
  • the area A7 in which the virtual object A is arranged has a higher priority processing score than the area A15 in which the virtual object B is arranged.
  • the determination of the priority processing score will be described in detail with reference to the flowchart of FIG.
  • First, information on the images captured by the camera 25 mounted on the HMD 1 worn by each of the users U1 to U3 is obtained (S31). From this image information, the distance information between each user U and the priority processing point determination target area, and the orientation information (line-of-sight information) of each user U with respect to the priority processing point determination target area, are obtained for each of the users U1 to U3.
  • the priority processing point determination unit 15 determines the priority processing point (attraction degree) for each peripheral area of the user U1 from the image information based on the above-described distance information and line-of-sight information (S32).
  • the priority processing points of the priority processing point determination target area are calculated for each user U, and the total of the priority processing points calculated for each user is set as the priority processing point of the priority processing point determination target area.
  • The determination of the priority processing scores of the area A7, in which the virtual object A is arranged, and the area A15, in which the virtual object B is arranged, in FIG. 11 will be specifically described as an example.
  • For each priority processing point determination target area, a priority processing score is determined for each user, and the priority processing score of the area A7 (A15) in which the virtual object A (virtual object B) is arranged is obtained as the sum of the priority processing scores obtained for each of the users U1 to U3.
  • the priority processing score of each user is obtained by the following equation.
  • Here, P is the priority processing score, l (m) is the distance between the camera 25 mounted on the HMD 1 worn by each user U and the priority processing point determination target area, θ (rad) is the angle, in the horizontal plane, at which the priority processing point determination target area is located as measured from the front of each user U, and E is a priority processing point coefficient.
  • the priority processing point coefficient E is a coefficient based on the user U1. Since the priority processing coefficient E is determined based on the positional relationship between the user U and the priority processing point determination target area, the priority processing coefficient differs for each user U even if the priority processing point determination target area is the same.
  • When the distance between the user U1 and the area A7 is taken as 1, the distance between each of the users U2 and U3 and the area A7 is expressed as the square root of 2.
  • The angle between the user U1 and the area A7 can be represented by π/2 radians (about 1.57 radians), and the angle between each of the users U2 and U3 and the area A7 can be represented by 0 radians.
  • Similarly, the distance between the user U1 and the area A15 is 1, and the distance between each of the users U2 and U3 and the area A15 is expressed as the square root of 10.
  • The angle between the user U1 and the area A15 can be represented by π/2 radians (about 1.57 radians), and the angle between each of the users U2 and U3 and the area A15 can be represented by about 0.46 radians.
  • Next, based on the priority processing scores determined as described above, the rendering processing load determination unit 16 determines the load of the process relating to the rendering of the virtual objects arranged in the peripheral areas of the user U1, and the output image generation unit 17 executes a virtual object generation process based on the determined processing content (S33).
  • the threshold value Pt of the priority processing point for determining the processing load is set to 2.0.
  • The rendering processing load determination unit 16 determines that the process X, which has a comparatively low load, such as moving the virtual object once every 100 msec, is performed for a virtual object placed in an area where the priority processing score determined by the priority processing score determination unit 15 is less than the threshold value.
  • The rendering processing load determination unit 16 determines that the process Y, which has a comparatively high load, such as moving the virtual object once every 16 msec, is performed for a virtual object placed in an area where the priority processing score determined by the priority processing score determination unit 15 is equal to or greater than the threshold value.
  • In this example, a case where the processing load is divided into two stages has been described, but the present technology is not limited to this.
  • Since the priority processing score PA of the area A7, in which the virtual object A is arranged, is equal to or greater than the threshold value, the drawing processing load determination unit 16 determines that the process Y is performed.
  • On the other hand, since the value of PB is 1.76, which is less than the threshold value, it is determined that the process X is performed.
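  • As a reference, the following Python sketch illustrates the overall flow of summing a per-user score for a target area and comparing the total with the threshold Pt; because the exact equation is not reproduced in this text, the per-user contribution used here is a hypothetical stand-in with the same qualitative behavior (nearer areas and areas closer to a user's line of sight score higher), so the numeric results differ from PA and PB in the example.

```python
import math

# Illustrative sketch: sum a per-user contribution for a target area and compare
# the total with the threshold Pt = 2.0 to choose process X or process Y.
# The contribution below (coefficient divided by distance, discounted by the angle
# from the user's front) is a hypothetical stand-in, not the patent's equation.

PT_THRESHOLD = 2.0

def user_contribution(distance_m: float, angle_rad: float, coeff: float = 1.0) -> float:
    return coeff / (distance_m * (0.5 + angle_rad))  # hypothetical form

def area_score(observations) -> float:
    """observations: (distance_m, angle_rad) per user for one target area."""
    return sum(user_contribution(d, a) for d, a in observations)

# Area A7 (virtual object A): U1 at distance 1, angle pi/2; U2 and U3 at sqrt(2), angle 0.
# Area A15 (virtual object B): U1 at distance 1, angle pi/2; U2 and U3 at sqrt(10), angle 0.46.
for name, obs in [("A7", [(1.0, math.pi / 2), (math.sqrt(2), 0.0), (math.sqrt(2), 0.0)]),
                  ("A15", [(1.0, math.pi / 2), (math.sqrt(10), 0.46), (math.sqrt(10), 0.46)])]:
    score = area_score(obs)
    process = "Y" if score >= PT_THRESHOLD else "X"
    print(name, round(score, 2), process)  # A7 -> Y (heavier), A15 -> X (lighter)
```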
  • the output image generation unit 17 executes a virtual object generation process based on the processing content determined by the drawing processing load determination unit 16.
  • the virtual object generated by the output image generation unit 17 is converted into an image signal so that it can be displayed on the display 31 by the output image control unit 18 and output to the output unit 30 (S34).
  • As a result, the processing load relating to the drawing of the virtual object A, which the user U1 is likely to notice, increases, while the processing load relating to the drawing of the virtual object B, which the user U1 is unlikely to notice, decreases, so that efficient processing can be performed.
  • In the above description, the case where a captured image from the camera mounted on the HMD is used has been described; however, a captured image captured by an external camera, that is, an external device arranged in the real space and not mounted on the HMD, may be used.
  • the external camera may be fixed or may be configured to be movable, as long as the external camera can acquire image information having positional relationship information between the user U and the priority processing point determination target area.
  • In this case as well, the positional relationship information between the user U and the priority processing point determination target area is obtained from the image information, and the processing relating to the drawing of the virtual objects arranged in the non-visual area of the user U1 may be controlled based on this information.
  • As described above, in the present embodiment, the priority of the processing relating to the virtual objects arranged in the non-visual area is determined based on the degree of attraction, which is determined based on the peripheral information of the real space acquired from the sensor, and the processing relating to the virtual objects is executed accordingly. Thereby, the processing relating to a virtual object arranged in an area with a high degree of attraction can be performed prior to the processing relating to a virtual object arranged in an area with a low degree of attraction, and efficient processing can be performed.
  • Embodiments of the present technology are not limited to the above-described embodiments, and various modifications can be made without departing from the gist of the present technology.
  • the visual field of a normal person is said to be about 60 degrees on the upper side and about 70 degrees on the lower side in one eye.
  • each of the four regions divided in the left-right direction in the above-described embodiment may be further divided into three in the up-down direction, and divided into 12 regions as a whole.
  • For example, the angle defining the vertical range of the upper area may be 30 degrees, the angle defining the vertical range of the intermediate area may be 130 degrees, and the angle defining the vertical range of the lower area may be 20 degrees.
  • a sensor is mounted on the HMD so that the surrounding situation in each area can be individually detected.
  • In the above-described embodiments, the sensors that detect the surrounding situation, such as the illuminance sensor 22, the sound sensor 23, and the odor sensor 24, are mounted on the HMD; however, they may be installed as external devices separate from the HMD.
  • the external device may be installed so as to be able to detect the surrounding situation of the user U.
  • For example, a wristband-type device equipped with sensors that detect the surrounding situation of each of the right area and the left area of the user U can be used.
  • the external device and the control unit of the HMD are configured to be able to communicate with each other, and the control unit of the HMD is configured to be able to acquire the detection result of the external device.
  • the drawing process of the virtual object arranged in the non-viewing area is controlled using the illuminance information, the sound information, the odor information, or the image information as the peripheral situation information of the user U.
  • the drawing process of the virtual object may be controlled using a combination of these pieces of information.
  • the surrounding situation information and spatial information such as the position and orientation information of the camera are used, but only the surrounding situation information may be used.
  • For example, if the illuminance sensor 22, the sound sensor 23, and the odor sensor 24 that detect the surrounding situation are installed so as to detect the respective areas 60F, 60B, 60R, and 60L, and the sensor that detects the surrounding situation of the front area 60F is specified, the orientation of the user U, that is, the position and orientation of the camera, can be grasped.
  • The present technology may have the following configurations.
  • (1) An information processing apparatus including a control unit that determines a degree of attraction for each of first peripheral information of a first area and second peripheral information of a second area of a real space obtained from a sensor, the first and second peripheral information being different from visual field information corresponding to a visual field of a user, and determines, based on the degree of attraction, a priority of a first process relating to a first virtual object arranged in the first area and a second process relating to a second virtual object arranged in the second area.
  • The information processing apparatus, wherein the control unit determines the priority such that the first process is performed with higher priority than the second process when the degree of attraction determined for the second peripheral information is lower than that determined for the first peripheral information.
  • The information processing apparatus, wherein the control unit determines the priority such that the processing load of the first process relating to the drawing of the first virtual object is higher than that of the second process relating to the drawing of the second virtual object.
  • The information processing apparatus, wherein the sensor includes a first illuminance sensor that detects the illuminance of the first area as the first peripheral information, and a second illuminance sensor that detects the illuminance of the second area as the second peripheral information.
  • The information processing apparatus, wherein, when at least one of the first illuminance sensor and the second illuminance sensor detects the illuminance of a real-space area irradiated with sunlight having an illuminance higher than the illuminance of the area corresponding to the user's field of view, the control unit determines the degree of attraction for the peripheral information acquired from the illuminance sensor that detects the illuminance of the real-space area irradiated with the sunlight to be lower when the illuminance difference between the illuminance of the area corresponding to the user's field of view and the illuminance of the real-space area irradiated with the sunlight detected by that illuminance sensor is equal to or larger than a threshold value than when the illuminance difference is smaller than the threshold value.
  • The information processing apparatus according to (5), wherein the environment of the real space is indoor.
  • The information processing apparatus, wherein, when at least one of the first illuminance sensor and the second illuminance sensor detects the illuminance of a real-space area irradiated with light other than sunlight, the control unit determines the degree of attraction for the peripheral information acquired from the illuminance sensor that detects the illuminance of the real-space area irradiated with the light other than sunlight to be lower when the illuminance difference between the illuminance of the area corresponding to the user's field of view and the illuminance of the real-space area irradiated with the light other than sunlight detected by that illuminance sensor is less than the threshold value than when the illuminance difference is equal to or greater than the threshold value.
  • The information processing apparatus according to any one of (1) to (10), wherein the sensor includes a first sound sensor that detects the loudness of sound in the first area as the first peripheral information, and a second sound sensor that detects the loudness of sound in the second area as the second peripheral information.
  • The information processing apparatus according to (11), wherein the control unit determines the degree of attraction for the second peripheral information to be lower than the degree of attraction for the first peripheral information when the loudness of the sound in the second area is smaller than the loudness of the sound in the first area.
  • The information processing apparatus, wherein the sensor includes a first odor sensor that detects the odor intensity of the first area as the first peripheral information, and a second odor sensor that detects the odor intensity of the second area as the second peripheral information.
  • The information processing apparatus according to (13), wherein the control unit determines the degree of attraction for the second peripheral information to be lower than the degree of attraction for the first peripheral information when the odor intensity of the second area is weaker than that of the first area.
  • The information processing apparatus according to any one of (1) to (14), wherein the sensor is a camera that acquires image information around the user as the peripheral information.
  • The information processing apparatus, wherein the image information includes positional relationship information between each of the first area and the second area and the user, and the control unit determines the degree of attraction using the positional relationship information.
  • The information processing apparatus, which is a head-mounted display configured to be mountable on the head of the user and capable of presenting the first virtual object and the second virtual object in the field of view of the user while allowing the user to visually recognize the outside world.
  • HMD 1 ... information processing apparatus
  • ... control unit, 22 ... illuminance sensor (sensor), 23 ... sound sensor (sensor), 24 ... odor sensor (sensor), 25 ... camera (sensor)
  • 60B ... rear area (first area, second area), 60L ... left area (first area, second area), 60R ... right area (first area, second area), 65 ... sun, 66 ... explosion sound (sound), 67 ... odor
  • A ... virtual object A (first virtual object)
  • B ... virtual object B (second virtual object)
  • A7 ... area A7 (first area), A15 ... area A15 (second area)
  • U, U1 to U3 ... user, P1 to P16 ... priority processing score (attraction degree)

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The problem addressed by the present invention is to provide an information processing device, an information processing method, and a program with which it is possible to reduce a processing load. The solution of the invention is an information processing device equipped with a control unit. The control unit: determines the degree of attraction of each of first peripheral information of a first region and second peripheral information of a second region in a real space acquired by a sensor, the peripheral information being different from visual field information corresponding to the visual field of a user; and determines, on the basis of the degree of attraction, the priority of a first process relating to a first virtual object arranged in the first region and of a second process relating to a second virtual object arranged in the second region.
PCT/JP2019/032260 2018-08-31 2019-08-19 Dispositif de traitement d'informations, procédé de traitement d'informations et programme WO2020045141A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-162266 2018-08-31
JP2018162266 2018-08-31

Publications (1)

Publication Number Publication Date
WO2020045141A1 true WO2020045141A1 (fr) 2020-03-05

Family

ID=69643883

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/032260 WO2020045141A1 (fr) 2018-08-31 2019-08-19 Dispositif de traitement d'informations, procédé de traitement d'informations et programme

Country Status (1)

Country Link
WO (1) WO2020045141A1 (fr)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016527536A (ja) * 2013-06-07 2016-09-08 株式会社ソニー・インタラクティブエンタテインメント ヘッドマウントディスプレイでユーザーの動きに応答する画像レンダリング

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016527536A (ja) * 2013-06-07 2016-09-08 株式会社ソニー・インタラクティブエンタテインメント ヘッドマウントディスプレイでユーザーの動きに応答する画像レンダリング

Similar Documents

Publication Publication Date Title
US10055642B2 (en) Staredown to produce changes in information density and type
US9615177B2 (en) Wireless immersive experience capture and viewing
US9846304B2 (en) Display method and display apparatus in which a part of a screen area is in a through-state
CN102591016B (zh) 用于扩展现实显示的优化聚焦区
CN102566756B (zh) 用于扩展现实显示的基于理解力和意图的内容
US20210329764A1 (en) Systems and methods for retarding myopia progression
US11631380B2 (en) Information processing apparatus, information processing method, and recording medium
CN106662747A (zh) 用于增强现实和虚拟现实感知的具有电致变色调光模块的头戴式显示器
CN104956252A (zh) 用于近眼显示设备的外围显示器
US11165938B2 (en) Animal-wearable first person view system
US20140361987A1 (en) Eye controls
WO2017153778A1 (fr) Visiocasque
US20210026142A1 (en) Information processing apparatus, information processing method, and program
US10571700B2 (en) Head-mountable display system
JP2013110764A (ja) 撮像表示装置、撮像表示方法
JP7271909B2 (ja) 表示装置、及び、表示装置の制御方法
JP6529571B1 (ja) 仮想空間を提供するためにコンピュータで実行されるプログラム、方法、およびプログラムを実行するための情報処理装置
WO2020045141A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations et programme
JP2013083994A (ja) 表示装置、表示方法
JP5971298B2 (ja) 表示装置、表示方法
US11762204B2 (en) Head mountable display system and methods
US20240087221A1 (en) Method and apparatus for determining persona of avatar object in virtual space
WO2023227876A1 (fr) Casque de réalité étendue, système et appareil
GB2619367A (en) Extended reality headset, system and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19856371

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19856371

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP