WO2020045141A1 - Information processing device, information processing method, and program

Information processing device, information processing method, and program

Info

Publication number
WO2020045141A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
area
user
illuminance
information processing
Application number
PCT/JP2019/032260
Other languages
French (fr)
Japanese (ja)
Inventor
京二郎 永野
富士夫 荒井
秀憲 青木
靖子 石原
新太郎 筒井
Original Assignee
ソニー株式会社
Application filed by ソニー株式会社
Publication of WO2020045141A1 publication Critical patent/WO2020045141A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics

Definitions

  • the present technology relates to an information processing apparatus, an information processing method, and a program for processing relating to a virtual object displayed on a transmissive head mounted display or the like.
  • Patent Document 1 describes a system that shares a virtual reality space between a non-transmissive head-mounted display used by a first user and a non-transmissive head-mounted display used by a second user.
  • a virtual reality space image displayed on a head-mounted display used by the first user is generated based on the line-of-sight information of the second user transmitted via the network.
  • in a transmission type head mounted display, which is mounted on the head of the user U, an image such as a virtual object is superimposed and displayed on the field of view of the user U and presented to the user U while allowing the user U to visually recognize the outside world.
  • an object of the present technology is to provide an information processing device, an information processing method, and a program capable of reducing a processing load.
  • an information processing device includes a control unit.
  • the control unit acquires, from the sensor, first peripheral information of a first area and second peripheral information of a second area of the real space, both of which are different from visual field information corresponding to the visual field of the user, and determines a degree of attraction of each of the first peripheral information and the second peripheral information.
  • based on the degree of attraction, the control unit determines the priority of a first process related to a first virtual object arranged in the first area and a second process related to a second virtual object arranged in the second area.
  • the control unit may determine the priority such that the first process is performed with higher priority than the second process when the degree of attraction determined for the second peripheral information is lower than that determined for the first peripheral information.
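  • As a minimal illustration of this determination, the following Python sketch (hypothetical names; not part of this publication) assigns the higher rendering priority to the process for whichever area has the higher degree of attraction.
```python
# Minimal sketch of attraction-based priority determination (hypothetical names).

def determine_priority(attraction_area1: float, attraction_area2: float):
    """Return (priority_1, priority_2); a larger value means higher rendering priority."""
    if attraction_area2 < attraction_area1:
        # The second area is less likely to enter the user's field of view,
        # so the first process is given priority over the second process.
        return 2, 1
    if attraction_area1 < attraction_area2:
        return 1, 2
    return 1, 1  # equal attraction: no preference

# Example: the first area is more likely to draw the user's eye.
print(determine_priority(0.8, 0.3))  # -> (2, 1)
```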
  • the control unit may determine the priority such that the first process relating to the rendering of the first virtual object has a higher processing load than the second process relating to the rendering of the second virtual object.
  • the sensor may include a first illuminance sensor that detects the illuminance of the first area as the first peripheral information and a second illuminance sensor that detects the illuminance of the second area as the second peripheral information.
  • the control unit may determine the degree of attraction in consideration of the environment of the real space.
  • the environment of the real space may be outdoors under sunlight.
  • the control unit may determine the degree of attraction for the peripheral information acquired from the illuminance sensor that detects the illuminance of the real space irradiated with sunlight to be lower when the difference between the illuminance of the area corresponding to the user's field of view and the illuminance of the sunlit real space detected by that illuminance sensor is equal to or larger than a threshold than when the difference is smaller than the threshold.
  • the real space environment may be indoors.
  • in that case, the control unit may determine the degree of attraction for the peripheral information acquired from the illuminance sensor that detects the illuminance of the real space irradiated with sunlight so that its priority becomes the lowest.
  • the control unit may determine the degree of attraction for the peripheral information acquired from the illuminance sensor that detects the illuminance of the real space illuminated by light other than sunlight to be lower when the difference between the illuminance of the area corresponding to the user's field of view and the illuminance of that real space detected by the illuminance sensor is less than the threshold than when the difference is equal to or greater than the threshold.
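  • The illuminance rules above might be expressed as in the following sketch; the threshold value, the numeric attraction scale, and the function name are assumptions introduced for illustration only.
```python
# Sketch of the attraction rules for illuminance (assumed threshold and values).

def attraction_from_illuminance(view_lux: float, area_lux: float,
                                light_is_sunlight: bool, indoors: bool,
                                threshold: float = 10_000.0) -> float:
    """Return a degree of attraction in [0, 1] for a non-view area."""
    diff = abs(area_lux - view_lux)
    if light_is_sunlight:
        if indoors:
            return 0.0                               # sunlit indoor areas get the lowest priority
        return 0.2 if diff >= threshold else 0.6     # dazzling outdoor sunlight repels the gaze
    # Light other than sunlight: a large difference attracts the gaze instead.
    return 0.2 if diff < threshold else 0.8

print(attraction_from_illuminance(5_000, 60_000, True, False))   # outdoors, bright sun -> low
print(attraction_from_illuminance(300, 15_000, False, True))     # bright indoor spotlight -> high
```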
  • the sensor may include a first sound sensor that detects the loudness of sound in the first area as the first peripheral information and a second sound sensor that detects the loudness of sound in the second area as the second peripheral information.
  • when the loudness of the sound in the second area is smaller than the loudness of the sound in the first area, the control unit may determine the degree of attraction for the second peripheral information to be lower than the degree of attraction for the first peripheral information.
  • the sensor may include a first odor sensor that detects the odor intensity of the first area as the first peripheral information and a second odor sensor that detects the odor intensity of the second area as the second peripheral information.
  • when the odor intensity of the second area is weaker than that of the first area, the control unit may determine the degree of attraction for the second peripheral information to be lower than the degree of attraction for the first peripheral information.
  • the sensor may be a camera that acquires image information around the user as the surrounding information.
  • the image information may include positional relationship information between the first and second regions and the user, and the control unit may determine the degree of attraction using the positional relationship information.
  • the information processing device may be a head mounted display that can be mounted on the head of the user and is configured to present the first virtual object and the second virtual object in the field of view of the user while allowing the user to visually recognize the outside world.
  • an information processing method acquires, from the sensor, first peripheral information of a first area and second peripheral information of a second area in a real space, which are different from visual field information corresponding to the visual field of a user, determines the degree of attraction of each of the first peripheral information and the second peripheral information, and, based on the degree of attraction, determines the priority of a first process related to a first virtual object arranged in the first area and a second process related to a second virtual object arranged in the second area.
  • a program causes an information processing apparatus to execute a process including: acquiring, from the sensor, first peripheral information of a first area and second peripheral information of a second area in a real space, which are different from visual field information corresponding to the visual field of a user; determining the degree of attraction of each piece of peripheral information; and determining, based on the degree of attraction, the priority of a first process related to a first virtual object arranged in the first area and a second process related to a second virtual object arranged in the second area.
  • FIG. 1 is a block diagram of an information processing device according to first to fourth embodiments of the present technology.
  • FIGS. 2 to 4 are flowcharts each illustrating an example of processing regarding a virtual object in the information processing apparatuses according to the first to third embodiments.
  • FIG. 5 is a diagram showing the hardware configuration of the above-mentioned information processing device.
  • FIG. 6 is a schematic diagram for explaining the relationship between the areas around the user U and the priority processing points when the situation around the user U is not a situation that induces movement of the line of sight of the user U.
  • FIG. 7 is a schematic diagram for explaining the first embodiment, showing the relationship between the areas around the user U and the priority processing points when the situation around the user U is a situation that induces movement of the line of sight of the user U.
  • FIG. 9 is a schematic diagram for explaining the second embodiment, showing the relationship between the areas around the user U and the priority processing points when the situation around the user U is a situation that induces movement of the line of sight of the user U.
  • the information processing device is a head-mounted display (HMD), which is a display device mounted on the head of the user U, and is one of wearable computers.
  • the shape of the HMD is typically an eyeglasses type or a hat type.
  • the HMD of the present embodiment is a transmissive HMD that, for example, reproduces game content and superimposes and presents a virtual object (display image) corresponding to the content on the external world, which is a real space, to the user U.
  • the transmission type HMD includes an optical transmission type, a video transmission type, and the like.
  • the transmissive HMD has a display arranged in front of the user U when worn on the head of the user U.
  • the display includes an image display element and an optical element.
  • a display image displayed on the image display element is presented to the user U via an optical element such as a holographic optical element or a half mirror disposed in front of the user U.
  • the user U can see the state of the outside world through the optical element in a see-through manner, and a display image such as a virtual object displayed on the image display element is presented to the user U superimposed on the state of the external world, which is a real space.
  • the image includes a still image and a moving image.
  • in a video transmission type HMD, the user cannot directly see the external world when the HMD is worn; instead, an image in which the virtual object is superimposed on an external image captured by the camera is displayed on the display, so that the virtual object can be presented while the user U visually recognizes the external world.
  • the HMD according to the present embodiment predicts a region in the real space that is likely to enter the user's field of view based on the peripheral information of the user. Based on the result of the prediction, the processing content regarding the drawing of the virtual object arranged in the real space is changed. Specifically, the processing load for a virtual object placed in an area that is unlikely to enter the field of view of the user U is made lower than the processing load for a virtual object placed in an area that is likely to enter the field of view of the user U.
  • the peripheral information includes the peripheral situation information and the space information, but may be only the peripheral situation information.
  • the peripheral situation information is information on the peripheral situation of the user, for example, illuminance information, sound information, odor information, image information, and the like.
  • the spatial information is the orientation, position information, and the like of the user.
  • the real space around the user is divided into a plurality of regions, and the user's degree of attraction is determined for each region based on the peripheral information acquired for each region.
  • the degree of attraction indicates a possibility of entering the field of view of the user U.
  • the area that is highly likely to enter the user's field of view is an area that easily catches the eye of the user U, that is, an area with a high degree of attraction.
  • the processing loads are made different from each other so that the processing for virtual objects arranged in a region with a high degree of attraction is prioritized over the processing for virtual objects arranged in a region with a low degree of attraction.
  • efficient processing can be performed. The details will be described below.
  • FIG. 1 is a block diagram of a head mounted display (HMD) 1 as an information processing device.
  • the HMD 1 includes a control unit 10, an input unit 20, an output unit 30, a storage unit 46, and a support (not shown).
  • the support can be mounted on the head of the user U.
  • the support supports the displays 31R and 31L, which constitute the output unit 30, in front of the user at the time of wearing.
  • the support also supports each sensor, described later, constituting the input unit 20.
  • the shape of the support is not particularly limited, and may be, for example, a hat shape as a whole.
  • the input unit 20 is a sensor unit group including a plurality of sensor units.
  • the detection result detected by each sensor unit is input to the control unit 10.
  • the detection result detected by each sensor unit includes the surrounding situation information and the space information.
  • the peripheral situation information is information around the user U in the real space where the user U wearing the HMD 1 is located.
  • the surrounding situation information is illuminance information, sound information, odor information, image information, and the like in the real space. These pieces of information can be acquired from an illuminance sensor 22, a sound sensor 23, an odor sensor 24, and a camera 25 described later.
  • the space information includes image information captured by a camera 25 described later mounted on the HMD 1, acceleration information, angular velocity information, azimuth information, and the like of the HMD 1 detected by a nine-axis sensor 21 described later. It is possible to detect the position, orientation, movement, and posture (walking, running, stopping, etc.) of the user U from these spatial information.
  • the input unit 20 includes a nine-axis sensor 21, an illuminance sensor 22, a sound sensor 23, an odor sensor 24, and a camera 25. These sensors are mounted on the HMD1.
  • the 9-axis sensor 21 includes a 3-axis acceleration sensor, a 3-axis gyro sensor, and a 3-axis compass sensor.
  • the nine-axis sensor 21 can detect the acceleration, angular velocity, and orientation of the HMD 1 in three axes each, and can detect the position, orientation, movement, and posture (walking, running, stopping, and the like) of the user U.
  • the detection result detected by the 9-axis sensor 21 is output to the control unit 10 as spatial information.
  • the illuminance sensor 22 has a light receiving element, and converts the light incident on the light receiving element into a current to detect brightness (illuminance).
  • four illuminance sensors 22 are provided. The four illuminance sensors 22 detect illuminance in each area when the real space where the user U is located is divided into four areas.
  • FIG. 6 is a diagram illustrating a positional relationship between each area and the user U when the real space where the user U is located is divided into four areas.
  • FIG. 6 and FIGS. 7 to 11 described below correspond to views of the real space 60 or 70 where the user U moves while wearing the HMD 1 as viewed from above the user U.
  • the real space 60 is divided into four areas centered on the user U: a front area 60F on the front side of the user U, a rear area 60B on the back side, a right area 60R on the right side, and a left area 60L on the left side.
  • the four illuminance sensors 22 include an illuminance sensor 22F that detects illuminance in the front area 60F, an illuminance sensor 22R that detects illuminance in the right area 60R, an illuminance sensor 22L that detects illuminance in the left area 60L, and an illuminance sensor 22B that detects illuminance in the rear area 60B.
  • the detection results (illuminance information) detected by the illuminance sensors 22R, 22B, and 22L are the surrounding situation information of the respective areas, and are different from the visual field information corresponding to the field of view of the user U.
  • the detection results detected by the four illuminance sensors 22 are output to the control unit 10 as peripheral situation information.
  • the sound sensor 23 detects the loudness of the sound around the user U.
  • four sound sensors 23 are provided. Like the illuminance sensor 22, the four sound sensors 23 respectively detect the sound volume in each area when the real space where the user U is located is divided into four areas. Using the detection results of the four sound sensors 23, the direction in which the sound source as viewed from the user U exists can be specified.
  • the four sound sensors 23 include a sound sensor 23F that detects the loudness of sound in the front area 60F, a sound sensor 23R that detects the loudness in the right area 60R, a sound sensor 23L that detects the loudness in the left area 60L, and a sound sensor 23B that detects the loudness in the rear area 60B.
  • the detection results (sound information) detected by the sound sensors 23R, 23B, and 23L are the surrounding situation information of the respective areas, and are different from the visual field information corresponding to the field of view of the user U.
  • the odor sensor 24 detects the intensity of the odor around the user U.
  • four odor sensors 24 are provided. Using the detection results of the four odor sensors 24, the direction in which the source of the odor as viewed from the user U exists can be specified.
  • the four odor sensors 24 include an odor sensor 24F that detects the intensity of odor in the front area 60F, an odor sensor 24R that detects the odor in the right area 60R, an odor sensor 24L that detects the odor in the left area 60L, and an odor sensor 24B that detects the odor in the rear area 60B. The detection results detected by the four odor sensors 24 are output to the control unit 10 as surrounding situation information.
  • the detection results (odor information) detected by the odor sensors 24R, 24B, and 24L are the surrounding situation information of the respective areas, and are different from the visual field information corresponding to the field of view of the user U.
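  • The per-area detection results described above could be bundled as in the following sketch; the data structure and field names are hypothetical and are not defined in this publication.
```python
from dataclasses import dataclass

# Hypothetical container for the surrounding situation information of one area.
@dataclass
class AreaSituation:
    illuminance_lux: float   # from illuminance sensor 22F/22R/22B/22L
    sound_level_db: float    # from sound sensor 23F/23R/23B/23L
    odor_intensity: float    # from odor sensor 24F/24R/24B/24L

# One record per area around the user: front 60F, right 60R, rear 60B, left 60L.
situation = {
    "60F": AreaSituation(12_000.0, 55.0, 0.1),
    "60R": AreaSituation(65_000.0, 40.0, 0.0),
    "60B": AreaSituation(9_000.0, 35.0, 0.0),
    "60L": AreaSituation(11_000.0, 62.0, 0.4),
}
```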
  • as the odor sensor 24, for example, an oxide semiconductor sensor or a quartz crystal microbalance (QCM) sensor having a film with molecular selectivity formed on the oscillator surface can be used.
  • the camera 25 includes a right-eye camera 251 and a left-eye camera 252.
  • the right-eye camera 251 and the left-eye camera 252 capture an image of the front area 60F of the user U corresponding to the visual field area of the user U, and acquire a captured image as image information.
  • the right-eye camera 251 and the left-eye camera 252 are arranged at a predetermined interval in the horizontal direction on the front surface of the head mounted display 1.
  • the right-eye camera 251 captures a right-eye image
  • the left-eye camera 252 captures a left-eye image.
  • the right-eye image and the left-eye image captured by the right-eye camera 251 and the left-eye camera 252 include spatial information such as the position and orientation of the user U wearing the HMD 1.
  • the right-eye image and the left-eye image are output to the control unit 10 as spatial information.
  • the output unit 30 has a display 31R for the right eye and a display 31L for the left eye, and these displays are mounted on the HMD1.
  • the right-eye display 31R and the left-eye display 31L are arranged in front of the right and left eyes of the user U, respectively.
  • the right-eye display 31R (the left-eye display 31L) has a right-eye image display element 311R (a left-eye image display element 311L) and a right-eye optical element 312R (a left-eye optical element 312L).
  • hereinafter, the right-eye display 31R and the left-eye display 31L may be collectively referred to as the display 31, the right-eye image display element 311R and the left-eye image display element 311L as the image display element 311, and the right-eye optical element 312R and the left-eye optical element 312L as the optical element 312.
  • the image display element 311 includes an organic EL display element, a liquid crystal display element as a light modulation element, and the like.
  • the image display element 311 forms a display image such as a virtual object based on the image signal output from the control unit 10 and emits display light.
  • since the display light enters the eyes of the user U via the optical element 312, the virtual object can be presented to the user U in the viewing area of the user U.
  • the optical element 312 is a holographic optical element, a half mirror, or the like, and is arranged in front of the user U.
  • the optical element 312 is configured to diffract light emitted from the image display element 311 and guide the light to the left and right eyes of the user.
  • the optical element 312 is configured to transmit light from the outside world. Therefore, in the HMD 1, it is possible to present the display image formed by the image display element 311 to the user U by superimposing the display image on the light from the outside.
  • the control unit 10 includes a communication control unit 11, a surrounding situation information management unit 12, a spatial information acquisition unit 13, a spatial information management unit 14, a priority processing point determination unit 15, a drawing processing load determination unit 16, an output image generation unit 17, an output image control unit 18, and a drawing processing load management unit 19.
  • the communication control unit 11 communicates with the various sensors 21 to 25 and the display 31 mounted on the HMD 1 to transmit and receive various information. Specifically, the communication control unit 11 receives the detection results of the various sensors 21 to 25 as the surrounding situation information and the space information, and transmits an image signal or the like to the display 31.
  • the communication control unit 11 communicates with an HMD worn by another user or an external peripheral device, and transmits and receives various information.
  • the communication control unit 11 can acquire image information and the like captured by a camera mounted on the HMD worn by another user.
  • Image information captured by a camera mounted on another user's HMD includes positional relationship information between another user U wearing the HMD and a priority processing point determination target area described later.
  • the surrounding situation information management unit 12 obtains, via the communication control unit 11, the detection results acquired by the illuminance sensor 22, the sound sensor 23, and the odor sensor 24, as well as the image information (detection results) obtained from the HMD of another user, stores them in a surrounding situation database (not shown) in a time-series manner as surrounding situation information, and updates and manages the data so that it is always the latest surrounding situation information.
  • the space information acquisition unit 13 acquires, as space information, the detection results acquired by the 9-axis sensor 21 and the camera 25 acquired via the communication control unit 11.
  • the spatial information management unit 14 stores, as spatial information, the detection results detected by the 9-axis sensor 21 and the camera 25 and the position and orientation information of the camera 25 obtained based on those detection results in a spatial information database (not shown), and updates and manages the data so that it is always the latest spatial information.
  • the priority processing point determination unit 15 determines the priority processing points (degree of attraction) of each area based on the surrounding situation information managed by the surrounding situation information management unit 12 and the position and orientation information of the camera 25 managed by the spatial information management unit 14.
  • the priority processing score is obtained by converting into a score the priority of processing relating to the drawing of a virtual object (display image) currently arranged in the non-viewing area of the user U, and is the degree of attraction determined for the peripheral information.
  • An area that is likely to be viewed by the user U is assigned a high priority processing score as an area with a high degree of attraction. The higher the number of assigned priority processing points, the higher the priority of processing relating to drawing of a virtual object.
  • the priority processing points are obtained for each of a front area 60F, a rear area 60B, a right area 60R, and a left area 60L, which are obtained by dividing the 360-degree surroundings of the user U wearing the HMD 1 in the real space into four parts centered on the user U, as shown in FIG. 6.
  • the present invention is not limited to this.
  • the ranges of the front region 60F, the rear region 60B, the right region 60R, and the left region 60L can be determined in consideration of a general normal human visual field region.
  • the visual field of a normal person is said to be about 60 degrees on the nose side with one eye and about 90 to 100 degrees on the ear side, and the range that can be seen simultaneously by both eyes is about 120 degrees left and right.
  • for example, the angle θF defining the left-right range of the front area 60F, measured in the 360-degree area around the user U along the direction connecting the right eye and the left eye of the user U, may be set to 120 degrees, the angles θR and θL defining the ranges of the right area 60R and the left area 60L may each be set to 40 degrees, and the angle θB defining the range of the rear area 60B may be set to 160 degrees.
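  • Using those example angles (θF = 120 degrees, θR = θL = 40 degrees, θB = 160 degrees), the area containing a given direction can be looked up as in the following sketch; the azimuth convention (measured clockwise from the center of the front area 60F) is an assumption made for illustration.
```python
# Sketch: map a direction to the area 60F/60R/60B/60L, assuming the example angles
# theta_F = 120, theta_R = theta_L = 40, theta_B = 160 (they sum to 360 degrees).

def area_for_azimuth(azimuth_deg: float) -> str:
    """azimuth_deg is measured clockwise from the center of the front area 60F."""
    a = azimuth_deg % 360.0
    if a < 60.0 or a >= 300.0:
        return "60F"   # front: +/- 60 degrees
    if a < 100.0:
        return "60R"   # right: 40 degrees wide
    if a < 260.0:
        return "60B"   # rear: 160 degrees wide
    return "60L"       # left: 40 degrees wide

print(area_for_azimuth(45))    # 60F
print(area_for_azimuth(180))   # 60B
```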
  • An imaginary line bisecting the angle θF defining the range of the front area 60F is located at the center of the front of the user U.
  • in FIG. 6, the angle θF defining the range of the front area 60F and the angle θB defining the range of the rear area 60B are drawn as the same angle for ease of viewing, and do not match the numerical ranges given above.
  • FIG. 6 shows an example of the number of priority processing points for each area when the surrounding situation is not a situation that induces the movement of the user's line of sight.
  • the priority processing score P1 of the front area 60F, which is the visual field area of the user U, is the highest at 10 points compared with the other areas.
  • the priority processing points P2 and P4 of the right area 60R and the left area 60L are five points each.
  • the priority processing score P3 of the rear area 60B is three points.
  • this is because the peripheral information (second peripheral information) of the rear area 60B in the real space is less likely to catch the eye of the user U than the peripheral information (first peripheral information) of the right area 60R and the left area 60L, and therefore has a lower degree of attraction.
  • the priority processing score indicating the degree of attraction of the user U in each of the areas 60F, 60B, 60R, and 60L may differ from the case shown in FIG. 6.
  • the priority processing score is obtained based on the peripheral information in each of the areas 60F, 60B, 60R, and 60L.
  • the peripheral information includes the surrounding situation information and the spatial information; if the situation around the user U is a situation that induces movement of the line of sight of the user U, the priority processing points will differ from the points shown in FIG. 6.
  • the surrounding situation information includes at least one of illuminance information detected by the four illuminance sensors 22F, 22B, 22R, and 22L mounted on the HMD 1 so as to acquire the surrounding situation of each of the areas 60F, 60B, 60R, and 60L, sound information detected by the four sound sensors 23F, 23B, 23R, and 23L, odor information detected by the four odor sensors 24F, 24B, 24R, and 24L, and image information captured by the camera 25 mounted on the HMD 1 worn by the user U or by another user's HMD.
  • the drawing processing load determination unit 16 determines the processing load related to the drawing of the virtual object placed in each of the areas 60F, 60B, 60R, and 60L according to the priority processing points determined by the priority processing point determination unit 15.
  • for example, the drawing processing load determination unit 16 determines that processing X with a low processing load, such as moving the virtual object once every 100 msec, is performed.
  • the drawing processing load determination unit 16 determines to perform the processing Y with a medium processing load such as moving the virtual object once every 50 msec.
  • the drawing processing load determination unit 16 determines to perform the processing Z with a high processing load such as moving the virtual object once every 16 msec.
  • the processing load is not limited to this.
  • since the priority processing score of the front area 60F is 10, the drawing processing load determination unit 16 determines that the drawing processing of the virtual object arranged in the front area 60F is performed as processing Z. Since the priority processing points of the right area 60R and the left area 60L are 5, the drawing processing load determination unit 16 determines that the drawing processing of the virtual objects arranged in the right area 60R and the left area 60L is performed as processing Y. Since the priority processing score of the rear area 60B is 3, the drawing processing load determination unit 16 determines that the drawing processing of the virtual object arranged in the rear area 60B is performed as processing X.
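  • The mapping from priority processing score to processing X, Y, or Z in the example above might look like the following sketch; the score boundaries are assumptions, except that scores of 3 points or less select processing X as stated later in this embodiment.
```python
# Sketch of the drawing processing load determination (assumed score boundaries).

def drawing_interval_msec(priority_score: int) -> int:
    """Return the update interval of the virtual object for the given score."""
    if priority_score <= 3:
        return 100   # processing X: low load, move once every 100 msec
    if priority_score <= 7:
        return 50    # processing Y: medium load, move once every 50 msec
    return 16        # processing Z: high load, move once every 16 msec

# FIG. 6 example: front 60F = 10, right/left = 5, rear = 3.
for area, score in {"60F": 10, "60R": 5, "60L": 5, "60B": 3}.items():
    print(area, drawing_interval_msec(score), "msec")
```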
  • the load of the rendering processing of the virtual object arranged in the front area 60F is the highest.
  • the load of the rendering processing of the virtual objects arranged in the right area 60R and the left area 60L is medium.
  • the load of the rendering processing of the virtual object arranged in the rear area 60B is the lowest.
  • thereby, the processing load on the information processing device can be reduced compared with the case where drawing processing is executed with the same processing load for all the virtual objects arranged in the non-viewing area, and efficient processing can be performed. For example, if the HMD 1 is powered not by wire but by a battery, a reduction in processing load leads to an increase in battery life through a reduction in power consumption.
  • the processing Y on the virtual objects (first virtual objects) arranged in the right area 60R and the left area 60L (first area) corresponds to the first processing.
  • the processing X for the virtual object (second virtual object) arranged in the rear area 60B (second area) corresponds to the second processing.
  • the processing priority is changed by changing the processing content so that the processing load is different according to the priority processing score (attraction level).
  • the output image generation unit 17 changes the processing content of the CPU and the GPU based on the processing content determined by the drawing processing load determination unit 16, and generates a virtual object (display image).
  • if the processing content determined by the drawing processing load determination unit 16 is processing X, the output image generation unit 17 generates the virtual object so that it moves once every 100 msec. In the case of processing Y, the output image generation unit 17 generates the virtual object so that it moves once every 50 msec. In the case of processing Z, the output image generation unit 17 generates the virtual object so that it moves once every 16 msec.
  • the output image control unit 18 outputs the virtual object generated by the output image generation unit 17 as an image signal so that the virtual object can be displayed on the right-eye display 31R and the left-eye display 31L of the HMD 1.
  • the drawing processing load management unit 19 stores the processing content determined by the drawing processing load determination unit 16 in a time series in a drawing processing load database (not shown) in association with the surrounding situation information, the space information, and the number of priority processing points. Update and manage data so that it is always up-to-date.
  • the storage unit 46 stores a program for causing the HMD 1 as an information processing device to execute a series of information processing on the virtual object performed by the control unit 10.
  • FIG. 2 is a flowchart for explaining processing relating to a virtual object in the HMD 1.
  • the surrounding state information is acquired by the communication control unit 11 (S1).
  • the peripheral situation information is stored in the peripheral situation database.
  • the surrounding situation information is a detection value detected by the illuminance sensor 22, a detection value detected by the sound sensor 23, a detection value detected by the odor sensor 24, and the like, and at least one of them is acquired.
  • An information processing method in a case where image information captured by the camera 25 is acquired as the peripheral situation information will be described in a fourth embodiment described later.
  • the detection value detected by the 9-axis sensor 21 and the image information (captured image) captured by the camera 25 are acquired by the spatial information acquisition unit 13 via the communication control unit 11, and information on the position and orientation of the camera 25 is obtained based on these pieces of information (S2).
  • Spatial information such as a detection value detected by the 9-axis sensor 21, image information captured by the camera 25, and information on the position and orientation of the camera 25 obtained based on these information is stored in a spatial information database.
  • the priority processing point determination unit 15 obtains the priority processing score P for each of the front area 60F, the rear area 60B, the right area 60R, and the left area 60L of the user U based on the surrounding situation information and the position and orientation information of the camera 25 (S3).
  • the processing content is determined by the drawing processing load determination unit 16 based on the priority processing score obtained in S3, and the output image generation unit 17 executes generation processing of a virtual object (display image) based on the determined processing content (S4).
  • the output image control unit 18 converts the virtual object generated by the output image generation unit 17 into an image signal so that it can be displayed on the display 31, and outputs the image signal to the output unit 30 (S5).
  • the virtual object is displayed on the display 31 based on the image signal output from the control unit 10 and presented to the user U.
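  • Putting steps S1 to S5 together, one processing cycle might be organized as in the following sketch; all object and function names are placeholders for the units described above, not identifiers from this publication.
```python
# Sketch of one processing cycle (S1 to S5); sensors, renderer, and display are
# placeholder objects standing in for the units of the control unit 10 described above.

AREAS = ("60F", "60R", "60B", "60L")

def process_cycle(sensors, score_area, interval_for_score, renderer, display):
    situation = sensors.read_surrounding_situation()         # S1: illuminance, sound, odor
    pose = sensors.read_spatial_information()                # S2: 9-axis sensor + camera pose
    scores = {a: score_area(situation, pose, a) for a in AREAS}          # S3: priority scores
    images = {a: renderer.generate(a, interval_for_score(scores[a]))     # S4: draw with the
              for a in AREAS}                                            #     determined load
    display.output(images)                                   # S5: image signal to display 31
```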
  • a series of processes related to the drawing process described above may be performed at regular intervals, or may be performed each time the direction of the user U changes and the direction of the HMD 1 changes.
  • an example in which the series of processes related to the drawing process is executed at regular time intervals will be described with reference to FIG. 3.
  • an example in which the series of processes related to the drawing process is executed when the orientation of the HMD 1 changes will be described with reference to FIG. 4.
  • the communication control unit 11 acquires the surrounding situation information.
  • the acquired surrounding situation information is stored in the surrounding situation database.
  • the detection value detected by the 9-axis sensor 21 and the image information (captured image) captured by the camera 25 are acquired by the spatial information acquisition unit 13 via the communication control unit 11, and information on the position and orientation of the camera 25 is obtained based on these pieces of information (S13).
  • Spatial information such as a detection value detected by the 9-axis sensor 21, image information captured by the camera 25, and information on the position and orientation of the camera 25 obtained based on these information is stored in a spatial information database.
  • the priority processing point determination unit 15 obtains the priority processing score P for each of the front area 60F, the rear area 60B, the right area 60R, and the left area 60L of the user U based on the surrounding situation information and the position and orientation information of the camera 25 (S14).
  • the processing contents are determined by the drawing processing load determination unit 16 based on the priority processing points obtained in S14 (S15).
  • a process of generating a virtual object (display image) by the output image generation unit 17 is performed based on the determined processing content (S16).
  • when the process proceeds to S16 without the priority processing score being redetermined, the virtual object generation process is executed in S16 based on the processing content determined in the previous cycle.
  • the output image control unit 18 converts the virtual object generated by the output image generation unit 17 into an image signal so that it can be displayed on the display 31, and outputs the image signal to the output unit 30 (S17).
  • the virtual object is displayed on the display 31 based on the image signal output from the control unit 10 and presented to the user U.
  • the rotation angle of the HMD 1 is acquired from the detection value detected by the 9-axis sensor 21 by the spatial information acquisition unit 13 via the communication control unit 11 (S21).
  • the rotation amount of the HMD 1 is calculated by the control unit 10 from the rotation angle of the HMD 1 acquired in the previous drawing process and the rotation angle of the HMD 1 acquired in S21, and it is determined whether or not the rotation amount is equal to or larger than the threshold value (S22).
  • the threshold is set in advance.
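  • The orientation-triggered variant hinges on the rotation-amount check of S21 and S22; a minimal sketch follows, with the threshold value assumed since the publication only states that it is set in advance.
```python
# Sketch of the rotation-amount check (S21/S22); the threshold value is an assumption.

ROTATION_THRESHOLD_DEG = 15.0   # assumed; the publication only says it is preset

def should_redetermine_load(previous_angle_deg: float, current_angle_deg: float) -> bool:
    """True if the HMD has rotated enough to re-run the score and load determination."""
    rotation = abs(current_angle_deg - previous_angle_deg) % 360.0
    rotation = min(rotation, 360.0 - rotation)   # shortest angular distance
    return rotation >= ROTATION_THRESHOLD_DEG

print(should_redetermine_load(10.0, 40.0))   # True: 30 degrees >= threshold
print(should_redetermine_load(10.0, 15.0))   # False: 5 degrees < threshold
```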
  • the surrounding situation information is acquired by the communication control unit 11, and the surrounding situation information is stored in the surrounding situation database.
  • the detection value detected by the 9-axis sensor 21 and the image information (captured image) captured by the camera 25 are acquired by the spatial information acquisition unit 13 via the communication control unit 11, and information on the position and orientation of the camera 25 is obtained based on these pieces of information (S24).
  • Spatial information such as a detection value detected by the 9-axis sensor 21, image information captured by the camera 25, and information on the position and orientation of the camera 25 obtained based on these information is stored in a spatial information database.
  • the priority processing point determination unit 15 obtains the priority processing score P for each of the front area 60F, the rear area 60B, the right area 60R, and the left area 60L of the user U based on the surrounding situation information and the position and orientation information of the camera 25 (S25).
  • the content of the processing is determined by the drawing processing load determination unit 16 based on the priority processing score obtained in S25 (S26).
  • a process of generating a virtual object (display image) by the output image generation unit 17 is performed based on the determined processing content (S27). If the rotation amount is determined to be less than the threshold value in S22 and the process proceeds to S27, a virtual object generation process is performed in S27 based on the processing content determined in the previous process.
  • the output image control unit 18 converts the virtual object generated by the output image generation unit 17 into an image signal so that it can be displayed on the display 31, and outputs the image signal to the output unit 30 (S28).
  • the virtual object is displayed on the display 31 based on the image signal output from the control unit 10 and presented to the user U.
  • the control unit 10 controls the processing related to the virtual objects currently arranged in the non-visual area of the user U based on the surrounding situation information and the spatial information acquired from the various sensors of the input unit 20.
  • the control unit 10 predicts an area having a high possibility of entering the field of view of the user U using the surrounding situation information and the spatial information. Then, the control unit 10 performs control such that the rendering process for the virtual object in the predicted area that is likely to enter the field of view of the user U, that is, the area with a high degree of attraction, is performed with higher priority than the rendering process for other areas with a low degree of attraction.
  • FIG. 5 is a diagram for explaining a hardware configuration of the HMD 1.
  • the information processing in the HMD 1 as the information processing apparatus described above is realized by cooperation between software and hardware of the HMD 1 described below.
  • the HMD 1 includes a CPU (Central Processing Unit) 51, a RAM (Random Access Memory) 52, a ROM (Read Only Memory) 53, a GPU (Graphics Processing Unit) 54, a communication device 55, a sensor 56, an output device 57, a storage device 58, and an imaging device 59, which are connected via a bus 61.
  • the CPU 51 controls the overall operation of the HMD 1 according to various programs.
  • the ROM 53 stores programs used by the CPU 51, operation parameters, and the like.
  • the RAM 52 temporarily stores a program used in the execution of the CPU 51, a parameter appropriately changed in the execution, and the like.
  • the GPU 54 performs various processes related to generation of a display image (virtual object).
  • the communication device 55 is a communication interface configured with a communication device or the like for connecting to the communication network 62.
  • the communication device 55 may include a communication device compatible with a wireless LAN (Local Area Network), a communication device compatible with LTE (Long Term Evolution), a wired communication device that performs wired communication, or a Bluetooth (registered trademark) communication device.
  • the sensor 56 detects various data related to the surrounding situation information and the space information.
  • the sensor 56 corresponds to the nine-axis sensor 21, the illuminance sensor 22, the sound sensor 23, and the odor sensor 24 described with reference to FIG.
  • the output device 57 includes a display device such as a liquid crystal display device and an organic EL (Electroluminescence) display device. Further, the output device 57 includes a sound output device such as a speaker or headphones. The display device displays a captured image, a generated image, and the like. On the other hand, the sound output device converts a sound signal into sound and outputs the sound.
  • the output device 57 corresponds to the display 31 described with reference to FIG.
  • the storage device 58 is a device for storing data.
  • the storage device 58 may include a recording medium, a recording device that records data on the recording medium, a reading device that reads data from the recording medium, a deletion device that deletes data recorded on the recording medium, and the like.
  • the storage device 58 stores programs executed by the CPU 51 and the GPU 54 and various data.
  • the storage device 58 corresponds to the storage unit 46 described with reference to FIG.
  • the imaging device 59 includes an imaging optical system such as a photographing lens and a zoom lens that collects light, and a signal conversion element such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor).
  • the imaging optical system collects light emitted from the subject and forms a subject image on the signal conversion unit.
  • the signal conversion element converts the formed subject image into an electric image signal.
  • the imaging device 59 corresponds to the camera 25 described with reference to FIG.
  • a program constituting the software causes the information processing apparatus to execute a process including: a step of acquiring, from the sensor, first peripheral information and second peripheral information of a real space different from visual field information corresponding to the visual field of the user; a step of determining the degree of attraction of each piece of peripheral information; and a step of determining, based on the degree of attraction, the priority of a first process related to a first virtual object arranged in a first area of the real space corresponding to the first peripheral information and a second process related to a second virtual object arranged in a second area of the real space corresponding to the second peripheral information.
  • the detection result of the illuminance sensor 22 is used as the peripheral situation information when the priority processing point is determined.
  • an example using the detection result of the sound sensor 23 will be described.
  • an example using the detection result of the odor sensor 24 will be described.
  • an example in which a captured image (detection result) captured by the camera 25 is used will be described.
  • the illuminance sensor 22, the sound sensor 23, and the odor sensor 24 for acquiring the surrounding situation information are all mounted on the HMD 1, but the invention is not limited to this. At least one of an illuminance sensor 22, a sound sensor 23, an odor sensor 24, and a camera capable of acquiring image information may be used to acquire the surrounding situation information. Further, the sensor for acquiring the surrounding situation information may be an external device without being mounted on the HMD 1.
  • a detection result (illuminance information) acquired from the illuminance sensor 22 as surrounding situation information is used to determine a priority processing point (attraction level).
  • the control of the processing related to the virtual object based on the illuminance information as the peripheral situation information acquired by the illuminance sensor 22 can be applied to, for example, the HMD 1 that reproduces the game content.
  • the HMD 1 can present the user U with a virtual object (display image) corresponding to the content superimposed on the external world that is a real space.
  • the user U can enjoy the game while freely moving in the real space while wearing the HMD.
  • playing a game while moving in the real space it is assumed that the game is enjoyed outdoors or indoors.
  • when outdoors in the sunshine, behavior of the user U such as avoiding moving the line of sight toward the sun, where the illuminance is higher than the illuminance of the area corresponding to the current visual field, is predicted. Further, when the user U is currently facing the direction of the dazzling sunlight, it is predicted that the user U will move his or her gaze to an area with lower illuminance so as not to be dazzled.
  • when indoors, as in the case of outdoors, behavior of the user U such as avoiding moving the line of sight to an area irradiated with sunlight entering the room, which is predicted to be difficult to see, is predicted.
  • for light other than sunlight, the predicted action is the opposite of that for sunlight: it is predicted that the user U will move his or her line of sight toward the direction that emits brighter light.
  • the priority processing points are determined by taking into account the real space environment such as the outdoor or indoor environment described above in addition to the illuminance information as the peripheral situation information acquired from the illuminance sensor 22.
  • the environment of the real space is, for example, whether it is outdoors or indoors; if outdoors, whether the sun is out and it is sunny; and if indoors, whether there is a window through which sunlight enters, and so on.
  • the illuminance information as the surrounding situation information is information that arises incidentally, such as sunlight, an outdoor light, or an indoor light, and is different from information preset in the game content.
  • the details will be described.
  • FIG. 7 shows a determination example of the priority processing score for each area in a situation where the user U is located outdoors in the sunshine and the sun 65 is located in the right area 60R of the user U.
  • the situation shown in FIG. 7 is a situation in which the sunlight is dazzling and induces the movement of the line of sight of the user U.
  • the time of sunshine refers to a state in which the direct sunlight of the sun illuminates the ground surface to the extent that a shadow of an object is formed.
  • the illuminance sensor 22 mainly detects the illuminance in a real space where sunlight is irradiated. In addition to sunlight, it is assumed that artificial light such as an outdoor light is detected.
  • it is assumed that the user U does not dare to look in a direction that is too dazzling. On the other hand, an area that is not too dazzling for the user U is not particularly avoided and may be looked at in the normal course of action.
  • the attractiveness of the area where the illuminance is higher than the front area 60F corresponding to the current visual field of the user U in the non-visual area of the user U is determined as follows.
  • when the difference between the illuminance of the front area 60F corresponding to the current visual field of the user U detected by the illuminance sensor 22F and the illuminance of the non-visual area of the user U detected by the illuminance sensor 22R (22B, 22L) is equal to or larger than the threshold, the degree of attraction of the right area 60R (rear area 60B, left area 60L) is determined to be lower than when the illuminance difference is smaller than the threshold.
  • an area where the illuminance difference is significantly different from the front area 60F is an area that is too dazzling for the user U, and it is assumed that the user U is unlikely to see the area, and the priority processing score (attraction degree) is determined to be low.
  • conversely, an area whose illuminance differs little from that of the front area 60F is not too dazzling for the user U and is assumed to be likely to be looked at, so its priority processing score (degree of attraction) is determined to be higher than when the illuminance difference is large (equal to or more than the threshold).
  • in FIG. 7, the sun is located on the right side of the user U, the illuminance value detected by the illuminance sensor 22R that detects the illuminance of the right area 60R is the highest among the four illuminance sensors 22, and the difference (illuminance difference) between the illuminance value of the front area 60F and the illuminance value of the right area 60R is equal to or larger than the threshold value.
  • the user U is predicted to have the lowest possibility of directing his or her gaze toward the dazzling right area 60R where the sun is located, and thus the priority processing score P2 is one point.
  • the front area 60F, the left area 60L, and the rear area 60B are predicted to have lower illuminance than the right area 60R and a higher possibility of the user U turning his or her eyes toward them, and the respective priority processing scores P1, P4, and P3 are eight points each.
  • that is, the peripheral information (second peripheral information) of the right area 60R (second area) of the real space is less likely to catch the eye of the user U and has a lower degree of attraction than the peripheral information (first peripheral information) of the rear area 60B and the left area 60L (first areas).
  • accordingly, drawing of the virtual object (second virtual object) arranged in the right area 60R (second area) is performed based on processing X (second process), while drawing of the virtual objects (first virtual objects) arranged in the rear area 60B and the left area 60L (first areas) is performed based on processing Y (first process).
  • on the other hand, when the illuminance difference between the front area 60F and the right area 60R is less than the threshold value, a priority processing score different from that in the case where the difference is equal to or greater than the threshold is determined.
  • the user U may see an area where the illuminance difference of the non-viewing area is less than the threshold value as a normal flow of action without particular avoidance.
  • in this case, the priority processing points are determined as in the case where the surrounding situation does not induce movement of the line of sight of the user U, and the priority processing score P2 of the right area 60R is determined to be five points. This is a higher priority processing score than in the case where the illuminance difference is equal to or larger than the threshold value shown in FIG. 7.
  • in this example, the left area 60L and the rear area 60B have the same illuminance; if their illuminances differ, the priority processing score is determined according to the illuminance value. Then, according to the priority processing points determined for the left area 60L, the rear area 60B, and the right area 60R corresponding to the non-viewing area, the priority of the processing for the virtual object arranged in each of these areas is determined.
  • Examples of situations in which the illuminance difference is less than the threshold include the case of evening, the case where the outdoor light is turned on outdoors in the sunshine, and the case where the user U is in the shadow of a building.
  • in these situations, the priority processing score is determined as in the case where the surrounding situation does not induce movement of the line of sight of the user U, as shown in FIG. 6.
• For example, the illuminance of sunlight at about 10:00 a.m. in fine weather is about 65,000 lux, whereas the illuminance of sunlight one hour before sunset in fine weather is about 1,000 lux; in the evening, therefore, the illuminance difference tends to be smaller than in the daytime.
• In the evening, the glare of the sun is felt less than in the daytime, and the user U may look in the direction in which the sun is located without particularly avoiding the area of high illuminance.
• In such a situation, the illuminance detected in the direction in which the outside light is located is assumed not to differ much from the illuminance of the situation in which the user U is currently placed, that is, from the illuminance of the front area 60F corresponding to the field of view of the user U. Such a situation is not considered to be a situation that induces the user U to move his or her gaze.
• In this case, the degree of attraction of each non-viewing area of the user U is determined as follows, using the illuminance of the front region 60F corresponding to the current field of view of the user U detected by the illuminance sensor 22F and the illuminances of the non-viewing regions of the user U detected by the illuminance sensors 22R (22B, 22L).
• The degree of attraction of, for example, the right region 60R is determined to be lower when the illuminance difference is less than the threshold than when it is equal to or greater than the threshold.
• An area whose illuminance difference from the front area 60F is large is, in this situation, not an area that is too dazzling for the user U, and it is assumed that the user U is highly likely to look at it; its priority processing score (degree of attraction) is therefore determined to be high.
• On the other hand, an area whose illuminance difference from the front area 60F is small does not look very different from the front area 60F to the user U, and it is assumed that the user U may look at it merely in the normal flow of action; its priority processing score is therefore determined to be lower than when the illuminance difference is large (equal to or greater than the threshold).
• In the case of indoors, the user U is very unlikely to look at a real space area irradiated with sunlight, so the priority processing score for such an area is determined to be low so that the priority, and hence the processing load, becomes the lowest.
• That is, the priority processing score (degree of attraction) for the real space area irradiated with sunlight is determined so that the processing related to the virtual objects arranged in that area has the lowest priority, in other words, the lowest processing load. In the example of this embodiment, the processing load is divided into three stages and the process X is the process with the lowest processing load; therefore, indoors, the priority processing score for the real space area irradiated with sunlight is determined to be three points or less so that the process X is selected.
• When the illuminance detected by the illuminance sensor 22 is not caused by sunlight, the light detected indoors is recognized by the user U as artificially generated light such as interior lighting or a spotlight.
• If such illuminance does not differ much from the illuminance of the front region 60F corresponding to the current field of view of the user U, the user U is unlikely to notice the light. In such a case, even in a region where a high illuminance value is detected, the user U does not pay particular attention to the region and may look at it only in the normal flow of action.
• In this case, the priority processing score (degree of attraction) is determined as follows.
• Using the illuminance of the front region 60F corresponding to the current field of view of the user U detected by the illuminance sensor 22F and the illuminances of the non-viewing regions of the user U (the right region 60R, the rear region 60B, and the left region 60L) detected by the illuminance sensors 22R, 22B, and 22L, the degree of attraction of, for example, the right region 60R is determined to be lower when the illuminance difference is less than the threshold than when it is equal to or greater than the threshold.
• That is, when the illuminance difference is less than the threshold, the priority processing score (degree of attraction) is determined to be low, and when the illuminance difference is equal to or greater than the threshold, the priority processing score (degree of attraction) is determined to be higher.
• Specifically, when the illuminance difference is less than the threshold, the user U may look at the real space area irradiated with light other than sunlight merely in the normal flow of action; the priority processing score is therefore set relatively high, but lower than in the case where the illuminance difference is equal to or greater than the threshold.
• A real space area irradiated with light other than sunlight whose illuminance difference is equal to or larger than the threshold is determined as an area that the user U is considerably likely to look at, and its priority processing score is set considerably high.
• An area whose illuminance difference is less than the threshold is not an area that induces the gaze movement of the user U, but since it is likely to be viewed in the normal flow of action, a medium priority processing score is determined for it.
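• A minimal sketch of the indoor scoring rules described above is shown below; the concrete score values are illustrative assumptions that respect only the ordering stated in the text (sunlit area lowest, strongly lit area highest, weakly differing light in between):

```python
# Hypothetical sketch of the indoor scoring described above. Only the ordering of the
# scores is taken from the text; the specific values and the function name are assumptions.

def indoor_region_score(front_lux, region_lux, caused_by_sunlight, threshold):
    """Score one non-viewing region of an indoor environment."""
    diff = region_lux - front_lux
    if caused_by_sunlight:
        return 3      # 3 points or less, so that process X (lowest drawing load) is selected
    if diff >= threshold:
        return 9      # locally lit by a spotlight etc.: the user is very likely to look at it
    return 5          # not gaze-inducing, but may be seen in the normal flow of action
```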
  • As an example of a situation where the illuminance difference is equal to or larger than the threshold value, there is a case where a bright indoor is locally illuminated with high illuminance light such as a spotlight.
• An area irradiated with such high-illuminance light easily catches the eye of the user U even when the whole room is bright, and is therefore an area with a high degree of attraction.
• Another example of a situation where the illuminance difference is equal to or larger than the threshold is a case where a dark room is locally illuminated with a spotlight, brighter illumination light, a downlight, or the like. Even if such light is dim, the area irradiated with it easily catches the eye of the user U in a dark room.
  • Examples of a situation where the illuminance difference is less than the threshold include a case where a bright indoor is locally illuminated with ordinary room lighting or downlight, or a shadow area of an object such as a table.
  • the area to which such light is radiated and the shadow area are not so different from the average brightness of the real space, and therefore do not particularly attract the user U's eyes.
• Another example of a situation where the illuminance difference is less than the threshold is a dark area inside a dark room where there is no particular light. Such an area is not so different from the average brightness of the real space, and therefore does not particularly attract the user U's eyes.
• Here, a bright room refers to, for example, a room with an illuminance of 30 lux or more, and a dark room refers to, for example, a room with an illuminance of less than 30 lux.
  • Whether or not the illuminance detected by the illuminance sensor 22 is due to sunlight can be determined using the position information of the user U, date and time / weather information, indoor information acquired in advance, and the like.
  • the position information of the user U can be acquired by a GPS (Global Positioning System) mounted on the HMD 1 or the like.
• The date/time/weather information is, for example, the solar altitude (elevation angle) and azimuth angle for each location and date/time, and weather information such as fine or rainy weather, which can be acquired by the HMD 1 communicating with an application server that provides date/time/weather information on an external network.
  • the indoor information includes window position information such as the presence / absence of an indoor window, the direction in which the window is located indoors, and the position of the window with respect to the wall.
  • Whether the user U is indoor or outdoors can be detected from the position information of the user U. Further, when the user U is indoors, whether or not the light detected by the illuminance sensor is caused by sunlight can be determined based on the window presence / absence information, which is indoor information. When there is no window, it is determined that the detected light is not light caused by sunlight, and the above-described processing when there is no sun can be performed. On the other hand, when there is a window, the irradiation position of the sunlight entering the room through the window can be obtained from the position of the sun from the date / time / weather information and the position of the window from the indoor information. It can be determined whether or not the detected light is caused by sunlight.
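• The window-and-sun check described above could be sketched roughly as follows; the structure of indoor_info, the way the solar position is passed in, and the angular tolerances are assumptions for illustration, not details given in the embodiment:

```python
import math

# Hypothetical sketch of deciding whether high illuminance detected indoors is direct
# sunlight entering through a window, using position, date/time/weather, and indoor
# information. The data layout and tolerances below are illustrative assumptions.

def is_detected_light_sunlight(sun_azimuth, sun_elevation, weather, indoor_info, lit_direction):
    """sun_azimuth / sun_elevation (rad): solar position for the user's location and date/time,
    obtained from a date/time/weather application server; lit_direction (rad): horizontal
    direction of the region in which the high illuminance was detected."""
    if not indoor_info["has_window"]:
        return False                       # no window: the detected light cannot be direct sunlight
    if weather in ("cloudy", "rainy") or sun_elevation <= 0:
        return False                       # no direct sunlight expected
    window_dir = indoor_info["window_direction"]

    def angle_diff(a, b):
        return abs(math.atan2(math.sin(a - b), math.cos(a - b)))

    # Direct sunlight can only enter from roughly the direction of the window, and only
    # when the sun is on the window side of the building.
    return (angle_diff(lit_direction, window_dir) < math.radians(30)
            and angle_diff(sun_azimuth, window_dir) < math.radians(90))
```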
• In the above, the position information of the user U and the date/time/weather information are used to determine whether the light detected indoors is due to sunlight, but the present technology is not limited to this.
• The position information of the user U and the date/time/weather information can also be used in other ways.
• For example, the weather at the place where the user U is currently located can be grasped from the date/time/weather information and the position information of the user U, and by using such information the power consumption can be reduced.
• In the above, the case where the user U plays a game while moving indoors or outdoors has been described as an example.
• Depending on the content of the game, the user U may move from indoors to outdoors and from outdoors to indoors.
• For example, when the user U goes from indoors to outdoors in the flow of the game, the user U may move while directing his or her gaze toward a dazzling area irradiated with sunlight. Since the action of the user U going from indoors to outdoors can be assumed in advance according to the content of the game, the priority of the virtual object drawing process may be determined by taking into consideration, in addition to the above-described surrounding situation information, the user's action predicted according to the content.
  • the threshold value of the illuminance difference is set in advance for each different environment such as outdoors or indoors. Further, the threshold is appropriately set for each brightness of the environment.
• Furthermore, data linking the average brightness of the environment in which the user U is placed, the illuminance values detected by the respective illuminance sensors, the illuminance differences between the front region 60F and each of the non-viewing regions, and the behavior pattern of the user U, such as whether or not the user U looked at a region with high illuminance, may be accumulated as needed. Then, based on these accumulated data, the relationship between the average brightness of the environment in which the user U is placed, the behavior pattern of the user U, and the threshold value may be obtained statistically, and a more appropriate threshold value may be set.
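• A minimal sketch of deriving such a threshold statistically from accumulated behavior data is shown below; the record format and the median-based rule are assumptions, since the text only states that the relationship may be established statistically:

```python
import statistics

# Hypothetical sketch of tuning the illuminance-difference threshold from accumulated
# behaviour data. The record layout and the median rule are illustrative assumptions.

def tune_threshold(records, default=10000.0):
    """records: list of (illuminance_difference, user_looked) tuples logged at run time."""
    avoided = [diff for diff, looked in records if not looked]
    looked_at = [diff for diff, looked in records if looked]
    if not avoided or not looked_at:
        return default
    # Place the threshold between the typical difference at which the user still looked
    # and the typical difference at which the user avoided the region.
    return (statistics.median(looked_at) + statistics.median(avoided)) / 2
```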
• As described above, in the present embodiment, the surrounding situation information is obtained by the illuminance sensors, the behavior of the user U is predicted using the surrounding situation information, and the degree of attraction is determined for each area. Based on the degree of attraction, the priority of the processing regarding the virtual objects arranged in each of the non-viewing areas of the user U is determined. Then, based on the determined priority, the processing is executed so as to reduce the drawing processing load of the virtual objects arranged in areas with a low degree of attraction for the user U, so that efficient processing can be performed.
  • FIG. 8 shows an example of determining the priority processing score for each area in a situation where the explosion sound 66 has occurred in the right area 60R of the user U.
  • the situation illustrated in FIG. 8 is a situation that induces the movement of the line of sight of the user U.
  • the priority processing score of each of the regions 60F, 60B, 60R, and 60L is obtained based on the sound volume that is the sound information detected by each of the sound sensors 23F, 23R, 23L, and 23B.
  • the priority processing point is determined according to the volume so that the priority processing point of the region where the detected volume is the largest is higher.
  • the four sound sensors 23F, 23R, 23L, 23B detect the volume of sounds around the user U.
  • the volume value detected by the sound sensor 23R that detects the sound information in the right side region 60R is the largest.
• In this case, it is predicted that the gaze of the user U moves from the state in which the user U faces the front region 60F toward the right region 60R, where the sound source is located; that is, the gaze direction moves from the front area 60F to the right area 60R.
• The area over which the line of sight of the user U is assumed to move in this way is an area that the user U is likely to look at. Therefore, in the example shown in FIG. 8, the priority processing scores P1 and P2 of the front area 60F and the right area 60R are determined to be ten points, higher than those of the other areas.
• The left area 60L, which is located on the opposite side of the sound source, is predicted to be the least likely to enter the field of view of the user U, and its priority processing score P4 is determined to be three points.
  • the rear area 60B is an area adjacent to the right area 60R, and is more likely to enter the field of view of the user U than the left area 60L. Therefore, the priority processing score P3 is determined to be four.
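• The sound-based scoring of this example could be sketched as follows; the helper names are assumptions, while the score values follow the example above (front and source region ten points, the adjacent rear region four points, the opposite left region three points):

```python
# Hypothetical sketch of the sound-based scoring in FIG. 8. The points follow the example
# in the text; the function names and the region ordering are assumptions for illustration.

REGIONS = ("F", "R", "B", "L")

def sound_priority_scores(volumes):
    """volumes: dict of loudness per region from the sound sensors 23F/23R/23B/23L."""
    loudest = max(volumes, key=volumes.get)      # region containing the sound source
    scores = {}
    for region in REGIONS:
        if region == loudest or region == "F":
            scores[region] = 10                  # front and source region: gaze expected to sweep here
        elif _adjacent(region, loudest):
            scores[region] = 4                   # next to the source: may enter the view while turning
        else:
            scores[region] = 3                   # opposite side: least likely to be seen
    return scores

def _adjacent(a, b):
    order = ["F", "R", "B", "L"]                 # clockwise order of the four regions
    return (order.index(a) - order.index(b)) % 4 in (1, 3)

# Example corresponding to FIG. 8 (explosion sound 66 on the right):
# sound_priority_scores({"F": 40, "R": 95, "B": 42, "L": 38}) -> {"F": 10, "R": 10, "B": 4, "L": 3}
```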
• In this case, the peripheral information (second peripheral information) of the rear area 60B and the left area 60L (second areas) of the real space has a lower degree of attraction than the peripheral information (first peripheral information) of the right area 60R (first area).
• Based on the determined priority processing scores, drawing of the virtual objects arranged in the right area 60R and the front area 60F is performed based on the process Z, drawing of the virtual object arranged in the left area 60L is performed based on the process X, and drawing of the virtual object arranged in the rear area 60B is performed based on the process Y.
• In this example, the process Z relating to the virtual object (first virtual object) arranged in the right region 60R (first region) corresponds to the first process, while the process X relating to the virtual object (second virtual object) arranged in the left region 60L (second region) and the process Y relating to the virtual object (second virtual object) arranged in the rear area 60B (second area) correspond to the second process.
• Likewise, when the rear area 60B is regarded as the first area, the process Y for the virtual object (first virtual object) arranged in the rear area 60B corresponds to the first process, and the process X relating to the virtual object (second virtual object) arranged in the left area 60L (second area) corresponds to the second process.
• Note that the sound information used here as the surrounding situation information refers to environmental sounds that occur incidentally, such as a nearby human voice, music playing in the surroundings, construction noise, and an explosion sound; sounds that are generated in advance are excluded.
• In addition, an environmental sound to which a person does not pay attention even when it occurs, such as the sound of a crowd, may exceptionally be removed from the sound information acquired by the sound sensor 23 by appropriately canceling it as noise.
  • the control of the process regarding the virtual object using the sound information acquired from the sound sensor 23 can be applied to, for example, an HMD that reproduces the game content.
  • the user U can enjoy the game while wearing the HMD and freely moving in the real space.
  • environmental sounds other than sounds generated in advance in the game content may be generated in the real space.
• The direction in which the sound source is located can be specified by installing sound sensors on the HMD and detecting the sound information of the surrounding situation. Since the user U is assumed to turn toward the direction of the sound source, the HMD 1 performs the process of drawing the virtual objects arranged in the areas from the region in front of the user U at the time the sound occurs to the region in which the sound source is located with higher priority than the rendering process of the virtual objects arranged in the other areas.
  • the user U can experience a realistic game. Further, in the HMD 1, since the processing is executed while reducing the drawing processing load of the virtual object in the area where the degree of attraction of the user U is low, efficient processing is possible.
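• The idea of prioritizing the areas that the gaze is expected to sweep across, from the region in front of the user at the moment the sound occurs to the region of the sound source, could be sketched as follows (the region ordering and the two-level priority labels are assumptions for illustration):

```python
# Hypothetical sketch of marking the regions on the expected gaze path (from the user's
# current front region to the region of the sound source) for prioritized drawing.

def regions_on_gaze_path(front_region, source_region):
    order = ["F", "R", "B", "L"]                       # clockwise order of the four regions
    i, j = order.index(front_region), order.index(source_region)
    steps = (j - i) % 4
    if steps > 2:                                      # turning the other way is shorter
        return [order[(i - k) % 4] for k in range(((i - j) % 4) + 1)]
    return [order[(i + k) % 4] for k in range(steps + 1)]

def draw_priority(front_region, source_region):
    path = regions_on_gaze_path(front_region, source_region)
    return {r: ("high" if r in path else "low") for r in ["F", "R", "B", "L"]}

# e.g. draw_priority("F", "R") -> {"F": "high", "R": "high", "B": "low", "L": "low"}
```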
  • FIG. 9 shows an example of determining the priority processing score for each area in a situation where the source of the smell 67 is in the right area 60R of the user U.
  • the situation shown in FIG. 9 is a situation that induces the movement of the line of sight of the user U.
  • the priority processing points of the areas 60F, 60B, 60R, and 60L are obtained based on the odor intensities detected by the odor sensors 24F, 24R, 24L, and 24B, respectively.
  • the priority processing point is determined such that the priority processing point (attraction degree) of the region having the highest detected odor intensity is high.
  • the four odor sensors 24F, 24R, 24L, 24B detect the odor intensity around the user U.
  • the value of the odor intensity detected by the odor sensor 24R that detects the odor intensity in the right region 60R is the largest.
• In this case, it is predicted that the line of sight of the user U moves from the front area 60F, which the user U faces when the odor is detected, toward the right area 60R. Therefore, in the example shown in FIG. 9, the priority processing scores P1 and P2 of the front area 60F and the right area 60R are determined to be ten points, higher than those of the other areas.
• The left area 60L, which is located on the opposite side of the odor source, is predicted to be less likely to be looked at by the user U, and its priority processing score P4 is determined to be four points.
• The rear area 60B is an area adjacent to the right area 60R and is more likely to enter the field of view of the user U than the left area 60L; therefore, its priority processing score P3 is determined to be five points.
• In this case as well, the peripheral information (second peripheral information) of the rear area 60B and the left area 60L (second areas) of the real space has a lower degree of attraction than the peripheral information (first peripheral information) of the right area 60R (first area).
  • the control of the processing related to the virtual object based on the odor information acquired by the odor sensor 24 can be applied to, for example, an HMD for a simulated disaster experience.
  • the user U wearing the HMD can have a simulated experience of a fire by presenting a virtual object (display image) such as fire or smoke in a superimposed manner on the external world that is a real space.
  • an odor is generated in the real space, and the direction in which the source of the odor is located is specified based on the detection result of the odor sensor mounted on the HMD.
• As the odor generated in the real space, a gas that is harmless to the body and detectable by the odor sensor can be used.
• Then, the rendering process of the virtual objects arranged in the regions from the front region 60F that the user U faces at the time the odor is detected to the region in which the odor source is located is performed with higher priority than the rendering process of the virtual objects arranged in the other areas.
  • image information detected by the camera 25 mounted on the HMD 1 is acquired as peripheral situation information, and the priority processing score is determined using the image information.
• The HMD worn by each of the users U1 to U3 is the HMD 1 having the structure shown in FIG. 1 described above.
  • the HMDs 1 worn by the users U1 to U3 are configured to be able to communicate with each other, and can transmit / receive various information including image information acquired by each HMD to / from another HMD.
  • FIG. 10 is a schematic diagram for explaining the relationship between the area around the user U1 and the priority processing score when the surrounding situation of the user U1 is not a surrounding situation that induces the movement of the user U1's gaze.
  • FIG. 11 is a schematic diagram for explaining the determination of the priority processing score using the peripheral situation information.
  • FIG. 12 is a flowchart illustrating an example of a process related to a virtual object in the information processing apparatus according to the present embodiment.
  • the real space 70 where the user U actually moves in the game content is divided into a plurality of areas A1 to A16, and the user U1 is located in the area A11.
• The priority processing score P10 is highest in the area A10 located in front of the user U1 and is determined to be ten points. In the areas A1, A2, A5, A6, A9, A10, A13, and A14 in front of the user U1, the priority processing scores are determined such that the farther the area is from the user U1, the lower the score.
• The areas A7 and A15 immediately to the right and left of the user U1 have lower priority processing scores than the front area A10, and the priority processing scores P7 and P15 are determined to be six points. Further, in the rightward and leftward directions of the user U1, the priority processing scores are determined so as to become lower as the distance from the user U1 increases; for example, the priority processing score P3 of the area A3 is determined to be three points.
• The area A12 immediately behind the user U1 is determined to be the area that the user U1 is least likely to look at, and the priority processing score P12 is determined to be zero. In the other areas A4, A8, and A16 behind the user U1, the priority processing scores are also determined to be relatively low, since the possibility that the user U1 looks at them is low.
  • the peripheral situation information used when determining the priority processing point is positional relationship information between each of the users U1 to U3 and the area to be subjected to the priority processing point determination.
  • the positional relationship information between the user U and the priority processing point determination target area includes the line-of-sight information of the user U and distance information between the user U and the priority processing point determination target area.
  • the line-of-sight information of the user U is information on the direction of the user U with respect to the priority processing point determination target area.
  • the line-of-sight information and distance information of each of the users U1 to U3 are detected based on images (image information) captured by the camera 25 mounted on the HMD 1 worn by each of the users U1 to U3.
  • captured images (image information) acquired from the cameras 25 of the HMDs 1 of the other users U2 and U3 are image information in which a non-visual area of the user U1 is captured, and are in the visual field of the user U1. This is peripheral information different from the corresponding visual field information.
  • the image captured by the camera 25 mounted on the HMD worn by the user U1 is visual field information corresponding to the visual field of the user U1.
  • the user U1 is located in the area A11
  • the user U2 is located in the area A4
  • the user U3 is located in the area A2.
  • the virtual object A is an object arranged in the area A7
  • the virtual object B is an object arranged in the area A15.
  • the users U2 and U3 pay attention to the virtual object A.
  • the virtual object A and the virtual object B are objects located around the user U1 and arranged in the non-viewing area of the user U1.
  • the positional relationship information is information on the user's orientation (the line-of-sight information of the user U) with respect to the priority processing point determination target area and distance information between the user U and the priority processing point determination target area.
• Specifically, a higher priority processing score is determined for an area at which more users are looking and for an area that is closer to the users.
  • the virtual object A around the user U1 is located in the area A7 where the users U2 and U3 other than the user U1 are focused, and is closer to the users U2 and U3 than the virtual object B. In position.
  • the virtual object B around the user U1 is not noticed by any user, and is located in an area A15 farther from the positions of the users U2 and U3 than the virtual object A.
  • the area A7 in which the virtual object A is arranged has a higher priority processing score than the area A15 in which the virtual object B is arranged.
  • the determination of the priority processing score will be described in detail with reference to the flowchart of FIG.
• First, information of the images captured by the cameras 25 mounted on the HMDs 1 worn by the users U1 to U3 is obtained (S31). From this image information, the distance information between each user U and the priority processing point determination target area and the direction information (line-of-sight information) of each user U with respect to the priority processing point determination target area are obtained for each of the users U1 to U3.
  • the priority processing point determination unit 15 determines the priority processing point (attraction degree) for each peripheral area of the user U1 from the image information based on the above-described distance information and line-of-sight information (S32).
  • the priority processing points of the priority processing point determination target area are calculated for each user U, and the total of the priority processing points calculated for each user is set as the priority processing point of the priority processing point determination target area.
• The determination of the priority processing scores of the area A7, in which the virtual object A is arranged, and the area A15, in which the virtual object B is arranged, in FIG. 11 will be specifically described as an example.
• For each of these target areas, the priority processing score is determined as follows.
• The priority processing score of the area A7 (A15) in which the virtual object A (virtual object B) is arranged in FIG. 11 is obtained as the sum of the priority processing scores obtained for each of the users U1 to U3.
• The priority processing score contributed by each user is obtained by the following equation.
• Here, P is the priority processing score, l (m) is the distance between the camera 25 mounted on the HMD 1 worn by each user U and the priority processing point determination target area, θ (rad) is the angle, in the horizontal plane, at which the priority processing point determination target area is located relative to the front of each user U, and E is a priority processing point coefficient.
  • the priority processing point coefficient E is a coefficient based on the user U1. Since the priority processing coefficient E is determined based on the positional relationship between the user U and the priority processing point determination target area, the priority processing coefficient differs for each user U even if the priority processing point determination target area is the same.
• In FIG. 11, the distance l between the user U1 and the area A7 is expressed as 1, and the distance between each of the users U2 and U3 and the area A7 is expressed as the square root of 2.
• The angle between the user U1 and the area A7 can be represented as π/2 radians (about 1.57 radians), and the angle between each of the users U2 and U3 and the area A7 can be represented as 0 radians.
• Similarly, the distance l between the user U1 and the area A15 is 1, and the distance between each of the users U2 and U3 and the area A15 is represented as the square root of 10.
• The angle between the user U1 and the area A15 can be represented as π/2 radians (about 1.57 radians), and the angle between each of the users U2 and U3 and the area A15 can be represented as about 0.46 radians.
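• The per-user scoring equation itself is not reproduced in this text, so the sketch below uses a placeholder function that merely reflects the qualitative behavior described (closer target areas and areas nearer a user's front contribute more, scaled by the coefficient E); it is not the equation of the embodiment:

```python
import math

# Hypothetical sketch of summing per-user contributions into the priority processing score
# of a target area. `user_score` is a placeholder with the qualitative behaviour described
# in the text, not the actual formula; the coefficient values below are also placeholders.

def user_score(distance_m, angle_rad, coefficient):
    # Larger when the area is close (small l) and near the user's facing direction (small theta).
    return coefficient / (distance_m * (1.0 + angle_rad))

def area_priority_score(observations):
    """observations: list of (distance_m, angle_rad, coefficient) tuples, one per user U1..U3."""
    return sum(user_score(l, theta, e) for l, theta, e in observations)

# Geometry from the example in FIG. 11 (coefficients are placeholders):
score_a7 = area_priority_score([(1.0, math.pi / 2, 1.0),
                                (math.sqrt(2), 0.0, 1.0),
                                (math.sqrt(2), 0.0, 1.0)])
score_a15 = area_priority_score([(1.0, math.pi / 2, 1.0),
                                 (math.sqrt(10), 0.46, 1.0),
                                 (math.sqrt(10), 0.46, 1.0)])
```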
• Next, based on the determined priority processing scores, the rendering processing load determination unit 16 determines the load of the processing related to the rendering of the virtual objects placed in the peripheral areas of the user U1, and the output image generation unit 17 executes the virtual object generation process based on the determined processing content (S33).
• In the present embodiment, the threshold value Pt of the priority processing score for determining the processing load is set to 2.0.
• The rendering processing load determination unit 16 determines that the process X, which has a comparatively low load, such as moving the virtual object once every 100 msec, is performed for a virtual object placed in an area where the priority processing score determined by the priority processing score determination unit 15 is less than the threshold.
• On the other hand, the rendering processing load determination unit 16 determines that the process Y, which has a comparatively high load, such as moving the virtual object once every 16 msec, is performed for a virtual object placed in an area where the priority processing score determined by the priority processing score determination unit 15 is equal to or greater than the threshold.
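• A minimal sketch of this two-stage selection, using the threshold Pt = 2.0 and the 100 msec / 16 msec update intervals given above (the function name is an assumption):

```python
# Hypothetical sketch of the two-stage load selection described above.

PT = 2.0  # priority processing score threshold

def drawing_update_interval_ms(priority_score):
    if priority_score >= PT:
        return 16    # process Y: update (e.g., move) the virtual object every 16 msec
    return 100       # process X: update the virtual object only every 100 msec
```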
• Here, the case where the processing load is divided into two stages has been described, but the present technology is not limited to this.
• In the example of FIG. 11, the priority processing score PA of the area A7 in which the virtual object A is arranged is equal to or greater than the threshold, so the drawing processing load determination unit 16 determines that the process Y is performed.
• On the other hand, the priority processing score PB of the area A15 in which the virtual object B is arranged is 1.76, which is less than the threshold, so it is determined that the process X is performed.
  • the output image generation unit 17 executes a virtual object generation process based on the processing content determined by the drawing processing load determination unit 16.
  • the virtual object generated by the output image generation unit 17 is converted into an image signal so that it can be displayed on the display 31 by the output image control unit 18 and output to the output unit 30 (S34).
• As a result, the processing load related to the drawing process of the virtual object A, which is likely to be noticed by the user U1, is increased, while the processing load related to the drawing process of the virtual object B, which is unlikely to be noticed by the user U1, is reduced, so that efficient processing can be performed.
• In the above, the case where a captured image of a camera mounted on the HMD is used has been described, but a captured image captured by an external camera, which is an external device disposed in the real space and not mounted on the HMD, may be used.
• The external camera may be fixed or may be configured to be movable, as long as it can acquire image information containing the positional relationship information between the user U and the priority processing point determination target area.
• In this case as well, the positional relationship information between the user U and the priority processing point determination target area is obtained from the image information, and the processing related to the drawing of the virtual objects arranged in the non-viewing areas of the user U1 may be controlled based on this information.
• As described above, in each of the embodiments, the priority of the processing regarding the virtual objects arranged in the non-viewing areas is determined based on the degree of attraction determined from the peripheral information of the real space acquired from the sensors, and the processing related to the virtual objects is executed accordingly. Thereby, the processing relating to a virtual object arranged in an area with a high degree of attraction can be performed with higher priority than the processing relating to a virtual object arranged in an area with a low degree of attraction, and efficient processing can be performed.
  • Embodiments of the present technology are not limited to the above-described embodiments, and various modifications can be made without departing from the gist of the present technology.
  • the visual field of a normal person is said to be about 60 degrees on the upper side and about 70 degrees on the lower side in one eye.
  • each of the four regions divided in the left-right direction in the above-described embodiment may be further divided into three in the up-down direction, and divided into 12 regions as a whole.
• For example, the angle defining the vertical range of the upper area may be 30 degrees, the angle defining the vertical range of the intermediate area may be 130 degrees, and the angle defining the vertical range of the lower area may be 20 degrees.
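• A minimal sketch of indexing such a 12-region division from a direction relative to the user's front is shown below; the equal 90-degree horizontal split and the use of yaw/pitch angles are assumptions, while the 30/130/20-degree vertical split follows the text:

```python
# Hypothetical sketch of dividing the surroundings into 12 regions (4 horizontal x 3 vertical).
# The region names and the 90-degree horizontal quadrants are illustrative assumptions.

def region_of(yaw_deg, pitch_deg):
    """yaw_deg: horizontal angle from the user's front (any value), pitch_deg: elevation (-90..90)."""
    yaw = yaw_deg % 360.0
    if yaw < 45 or yaw >= 315:
        horizontal = "F"
    elif yaw < 135:
        horizontal = "R"
    elif yaw < 225:
        horizontal = "B"
    else:
        horizontal = "L"
    # Vertical split: upper 30 deg, middle 130 deg, lower 20 deg (30 + 130 + 20 = 180).
    if pitch_deg > 60:           # 90 - 30
        vertical = "upper"
    elif pitch_deg > -70:        # 60 - 130
        vertical = "middle"
    else:
        vertical = "lower"
    return horizontal, vertical
```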
  • a sensor is mounted on the HMD so that the surrounding situation in each area can be individually detected.
• In the above-described embodiments, the sensors that detect the surrounding situation, such as the illuminance sensor 22, the sound sensor 23, and the odor sensor 24, are mounted on the HMD; however, they may instead be installed as external devices separate from the HMD.
• In that case, the external devices may be installed so as to be able to detect the surrounding situation of the user U.
• For example, wristband-type devices each equipped with a sensor for detecting the surrounding situation of the right area or the left area of the user U can be used.
  • the external device and the control unit of the HMD are configured to be able to communicate with each other, and the control unit of the HMD is configured to be able to acquire the detection result of the external device.
  • the drawing process of the virtual object arranged in the non-viewing area is controlled using the illuminance information, the sound information, the odor information, or the image information as the peripheral situation information of the user U.
  • the drawing process of the virtual object may be controlled using a combination of these pieces of information.
  • the surrounding situation information and spatial information such as the position and orientation information of the camera are used, but only the surrounding situation information may be used.
• Since the illuminance sensor 22, the sound sensor 23, and the odor sensor 24 that detect the surrounding situation are installed so as to detect the respective areas 60F, 60B, 60R, and 60L, the orientation of the user U, that is, the position and orientation of the camera, can be grasped if, for example, the sensor that detects the surrounding situation of the front area 60F is specified.
  • the present technology may have the following configurations. (1) Determining the degree of attraction for each of the first peripheral information of the first area and the second peripheral information of the second area in the real space obtained from the sensor, which is different from the visual field information corresponding to the user's visual field; Based on the degree of attraction, the priority of the first processing for the first virtual object arranged in the first area and the priority of the second processing for the second virtual object arranged in the second area are determined.
  • An information processing device including a control unit for determining.
  • the control unit may be configured to perform the first processing with higher priority than the second processing when determining a lower degree of interest for the second peripheral information than the first peripheral information.
  • An information processing device that determines the priority.
• The information processing device, wherein the control unit determines the priority such that the processing load of the first process related to the drawing of the first virtual object is higher than that of the second process related to the drawing of the second virtual object.
  • the information processing apparatus includes a first illuminance sensor that detects illuminance of the first area as the first peripheral information, and a second illuminance sensor that detects illuminance of the second area as the second peripheral information.
  • Information processing device including.
  • the information processing apparatus When at least one of the first illuminance sensor and the second illuminance sensor detects an illuminance in the real space to which sunlight having an illuminance higher than an illuminance in a region corresponding to the user's field of view is applied.
  • the control unit sets the degree of attraction to the peripheral information acquired from the illuminance sensor that detects the illuminance of the real space to which the sunlight is irradiated, using the illuminance of the area corresponding to the user's field of view and the illuminance sensor.
  • An information processing apparatus that determines that an illuminance difference between the detected sunlight and the illuminance in the real space that is equal to or larger than the threshold is lower than a case where the illuminance difference is smaller than the threshold.
  • the information processing apparatus according to (5), The environment of the real space is an indoor information processing device.
  • the information processing apparatus When at least one of the first illuminance sensor and the second illuminance sensor detects the illuminance of the real space to which the light other than the sunlight is irradiated, the control unit sets the light other than the sunlight to The attractiveness of the peripheral information acquired from the illuminance sensor that detects the illuminance of the real space to be illuminated, the illuminance of the area corresponding to the user's field of view and the light other than the sunlight detected by the illuminance sensor An information processing apparatus which determines that an illuminance difference between the illuminance and the illuminance in the real space to be irradiated is lower when the illuminance difference is less than the threshold than when the illuminance difference is equal to or greater than the threshold.
  • the information processing apparatus according to any one of (1) to (10), The first sensor detects a loudness of the sound in the first area as the first peripheral information, and detects a loudness of the sound in the second area as the second peripheral information.
  • An information processing apparatus including a second sound sensor that performs the processing.
  • the information processing apparatus according to (11), The control unit, when the loudness of the sound in the second area is smaller than the loudness of the sound in the first area, sets the degree of attraction for the second peripheral information to the degree of attraction for the first peripheral information. Information processing device that also determines low.
  • the information processing apparatus detects a odor intensity of the first area as the first peripheral information, and detects an odor intensity of the second area as the second peripheral information.
  • An information processing device including a second odor sensor.
  • the information processing apparatus according to (13), The control unit, when the odor intensity of the second area is weaker than the odor of the first area, sets the degree of attraction to the second peripheral information to be lower than the degree of attraction to the first peripheral information. Judge information processing device.
  • the information processing apparatus according to any one of (1) to (14), The information processing device, wherein the sensor is a camera that acquires image information around the user as the surrounding information.
  • the information processing apparatus includes positional relationship information between the first area and the second area and the user, The information processing device, wherein the control unit determines the degree of attraction using the positional relationship information.
  • the information processing apparatus can be mounted on the head of the user, and can present the first virtual object and the second virtual object in the field of view of the user while allowing the user to visually recognize the outside world.
  • An information processing device that is a head-mounted display configured.
• 1 … HMD (information processing device)
• … control unit
• 22 … illuminance sensor (sensor)
• 23 … sound sensor (sensor)
• 24 … odor sensor (sensor)
• 25 … camera (sensor)
• 60B … rear area (first area, second area)
• 60L … left side area (first area, second area)
• 60R … right side area (first area, second area)
• 65 … sun
• 66 … explosion sound (sound)
• 67 … smell
• A … virtual object A (first virtual object)
• B … virtual object B (second virtual object)
• A7 … area A7 (first area)
• A15 … area A15 (second area)
• U, U1 to U3 … user
• P1 to P16 … priority processing score (degree of attraction)

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

[Problem] To provide an information processing device, information processing method, and program with which it is possible to reduce a processing load. [Solution] An information processing device is equipped with a control unit. The control unit: determines the degree of noticeability of each of first surrounding information in a first region and second surrounding information in a second region in an actual space acquired by a sensor, the degree of noticeability differing from the view field information corresponding to the view field of a user; and determines, based on the degree of noticeability, the degree of priority of a first process relating to a first virtual object disposed in the first region, and a second process relating to a second virtual object disposed in the second region.

Description

With the configuration described above, it is possible to predict, from the peripheral information of the user, areas that the user may look at, and to determine the degree of attraction of the first area and the second area in the non-viewing region. Then, based on the degree of attraction, the priority of the first process related to the first virtual object arranged in the first area and the second process related to the second virtual object arranged in the second area can be determined.
The control unit may determine the degree of attraction in consideration of the environment of the real space.
The environment of the real space may be outdoors under sunlight.
When at least one of the first illuminance sensor and the second illuminance sensor detects an illuminance of the real space irradiated with sunlight that is higher than the illuminance of the area corresponding to the user's field of view, the control unit may determine the degree of attraction for the peripheral information acquired from the illuminance sensor that detects the illuminance of the real space irradiated with the sunlight to be lower when the illuminance difference between the illuminance of the area corresponding to the user's field of view and the illuminance of the real space irradiated with the sunlight detected by the illuminance sensor is equal to or greater than a threshold than when the illuminance difference is less than the threshold.
The environment of the real space may be indoors.
The control unit may determine the degree of attraction of the peripheral information acquired from the illuminance sensor that detects the illuminance of the real space irradiated with sunlight so that the priority becomes the lowest.
When at least one of the first illuminance sensor and the second illuminance sensor detects the illuminance of the real space irradiated with light other than sunlight, the control unit may determine the degree of attraction of the peripheral information acquired from the illuminance sensor that detects the illuminance of the real space irradiated with the light other than sunlight to be lower when the illuminance difference between the illuminance of the area corresponding to the user's field of view and the illuminance of the real space irradiated with the light other than sunlight detected by the illuminance sensor is less than the threshold than when the illuminance difference is equal to or greater than the threshold.
The sensor may include a first sound sensor that detects the loudness of sound in the first area as the first peripheral information and a second sound sensor that detects the loudness of sound in the second area as the second peripheral information.
When the loudness of sound in the second area is smaller than the loudness of sound in the first area, the control unit may determine the degree of attraction for the second peripheral information to be lower than the degree of attraction for the first peripheral information.
The sensor may include a first odor sensor that detects the odor intensity of the first area as the first peripheral information and a second odor sensor that detects the odor intensity of the second area as the second peripheral information.
When the odor intensity of the second area is weaker than that of the first area, the control unit may determine the degree of attraction for the second peripheral information to be lower than the degree of attraction for the first peripheral information.
The sensor may be a camera that acquires image information around the user as the peripheral information.
The image information may include positional relationship information between each of the first area and the second area and the user, and the control unit may determine the degree of attraction using the positional relationship information.
The information processing device may be a head-mounted display that can be worn on the head of the user and is configured to be able to present the first virtual object and the second virtual object in the field of view of the user while allowing the user to visually recognize the outside world.
In order to achieve the above object, an information processing method according to an embodiment of the present technology acquires, from a sensor, first peripheral information of a first area and second peripheral information of a second area of a real space that are different from visual field information corresponding to the visual field of a user, determines a degree of attraction for each of the first peripheral information and the second peripheral information, and determines, based on the degree of attraction, the priority of a first process related to a first virtual object arranged in the first area and a second process related to a second virtual object arranged in the second area.
In order to achieve the above object, a program according to an embodiment of the present technology causes an information processing device to execute processing including the steps of acquiring, from a sensor, first peripheral information of a first area and second peripheral information of a second area of a real space that are different from visual field information corresponding to the visual field of a user, determining a degree of attraction for each of the first peripheral information and the second peripheral information, and determining, based on the degree of attraction, the priority of a first process related to a first virtual object arranged in the first area and a second process related to a second virtual object arranged in the second area.
FIG. 1 is a block diagram of an information processing device according to the first to fourth embodiments of the present technology.
FIGS. 2, 3, and 4 are flowcharts each illustrating an example of processing relating to a virtual object in the information processing apparatuses according to the first to third embodiments.
FIG. 5 is a diagram showing the hardware configuration of the above information processing device.
FIG. 6 is a schematic diagram for explaining the relationship between the areas around the user U and the priority processing scores when the surrounding situation of the user U is not a situation that induces movement of the user U's line of sight.
FIG. 7 is a diagram for explaining the first embodiment, and is a schematic diagram for explaining the relationship between the areas around the user U and the priority processing scores when the surrounding situation of the user U is a situation that induces movement of the user U's line of sight.
FIG. 8 is a diagram for explaining the second embodiment, and is a schematic diagram for explaining the relationship between the areas around the user U and the priority processing scores when the surrounding situation of the user U is a situation that induces movement of the user U's line of sight.
FIG. 9 is a diagram for explaining the third embodiment, and is a schematic diagram for explaining the relationship between the areas around the user U and the priority processing scores when the surrounding situation of the user U is a situation that induces movement of the user U's line of sight.
FIG. 10 is a diagram for explaining the fourth embodiment, and is a schematic diagram for explaining the relationship between the areas around the user U1 and the priority processing scores when the surrounding situation of the user U1 is not a situation that induces movement of the user U1's line of sight.
FIG. 11 is a diagram for explaining the fourth embodiment, and is a schematic diagram for explaining the determination of the priority processing score using peripheral information.
FIG. 12 is a flowchart illustrating an example of processing relating to a virtual object in the information processing apparatus according to the fourth embodiment.
 (情報処理装置の概要)
 本技術の一実施形態に係る情報処理装置は、ユーザUの頭部に装着されるディスプレイ装置であるヘッドマウントディスプレイ(HMD)であり、ウエアラブルコンピュータの一つである。HMDの形状は、典型的には目鏡型又は帽子型である。
(Overview of information processing device)
The information processing device according to an embodiment of the present technology is a head-mounted display (HMD), which is a display device mounted on the head of the user U, and is one of wearable computers. The shape of the HMD is typically an eye mirror type or a hat type.
 本実施形態のHMDは透過型HMDであり、例えば、ゲームコンテンツを再生し、ユーザUに対して、実空間である外界に、コンテンツに応じた仮想オブジェクト(表示画像)を重畳して提示することが可能に構成される。透過型HMDとしては、光学透過方式、ビデオ透過方式等がある。 The HMD of the present embodiment is a transparent HMD, for example, for reproducing a game content and superimposing and presenting a virtual object (display image) corresponding to the content to the user U in the external world, which is a real space. Can be configured. The transmission type HMD includes an optical transmission type, a video transmission type, and the like.
The transmissive HMD has a display arranged in front of the eyes of the user U when worn on the head of the user U. The display includes an image display element and an optical element.
In the optical transmission system, a display image displayed on the image display element is presented to the user U via an optical element such as a holographic optical element or a half mirror arranged in front of the eyes of the user U. In the optical transmission system, the user U can see the outside world through the optical element in a see-through manner, and a display image such as a virtual object displayed on the image display element is arranged in and superimposed on the outside world, which is a real space, and presented to the user U.
Note that the term image includes a still image and a moving image.
In the video transmission system, the user cannot see the outside world directly when wearing the HMD; instead, an image in which a virtual object is superimposed on an image of the outside world captured by a camera is displayed on the display, so that the virtual object can be presented while the user U visually recognizes the outside world.
In the HMD, if processing related to drawing is performed with the same processing load for all virtual objects arranged in the non-visual-field area of the user U wearing the HMD, the processing load on the HMD becomes large, which is inefficient.
Therefore, the HMD according to the present embodiment predicts, based on the peripheral information of the user, an area in the real space that is likely to enter the user's field of view. Based on this prediction result, the content of the processing related to drawing of the virtual objects arranged in the real space is changed. Specifically, the load of the processing on a virtual object arranged in an area that is unlikely to enter the field of view of the user U is made lower than the load of the processing on a virtual object arranged in an area that is likely to enter the field of view of the user U.
The peripheral information includes peripheral situation information and spatial information, but may consist of peripheral situation information only.
The peripheral situation information is information on the situation around the user, for example, illuminance information, sound information, odor information, image information, and the like.
The spatial information is the orientation, position information, and the like of the user.
In the present embodiment, the real space around the user is divided into a plurality of areas, and the degree of attraction for the user is determined for each area based on the peripheral information acquired for each area. The degree of attraction represents the possibility of the area entering the field of view of the user U. An area that is highly likely to enter the user's field of view is an area in a situation that easily catches the eye of the user U, that is, an area with a high degree of attraction.
Then, the processing loads for the two kinds of virtual objects are made different from each other so that the processing on a virtual object arranged in an area with a high degree of attraction is performed with higher priority than the processing on a virtual object arranged in an area with a low degree of attraction. This makes it possible to perform processing efficiently.
The details will be described below.
(Configuration of the information processing device)
The information processing device will be described with reference to FIG. 1. FIG. 1 is a block diagram of a head-mounted display (HMD) 1 as the information processing device.
As shown in FIG. 1, the HMD 1 includes a control unit 10, an input unit 20, an output unit 30, a storage unit 46, and a support (not shown).
The support can be worn on the head of the user U. When worn, the support supports the displays 31R and 31L, which form the output unit 30, in front of the eyes of the user. The support also supports the sensors, described later, which form the input unit 20. The shape of the support is not particularly limited, and may be, for example, a hat shape as a whole.
[Input unit]
The input unit 20 is a sensor group including a plurality of sensor units. The detection result detected by each sensor unit is input to the control unit 10. The detection results detected by the sensor units include peripheral situation information and spatial information.
The peripheral situation information is information on the surroundings of the user U in the real space where the user U wearing the HMD 1 is located. Specifically, the peripheral situation information is illuminance information, sound information, odor information, image information, and the like of the real space.
These pieces of information can be acquired from the illuminance sensors 22, the sound sensors 23, the odor sensors 24, and the camera 25, which will be described later.
The spatial information includes image information captured by the camera 25 described later mounted on the HMD 1, and acceleration information, angular velocity information, azimuth information, and the like of the HMD 1 detected by the 9-axis sensor 21 described later. From this spatial information, the position, orientation, movement, and posture (walking, running, stopping, etc.) of the user U can be detected.
The input unit 20 includes the 9-axis sensor 21, the illuminance sensors 22, the sound sensors 23, the odor sensors 24, and the camera 25. These sensors are mounted on the HMD 1.
The 9-axis sensor 21 includes a 3-axis acceleration sensor, a 3-axis gyro sensor, and a 3-axis compass sensor. The 9-axis sensor 21 can detect the acceleration, angular velocity, and azimuth of the HMD 1 about three axes, and thereby the position, orientation, movement, and posture (walking, running, stopping, etc.) of the user U can be detected.
The detection result detected by the 9-axis sensor 21 is output to the control unit 10 as spatial information.
The illuminance sensor 22 has a light receiving element, and converts light incident on the light receiving element into a current to detect brightness (illuminance). In the present embodiment, four illuminance sensors 22 are provided. The four illuminance sensors 22 detect the illuminance of each of four areas obtained by dividing the real space where the user U is located.
FIG. 6 is a diagram illustrating the positional relationship between the user U and each of the four areas into which the real space where the user U is located is divided. FIG. 6 and FIGS. 7 to 11 described later correspond to views of the real space 60 or 70, in which the user U wearing the HMD 1 moves, as seen from above the head of the user U.
As shown in FIG. 6, the real space 60 is divided, with the user U at the center, into four areas: a front area 60F on the front side of the user U, a rear area 60B on the back side, a right area 60R on the right side, and a left area 60L on the left side.
The four illuminance sensors 22 are an illuminance sensor 22F that detects the illuminance of the front area 60F, an illuminance sensor 22R that detects the illuminance of the right area 60R, an illuminance sensor 22L that detects the illuminance of the left area 60L, and an illuminance sensor 22B that detects the illuminance of the rear area 60B.
The detection results (illuminance information) detected by the illuminance sensors 22R, 22B, and 22L for the right area 60R, the rear area 60B, and the left area 60L, respectively, are peripheral situation information of the respective areas and differ from the visual field information corresponding to the field of view of the user U.
The detection results detected by the four illuminance sensors 22 are output to the control unit 10 as peripheral situation information.
The sound sensor 23 detects the loudness of sound around the user U. In the present embodiment, four sound sensors 23 are provided. Like the illuminance sensors 22, the four sound sensors 23 detect the sound volume in each of the four areas into which the real space where the user U is located is divided. Using the detection results of the four sound sensors 23, the direction in which a sound source exists as seen from the user U can be identified.
The four sound sensors 23 are a sound sensor 23F that detects the volume in the front area 60F, a sound sensor 23R that detects the volume in the right area 60R, a sound sensor 23L that detects the volume in the left area 60L, and a sound sensor 23B that detects the volume in the rear area 60B.
The detection results detected by the four sound sensors 23 are output to the control unit 10 as peripheral situation information.
The detection results (sound information) detected by the sound sensors 23R, 23B, and 23L for the right area 60R, the rear area 60B, and the left area 60L, respectively, are peripheral situation information of the respective areas and differ from the visual field information corresponding to the field of view of the user U.
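As a minimal sketch of identifying the direction of a sound source from the four per-area volume readings, the following Python fragment simply picks the area with the largest detected volume; the function name, the area labels, and the example values are illustrative assumptions and not part of the disclosure.

def loudest_area(volumes: dict) -> str:
    """Return the label of the area whose sound sensor reports the highest volume.

    `volumes` maps an area label ('F', 'R', 'L', 'B') to the volume detected
    by the corresponding sound sensor 23F/23R/23L/23B.
    """
    return max(volumes, key=volumes.get)

# Example: a loud sound behind the user.
print(loudest_area({"F": 42.0, "R": 40.5, "L": 39.0, "B": 71.2}))  # -> 'B'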
The odor sensor 24 detects the intensity of odor around the user U. In the present embodiment, four odor sensors 24 are provided. Using the detection results of the four odor sensors 24, the direction in which an odor source exists as seen from the user U can be identified.
The four odor sensors 24 are an odor sensor 24F that detects the intensity of odor in the front area 60F, an odor sensor 24R that detects odor in the right area 60R, an odor sensor 24L that detects odor in the left area 60L, and an odor sensor 24B that detects odor in the rear area 60B.
The detection results detected by the four odor sensors 24 are output to the control unit 10 as peripheral situation information.
The detection results (odor information) detected by the odor sensors 24R, 24B, and 24L for the right area 60R, the rear area 60B, and the left area 60L, respectively, are peripheral situation information of the respective areas and differ from the visual field information corresponding to the field of view of the user U.
As the odor sensor 24, for example, an oxide semiconductor sensor, a quartz crystal microbalance (QCM) sensor in which a molecularly selective film is formed on the oscillator surface, or the like can be used.
The camera 25 includes a right-eye camera 251 and a left-eye camera 252. The right-eye camera 251 and the left-eye camera 252 capture images of the front area 60F of the user U, which corresponds to the visual field area of the user U, and acquire the captured images as image information.
The right-eye camera 251 and the left-eye camera 252 are arranged on the front surface of the head-mounted display 1 at a predetermined interval in the lateral direction. The right-eye camera 251 captures a right-eye image, and the left-eye camera 252 captures a left-eye image.
The right-eye image and the left-eye image captured by the right-eye camera 251 and the left-eye camera 252 contain spatial information such as the position and orientation of the user U wearing the HMD 1.
The right-eye image and the left-eye image are output to the control unit 10 as spatial information.
[Output unit]
The output unit 30 has a right-eye display 31R and a left-eye display 31L, and these displays are mounted on the HMD 1.
The right-eye display 31R and the left-eye display 31L are arranged in front of the right eye and the left eye of the user U, respectively.
The right-eye display 31R (left-eye display 31L) has a right-eye image display element 311R (left-eye image display element 311L) and a right-eye optical element 312R (left-eye optical element 312L).
Hereinafter, when there is no particular need to distinguish between left and right, the right-eye display 31R and the left-eye display 31L may be referred to as the display 31, the right-eye image display element 311R and the left-eye image display element 311L as the image display element 311, and the right-eye optical element 312R and the left-eye optical element 312L as the optical element 312.
The image display element 311 includes an organic EL display element, a liquid crystal display element as a light modulation element, or the like. The image display element 311 forms a display image such as a virtual object based on an image signal output from the control unit 10 and emits display light. The display light enters the eyes of the user U via the optical element 312, so that the virtual object can be presented to the user U in the visual field area of the user U.
The optical element 312 is a holographic optical element, a half mirror, or the like, and is arranged in front of the eyes of the user U. The optical element 312 is configured to diffract the light emitted from the image display element 311 and guide it to the left and right eyes of the user.
Furthermore, the optical element 312 is configured to transmit light from the outside world. Thus, the HMD 1 can present the display image formed by the image display element 311 to the user U superimposed on the light from the outside world.
The control unit 10 includes a communication control unit 11, a peripheral situation information management unit 12, a spatial information acquisition unit 13, a spatial information management unit 14, a priority processing point determination unit 15, a drawing processing load determination unit 16, an output image generation unit 17, an output image control unit 18, and a drawing processing load management unit 19.
The communication control unit 11 communicates with the various sensors 21 to 25 and the displays 31 mounted on the HMD 1, and transmits and receives various kinds of information. Specifically, the communication control unit 11 receives the detection results of the various sensors 21 to 25 as peripheral situation information and spatial information, and transmits image signals and the like to the displays 31.
Furthermore, the communication control unit 11 communicates with HMDs worn by other users and with external peripheral devices, and transmits and receives various kinds of information. The communication control unit 11 can acquire image information and the like captured by a camera mounted on an HMD worn by another user. The image information captured by the camera mounted on the HMD of another user contains positional relationship information between the other user wearing that HMD and an area subject to the priority processing point determination described later.
The peripheral situation information management unit 12 stores the detection results of the illuminance sensors 22, the sound sensors 23, and the odor sensors 24 acquired via the communication control unit 11, as well as the image information (detection results) acquired from the HMDs of other users, in time series in a peripheral situation database (not shown) as peripheral situation information, and updates and manages the data so that it is always the latest peripheral situation information.
The spatial information acquisition unit 13 acquires, as spatial information, the detection results of the 9-axis sensor 21 and the camera 25 obtained via the communication control unit 11.
The spatial information management unit 14 stores, as spatial information, the detection results of the 9-axis sensor 21 and the camera 25 and the position and orientation information of the camera 25 obtained based on these detection results, in time series in a spatial information database (not shown), and updates and manages the data so that it is always the latest spatial information.
The priority processing point determination unit 15 determines the priority processing points (degree of attraction) of each area based on the peripheral situation information managed by the peripheral situation information management unit 12 and the position and orientation information of the camera 25 managed by the spatial information management unit 14.
The priority processing points are a score representing the priority of the processing related to drawing of a virtual object (display image) currently arranged in the non-visual-field area of the user U, and correspond to the degree of attraction determined for the peripheral information. An area that the user U is likely to look at is given high priority processing points as an area with a high degree of attraction. The higher the assigned priority processing points, the higher the priority of the processing related to drawing of the virtual object.
The priority processing points are obtained for each of the front area 60F, the rear area 60B, the right area 60R, and the left area 60L into which the 360-degree surroundings of the user U wearing the HMD 1 in the real space are divided, as shown in FIG. 6. In the present embodiment, an example in which the real space is divided into four areas is described; however, the division is not limited to this, and the real space may be divided into, for example, eight areas.
The ranges of the front area 60F, the rear area 60B, the right area 60R, and the left area 60L can be determined in consideration of the visual field of a typical person with normal vision.
In general, the visual field of a person with normal vision is said to be about 60 degrees toward the nose and about 90 to 100 degrees toward the ear for one eye, and the range visible simultaneously with both eyes is about 120 degrees left and right.
In consideration of this general visual field, as shown in FIG. 6, in the 360-degree area around the user U along the direction connecting the right eye and the left eye of the user U, the angle θF defining the left-right range of the front area 60F may be set to 120 degrees, the angles θR and θL defining the ranges of the right area 60R and the left area 60L to 40 degrees each, and the angle θB defining the range of the rear area 60B to 160 degrees. An imaginary line bisecting the angle θF defining the range of the front area 60F is located at the front center of the user U.
Note that in FIGS. 6 to 9 used in the description of the present embodiment, the angle θF defining the range of the front area 60F and the angle θB defining the range of the rear area 60B are drawn as the same angle for ease of viewing, and thus do not match the numerical ranges described above.
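A minimal sketch of this four-way division, assuming the example angles above (θF = 120°, θR = θL = 40°, θB = 160°) and a direction expressed as degrees clockwise from the front center of the user U: the angle convention, function name, and area labels are illustrative assumptions, not part of the disclosure.

def classify_area(angle_deg: float) -> str:
    """Classify a direction, given as degrees clockwise from the user's front
    center, into one of the four areas F/R/B/L."""
    a = (angle_deg + 180.0) % 360.0 - 180.0  # normalize to [-180, 180)
    if abs(a) <= 60.0:          # front area 60F: 120 degrees wide
        return "F"
    if 60.0 < a <= 100.0:       # right area 60R: 40 degrees wide
        return "R"
    if -100.0 <= a < -60.0:     # left area 60L: 40 degrees wide
        return "L"
    return "B"                  # rear area 60B: the remaining 160 degrees

print(classify_area(30.0), classify_area(80.0), classify_area(-90.0), classify_area(170.0))
# -> F R L B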
FIG. 6 shows an example of the priority processing points of each area in a case where the surrounding situation does not induce a movement of the line of sight of the user U.
In FIG. 6, the priority processing points P1 of the front area 60F, which is the visual field area of the user U, are 10 points, the highest of all the areas.
For the right area 60R and the left area 60L, it is predicted that the user U is about equally likely to turn the line of sight to the right or to the left, and that either is less likely than looking at the front area 60F, so the priority processing points P2 and P4 are 5 points.
For the user U to turn the line of sight to the rear area 60B, the user U needs to turn the head by almost 180 degrees, so the user U is predicted to be less likely to face rearward than to the right or left, and the priority processing points P3 are 3 points.
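As a sketch, these default scores of FIG. 6 can be held as a simple table; the constant name, the area labels, and the dictionary representation are assumptions made only for illustration.

DEFAULT_PRIORITY_POINTS = {
    "F": 10,  # front area 60F: the current visual field
    "R": 5,   # right area 60R
    "L": 5,   # left area 60L
    "B": 3,   # rear area 60B: requires an almost 180-degree turn of the head
}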
In the example shown in FIG. 6, the peripheral information (second peripheral information) of the rear area 60B of the real space is less eye-catching for the user U, that is, has a lower degree of attraction, than the peripheral information (first peripheral information) of the right area 60R and the left area 60L.
In contrast, when the situation around the user U is one that induces a movement of the line of sight of the user U, the priority processing points representing the degree of attraction of the user U in each of the areas 60F, 60B, 60R, and 60L differ from those shown in FIG. 6.
The priority processing points are obtained based on the peripheral information of each of the areas 60F, 60B, 60R, and 60L. As described above, the peripheral information includes the peripheral situation information and the spatial information, and when the situation around the user U is one that induces a movement of the line of sight of the user U, the points differ from the priority processing points shown in FIG. 6.
The peripheral situation information includes at least one of the illuminance information detected by the four illuminance sensors 22F, 22B, 22R, and 22L mounted on the HMD 1 so as to acquire the surrounding situation of each of the areas 60F, 60B, 60R, and 60L, the sound information detected by the four sound sensors 23F, 23B, 23R, and 23L, the odor information detected by the four odor sensors 24F, 24B, 24R, and 24L, and the image information captured by the cameras 25 mounted on the HMD 1 worn by the user U and on the HMDs worn by other users.
The drawing processing load determination unit 16 determines the load of the processing related to drawing of the virtual objects arranged in each of the areas 60F, 60B, 60R, and 60L according to the priority processing points determined by the priority processing point determination unit 15.
The load of the processing (first processing) on a virtual object (first virtual object) arranged in an area with relatively high priority processing points (the area of the real space corresponding to the first peripheral information) is made higher. Conversely, the load of the processing (second processing) on a virtual object (second virtual object) arranged in an area with relatively low priority processing points (the area of the real space corresponding to the second peripheral information) is made lower.
As an example of the processing load determination, when the priority processing points P satisfy 0 < P ≤ 3, the drawing processing load determination unit 16 determines that a process X with a low processing load, in which the virtual object is moved once every 100 msec, is to be performed.
When the priority processing points P satisfy 3 < P ≤ 8, the drawing processing load determination unit 16 determines that a process Y with a medium processing load, in which the virtual object is moved once every 50 msec, is to be performed.
When the priority processing points P satisfy 8 < P ≤ 10, the drawing processing load determination unit 16 determines that a process Z with a high processing load, in which the virtual object is moved once every 16 msec, is to be performed.
Note that although an example in which the processing load is divided into three levels is described here, the present technology is not limited to this.
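A minimal sketch of this three-level determination, using the thresholds and update intervals stated above; the function name and the tuple return value are illustrative assumptions.

def select_process(p: float) -> tuple:
    """Return (process name, update interval in milliseconds) for priority points P."""
    if 0 < p <= 3:
        return ("X", 100)   # low load: move the virtual object once every 100 msec
    if 3 < p <= 8:
        return ("Y", 50)    # medium load: once every 50 msec
    if 8 < p <= 10:
        return ("Z", 16)    # high load: once every 16 msec
    raise ValueError("priority processing points outside the expected range (0, 10]")

print(select_process(3), select_process(5), select_process(10))
# -> ('X', 100) ('Y', 50) ('Z', 16)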
In the example shown in FIG. 6, since the priority processing points of the front area 60F are 10 points, the drawing processing load determination unit 16 determines that the drawing processing of the virtual objects arranged in the front area 60F is to be performed as the process Z.
Since the priority processing points of the right area 60R and the left area 60L are 5 points, the drawing processing load determination unit 16 determines that the drawing processing of the virtual objects arranged in the right area 60R and the left area 60L is to be performed as the process Y.
Since the priority processing points of the rear area 60B are 3 points, the drawing processing load determination unit 16 determines that the drawing processing of the virtual objects arranged in the rear area 60B is to be performed as the process X.
Accordingly, the load of the drawing processing of the virtual objects arranged in the front area 60F is the highest. The load of the drawing processing of the virtual objects arranged in the right area 60R and the left area 60L is medium. The load of the drawing processing of the virtual objects arranged in the rear area 60B is the lowest.
As a result, the processing load on the information processing device (HMD) can be reduced and efficient processing can be performed, compared with the case where drawing processing is executed with the same processing load for all virtual objects arranged in the non-visual-field area.
For example, when the HMD 1 is powered by a battery rather than by a wired power supply, the reduction in processing load leads to a longer battery life due to reduced power consumption.
In the non-visual-field area of the user U, the process Y on the virtual objects (first virtual objects) arranged in the right area 60R and the left area 60L (first area) corresponds to the first processing. The process X on the virtual objects (second virtual objects) arranged in the rear area 60B (second area) corresponds to the second processing.
As described above, in the present embodiment, the priority of the processing related to the virtual objects is changed by changing the processing content so that the processing load differs according to the priority processing points (degree of attraction).
The output image generation unit 17 generates the virtual objects (display images) by changing the processing content on the CPU and GPU based on the processing content determined by the drawing processing load determination unit 16.
If the processing content determined by the drawing processing load determination unit 16 is the process X, the output image generation unit 17 generates the virtual object so that the virtual object moves once every 100 msec.
If it is the process Y, the output image generation unit 17 generates the virtual object so that the virtual object moves once every 50 msec.
If it is the process Z, the output image generation unit 17 generates the virtual object so that the virtual object moves once every 16 msec.
The output image control unit 18 converts the virtual objects generated by the output image generation unit 17 into image signals and outputs them so that they can be displayed on the right-eye display 31R and the left-eye display 31L of the HMD 1.
The drawing processing load management unit 19 stores the processing content determined by the drawing processing load determination unit 16 in time series in a drawing processing load database (not shown) in association with the peripheral situation information, the spatial information, and the priority processing points, and updates and manages the data so that it is always the latest information.
[Storage unit]
The storage unit 46 stores a program for causing the HMD 1, which is the information processing device, to execute the series of information processing related to the virtual objects performed by the control unit 10.
(Information processing method)
Next, an example of a series of information processing methods related to the virtual object drawing processing executed by the HMD 1 will be described with reference to FIG. 2.
FIG. 2 is a flowchart for explaining processing related to virtual objects in the HMD 1.
As shown in FIG. 2, when the processing starts, peripheral situation information is acquired by the communication control unit 11 (S1). The peripheral situation information is stored in the peripheral situation database. The peripheral situation information is the detection values of the illuminance sensors 22, the detection values of the sound sensors 23, the detection values of the odor sensors 24, and the like, and at least one of these is acquired.
Note that an information processing method in the case where image information captured by the camera 25 is acquired as the peripheral situation information will be described in the fourth embodiment below.
Next, the detection values of the 9-axis sensor 21 and the image information (captured images) captured by the camera 25 are acquired by the spatial information acquisition unit 13 via the communication control unit 11, and information on the position and orientation of the camera 25 is obtained based on these pieces of information (S2).
Spatial information such as the detection values of the 9-axis sensor 21, the image information captured by the camera 25, and the position and orientation information of the camera 25 obtained based on these pieces of information is stored in the spatial information database.
Next, the priority processing point determination unit 15 obtains the priority processing points P of each of the front area 60F, the rear area 60B, the right area 60R, and the left area 60L of the user U based on the peripheral situation information and the position and orientation information of the camera 25 (S3).
Next, the drawing processing load determination unit 16 determines the processing content based on the priority processing points obtained in S3, and the output image generation unit 17 executes generation processing of the virtual objects (display images) based on the determined processing content (S4).
Next, the output image control unit 18 converts the virtual objects generated by the output image generation unit 17 into image signals so that they can be displayed on the displays 31, and outputs them to the output unit 30 (S5).
On the displays 31, the virtual objects are displayed based on the image signals output from the control unit 10 and presented to the user U.
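A minimal sketch of one such pass (S3 and S4), reduced to plain data and reusing select_process from the earlier sketch: the per-area scoring here is just a table passed in as an argument, whereas the actual S3 uses the peripheral situation information and the camera pose. All names and example values are illustrative assumptions.

def drawing_pass(points_by_area: dict, objects_by_area: dict) -> dict:
    """Choose, per area, how often its virtual objects are regenerated (S3/S4)."""
    plan = {}
    for area, objects in objects_by_area.items():
        process, interval_ms = select_process(points_by_area[area])
        plan[area] = (objects, process, interval_ms)
    return plan   # S5 then turns the generated objects into image signals

print(drawing_pass({"F": 10, "R": 5, "L": 5, "B": 3},
                   {"F": ["enemy"], "B": ["treasure chest"]}))
# -> {'F': (['enemy'], 'Z', 16), 'B': (['treasure chest'], 'X', 100)}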
The above series of processing related to drawing may be performed at fixed time intervals, or may be performed each time the orientation of the user U changes and the orientation of the HMD 1 changes.
An example in which the series of processing related to drawing is executed at fixed time intervals will be described below with reference to FIG. 3, and an example in which the series of processing related to drawing is executed when the orientation of the HMD 1 changes will be described with reference to FIG. 4.
First, the case where the series of processing related to drawing is executed at fixed time intervals will be described.
As shown in FIG. 3, when the processing starts, the first pass regards the fixed time as having elapsed in S11 and proceeds to S12 and the subsequent steps. In the second and subsequent passes, it is determined whether the fixed time has elapsed since the previous drawing processing (S11).
If it is determined in S11 that the fixed time has elapsed (Yes), the processing proceeds to S12.
If it is determined in S11 that the fixed time has not elapsed (No), the processing proceeds to S16.
In S12, peripheral situation information is acquired by the communication control unit 11. The acquired peripheral situation information is stored in the peripheral situation database.
Next, the detection values of the 9-axis sensor 21 and the image information (captured images) captured by the camera 25 are acquired by the spatial information acquisition unit 13 via the communication control unit 11, and information on the position and orientation of the camera 25 is obtained based on these pieces of information (S13).
Spatial information such as the detection values of the 9-axis sensor 21, the image information captured by the camera 25, and the position and orientation information of the camera 25 obtained based on these pieces of information is stored in the spatial information database.
Next, the priority processing point determination unit 15 obtains the priority processing points P of each of the front area 60F, the rear area 60B, the right area 60R, and the left area 60L of the user U based on the peripheral situation information and the position and orientation information of the camera 25 (S14).
Next, the drawing processing load determination unit 16 determines the processing content based on the priority processing points obtained in S14 (S15).
Next, generation processing of the virtual objects (display images) by the output image generation unit 17 is executed based on the determined processing content (S16). If it is determined in S11 that the fixed time has not elapsed and the processing has proceeded to S16, the generation processing of the virtual objects is executed in S16 based on the processing content determined in the previous pass.
Next, the output image control unit 18 converts the virtual objects generated by the output image generation unit 17 into image signals so that they can be displayed on the displays 31, and outputs them to the output unit 30 (S17).
On the displays 31, the virtual objects are displayed based on the image signals output from the control unit 10 and presented to the user U.
Next, the case where the series of processing related to drawing is executed when the orientation of the HMD 1 changes will be described.
As shown in FIG. 4, when the processing starts, the rotation angle of the HMD 1 is acquired by the spatial information acquisition unit 13 via the communication control unit 11 from the detection values of the 9-axis sensor 21 (S21).
Next, in the first pass, the rotation amount is regarded as exceeding the threshold in S22, and the processing proceeds to S23. In the second and subsequent passes, the control unit 10 calculates the rotation amount of the HMD 1 from the rotation angle of the HMD 1 acquired at the time of the previous drawing processing and the rotation angle of the HMD 1 acquired in S21, and determines whether the rotation amount is equal to or larger than the threshold (S22). The threshold is set in advance.
If it is determined in S22 that the rotation amount is equal to or larger than the threshold (Yes), the processing proceeds to S23.
If it is determined in S22 that the rotation amount is less than the threshold (No), the processing proceeds to S27.
In S23, peripheral situation information is acquired by the communication control unit 11 and stored in the peripheral situation database.
Next, the detection values of the 9-axis sensor 21 and the image information (captured images) captured by the camera 25 are acquired by the spatial information acquisition unit 13 via the communication control unit 11, and information on the position and orientation of the camera 25 is obtained based on these pieces of information (S24).
Spatial information such as the detection values of the 9-axis sensor 21, the image information captured by the camera 25, and the position and orientation information of the camera 25 obtained based on these pieces of information is stored in the spatial information database.
Next, the priority processing point determination unit 15 obtains the priority processing points P of each of the front area 60F, the rear area 60B, the right area 60R, and the left area 60L of the user U based on the peripheral situation information and the position and orientation information of the camera 25 (S25).
Next, the drawing processing load determination unit 16 determines the processing content based on the priority processing points obtained in S25 (S26).
Next, generation processing of the virtual objects (display images) by the output image generation unit 17 is executed based on the determined processing content (S27). If the rotation amount is determined to be less than the threshold in S22 and the processing has proceeded to S27, the generation processing of the virtual objects is executed in S27 based on the processing content determined in the previous pass.
Next, the output image control unit 18 converts the virtual objects generated by the output image generation unit 17 into image signals so that they can be displayed on the displays 31, and outputs them to the output unit 30 (S28).
On the displays 31, the virtual objects are displayed based on the image signals output from the control unit 10 and presented to the user U.
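A minimal sketch of the trigger used in S22 of FIG. 4: the scores are recomputed only when the HMD has rotated by at least a preset threshold since the previous drawing processing. The text only says that the threshold is set in advance, so the concrete value and the function name below are assumptions.

ROTATION_THRESHOLD_DEG = 15.0   # assumption; the embodiment only states "set in advance"

def should_recompute(last_angle_deg: float, current_angle_deg: float) -> bool:
    """S22: compare the rotation amount since the previous pass with the threshold."""
    delta = abs(current_angle_deg - last_angle_deg) % 360.0
    delta = min(delta, 360.0 - delta)   # shortest rotation between the two headings
    return delta >= ROTATION_THRESHOLD_DEG

print(should_recompute(10.0, 20.0))   # False: only 10 degrees of rotation
print(should_recompute(350.0, 10.0))  # True: 20 degrees across the wrap-around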
As described above, the control unit 10 controls the processing related to the virtual objects currently arranged in the non-visual-field area of the user U based on the peripheral situation information and the spatial information acquired from the various sensors of the input unit 20.
Specifically, the control unit 10 predicts, using the peripheral situation information and the spatial information, an area that is likely to enter the field of view of the user U. Then, the control unit 10 performs control so that the drawing processing of the virtual objects in the predicted area that is likely to enter the field of view of the user U, that is, the area with a high degree of attraction for the user U, is performed with higher priority than the drawing processing in the other areas with a low degree of attraction.
(Hardware configuration)
FIG. 5 is a diagram for explaining the hardware configuration of the HMD 1. The information processing in the HMD 1, which is the information processing device described above, is realized by cooperation between software and the hardware of the HMD 1 described below.
As shown in FIG. 5, the HMD 1 has a CPU (Central Processing Unit) 51, a RAM (Random Access Memory) 52, a ROM (Read Only Memory) 53, a GPU (Graphics Processing Unit) 54, a communication device 55, a sensor 56, an output device 57, a storage device 58, and an imaging device 59, and these are connected via a bus 61.
The CPU 51 controls the overall operation in the HMD 1 according to various programs.
The ROM 53 stores programs, operation parameters, and the like used by the CPU 51.
The RAM 52 temporarily stores programs used in the execution by the CPU 51, parameters that change as appropriate during the execution, and the like.
The GPU 54 performs various kinds of processing related to the generation of display images (virtual objects).
The communication device 55 is a communication interface configured with a communication device or the like for connecting to a communication network 62. The communication device 55 may include a wireless LAN (Local Area Network) compatible communication device, an LTE (Long Term Evolution) compatible communication device, a wire communication device that performs wired communication, or a Bluetooth (registered trademark) communication device.
The sensor 56 detects various kinds of data related to the peripheral situation information and the spatial information. The sensor 56 corresponds to the 9-axis sensor 21, the illuminance sensors 22, the sound sensors 23, and the odor sensors 24 described with reference to FIG. 1.
The output device 57 includes, for example, a display device such as a liquid crystal display device or an organic EL (Electroluminescence) display device. Furthermore, the output device 57 includes a sound output device such as a speaker or headphones. The display device displays captured images, generated images, and the like, while the sound output device converts audio signals into sound and outputs it. The output device 57 corresponds to the displays 31 described with reference to FIG. 1.
The storage device 58 is a device for storing data. The storage device 58 may include a recording medium, a recording device that records data on the recording medium, a reading device that reads data from the recording medium, a deletion device that deletes data recorded on the recording medium, and the like. The storage device 58 stores programs executed by the CPU 51 and the GPU 54 and various kinds of data. The storage device 58 corresponds to the storage unit 46 described with reference to FIG. 1.
The imaging device 59 includes an imaging optical system, such as a photographing lens and a zoom lens, that collects light, and a signal conversion element such as a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) sensor. The imaging optical system collects light emitted from a subject and forms a subject image on the signal conversion element. The signal conversion element converts the formed subject image into an electrical image signal. The imaging device 59 corresponds to the camera 25 described with reference to FIG. 1.
The program constituting the software is for causing the information processing device to execute processing including: a step of acquiring, from the sensors, first peripheral information and second peripheral information of the real space that differ from the visual field information corresponding to the field of view of the user; a step of determining the degree of attraction of each of the first peripheral information and the second peripheral information; and a step of determining, based on the degrees of attraction, the priority of first processing related to a first virtual object arranged in a first area of the real space corresponding to the first peripheral information and second processing related to a second virtual object arranged in a second area of the real space corresponding to the second peripheral information.
In each of the following embodiments, specific examples of the peripheral situation information are given to explain how the priority processing points are determined in the processing related to virtual objects in the HMD 1 as the information processing device described above.
For the priority processing point determination, the first embodiment gives an example using the detection results of the illuminance sensors 22 as the peripheral situation information. The second embodiment gives an example using the detection results of the sound sensors 23. The third embodiment gives an example using the detection results of the odor sensors 24. The fourth embodiment gives an example using captured images (detection results) captured by the camera 25.
The first to third embodiments differ only in the specific example of the peripheral situation information to be acquired, and the processing related to the virtual objects can be executed according to the flowcharts of FIGS. 2 to 4 described above.
In the example of the HMD 1 described above, the illuminance sensors 22, the sound sensors 23, and the odor sensors 24 for acquiring the peripheral situation information are all mounted on the HMD 1, but the configuration is not limited to this. It suffices to have at least one type among the illuminance sensors 22, the sound sensors 23, the odor sensors 24, and a camera capable of acquiring image information for acquiring the peripheral situation information. Furthermore, the sensor for acquiring the peripheral situation information may be an external device that is not mounted on the HMD 1.
<First embodiment>
In the first embodiment, an example will be described in which the detection result (illuminance information) acquired from the illuminance sensor 22 as the surrounding situation information is used to determine the priority processing score (degree of attraction).
The control of the processing relating to virtual objects based on the illuminance information acquired by the illuminance sensor 22 as the surrounding situation information can be applied to, for example, the HMD 1 that reproduces game content. When the user U wears the HMD 1, the HMD 1 can present to the user U a virtual object (display image) corresponding to the content, superimposed on the external world that is the real space.
The user U can enjoy the game while wearing the HMD and freely moving in the real space. When playing a game while moving in the real space in this way, it is assumed that the game is enjoyed outdoors or indoors.
Normally, people have empirically learned that even if they look in a dazzling direction, such as the direction in which the sun is located, their eyes do not immediately become accustomed to the brightness and it is difficult to see.
Therefore, in an outdoor game, since the sunlight is dazzling, it is predicted that the user U will behave so as to avoid moving the line of sight in the direction of the sun, where the illuminance is higher than the illuminance of the area corresponding to the current visual field.
Further, when the user U is currently facing the direction of the dazzling sunlight, it is predicted that the user U will move his or her line of sight to a less dazzling area with lower illuminance.
In an indoor game, as in the outdoor case, it is predicted that the user U will behave so as to avoid moving the line of sight to an area irradiated with sunlight entering the room, which is expected to be difficult to see.
On the other hand, in an indoor game, when there is light other than sunlight, such as an indoor light, and that light is brighter than the current brightness in the visual field of the user U, it is predicted that, contrary to the behavior predicted for sunlight, the user U will move his or her line of sight toward the direction emitting the brighter light.
In the present embodiment, the priority processing score is determined by taking into account the real-space environment, such as outdoors or indoors as described above, in addition to the illuminance information acquired from the illuminance sensor 22 as the surrounding situation information. The real-space environment is, for example, whether the user is outdoors or indoors, whether the sun is out (sunshine) in the outdoor case, and whether there is a window through which sunlight enters in the indoor case.
The illuminance information as the surrounding situation information is information that occurs accidentally, such as sunlight, an outdoor light, or an indoor light, and is different from information preset in the game content.
The details will be described below.
(In the case of outdoors in sunshine)
First, a case where the environment in which the user U exists is outdoors in the sunshine will be described.
FIG. 7 shows a determination example of the priority processing score for each area in a situation where the user U is located outdoors in the sunshine and the sun 65 is located in the right area 60R of the user U. The situation shown in FIG. 7 is a situation in which the sunlight is dazzling and induces movement of the line of sight of the user U.
Note that the time of sunshine refers to a state in which direct sunlight illuminates the ground surface to the extent that objects cast shadows.
Outdoors in the sunshine, the illuminance sensor 22 mainly detects the illuminance of the real space irradiated with sunlight. Besides sunlight, it is assumed that artificial light such as an outdoor light may be detected.
Outdoors in the sunshine is a bright environment. Normally, people have empirically learned that in such a bright environment, even if they look in the dazzling direction where the sun is located, their eyes do not immediately become accustomed to the brightness and it is difficult to see.
In other words, for an area that is brighter than the brightness (illuminance) of the front area 60F corresponding to the current visual field of the user U, and whose illuminance is assumed to differ greatly from that of the front area 60F because the sun is located there, it is assumed that the user U will deliberately not turn his or her line of sight toward it because it is too dazzling.
On the other hand, for an area whose illuminance is higher than that of the front area 60F corresponding to the current visual field of the user U but does not differ greatly from the illuminance of the front area 60F, the user U is not dazzled too much, and therefore does not particularly avoid that area and may look at it in the normal course of action.
Therefore, in the outdoor environment in the sunshine, the degree of attraction of an area whose illuminance is higher than that of the front area 60F corresponding to the current visual field of the user U, among the non-visual-field areas of the user U, is determined as follows.
That is, first, the difference between the illuminance of the front area 60F corresponding to the current visual field of the user U detected by the illuminance sensor 22F and the illuminance of the right area 60R (rear area 60B, left area 60L) detected by the illuminance sensor 22R (22B, 22L), which detects the illuminance of the non-visual-field areas of the user U, is obtained.
Then, the degree of attraction of the right area 60R (rear area 60B, left area 60L) is determined so as to be lower when the above-described illuminance difference is equal to or larger than a threshold value than when it is less than the threshold value.
That is, an area whose illuminance differs greatly from that of the front area 60F is an area that is too dazzling for the user U, and it is assumed that the user U is unlikely to look at it, so the priority processing score (degree of attraction) is determined to be low.
On the other hand, an area whose illuminance difference from the front area 60F is small is an area that is not too dazzling for the user U, and it is assumed that the user U may look at it, so the priority processing score (degree of attraction) is determined to be higher than when the illuminance difference is large (equal to or larger than the threshold value).
In the example shown in FIG. 7, the sun is located on the right side of the user U, the illuminance value detected by the illuminance sensor 22R, which detects the illuminance of the right area 60R, is the highest among the four illuminance sensors 22, and the illuminance values detected for the front area 60F, the left area 60L, and the rear area 60B are the same. In FIG. 7, it is assumed that the difference (illuminance difference) between the illuminance value of the front area 60F and the illuminance value of the right area 60R is equal to or larger than the threshold value.
In the example shown in FIG. 7, the user U is predicted to be least likely to direct his or her line of sight to the dazzling right area 60R where the sun is located, so the priority processing score P2 is 1 point. On the other hand, the front area 60F, the left area 60L, and the rear area 60B have lower illuminance than the right area 60R, and the user U is predicted to be more likely to direct his or her line of sight to them, so their priority processing scores P1, P4, and P3 are each 8 points.
In FIG. 7, among the non-visual-field areas, the peripheral information (second peripheral information) of the right area 60R (second area) of the real space is less likely to catch the eyes of the user U, that is, has a lower degree of attraction, than the peripheral information (first peripheral information) of the rear area 60B and the left area 60L (first areas).
In FIG. 7, based on the processing content determined from the determined priority processing scores, the virtual object (second virtual object) arranged in the right area 60R (second area) is drawn based on process X (second process), and the virtual objects (first virtual objects) arranged in the left area 60L, the front area 60F, and the rear area 60B (first areas) are drawn based on process Y (first process).
On the other hand, even when the sun is located on the right side of the user U, the illuminance of the right area 60R is the highest, and the illuminances of the front area 60F, the left area 60L, and the rear area 60B are the same, if the illuminance difference between the front area 60F and the right area 60R is less than the threshold value, a priority processing score different from that in the case where it is equal to or larger than the threshold value is determined.
The user U does not particularly avoid a non-visual-field area whose illuminance difference is less than the threshold value, and may look at it in the normal course of action.
As an example, when the illuminance difference is less than the threshold value, the priority processing score is determined on the basis that the surrounding situation is not one that induces movement of the line of sight of the user U, as shown in FIG. 6, and the priority processing score P2 of the right area 60R is determined to be 5 points. This is a higher priority processing score than in the case shown in FIG. 7 where the illuminance difference is equal to or larger than the threshold value.
In the example shown in FIG. 7, the left area 60L and the rear area 60B have the same illuminance, but if their illuminances differ, the priority processing score is determined according to the illuminance value. Then, according to the priority processing scores determined for the left area 60L, the rear area 60B, and the right area 60R corresponding to the non-visual-field areas, the priority of the processing relating to the virtual object arranged in each of the non-visual-field areas is determined.
Examples of situations in which the illuminance difference is less than the threshold value include the evening, the case where an outdoor light is on outdoors in the sunshine, and the case where the user U is in the shadow of a building.
In these cases, even if there is light that causes an illuminance difference between different areas, the illuminance difference is small, so the user U neither particularly avoids nor particularly pays attention to the area in which a high illuminance value is detected, and may look at it in the normal course of action. In such a case, the priority processing score is determined on the basis that the surrounding situation is not one that induces movement of the line of sight of the user U, for example, as shown in FIG. 6.
For example, the brightness of the sun at around 10 a.m. on a clear day is about 65,000 lux, whereas the brightness of the sun one hour before sunset on a clear day is about 1,000 lux, so in the evening the illuminance difference tends to be smaller than in the daytime.
Thus, in the evening, the glare of the sun is less noticeable than in the daytime, and the user U may look in the direction in which the sun is located without particularly avoiding an area with high illuminance.
When an outdoor light is on outdoors in the sunshine, it is assumed that the illuminance detected in the direction of the outdoor light does not differ much from the illuminance of the situation in which the user U is currently placed, that is, the illuminance of the front area 60F corresponding to the visual field of the user U. This situation is not considered to be one that induces movement of the line of sight of the user U.
Also, for example, when the user U is in the shadow of a building even outdoors in the daytime on a clear day, the detection values of the four illuminance sensors do not differ much, and the illuminance difference is assumed to be small. In such a case as well, the situation is not considered to be one that induces movement of the line of sight of the user U.
Further, when the sun is located in the front area 60F corresponding to the current visual field of the user U, it is assumed that the user U will move his or her line of sight to a less dazzling area with lower illuminance.
On the other hand, when the sun is located in the front area 60F corresponding to the current visual field of the user U, for an area in the non-visual-field areas whose illuminance, even if lower, does not differ greatly from that of the front area 60F, the user U does not deliberately turn toward it, but may look at it in the normal course of action.
Therefore, in the outdoor environment in the sunshine, the degree of attraction of an area whose illuminance is lower than that of the front area 60F corresponding to the current visual field of the user U, among the non-visual-field areas of the user U, is determined as follows.
That is, first, the difference between the illuminance of the front area 60F corresponding to the current visual field of the user U detected by the illuminance sensor 22F and the illuminance of the right area 60R (rear area 60B, left area 60L) detected by the illuminance sensor 22R (22B, 22L), which detects the illuminance of the non-visual-field areas of the user U, is obtained.
Then, the degree of attraction of the right area 60R (rear area 60B, left area 60L) is determined so as to be lower when the above-described illuminance difference is less than the threshold value than when it is equal to or larger than the threshold value.
That is, an area whose illuminance differs greatly from that of the front area 60F is an area that is not too dazzling for the user U, and it is assumed that the user U is highly likely to look at it in order to avoid the current glare, so the priority processing score (degree of attraction) is determined to be high.
On the other hand, an area whose illuminance difference from the front area 60F is small presents a glare not so different from that of the front area 60F for the user U, and it is assumed that the user U may look at it only in the normal course of action, so the priority processing score (degree of attraction) is determined to be lower than when the illuminance difference is large (equal to or larger than the threshold value).
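As a rough sketch of the two determination rules above for the outdoor case in sunshine, the following assumes a 0 to 10 score scale; the threshold and the concrete score values are placeholders chosen only to respect the described ordering, not values specified by the embodiment.

```python
# Illustrative sketch only; the score values (1, 5, 8) and the threshold are
# placeholder assumptions that merely reproduce the described ordering.

def outdoor_daylight_score(front_lux, area_lux, threshold_lux):
    """Priority processing score (degree of attraction) of one non-visual-field area."""
    diff = area_lux - front_lux
    if diff >= 0:
        # Brighter than the front area: too dazzling to look at if the difference is large.
        return 1 if diff >= threshold_lux else 5
    # Darker than the front area: attractive as a refuge from glare if the
    # difference is large, otherwise unremarkable.
    return 8 if -diff >= threshold_lux else 5

# Example usage (made-up lux values, threshold set arbitrarily to 30000 lux):
front = 10000.0
for name, lux in {"right": 65000.0, "rear": 12000.0, "left": 500.0}.items():
    print(name, outdoor_daylight_score(front, lux, threshold_lux=30000.0))
# right 1 (much brighter), rear 5 (slightly brighter), left 5 (slightly darker)
```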
Note that, even during the daytime, the processing for outdoors where it is cloudy and direct sunlight does not illuminate the ground surface to the extent that objects cast shadows, and for outdoors at night when the sun is not out, is the same as the processing for the dark indoor case described later.
(In the case of indoors)
Next, a case where the environment in which the user U exists is indoors will be described.
When the user U is indoors, it is determined whether the illuminance detected by the illuminance sensor 22 is caused by sunlight entering the room from outdoors.
In the indoor case, for a real-space area irradiated with sunlight, the user U is regarded as quite unlikely to look at it, so the priority processing score is determined to be low such that the priority is the lowest, that is, the processing load is the lowest.
In the indoor case, for a real-space area irradiated with light other than sunlight, it is basically determined that the user U is likely to look at it, and the priority processing score is determined to be high.
The case where the light irradiating the real-space area is sunlight and the case where it is not will be described separately below.
[In the case of sunlight]
An area struck by dazzling sunlight is bright, with an illuminance that differs greatly from the average brightness (illuminance) of the indoor environment in which the user U is currently placed. The user U knows empirically that even when looking at an area struck by sunlight, the eyes do not become accustomed to the brightness and it is difficult to see things clearly. It is therefore assumed that the user U does not actively look at the area struck by sunlight.
Therefore, indoors, the priority processing score (degree of attraction) for a real-space area irradiated with sunlight is determined such that the priority of the processing relating to the virtual object arranged in that area is the lowest, that is, such that the processing load is the lowest.
In the example of the present embodiment, the processing load is divided into three stages and process X is the process with the lowest processing load, so indoors, the priority processing score for a real-space area irradiated with sunlight is determined to be 3 points or less so that process X is selected.
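The correspondence between the priority processing score and the three processing-load stages can be pictured as in the following sketch; the boundary between process Y and process Z is an assumption made only for illustration, since the text fixes only that a score of 3 points or less selects process X.

```python
# Hypothetical mapping from priority processing score (0 to 10) to the three
# processing-load stages X < Y < Z; the Y/Z boundary (9 points) is an assumption.

def select_process(score: int) -> str:
    if score <= 3:
        return "X"   # lowest drawing-processing load
    if score <= 8:
        return "Y"   # intermediate drawing-processing load
    return "Z"       # highest drawing-processing load

assert select_process(1) == "X"    # e.g. the dazzling right area in FIG. 7
assert select_process(8) == "Y"    # e.g. the other areas in FIG. 7
assert select_process(10) == "Z"   # e.g. the front and sound-source areas in FIG. 8
```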
[In the case of light other than sunlight]
On the other hand, when the illuminance detected by the illuminance sensor 22 is not caused by sunlight, the illuminance detected indoors is recognized by the user U as being caused by artificially produced light such as an indoor light or a spotlight.
When there is a bright area whose illuminance differs greatly from the average brightness (illuminance) of the indoor environment in which the user U is currently placed, it is assumed that, contrary to the case of sunlight, the user U actively looks at the bright area produced by the artificial light.
However, even if there is an area in which a higher illuminance value than in other areas is detected due to light other than sunlight, for an area whose illuminance does not differ greatly from the illuminance of the front area 60F corresponding to the current visual field of the user U, it is assumed that the user U is unlikely to notice the light.
In such a case, even for an area in which a high illuminance value is detected, the user U does not pay particular attention to it and may look at it only in the normal course of action.
Therefore, for an indoor area irradiated with light other than sunlight, the priority processing score (degree of attraction) is determined as follows.
That is, first, the difference between the illuminance of the front area 60F corresponding to the current visual field of the user U detected by the illuminance sensor 22F and the illuminance of the right area 60R (rear area 60B, left area 60L) detected by the illuminance sensor 22R (22B, 22L), which detects the illuminance of the non-visual-field areas of the user U, is obtained.
Then, the degree of attraction of the right area 60R (rear area 60B, left area 60L) is determined so as to be lower when the above-described illuminance difference is less than the threshold value than when it is equal to or larger than the threshold value.
That is, when the illuminance difference from the front area 60F is small and the area is not assumed to be one that induces movement of the line of sight of the user U, the priority processing score (degree of attraction) is determined to be low.
On the other hand, when the illuminance difference from the front area 60F is large and the area where the light other than sunlight is located is assumed to be one that induces movement of the line of sight of the user U, the priority processing score (degree of attraction) is determined to be higher than when the illuminance difference is small (less than the threshold value).
When the illuminance difference is less than the threshold value, the user U may look at the real-space area irradiated with light other than sunlight in the normal course of action, so the priority processing score is determined to be relatively high, but lower than when the difference is equal to or larger than the threshold value.
For example, a real-space area irradiated with light other than sunlight whose illuminance difference is equal to or larger than the threshold value is regarded as an area that the user U is quite likely to look at, and the priority processing score is determined to be considerably high.
On the other hand, when the illuminance difference is less than the threshold value and the area is not one that induces movement of the line of sight of the user U but may be looked at in the normal course of action, a medium priority processing score is determined.
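The indoor rules above can be summarized in the following sketch; whether the detected light is sunlight is assumed to be given as a flag (its determination is described further below), and the concrete score values are placeholders that only respect the described ordering.

```python
# Illustrative sketch of the indoor determination; the score values (1, 9, 5)
# are placeholder assumptions that merely reproduce the described ordering.

def indoor_score(front_lux, area_lux, threshold_lux, lit_by_sunlight):
    if lit_by_sunlight:
        return 1          # 3 points or less, so that process X (lowest load) is selected
    diff = abs(area_lux - front_lux)
    if diff >= threshold_lux:
        return 9          # conspicuous artificial light: quite likely to be looked at
    return 5              # unremarkable light: looked at only in the normal course of action

# Example usage (made-up values):
print(indoor_score(300.0, 20000.0, 1000.0, lit_by_sunlight=True))    # 1
print(indoor_score(300.0, 5000.0, 1000.0, lit_by_sunlight=False))    # 9
print(indoor_score(300.0, 600.0, 1000.0, lit_by_sunlight=False))     # 5
```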
An example of a situation in which the illuminance difference is equal to or larger than the threshold value is a case where a bright interior is locally illuminated with high-illuminance light such as a spotlight. An area irradiated with such high-illuminance light easily catches the eyes of the user U even when the whole interior is bright, and an area with such light is an area with a high degree of attraction.
Another example of a situation in which the illuminance difference is equal to or larger than the threshold value is a case where a dark interior is locally illuminated with a spotlight, relatively bright illumination light, a downlight, or the like. Even if such light is dim, an area irradiated with it easily catches the eyes of the user U in a dark interior.
Examples of situations in which the illuminance difference is less than the threshold value include a case where a bright interior is locally illuminated with ordinary room lighting, a downlight, or the like, and the shadow area of an object such as a table. An area irradiated with such light or a shadow area does not differ much from the average brightness of the real space, and therefore does not particularly catch the eyes of the user U.
Another example of a situation in which the illuminance difference is less than the threshold value is a dark area with no particular light in a dark interior. Such an area does not differ much from the average brightness of the real space, and therefore does not particularly catch the eyes of the user U.
Here, a bright interior refers to, for example, an interior with a brightness of 30 lux or more, and a dark interior refers to, for example, an interior with a brightness of less than 30 lux.
[Determination of whether the light is sunlight]
Whether or not the illuminance detected by the illuminance sensor 22 described above is caused by sunlight can be determined using the position information of the user U, date-and-time and weather information, indoor information acquired in advance, and the like.
The position information of the user U can be acquired by a GPS (Global Positioning System) receiver or the like mounted on the HMD 1.
The date-and-time and weather information is, for example, sun position information such as the altitude (elevation angle) and azimuth of the sun for each place and date and time, and weather information such as fair or rainy weather, which can be acquired by the HMD 1 communicating with an application server on an external network that provides date-and-time and weather information.
The indoor information includes window position information such as the presence or absence of an indoor window, the direction in which the window faces, and the position of the window with respect to the wall.
Whether the user U is indoors or outdoors can be detected from the position information of the user U.
Further, when the user U is indoors, whether or not the light detected by the illuminance sensor is caused by sunlight can be determined from the window presence/absence information, which is part of the indoor information.
When there is no window, the detected light is regarded as not being caused by sunlight, and the above-described processing for the case without sunlight can be executed.
On the other hand, when there is a window, the irradiation position of the sunlight entering the room through the window can be obtained from the position of the sun given by the date-and-time and weather information and the position of the window given by the indoor information, and it can be determined whether or not the light detected by the illuminance sensor 22 is caused by sunlight.
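A minimal sketch of this determination might look as follows; the direction comparison and the tolerance angles are deliberate simplifications, and all parameter names are assumptions made for illustration.

```python
# Simplified, illustrative sketch: decide whether light detected from a given
# direction can be attributed to sunlight entering through a window. The sun
# position would come from the date-and-time/weather information, the window
# direction from the indoor information; the angle tolerances are arbitrary.

def is_sunlight(indoor, has_window, sun_elevation_deg, sun_azimuth_deg,
                window_azimuth_deg, light_azimuth_deg, tolerance_deg=30.0):
    def angle_diff(a, b):
        return abs((a - b + 180.0) % 360.0 - 180.0)

    if not indoor:
        # Outdoors, detected natural light is treated as sunlight while the sun is up.
        return sun_elevation_deg > 0
    if not has_window or sun_elevation_deg <= 0:
        return False   # no window, or the sun is below the horizon
    sun_faces_window = angle_diff(sun_azimuth_deg, window_azimuth_deg) < 90.0
    light_from_window = angle_diff(light_azimuth_deg, window_azimuth_deg) < tolerance_deg
    return sun_faces_window and light_from_window

# Example usage (made-up angles):
print(is_sunlight(True, True, 35.0, 180.0, 170.0, 175.0))   # True
print(is_sunlight(True, True, 35.0, 180.0, 350.0, 175.0))   # False
```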
Here, an example has been given in which the position information of the user U and the date-and-time and weather information are used to determine whether the light detected indoors is caused by sunlight, but the present technology is not limited to this. The position information of the user U and the date-and-time and weather information can also be used in other ways.
For example, as described above, whether the user U is indoors or outdoors can be grasped from the position information of the user U, so it can be determined from the position information which of the processing for the indoor case and the processing for the outdoor case is to be executed.
Further, in the outdoor processing, the weather at the place where the user U is currently located can be grasped from the date-and-time and weather information and the position information of the user U. Thus, for example, when it is cloudy and there is no need to execute the information processing that prioritizes the processing relating to virtual objects arranged in the non-visual-field areas using the illuminance information of light caused by sunlight, the illuminance sensors may be turned off or the like, so that power consumption can be reduced.
In the above description, the case where the user U plays a game while moving indoors or outdoors has been given as an example. In such a game, the user U may move from indoors to outdoors or from outdoors to indoors depending on the content of the game.
For example, when the user U goes from indoors to outdoors in the flow of the game, the user U may move while directing his or her line of sight to a dazzling area irradiated with sunlight. Since such behavior of the user U, going from indoors to outdoors according to the content of the game, can be assumed in advance, the priority of the drawing processing of the virtual objects may be determined by taking into account, in addition to the above-described surrounding situation information, the user behavior predicted from the content.
The above-described threshold value of the illuminance difference is set in advance for each different environment, such as outdoors or indoors. Furthermore, the threshold value is also set appropriately for each level of environmental brightness.
Alternatively, data associating the average brightness of the environment in which the user U is placed, the illuminance values detected by the illuminance sensors, the illuminance differences between the front area 60F and each of the non-visual-field areas, and the behavior pattern of the user U, such as whether or not the user U looked at an area with a high illuminance value, may be accumulated as needed. Then, based on the accumulated data, the relationship among the average brightness of the environment in which the user U is placed, the behavior pattern of the user U, and the threshold value may be statistically constructed so that a more appropriate threshold value can be set.
As described above, in the HMD 1 of the present embodiment, the surrounding situation information is obtained by the illuminance sensors, the behavior of the user U is predicted using the surrounding situation information, and the degree of attraction is determined for each of the areas into which the non-visual-field area is divided. Based on the degree of attraction, the priority of the processing relating to the virtual object arranged in each of the non-visual-field areas of the user U is determined. Then, based on the determined priority, the processing is executed such that the drawing processing load of the virtual object arranged in an area with a low degree of attraction for the user U is reduced, so that efficient processing can be performed.
<Second embodiment>
In the second embodiment, a case will be described as an example in which sound information, which is the detection result acquired by the sound sensor 23, is used as the surrounding situation information.
FIG. 8 shows a determination example of the priority processing score for each area in a situation where an explosion sound 66 has occurred in the right area 60R of the user U. The situation shown in FIG. 8 is a situation that induces movement of the line of sight of the user U.
In the present embodiment, the priority processing score of each of the areas 60F, 60B, 60R, and 60L is obtained based on the sound volume, which is the sound information detected by each of the sound sensors 23F, 23R, 23L, and 23B.
Generally, when some sound occurs nearby, people tend to look in the direction that appears to be the source of the sound. Therefore, in the present embodiment, the priority processing score is determined according to the sound volume such that the priority processing score of the area with the largest detected volume is high.
The four sound sensors 23F, 23R, 23L, and 23B detect the volume of sounds around the user U. In the example shown in FIG. 8, since the explosion sound 66 has occurred on the right side of the user U, the volume value detected by the sound sensor 23R, which detects the sound information of the right area 60R, is the largest.
When the user U turns toward the right area 60R, which is the sound source, the line-of-sight direction of the user U moves from the front area 60F to the right area 60R during the period from the state in which the user U faces the front area 60F until the user U faces the right area 60R. This assumed line-of-sight movement area of the user U is an area that the user U is highly likely to look at.
Therefore, in the example shown in FIG. 8, the priority processing scores P1 and P2 of the front area 60F and the right area 60R are determined to be 10 points, higher than those of the other areas.
On the other hand, the left area 60L, which is directly opposite the right area 60R that is the sound source, is assumed to be the area to which the user U is least likely to direct his or her line of sight, so the priority processing score P4 is determined to be 3 points.
The rear area 60B is the area adjacent to the right area 60R and is slightly more likely to enter the visual field of the user U than the left area 60L, so the priority processing score P3 is determined to be 4 points.
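A rough sketch of this sound-based determination is shown below; mapping the areas to angles and the concrete score values are assumptions made only to reproduce the ordering in the FIG. 8 example (front and source areas highest, the opposite area lowest).

```python
# Illustrative sketch: score the four areas when a sound source is detected.
# The angular layout (front=0, right=90, rear=180, left=270 degrees) and the
# concrete scores are assumptions; only the ordering follows the description.

AREA_ANGLES = {"front": 0, "right": 90, "rear": 180, "left": 270}

def sound_scores(volumes):
    """volumes: detected volume per area, e.g. {'front': 0.2, 'right': 0.9, ...}"""
    source = max(volumes, key=volumes.get)          # area with the largest volume
    def angular_distance(a, b):
        d = abs(AREA_ANGLES[a] - AREA_ANGLES[b]) % 360
        return min(d, 360 - d)
    scores = {}
    for area in volumes:
        if area in ("front", source):
            scores[area] = 10            # expected line-of-sight path: front -> source
        elif angular_distance(area, source) == 180:
            scores[area] = 3             # directly opposite the source: least likely to be seen
        else:
            scores[area] = 4             # adjacent to the source
    return scores

print(sound_scores({"front": 0.2, "right": 0.9, "rear": 0.3, "left": 0.2}))
# -> {'front': 10, 'right': 10, 'rear': 4, 'left': 3}
```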
In FIG. 8, the peripheral information (second peripheral information) of the rear area 60B and the left area 60L (second areas) of the real space is less likely to catch the eyes of the user U, that is, has a lower degree of attraction, than the peripheral information (first peripheral information) of the right area 60R (first area).
In FIG. 8, based on the determined priority processing scores, the virtual objects arranged in the right area 60R and the front area 60F are drawn based on process Z, the virtual object arranged in the left area 60L is drawn based on process X, and the virtual object arranged in the rear area 60B is drawn based on process Y.
If process Z relating to the virtual object (first virtual object) arranged in the right area 60R (first area) is regarded as the first process, then process X relating to the virtual object (second virtual object) arranged in the left area 60L (second area) and process Y relating to the virtual object (second virtual object) arranged in the rear area 60B (second area) correspond to the second process.
Looking only at the relationship between the rear area 60B and the left area 60L, process Y relating to the virtual object (first virtual object) arranged in the rear area 60B (first area) corresponds to the first process, and process X relating to the virtual object (second virtual object) arranged in the left area 60L (second area) corresponds to the second process.
Note that the sound information as the surrounding situation information here refers to accidentally occurring environmental sounds, such as the voices of people nearby, music playing in the vicinity, construction sounds, and explosion sounds, and excludes sounds that are generated as preset in the content.
For sounds that are generated as preset in the content, the processing relating to virtual objects arranged in the non-visual-field areas executed using the degree of attraction determined based on accidentally occurring peripheral information, as described in the present embodiment, is not performed.
Further, even among accidentally occurring environmental sounds, an environmental sound such as the noise of a crowd, to which people exceptionally do not pay attention even when it occurs, may be appropriately canceled as noise from the sound information acquired by the sound sensor 23.
The control of the processing relating to virtual objects using the sound information acquired from the sound sensor 23 can be applied to, for example, an HMD that reproduces game content. The user U can enjoy the game while wearing the HMD and freely moving in the real space. When playing a game while actually moving in the real space, environmental sounds other than the sounds generated as preset in the game content may occur in the real space.
In the present embodiment, by mounting sound sensors on the HMD and detecting the sound information of the surrounding situation, the direction in which the sound source is located can be specified. Since the user U is assumed to turn toward the direction of the sound source, the HMD 1 performs the drawing processing of the virtual objects arranged in the areas from the area in front of the user U at the time the sound occurs to the area containing the sound source with higher priority than the drawing processing of the virtual objects arranged in the other areas.
This allows the user U to experience a game with a sense of realism. Further, in the HMD 1, the processing is executed with a reduced drawing processing load for virtual objects in areas with a low degree of attraction for the user U, so that efficient processing is possible.
<Third embodiment>
In the third embodiment, a case will be described as an example in which the detection result (odor information) acquired by the odor sensor 24 is used as the surrounding situation information.
FIG. 9 shows a determination example of the priority processing score for each area in a situation where the source of an odor 67 is in the right area 60R of the user U. The situation shown in FIG. 9 is a situation that induces movement of the line of sight of the user U.
In the present embodiment, the priority processing score of each of the areas 60F, 60B, 60R, and 60L is obtained based on the odor intensity detected by each of the odor sensors 24F, 24R, 24L, and 24B.
Generally, when a person senses some odor nearby, the person tends to turn in the direction that appears to be the source of the odor. Therefore, in the present embodiment, the priority processing score is determined such that the priority processing score (degree of attraction) of the area with the strongest detected odor is high.
The four odor sensors 24F, 24R, 24L, and 24B detect the odor intensity around the user U. In the example shown in FIG. 9, since the source of the odor 67 is in the direction to the right of the user U, the odor intensity value detected by the odor sensor 24R, which detects the odor intensity of the right area 60R, is the largest.
Further, during the period from the state in which the user U faces the front area 60F until the user U faces the right area 60R, which is the source of the odor, the line-of-sight direction of the user U moves from the front area 60F of the user U at the time the odor is detected to the right area 60R.
Therefore, in the example shown in FIG. 9, the priority processing scores P1 and P2 of the front area 60F and the right area 60R are determined to be 10 points, higher than those of the other areas.
On the other hand, the left area 60L, which is directly opposite the right area 60R where the odor source is located, is assumed to be the area to which the user U is least likely to direct his or her line of sight, so the priority processing score P4 is determined to be 4 points.
The rear area 60B is the area adjacent to the right area 60R and is slightly more likely to enter the visual field of the user U than the left area 60L, so the priority processing score P3 is determined to be 5 points.
In FIG. 9, the peripheral information (second peripheral information) of the rear area 60B and the left area 60L (second areas) of the real space is less likely to catch the eyes of the user U, that is, has a lower degree of attraction, than the peripheral information (first peripheral information) of the right area 60R (first area).
The control of the processing relating to virtual objects based on the odor information acquired by the odor sensor 24 can be applied to, for example, an HMD for simulated disaster experiences. The user U wearing the HMD can have a simulated experience of a fire by being presented with virtual objects (display images) such as fire and smoke superimposed on the external world that is the real space.
Then, an odor is generated in the real space, and the direction in which the source of the odor is located is specified from the detection results of the odor sensors mounted on the HMD. Here, as the odor generated in the real space, a gas that is harmless to the body and whose odor can be detected by the odor sensors can be used.
Since the user U is assumed to turn toward the direction of the odor source, the drawing processing of the virtual objects arranged in the areas from the front area 60F of the user U at the time the odor is detected to the area containing the odor source is performed with higher priority than the drawing processing of the virtual objects arranged in the other areas.
This allows the user U to have a simulated disaster experience with a sense of realism. Further, in the HMD 1, the processing is executed with a reduced drawing processing load for virtual objects in areas with a low degree of attraction for the user U, so that efficient processing is possible.
<Fourth embodiment>
In the fourth embodiment, image information detected by the camera 25 mounted on the HMD 1 is acquired as the surrounding situation information, and the priority processing score is determined using the image information.
Here, as an example, it is assumed that there are three users U wearing HMDs and that these three users are playing using the same game content. The three users U are referred to as user U1, user U2, and user U3. The HMDs worn by the users U1 to U3 are all HMDs 1 having the same structure shown in FIG. 1.
In the present embodiment, the processing in the HMD 1 worn by the user U1 will be described, but the same processing is performed for the other users U2 and U3. The HMDs 1 worn by the users U1 to U3 are configured to be able to communicate with each other, and various kinds of information including the image information acquired by each HMD can be transmitted to and received from the other HMDs.
FIG. 10 is a schematic diagram for explaining the relationship between the areas around the user U1 and the priority processing scores when the surrounding situation of the user U1 is not one that induces movement of the line of sight of the user U1.
FIG. 11 is a schematic diagram for explaining the determination of the priority processing score using the surrounding situation information.
FIG. 12 is a flowchart for explaining an example of the processing relating to virtual objects in the information processing apparatus according to the present embodiment.
As shown in FIG. 10, it is assumed that the real space 70 in which the user U actually moves in the game content is divided into a plurality of areas A1 to A16, and that the user U1 is located in the area A11.
In the example shown in FIG. 10, the area A10 located in front of the user U1 has the highest priority processing score P10, which is determined to be 10 points. For the areas A1, A2, A5, A6, A9, A10, A13, and A14 in front of the user U1, the priority processing scores are determined such that the score becomes lower as the distance from the user U1 increases.
In FIG. 10, the areas A7 and A15 immediately to the right and left of the user U1 have lower priority processing scores than the front area A10, and the priority processing scores P7 and P15 are determined to be 6 points.
Furthermore, in the rightward and leftward directions of the user U1, the priority processing scores are determined so as to become lower as the distance from the user U1 increases, and the priority processing score P3 of the area A3 is determined to be 3 points.
In FIG. 10, the area A12 directly behind the user U1 is the area that the user U1 is least likely to look at, and its priority processing score P12 is determined to be 0 points. Also, for the other areas A4, A8, and A16 behind the user U1, the priority processing scores are determined to be relatively low because the user U1 is unlikely to look at them.
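The default scores in FIG. 10 depend only on where each area lies relative to the user; a simple sketch of such a rule is given below, where the linear falloff with distance and the concrete values are assumptions matching only the described tendency (highest in front, lowest directly behind).

```python
import math

# Illustrative sketch of the default (no stimulus) scoring of FIG. 10.
# user_pos and user_heading_deg define the user's position and facing direction;
# the linear falloff and the 0-10 score range are assumptions.

def default_score(user_pos, user_heading_deg, area_center, max_dist=5.0):
    dx, dy = area_center[0] - user_pos[0], area_center[1] - user_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return 10
    # Angle between the user's heading and the direction of the area (0 = straight ahead).
    bearing = math.degrees(math.atan2(dy, dx))
    rel = abs((bearing - user_heading_deg + 180.0) % 360.0 - 180.0)
    facing_factor = 1.0 - rel / 180.0          # 1 in front, 0 directly behind
    distance_factor = max(0.0, 1.0 - dist / max_dist)
    return round(10 * facing_factor * distance_factor)

# Example: user in the middle of the grid, facing "up" (made-up coordinates).
print(default_score((2.5, 1.5), 90.0, (2.5, 2.5)))   # area directly in front -> high score
print(default_score((2.5, 1.5), 90.0, (2.5, 0.5)))   # area directly behind  -> 0
```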
In the present embodiment, the surrounding situation information used in determining the priority processing score is positional relationship information between each of the users U1 to U3 and the area subject to priority processing score determination. The positional relationship information between a user U and the area subject to determination includes the line-of-sight information of the user U and distance information between the user U and that area. The line-of-sight information of the user U is information on the orientation of the user U with respect to the area subject to determination.
The line-of-sight information and the distance information of each of the users U1 to U3 are detected based on the captured images (image information) captured by the cameras 25 mounted on the HMDs 1 worn by the users U1 to U3.
In the example shown in FIG. 11, the captured images (image information) acquired from the cameras 25 of the HMDs 1 of the other users U2 and U3 are image information in which the non-visual-field areas of the user U1 are captured, and are peripheral information different from the visual field information corresponding to the visual field of the user U1. The captured image captured by the camera 25 mounted on the HMD worn by the user U1 is visual field information corresponding to the visual field of the user U1.
In the example shown in FIG. 11, the user U1 is located in the area A11, the user U2 is located in the area A4, and the user U3 is located in the area A2. The virtual object A is an object arranged in the area A7, and the virtual object B is an object arranged in the area A15. The user U2 and the user U3 are paying attention to the virtual object A. The virtual object A and the virtual object B are located around the user U1 and are arranged in the non-visual-field areas of the user U1.
In a direction in which many of the users U participating in the game are looking, there is a high possibility that a virtual object attracting attention exists, and a user U who is not currently facing that direction is likely to turn and look at that virtual object.
In the example shown in FIG. 11, the users U2 and U3 are paying attention to the virtual object A, so the user U1, who is not currently facing the area containing the virtual object A, is also likely to look at the virtual object A.
In the present embodiment, for each area around the user U1, the priority processing points are determined using the positional relationship information between each determination target area and all the users U playing the same game content in the same real space (the users U1 to U3 in FIG. 11).
As described above, the positional relationship information is the information on each user's orientation with respect to the determination target area (the line-of-sight information of the user U) and the distance between the user U and the target area. Here, a high priority processing point is determined for an area at which many users are looking and which is close to those users.
In the example illustrated in FIG. 11, the virtual object A around the user U1 is located in the area A7 at which the users U2 and U3 other than the user U1 are looking, and is closer to the users U2 and U3 than the virtual object B.
On the other hand, the virtual object B around the user U1 is not attracting any user's attention and is located in the area A15, which is farther from the positions of the users U2 and U3 than the virtual object A.
In such an example, the area A7 in which the virtual object A is arranged is given a higher priority processing point than the area A15 in which the virtual object B is arranged.
Hereinafter, the determination of the priority processing points will be described in detail with reference to the flowchart of FIG. 12.
First, as shown in FIG. 12, the peripheral situation information management unit 12 of the HMD 1 of the user U1 acquires, via the communication control unit 11 and as peripheral situation information, the image information captured by the camera 25 mounted on the HMD 1 worn by each of the users U1 to U3 (S31).
From this image information, the distance between each user U and the determination target area and each user's orientation with respect to the target area (line-of-sight information) are obtained for each of the users U1 to U3.
Next, the priority processing point determination unit 15 determines the priority processing point (attraction degree) for each peripheral area of the user U1, based on the distance information and line-of-sight information obtained from the image information (S32).
In S32, a priority processing point for the determination target area is calculated for each user U, and the sum of the points calculated for the individual users is taken as the priority processing point of that area.
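To make the aggregation in S32 concrete, the following sketch computes, for each user, the distance l and horizontal angle θ to a target area and sums a per-user score over all users. It is only an illustrative sketch: the grid coordinates mirror FIG. 11, but the helper names and the form of per_user_score() are assumptions, because the actual per-user formula (Equation 1 below) is reproduced in the publication only as an image; consequently the printed values will not match the worked example (3.82 and 1.76).

```python
import math

E = 10.0  # priority processing point coefficient (set to 10 in the worked example)

# Illustrative grid coordinates for FIG. 11: x = column, y = row of the 4x4 area grid.
users = {
    "U1": {"pos": (3.0, 3.0), "gaze": (2.0, 3.0)},  # in A11, facing the front area A10
    "U2": {"pos": (4.0, 1.0), "gaze": (3.0, 2.0)},  # in A4, looking at virtual object A in A7
    "U3": {"pos": (2.0, 1.0), "gaze": (3.0, 2.0)},  # in A2, looking at virtual object A in A7
}
areas = {"A7": (3.0, 2.0), "A15": (3.0, 4.0)}  # centers of the areas holding objects A and B


def distance_and_angle(user, area_center):
    """Return (l, theta): distance to the area and the angle, in the horizontal
    plane, between the user's facing direction and the direction to the area."""
    px, py = user["pos"]
    gx, gy = user["gaze"]
    ax, ay = area_center
    to_area = (ax - px, ay - py)
    facing = (gx - px, gy - py)
    l = math.hypot(*to_area)
    cos_t = (to_area[0] * facing[0] + to_area[1] * facing[1]) / (l * math.hypot(*facing))
    theta = math.acos(max(-1.0, min(1.0, cos_t)))
    return l, theta


def per_user_score(l, theta):
    # ASSUMED stand-in for Equation 1 (published only as an image): any function
    # that decreases with the distance l and the off-axis angle theta will do here.
    return E / (l * (1.0 + theta))


def priority_points(area_center):
    # Per the text: the area's priority processing point is the sum of the points
    # calculated for every user playing in the same real space.
    return sum(per_user_score(*distance_and_angle(u, area_center)) for u in users.values())


for name, center in areas.items():
    print(name, round(priority_points(center), 2))
```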
Here, the determination of the priority processing points of the area A7 in which the virtual object A is arranged and the area A15 in which the virtual object B is arranged in FIG. 11 will be described as a concrete example; in practice, a priority processing point is determined for every area.
The priority processing point of the area A7 (A15) in which the virtual object A (virtual object B) is arranged in FIG. 11 is obtained as the sum of the priority processing points obtained for each of the users U1 to U3. The priority processing point of each user is obtained by the following equation.
[Equation 1] (the formula is reproduced in the publication only as an image: JPOXMLDOC01-appb-M000001)
In the above equation, P is the priority processing point, l (m) is the distance between the camera 25 mounted on the HMD 1 worn by each user U and the determination target area, θ (rad) is the angle, in the horizontal plane, at which the target area is located as seen from the front of each user U, and E is the priority processing point coefficient.
Here, the determination of the priority processing points for the processing relating to the drawing of virtual objects on the HMD 1 worn by the user U1 is described, so the priority processing point coefficient E is a coefficient referenced to the user U1. Since the coefficient E is determined by the positional relationship between a user U and the determination target area, the coefficient differs for each user U even when the target area is the same.
In the determination of the priority processing point of the area A7 in which the virtual object A shown in FIG. 11 is arranged, if the distance l between the user U1 and the area A7 is taken as 1, the distance between each of the users U2 and U3 and the area A7 is the square root of 2.
The angle between the user U1 and the area A7 is π/2 radians (about 1.57 radians), and the angle between each of the users U2 and U3 and the area A7 is 0 radians.
Here, when the priority processing point coefficient E is set to 10 and the priority processing point of the area A7 as seen from each user is calculated, P = 1 for the user U1 and P = 1.41 for each of the users U2 and U3.
Therefore, the priority processing point PA for the processing relating to the drawing of the virtual object A arranged in the area A7 on the HMD 1 worn by the user U1 is the sum of the priority processing points determined for the individual users U:
PA = 1 + 1.41 + 1.41 = 3.82.
On the other hand, in the determination of the priority processing point of the area A15 in which the virtual object B is arranged, the distance l between the user U1 and the area A15 is 1, and the distance between each of the users U2 and U3 and the area A15 is the square root of 10.
The angle between the user U1 and the area A15 is π/2 radians (about 1.57 radians), and the angle between each of the users U2 and U3 and the area A15 is about 0.46 radians.
Here, when the priority processing point coefficient E is set to 10 and the priority processing point of the area A15 as seen from each user is calculated, P = 1 for the user U1 and P = 0.38 for each of the users U2 and U3.
Therefore, the priority processing point Pb for the processing relating to the drawing of the virtual object B arranged in the area A15 on the HMD 1 worn by the user U1 is the sum of the priority processing points determined for the individual users U:
Pb = 1 + 0.38 + 0.38 = 1.76.
Next, based on the priority processing points determined by the priority processing point determination unit 15, the drawing processing load determination unit 16 determines the load of the processing relating to the drawing of the virtual objects arranged in the peripheral areas of the user U1, and the output image generation unit 17 executes the virtual object generation processing in accordance with the determined processing content (S33).
In the present embodiment, the threshold value Pt of the priority processing point for determining the processing load is set to 2.0.
The drawing processing load determination unit 16 determines that a relatively low-load process X, in which a virtual object arranged in an area whose priority processing point is less than the threshold is moved once every 100 msec, is to be performed.
The drawing processing load determination unit 16 determines that a relatively high-load process Y, in which a virtual object arranged in an area whose priority processing point is equal to or greater than the threshold is moved once every 16 msec, is to be performed.
Although the processing load is divided into two stages in this example, the present technology is not limited to this.
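The two-stage determination described above can be summarized in a short sketch. Only the threshold Pt = 2.0 and the 100 msec / 16 msec update intervals come from the text; the type and function names are assumptions made for illustration.

```python
from dataclasses import dataclass

PT_THRESHOLD = 2.0  # threshold Pt of the priority processing point (from the embodiment)


@dataclass
class DrawingPolicy:
    name: str
    update_interval_ms: int


PROCESS_X = DrawingPolicy("X (low load)", 100)  # move the object once every 100 msec
PROCESS_Y = DrawingPolicy("Y (high load)", 16)  # move the object once every 16 msec


def select_drawing_policy(priority_points: float) -> DrawingPolicy:
    """Two-stage load selection: areas whose priority processing points reach the
    threshold get the high-load drawing process."""
    return PROCESS_Y if priority_points >= PT_THRESHOLD else PROCESS_X


# Worked example values from the text: PA = 3.82 for area A7, Pb = 1.76 for area A15.
print(select_drawing_policy(3.82).name)  # -> "Y (high load)"  (virtual object A)
print(select_drawing_policy(1.76).name)  # -> "X (low load)"   (virtual object B)
```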
In the above example, the drawing processing load determination unit 16 determines that the process Y is to be performed for the virtual object A arranged in the area A7, since the value of PA is 3.82. For the virtual object B arranged in the area A15, it determines that the process X is to be performed, since the value of Pb is 1.76.
Then, the output image generation unit 17 executes the virtual object generation processing based on the processing content determined by the drawing processing load determination unit 16.
Next, the output image control unit 18 converts the virtual objects generated by the output image generation unit 17 into an image signal that can be displayed on the display 31 and outputs it to the output unit 30 (S34).
As a result, the processing load for drawing the virtual object A, which the user U1 is likely to look at, is made high, and the processing load for drawing the virtual object B, which the user U1 is unlikely to look at, is made low, so that efficient processing can be performed.
Here, an example using images captured by the cameras mounted on the HMDs has been described, but images captured by an external camera arranged in the real space as an external device not mounted on an HMD may be used instead. The external camera may be fixed or movable, as long as image information containing the positional relationship between the users U and the determination target areas can be acquired.
In this manner, the positional relationship information between the users U and the determination target areas may be obtained from image information, and the processing relating to the drawing of the virtual objects arranged in the non-visual-field area of the user U1 may be controlled on that basis.
As described above, in the information processing apparatus (HMD) according to the present technology, the priority of the processing relating to the virtual objects arranged in the non-visual-field area is determined based on the attraction degree determined for the peripheral information of the real space acquired from the sensors, and the processing relating to the virtual objects is executed accordingly.
As a result, the processing relating to a virtual object arranged in an area with a high attraction degree can be performed in preference to the processing relating to a virtual object arranged in an area with a low attraction degree, and efficient processing can be achieved.
<Other embodiments>
Embodiments of the present technology are not limited to the embodiments described above, and various modifications can be made without departing from the gist of the present technology.
For example, in the embodiments described above, the 360-degree surroundings of the user are divided into four in the left-right direction as viewed from the user, but they may be further divided in the up-down direction.
In general, the visual field of a person with normal vision is said to extend about 60 degrees upward and about 70 degrees downward for one eye.
As an example, each of the four areas divided in the left-right direction in the above embodiment may be further divided into three in the up-down direction, giving 12 areas in total. When the 180-degree up-down range is divided into an upper area, an intermediate area, and a lower area in order from the top, the angle defining the up-down range of the upper area may be set to 30 degrees, that of the intermediate area to 130 degrees, and that of the lower area to 20 degrees.
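As an illustration of this 12-area division, the sketch below maps a direction relative to the user's facing direction to one of the 12 areas (four horizontal sectors, three vertical bands of 30, 130, and 20 degrees). The area-naming scheme, the function name, and the use of azimuth/elevation angles are assumptions; only the angular split comes from the description above.

```python
def classify_area(azimuth_deg: float, elevation_deg: float) -> str:
    """Map a direction (relative to the user's facing direction) to one of 12 areas:
    4 horizontal sectors x 3 vertical bands (upper 30 deg, middle 130 deg, lower 20 deg)."""
    az = (azimuth_deg + 180.0) % 360.0 - 180.0  # normalize to (-180, 180]
    if -45.0 <= az < 45.0:
        horizontal = "front"
    elif 45.0 <= az < 135.0:
        horizontal = "right"
    elif -135.0 <= az < -45.0:
        horizontal = "left"
    else:
        horizontal = "rear"
    # Vertical bands: the middle band spans -70 deg .. +60 deg (130 deg), matching the
    # typical single-eye visual field of about 60 deg upward and 70 deg downward.
    if elevation_deg > 60.0:
        vertical = "upper"
    elif elevation_deg < -70.0:
        vertical = "lower"
    else:
        vertical = "middle"
    return f"{horizontal}-{vertical}"


print(classify_area(0.0, 0.0))      # -> "front-middle"
print(classify_area(90.0, 75.0))    # -> "right-upper"
print(classify_area(180.0, -80.0))  # -> "rear-lower"
```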
When the surroundings are further divided in the up-down direction in this way, sensors are mounted on the HMD so that the surrounding situation of each area can be detected individually.
For example, when a sound occurs at the feet or above the head of the user U, the user U is expected to look toward the feet or overhead, where the sound originates.
In such a case, by providing sound sensors that detect sound information as the surrounding situation information in the up-down direction, the priority of the processing for the virtual objects arranged in the non-visual-field areas in the up-down direction can be determined based on the attraction degree determined for the detection results of the sound sensors, as in the embodiments described above. This makes it possible to control the processing relating to the virtual objects arranged in the non-visual-field areas in the up-down direction and to perform efficient processing.
In the embodiments described above, sensors that detect the surrounding situation, such as the illuminance sensor 22, the sound sensor 23, and the odor sensor 24, are mounted on the HMD, but these sensors may instead be installed as external devices separate from the HMD.
Such an external device only needs to be installed so that it can detect the surrounding situation of the user U; as an example, a wristband-type device equipped with sensors that detect the surrounding situations of the right-side area and the left-side area of the user U can be used.
The external device and the control unit of the HMD are configured to be able to communicate with each other, and the control unit of the HMD is configured to be able to acquire the detection results of the external device.
In each of the embodiments described above, the drawing processing of the virtual objects arranged in the non-visual-field area is controlled using illuminance information, sound information, odor information, or image information as the surrounding situation information of the user U; however, these pieces of information may be used in combination to control the drawing processing of the virtual objects.
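The publication leaves open how the modalities would be combined; one simple possibility is a weighted sum of per-area attraction scores, sketched below. The scores, weights, and names are illustrative assumptions, not values from the publication.

```python
# Per-area attraction scores derived separately from each modality (values are illustrative).
illuminance_score = {"right": 0.8, "left": 0.2, "rear": 0.1}
sound_score = {"right": 0.3, "left": 0.9, "rear": 0.1}
odor_score = {"right": 0.1, "left": 0.4, "rear": 0.6}

# ASSUMED weights; the publication only states that the information "may be combined".
WEIGHTS = {"illuminance": 0.4, "sound": 0.4, "odor": 0.2}


def combined_attraction(area: str) -> float:
    """Combine the per-modality scores for one area into a single attraction degree."""
    return (WEIGHTS["illuminance"] * illuminance_score[area]
            + WEIGHTS["sound"] * sound_score[area]
            + WEIGHTS["odor"] * odor_score[area])


for area in ("right", "left", "rear"):
    print(area, round(combined_attraction(area), 2))
```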
In the embodiments described above, both the surrounding situation information and spatial information such as the position and orientation of the camera are used when determining the priority processing points, but the surrounding situation information alone may be used.
Since the illuminance sensor 22, the sound sensor 23, and the odor sensor 24 that detect the surrounding situation are each installed so as to detect one of the areas 60F, 60B, 60R, and 60L, the orientation of the user U, that is, the position and orientation of the camera, can be grasped if, for example, the sensor that detects the surrounding situation of the front area 60F can be identified.
 なお、本技術は以下のような構成もとることができる。
(1)
 ユーザの視野に対応する視野情報とは異なる、センサから取得した実空間の第1の領域の第1の周辺情報と第2の領域の第2の周辺情報それぞれに対して誘目度を判定し、上記誘目度を基に、上記第1の領域に配置される第1の仮想オブジェクトに関する第1の処理と上記第2の領域に配置される第2の仮想オブジェクトに関する第2の処理の優先度を判定する制御部を
 具備する情報処理装置。
Note that the present technology may have the following configurations.
(1)
Determining the degree of attraction for each of the first peripheral information of the first area and the second peripheral information of the second area in the real space obtained from the sensor, which is different from the visual field information corresponding to the user's visual field; Based on the degree of attraction, the priority of the first processing for the first virtual object arranged in the first area and the priority of the second processing for the second virtual object arranged in the second area are determined. An information processing device including a control unit for determining.
(2)
 上記(1)に記載の情報処理装置であって、
 上記制御部は、上記第2の周辺情報に対して上記第1の周辺情報よりも低い誘目度を判定する場合、上記第1の処理を上記第2の処理よりも優先的に行うように上記優先度を判定する
 情報処理装置。
(2)
The information processing apparatus according to the above (1),
The control unit may be configured to perform the first processing with higher priority than the second processing when determining a lower degree of interest for the second peripheral information than the first peripheral information. An information processing device that determines the priority.
(3)
 上記(2)に記載の情報処理装置であって、
 上記制御部は、上記第1の仮想オブジェクトの描画に関する上記第1の処理を、上記第2の仮想オブジェクトの描画に関する上記第2の処理よりも処理負荷を高くするように上記優先度を判定する
 情報処理装置。
(3)
The information processing apparatus according to (2),
The control unit determines the priority such that the processing load of the first process related to the drawing of the first virtual object is higher than that of the second process related to the drawing of the second virtual object. Information processing device.
(4)
 上記(1)から(3)のいずれか1つに記載の情報処理装置であって、
 上記センサは、上記第1の周辺情報として上記第1の領域の照度を検出する第1の照度センサと、上記第2の周辺情報として上記第2の領域の照度を検出する第2の照度センサを含む
 情報処理装置。
(4)
The information processing apparatus according to any one of (1) to (3),
The sensor includes a first illuminance sensor that detects illuminance of the first area as the first peripheral information, and a second illuminance sensor that detects illuminance of the second area as the second peripheral information. Information processing device including.
(5)
 上記(4)に記載の情報処理装置であって、
 上記制御部は、上記実空間の環境を加味して上記誘目度を判定する
 情報処理装置。
(5)
The information processing apparatus according to (4),
The information processing device, wherein the control unit determines the degree of attraction in consideration of an environment of the real space.
(6)
 上記(5)に記載の情報処理装置であって、
 上記実空間の環境は日照時の屋外である
 情報処理装置。
(6)
The information processing apparatus according to (5),
The environment of the real space is outdoors under the sunshine.
(7)
 上記(6)に記載の情報処理装置であって、
 上記第1の照度センサ及び上記第2の照度センサのうち少なくとも1つが、上記ユーザの視野に対応する領域の照度よりも高い照度となる太陽光が照射される上記実空間の照度を検出した場合、上記制御部は、上記太陽光が照射される上記実空間の照度を検出する上記照度センサから取得した上記周辺情報に対する誘目度を、上記ユーザの視野に対応する領域の照度と上記照度センサで検出される上記太陽光が照射される上記実空間の照度との照度差がしきい値未満の場合よりも上記しきい値以上の場合の方が低くなるように判定する
 情報処理装置。
(7)
The information processing apparatus according to the above (6),
wherein, when at least one of the first illuminance sensor and the second illuminance sensor detects an illuminance of the real space irradiated with sunlight that is higher than the illuminance of the area corresponding to the user's field of view, the control unit determines the attraction degree for the peripheral information acquired from the illuminance sensor detecting the illuminance of the sunlight-irradiated real space to be lower when the illuminance difference between the illuminance of the area corresponding to the user's field of view and the illuminance of the sunlight-irradiated real space detected by that illuminance sensor is equal to or greater than a threshold than when the difference is less than the threshold.
(8)
 上記(5)に記載の情報処理装置であって、
 上記実空間の環境は屋内である
 情報処理装置。
(8)
The information processing apparatus according to (5),
The environment of the real space is an indoor information processing device.
(9)
 上記(8)に記載の情報処理装置であって、
 上記制御部は、太陽光が照射される上記実空間の照度を検出する上記照度センサから取得した上記周辺情報の誘目度を、上記優先度が最も低くなるように判定する
 情報処理装置。
(9)
The information processing apparatus according to (8),
The information processing device, wherein the control unit determines an eye-catching degree of the peripheral information acquired from the illuminance sensor that detects an illuminance of the real space to be irradiated with sunlight so that the priority becomes the lowest.
(10)
 上記(8)に記載の情報処理装置であって、
 上記第1の照度センサ及び上記第2の照度センサのうち少なくとも1つが、太陽光以外の光が照射される上記実空間の照度を検出した場合、上記制御部は、上記太陽光以外の光が照射される上記実空間の照度を検出する上記照度センサから取得した上記周辺情報の誘目度を、上記ユーザの視野に対応する領域の照度と上記照度センサで検出される上記太陽光以外の光が照射される上記実空間の照度との照度差がしきい値以上の場合よりも上記しきい値未満の場合の方が低くなるように判定する
 情報処理装置。
(10)
The information processing apparatus according to (8),
wherein, when at least one of the first illuminance sensor and the second illuminance sensor detects the illuminance of the real space irradiated with light other than sunlight, the control unit determines the attraction degree for the peripheral information acquired from the illuminance sensor detecting the illuminance of that real space to be lower when the illuminance difference between the illuminance of the area corresponding to the user's field of view and the illuminance of the real space irradiated with the light other than sunlight detected by that illuminance sensor is less than a threshold than when the difference is equal to or greater than the threshold.
(11)
 上記(1)から(10)のいずれか1つに記載の情報処理装置であって、
 上記センサは、上記第1の周辺情報として上記第1の領域の音の大きさを検出する第1の音センサと、上記第2の周辺情報として上記第2の領域の音の大きさを検出する第2の音センサを含む
 情報処理装置。
(11)
The information processing apparatus according to any one of (1) to (10),
The first sensor detects a loudness of the sound in the first area as the first peripheral information, and detects a loudness of the sound in the second area as the second peripheral information. An information processing apparatus including a second sound sensor that performs the processing.
(12)
 上記(11)に記載の情報処理装置であって、
 上記制御部は、上記第2の領域の音の大きさが上記第1の領域の音の大きさよりも小さい場合、上記第2の周辺情報に対する誘目度を上記第1の周辺情報に対する誘目度よりも低く判定する
 情報処理装置。
(12)
The information processing apparatus according to (11),
wherein, when the loudness of the sound in the second area is smaller than the loudness of the sound in the first area, the control unit determines the attraction degree for the second peripheral information to be lower than the attraction degree for the first peripheral information.
(13)
 上記(1)から(12)のいずれか1つに記載の情報処理装置であって、
 上記センサは、上記第1の周辺情報として上記第1の領域の匂いの強さを検出する第1の匂いセンサと、上記第2の周辺情報として上記第2の領域の匂い強さを検出する第2の匂いセンサを含む
 情報処理装置。
(13)
The information processing apparatus according to any one of (1) to (12),
The sensor detects a odor intensity of the first area as the first peripheral information, and detects an odor intensity of the second area as the second peripheral information. An information processing device including a second odor sensor.
(14)
 上記(13)に記載の情報処理装置であって、
 上記制御部は、上記第2の領域の匂いの強さが上記第1の領域の匂いよりも弱い場合、上記第2の周辺情報に対する誘目度を上記第1の周辺情報に対する誘目度よりも低く判定する
 情報処理装置。
(14)
The information processing apparatus according to (13),
wherein, when the odor intensity of the second area is weaker than the odor of the first area, the control unit determines the attraction degree for the second peripheral information to be lower than the attraction degree for the first peripheral information.
(15)
 上記(1)から(14)のいずれか1つに記載の情報処理装置であって、
 上記センサは、上記周辺情報として上記ユーザの周辺の画像情報を取得するカメラである
 情報処理装置。
(15)
The information processing apparatus according to any one of (1) to (14),
The information processing device, wherein the sensor is a camera that acquires image information around the user as the surrounding information.
(16)
 上記(15)に記載の情報処理装置であって、
 上記画像情報には、上記第1の領域及び上記第2の領域それぞれの領域と上記ユーザとの位置関係情報が含まれ、
 上記制御部は、上記位置関係情報を用いて上記誘目度を判定する
 情報処理装置。
(16)
The information processing apparatus according to (15),
The image information includes positional relationship information between the first area and the second area and the user,
The information processing device, wherein the control unit determines the degree of attraction using the positional relationship information.
(17)
 上記(1)から(16)のうちいずれか1つに記載の情報処理装置であって、
 上記情報処理装置は、上記ユーザの頭部に装着可能であって上記ユーザに外界を視認させつつ上記ユーザの視野に上記第1の仮想オブジェクト及び上記第2の仮想オブジェクトを提示することが可能に構成されるヘッドマウントディスプレイである
 情報処理装置。
(17)
The information processing apparatus according to any one of (1) to (16),
wherein the information processing device is a head-mounted display that can be mounted on the head of the user and is configured to be able to present the first virtual object and the second virtual object in the field of view of the user while allowing the user to visually recognize the outside world.
(18)
 ユーザの視野に対応する視野情報とは異なる実空間の第1の領域の第1の周辺情報と第2の領域の第2の周辺情報をセンサから取得し、
 上記第1の周辺情報と上記第2の周辺情報それぞれの誘目度を判定し、
 上記誘目度を基に、上記第1の領域に配置される第1の仮想オブジェクトに関する第1の処理と上記第2の領域に配置される第2の仮想オブジェクトに関する第2の処理の優先度を判定する
 情報処理方法。
(18)
Acquiring, from a sensor, first peripheral information of a first area and second peripheral information of a second area in a real space different from visual field information corresponding to a visual field of a user;
Judgment degree of each of the first peripheral information and the second peripheral information is determined,
Based on the degree of attraction, the priority of the first processing for the first virtual object arranged in the first area and the priority of the second processing for the second virtual object arranged in the second area are determined. Judgment Information processing method.
(19)
 ユーザの視野に対応する視野情報とは異なる実空間の第1の領域の第1の周辺情報と第2の領域の第2の周辺情報をセンサから取得するステップと、
 上記第1の周辺情報と上記第2の周辺情報それぞれの誘目度を判定するステップと、
 上記誘目度を基に、上記第1の領域に配置される第1の仮想オブジェクトに関する第1の処理と上記第2の領域に配置される第2の仮想オブジェクトに関する第2の処理の優先度を判定するステップ
 を含む処理を情報処理装置に実行させるためのプログラム。
(19)
Acquiring, from a sensor, first peripheral information of a first area and second peripheral information of a second area in a real space different from visual field information corresponding to a visual field of a user;
Determining the degree of attraction of each of the first peripheral information and the second peripheral information;
Based on the degree of attraction, the priority of the first processing for the first virtual object arranged in the first area and the priority of the second processing for the second virtual object arranged in the second area are determined. A program for causing an information processing apparatus to execute a process including a determining step.
1 ... HMD (information processing device)
10 ... control unit
22 ... illuminance sensor (sensor)
23 ... sound sensor (sensor)
24 ... odor sensor (sensor)
25 ... camera (sensor)
60, 70 ... real space
60B ... rear area (first area, second area)
60L ... left-side area (first area, second area)
60R ... right-side area (first area, second area)
65 ... sun
66 ... explosion sound (sound)
67 ... odor
A ... virtual object A (first virtual object)
B ... virtual object B (second virtual object)
A7 ... area A7 (first area)
A15 ... area A15 (second area)
U, U1 to U3 ... users
P1 to P16 ... priority processing points (attraction degree)

Claims (19)

  1.  ユーザの視野に対応する視野情報とは異なる、センサから取得した実空間の第1の領域の第1の周辺情報と第2の領域の第2の周辺情報それぞれに対して誘目度を判定し、前記誘目度を基に、前記第1の領域に配置される第1の仮想オブジェクトに関する第1の処理と前記第2の領域に配置される第2の仮想オブジェクトに関する第2の処理の優先度を判定する制御部を
     具備する情報処理装置。
    Determining the degree of attraction for each of the first peripheral information of the first area and the second peripheral information of the second area in the real space obtained from the sensor, which is different from the visual field information corresponding to the user's visual field; Based on the degree of attraction, the priority of the first process for the first virtual object arranged in the first area and the priority of the second process for the second virtual object arranged in the second area are determined. An information processing device including a control unit for determining.
  2.  請求項1に記載の情報処理装置であって、
     前記制御部は、前記第2の周辺情報に対して前記第1の周辺情報よりも低い誘目度を判定する場合、前記第1の処理を前記第2の処理よりも優先的に行うように前記優先度を判定する
     情報処理装置。
    The information processing device according to claim 1,
    The control unit is configured to perform the first process with higher priority than the second process when determining a lower degree of interest for the second peripheral information than the first peripheral information. An information processing device that determines the priority.
  3.  請求項2に記載の情報処理装置であって、
     前記制御部は、前記第1の仮想オブジェクトの描画に関する前記第1の処理を、前記第2の仮想オブジェクトの描画に関する前記第2の処理よりも処理負荷を高くするように前記優先度を判定する
     情報処理装置。
    The information processing apparatus according to claim 2, wherein
    The control unit determines the priority such that the first processing relating to the rendering of the first virtual object has a higher processing load than the second processing relating to the rendering of the second virtual object. Information processing device.
  4.  請求項3に記載の情報処理装置であって、
     前記センサは、前記第1の周辺情報として前記第1の領域の照度を検出する第1の照度センサと、前記第2の周辺情報として前記第2の領域の照度を検出する第2の照度センサを含む
     情報処理装置。
    The information processing apparatus according to claim 3, wherein
    A first illuminance sensor for detecting illuminance of the first area as the first peripheral information; and a second illuminance sensor for detecting illuminance of the second area as the second peripheral information Information processing device including.
  5.  請求項4に記載の情報処理装置であって、
     前記制御部は、前記実空間の環境を加味して前記誘目度を判定する
     情報処理装置。
    The information processing apparatus according to claim 4, wherein
    The information processing device, wherein the control unit determines the degree of attraction in consideration of an environment of the real space.
  6.  請求項5に記載の情報処理装置であって、
     前記実空間の環境は日照時の屋外である
     情報処理装置。
    The information processing apparatus according to claim 5, wherein
    The environment of the real space is outdoors under sunshine.
  7.  請求項6に記載の情報処理装置であって、
     前記第1の照度センサ及び前記第2の照度センサのうち少なくとも1つが、前記ユーザの視野に対応する領域の照度よりも高い照度となる太陽光が照射される前記実空間の照度を検出した場合、前記制御部は、前記太陽光が照射される前記実空間の照度を検出する前記照度センサから取得した前記周辺情報に対する誘目度を、前記ユーザの視野に対応する領域の照度と前記照度センサで検出される前記太陽光が照射される前記実空間の照度との照度差がしきい値未満の場合よりも前記しきい値以上の場合の方が低くなるように判定する
     情報処理装置。
    The information processing device according to claim 6,
    When at least one of the first illuminance sensor and the second illuminance sensor detects the illuminance of the real space to which sunlight having an illuminance higher than the illuminance of an area corresponding to the user's field of view is applied. The control unit sets the degree of attraction to the peripheral information acquired from the illuminance sensor that detects the illuminance of the real space to which the sunlight is irradiated, with the illuminance of the area corresponding to the user's field of view and the illuminance sensor. An information processing apparatus which determines that an illuminance difference between the detected illuminance and the illuminance in the real space to which the sunlight is applied is lower when the illuminance is equal to or larger than the threshold than when the illuminance difference is smaller than the threshold.
  8.  請求項5に記載の情報処理装置であって、
     前記実空間の環境は屋内である
     情報処理装置。
    The information processing apparatus according to claim 5, wherein
    The environment of the real space is indoors.
  9.  請求項8に記載の情報処理装置であって、
     前記制御部は、太陽光が照射される前記実空間の照度を検出する前記照度センサから取得した前記周辺情報の誘目度を、前記優先度が最も低くなるように判定する
     情報処理装置。
    The information processing apparatus according to claim 8, wherein:
    The information processing device, wherein the control unit determines an eye-catching degree of the peripheral information acquired from the illuminance sensor that detects an illuminance of the real space to be irradiated with sunlight so that the priority becomes the lowest.
  10.  請求項8に記載の情報処理装置であって、
     前記第1の照度センサ及び前記第2の照度センサのうち少なくとも1つが、太陽光以外の光が照射される前記実空間の照度を検出した場合、前記制御部は、前記太陽光以外の光が照射される前記実空間の照度を検出する前記照度センサから取得した前記周辺情報の誘目度を、前記ユーザの視野に対応する領域の照度と前記照度センサで検出される前記太陽光以外の光が照射される前記実空間の照度との照度差がしきい値以上の場合よりも前記しきい値未満の場合の方が低くなるように判定する
     情報処理装置。
    The information processing apparatus according to claim 8, wherein:
    When at least one of the first illuminance sensor and the second illuminance sensor detects the illuminance of the real space to which the light other than sunlight is applied, the control unit sets the light other than the sunlight to light. The attractiveness of the peripheral information acquired from the illuminance sensor that detects the illuminance of the real space to be illuminated, the illuminance of the area corresponding to the user's field of view and the light other than the sunlight detected by the illuminance sensor are An information processing apparatus which determines that an illuminance difference between the illuminance and the illuminance in the real space to be irradiated is lower when the illuminance is less than the threshold than when the illuminance difference is greater than or equal to the threshold.
  11.  請求項3に記載の情報処理装置であって、
     前記センサは、前記第1の周辺情報として前記第1の領域の音の大きさを検出する第1の音センサと、前記第2の周辺情報として前記第2の領域の音の大きさを検出する第2の音センサを含む
     情報処理装置。
    The information processing apparatus according to claim 3, wherein
    The first sensor detects a loudness of the sound in the first area as the first peripheral information, and detects a loudness of the sound in the second area as the second peripheral information. An information processing apparatus including a second sound sensor that performs the processing.
  12.  請求項11に記載の情報処理装置であって、
     前記制御部は、前記第2の領域の音の大きさが前記第1の領域の音の大きさよりも小さい場合、前記第2の周辺情報に対する誘目度を前記第1の周辺情報に対する誘目度よりも低く判定する
     情報処理装置。
    The information processing apparatus according to claim 11, wherein
    The control unit, when the loudness of the sound in the second area is smaller than the loudness of the sound in the first area, sets the degree of attraction for the second peripheral information to the degree of attraction for the first peripheral information. Information processing device that also determines low.
  13.  請求項3に記載の情報処理装置であって、
     前記センサは、前記第1の周辺情報として前記第1の領域の匂いの強さを検出する第1の匂いセンサと、前記第2の周辺情報として前記第2の領域の匂い強さを検出する第2の匂いセンサを含む
     情報処理装置。
    The information processing apparatus according to claim 3, wherein
    The sensor detects a odor intensity of the first area as the first peripheral information, and detects an odor intensity of the second area as the second peripheral information. An information processing device including a second odor sensor.
  14.  請求項13に記載の情報処理装置であって、
     前記制御部は、前記第2の領域の匂いの強さが前記第1の領域の匂いよりも弱い場合、前記第2の周辺情報に対する誘目度を前記第1の周辺情報に対する誘目度よりも低く判定する
     情報処理装置。
    The information processing apparatus according to claim 13, wherein
    When the intensity of the odor of the second area is weaker than the odor of the first area, the controller sets the degree of attraction to the second peripheral information to be lower than the degree of attraction to the first peripheral information. Judge information processing device.
  15.  請求項3に記載の情報処理装置であって、
     前記センサは、前記周辺情報として前記ユーザの周辺の画像情報を取得するカメラである
     情報処理装置。
    The information processing apparatus according to claim 3, wherein
    The information processing device, wherein the sensor is a camera that acquires image information around the user as the surrounding information.
  16.  請求項15に記載の情報処理装置であって、
     前記画像情報には、前記第1の領域及び前記第2の領域それぞれの領域と前記ユーザとの位置関係情報が含まれ、
     前記制御部は、前記位置関係情報を用いて前記誘目度を判定する
     情報処理装置。
    The information processing apparatus according to claim 15, wherein
    The image information includes information on a positional relationship between each of the first area and the second area and the user,
    The information processing device, wherein the control unit determines the degree of attraction using the positional relationship information.
  17.  請求項3に記載の情報処理装置であって、
     前記情報処理装置は、前記ユーザの頭部に装着可能であって前記ユーザに外界を視認させつつ前記ユーザの視野に前記第1の仮想オブジェクト及び前記第2の仮想オブジェクトを提示することが可能に構成されるヘッドマウントディスプレイである
     情報処理装置。
    The information processing apparatus according to claim 3, wherein
    The information processing apparatus can be mounted on the head of the user, and can present the first virtual object and the second virtual object in the field of view of the user while allowing the user to visually recognize the outside world. An information processing device that is a head-mounted display configured.
  18.  ユーザの視野に対応する視野情報とは異なる実空間の第1の領域の第1の周辺情報と第2の領域の第2の周辺情報をセンサから取得し、
     前記第1の周辺情報と前記第2の周辺情報それぞれの誘目度を判定し、
     前記誘目度を基に、前記第1の領域に配置される第1の仮想オブジェクトに関する第1の処理と前記第2の領域に配置される第2の仮想オブジェクトに関する第2の処理の優先度を判定する
     情報処理方法。
    Acquiring, from a sensor, first peripheral information of a first area and second peripheral information of a second area in a real space different from visual field information corresponding to a visual field of a user;
    Determining the degree of attraction of each of the first peripheral information and the second peripheral information;
    Based on the degree of attraction, the priority of the first process for the first virtual object arranged in the first area and the priority of the second process for the second virtual object arranged in the second area are determined. Judgment Information processing method.
  19.  ユーザの視野に対応する視野情報とは異なる実空間の第1の領域の第1の周辺情報と第2の領域の第2の周辺情報をセンサから取得するステップと、
     前記第1の周辺情報と前記第2の周辺情報それぞれの誘目度を判定するステップと、
     前記誘目度を基に、前記第1の領域に配置される第1の仮想オブジェクトに関する第1の処理と前記第2の領域に配置される第2の仮想オブジェクトに関する第2の処理の優先度を判定するステップ
     を含む処理を情報処理装置に実行させるためのプログラム。
    Acquiring, from a sensor, first peripheral information of a first area and second peripheral information of a second area in a real space different from visual field information corresponding to a visual field of a user;
    Determining the degree of attraction of each of the first peripheral information and the second peripheral information;
    Based on the degree of attraction, the priority of the first process for the first virtual object arranged in the first area and the priority of the second process for the second virtual object arranged in the second area are determined. A program for causing an information processing apparatus to execute a process including a determining step.
PCT/JP2019/032260 2018-08-31 2019-08-19 Information processing device, information processing method, and program WO2020045141A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018162266 2018-08-31
JP2018-162266 2018-08-31

Publications (1)

Publication Number Publication Date
WO2020045141A1 true WO2020045141A1 (en) 2020-03-05

Family

ID=69643883

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/032260 WO2020045141A1 (en) 2018-08-31 2019-08-19 Information processing device, information processing method, and program

Country Status (1)

Country Link
WO (1) WO2020045141A1 (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016527536A (en) * 2013-06-07 2016-09-08 株式会社ソニー・インタラクティブエンタテインメント Image rendering in response to user movement on a head-mounted display

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016527536A (en) * 2013-06-07 2016-09-08 株式会社ソニー・インタラクティブエンタテインメント Image rendering in response to user movement on a head-mounted display

Similar Documents

Publication Publication Date Title
US10055642B2 (en) Staredown to produce changes in information density and type
US9615177B2 (en) Wireless immersive experience capture and viewing
US9846304B2 (en) Display method and display apparatus in which a part of a screen area is in a through-state
CN102591016B (en) Optimized focal area for augmented reality displays
CN102566756B (en) Comprehension and intent-based content for augmented reality displays
US20210329764A1 (en) Systems and methods for retarding myopia progression
CN106662747A (en) Head-mounted display with electrochromic dimming module for augmented and virtual reality perception
CN104956252A (en) Peripheral display for a near-eye display device
JPWO2019176577A1 (en) Information processing equipment, information processing methods, and recording media
US11165938B2 (en) Animal-wearable first person view system
US20140361987A1 (en) Eye controls
US20210026142A1 (en) Information processing apparatus, information processing method, and program
JP5664677B2 (en) Imaging display device and imaging display method
US10571700B2 (en) Head-mountable display system
JP7271909B2 (en) DISPLAY DEVICE AND CONTROL METHOD OF DISPLAY DEVICE
JP6529571B1 (en) Program, method executed by computer to provide virtual space, and information processing apparatus for executing program
WO2020045141A1 (en) Information processing device, information processing method, and program
JP2013083994A (en) Display unit and display method
CA3082012A1 (en) Animal-wearable first person view system
JP5971298B2 (en) Display device and display method
US11762204B2 (en) Head mountable display system and methods
US20240087221A1 (en) Method and apparatus for determining persona of avatar object in virtual space
US20230168951A1 (en) Helmet mounted processing system
WO2023227876A1 (en) Extended reality headset, system and apparatus
GB2619367A (en) Extended reality headset, system and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19856371

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19856371

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP