US20170092006A1 - Image generating device, image generating method, and image generating program - Google Patents
Image generating device, image generating method, and image generating program
- Publication number
- US20170092006A1 (U.S. application Ser. No. 15/271,188)
- Authority
- US
- United States
- Prior art keywords
- image
- display
- area
- user
- low frequency
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G06K9/0061—
-
- G06K9/4671—
-
- G06K9/52—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/193—Preprocessing; Feature extraction
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/014—Head-up displays characterised by optical features comprising information/image processing systems
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/0141—Head-up displays characterised by optical features characterised by the informative content of the display
-
- G06K2009/4666—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20076—Probabilistic image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Optics & Photonics (AREA)
- Software Systems (AREA)
- Computer Hardware Design (AREA)
- Computer Graphics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Ophthalmology & Optometry (AREA)
- Multimedia (AREA)
- User Interface Of Digital Computer (AREA)
- Processing Or Creating Images (AREA)
- Position Input By Displaying (AREA)
- Controls And Circuits For Display Device (AREA)
Abstract
To display additional information in a region that a user rarely looks at, provided is an image generating device including a processor. The processor is configured to generate an image to be displayed on a display. The processor is further configured to create statistical data based on a frequency that a user looks at each area of the image displayed on the display, and to identify an area in the image, which has a frequency below a threshold, as a low frequency area based on the statistical data. The processor is further configured to generate an additional image to be arranged in the low frequency area in superimposition on the image. The processor is further configured to output the image and the additional image to the display.
Description
- The present application claims priority to Japanese Application Number 2015-191042, filed Sep. 29, 2015, the disclosure of which is hereby incorporated by reference herein in its entirety.
- This disclosure relates to an image generating device, an image generating method, and an image generating program.
- There is known a technology for displaying additional information in superimposition on content. For example, in Japanese Patent No. 5465620, additional information is displayed in a display region determined depending on the characteristics of the content.
- When additional information is displayed in a region that a user rarely looks at, a new user experience may be created.
- This disclosure has been made in view of the above-mentioned point, and has an object to provide an image generating device, an image generating method, and an image generating program that enable additional information to be displayed in a region that a user rarely looks at, in at least one embodiment.
- In order to help solve the above-mentioned problem, according to at least one embodiment of this disclosure, there is provided an image generating device, including: image generating means for generating an image to be displayed on a display; frequency statistics means for creating statistical data based on a frequency that a user looks at each part of the image displayed on the display; low frequency area identifying means for identifying an area in the image, which has the frequency that falls below a threshold, as a low frequency area based on the statistical data; additional image generating means for generating an additional image to be arranged in the low frequency area in superimposition on the image; and image outputting means for outputting the image and the additional image to the display.
- Further, according to at least one embodiment of this disclosure, there is provided an image generating method, which is to be executed by a computer, the image generating method including: generating an image to be displayed on a display; outputting the image to the display; creating statistical data based on a frequency that a user looks at each part of the image displayed on the display; identifying an area in the image, which has the frequency that falls below a threshold, as a low frequency area based on the statistical data; generating an additional image to be arranged in the low frequency area in superimposition on the image; and outputting the additional image to the display.
- Further, according to at least one embodiment of this disclosure, there is provided an image generating program for causing a computer to execute the procedures of: generating an image to be displayed on a display; outputting the image to the display; creating statistical data based on a frequency that a user looks at each part of the image displayed on the display; identifying an area in the image, which has the frequency that falls below a threshold, as a low frequency area based on the statistical data; generating an additional image to be arranged in the low frequency area in superimposition on the image; and outputting the additional image to the display.
- According to this disclosure, additional information can be displayed in a region that a user rarely looks at.
- FIG. 1 is a hardware configuration diagram of an image displaying system 100 according to at least one embodiment of this disclosure.
- FIG. 2 is a block diagram for illustrating a functional configuration of an image generating device 200 according to at least one embodiment of this disclosure.
- FIG. 3 is an example of statistical data created by a frequency statistics unit 232.
- FIG. 4 is an example of statistical data created by the frequency statistics unit 232.
- FIG. 5 is an example of a low frequency area.
- FIG. 6 is a flowchart for illustrating a processing procedure of the image generating device 200 according to at least one embodiment of this disclosure.
- First, the contents of at least one embodiment of this disclosure are listed and described. At least one embodiment of this disclosure has the following configuration.
- (Item 1) An image generating device, including: image generating means for generating an image to be displayed on a display; frequency statistics means for creating statistical data based on a frequency that a user looks at each part of the image displayed on the display; low frequency area identifying means for identifying an area in the image, which has the frequency that falls below a threshold, as a low frequency area based on the statistical data; additional image generating means for generating an additional image to be arranged in the low frequency area in superimposition on the image; and image outputting means for outputting the image and the additional image to the display.
- (Item 2) An image generating device according to Item 1, in which the display includes a head mounted display, and in which the image includes a virtual reality image to be presented to a user wearing the head mounted display.
- (Item 3) An image generating device according to Item 1 or 2, in which the frequency statistics means is configured to calculate the frequency based on a line-of-sight direction of the user detected by line-of-sight direction detecting means.
- (Item 4) An image generating device according to Item 1 or 2, in which the frequency statistics means is configured to calculate the frequency based on output from a sensor configured to detect a direction of a head of the user.
- (Item 5) An image generating device according to Item 3 or 4, in which the additional image generating means is configured to dynamically change the additional image based on one of a current line-of-sight direction of the user detected by the line-of-sight direction detecting means, and a current direction of the head of the user detected by the sensor.
- (Item 6) An image generating device according to any one of Items 1 to 5, in which the low frequency area identifying means is configured to identify an area in the image, which has the frequency that falls below a first threshold, as a first low frequency area, and to identify an area in the image, which has the frequency that is equal to or exceeds the first threshold but falls below a second threshold larger than the first threshold, as a second low frequency area, and in which the additional image generating means is configured to arrange a first additional image in the first low frequency area in superimposition on the image, and to arrange a second additional image, which is different in attribute value from the first additional image, in the second low frequency area in superimposition on the image.
- (Item 7) An image generating method, which is to be executed by a computer, the image generating method including: generating an image to be displayed on a display; outputting the image to the display; creating statistical data based on a frequency that a user looks at each part of the image displayed on the display; identifying an area in the image, which has the frequency that falls below a threshold, as a low frequency area based on the statistical data; generating an additional image to be arranged in the low frequency area in superimposition on the image; and outputting the additional image to the display.
- (Item 8) An image generating program for causing a computer to execute the procedures of: generating an image to be displayed on a display; outputting the image to the display; creating statistical data based on a frequency that a user looks at each part of the image displayed on the display; identifying an area in the image, which has the frequency that falls below a threshold, as a low frequency area based on the statistical data; generating an additional image to be arranged in the low frequency area in superimposition on the image; and outputting the additional image to the display.
- In the following, detailed description is given of at least one embodiment of this disclosure with reference to the drawings.
- FIG. 1 is a hardware configuration diagram of an image displaying system 100 according to at least one embodiment of this disclosure. The image displaying system 100 includes a head mounted display (hereinafter referred to as “HMD”) 120 and an image generating device 200. The HMD 120 and the image generating device 200 are, as an example, electrically connected to each other by a cable 150 so as to enable mutual communication. Instead of the cable 150, a wireless connection may be used.
- The HMD 120 is a display device to be used by being worn on the head of a user 160. The HMD 120 includes a display 122, an eye tracking device (hereinafter referred to as “ETD”) 124, and a sensor 126. In at least one embodiment, at least one of the ETD 124 or the sensor 126 is omitted. The HMD 120 may further include a speaker (headphones) and a camera (not shown), in at least one embodiment.
- The display 122 is configured to present an image in the field of view of the user 160 wearing the HMD 120. For example, the display 122 may be configured as a non-transmissive display. In this case, the sight of the outside world of the HMD 120 is blocked from the field of view of the user 160, and the user 160 can see only the image displayed on the display 122. On the display 122, for example, an image generated by a computer executing graphics software is displayed. In at least one embodiment, the generated image is a virtual reality image obtained by forming an image of a space of virtual reality (for example, a world created in a computer game). Alternatively, the real world may be expressed by the computer executing the graphics software based on positional coordinate data of, for example, the actual geography or objects in the real world. Further, instead of the computer executing the graphics software, the camera (not shown) mounted on the HMD 120 may be used to display on the display 122 a video taken from the perspective of the user 160.
- The ETD 124 is configured to track the movement of the eyeballs of the user 160, to thereby detect the direction of the line of sight of the user 160. For example, the ETD 124 includes an infrared light source and an infrared camera. The infrared light source is configured to irradiate the eye of the user 160 wearing the HMD 120 with infrared rays. The infrared camera is configured to take an image of the eye of the user 160 irradiated with the infrared rays. The infrared rays are reflected on the surface of the eye of the user 160, but the reflectance of the infrared rays differs between the pupil and the part of the eyeball other than the pupil. In the image of the eye of the user 160 taken by the infrared camera, this difference in reflectance appears as contrast. Based on this contrast, the pupil is identified in the image of the eye of the user 160, and the direction of the line of sight of the user 160 is detected based on the position of the identified pupil. The line-of-sight direction of the user 160 represents the area that the user 160 is gazing at in the image displayed on the display 122.
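- As a non-limiting illustration of the contrast-based detection described above, the following Python sketch thresholds a grayscale infrared eye image, takes the centroid of the dark pupil pixels, and maps the offset from the image center to a gaze angle. The threshold and the pixel-to-angle scale are assumed values, standing in for the per-user calibration a real ETD 124 would perform.

```python
import numpy as np

def estimate_gaze_direction(eye_image: np.ndarray,
                            pupil_threshold: int = 40,
                            degrees_per_pixel: float = 0.1):
    """Estimate a line-of-sight direction from a grayscale IR eye image,
    exploiting the pupil's lower infrared reflectance.

    Both constants are illustrative assumptions, not values from the
    disclosure; a real eye tracker derives them from calibration.
    """
    # The pupil reflects less infrared light, so it appears as the
    # darkest region of the image.
    pupil_mask = eye_image < pupil_threshold
    ys, xs = np.nonzero(pupil_mask)
    if xs.size == 0:
        return None  # no pupil found (e.g. during a blink)

    # Use the centroid of the dark pixels as the pupil position.
    cx, cy = xs.mean(), ys.mean()

    # Map the offset from the image center to yaw/pitch angles.
    h, w = eye_image.shape
    yaw = (cx - w / 2.0) * degrees_per_pixel
    pitch = (cy - h / 2.0) * degrees_per_pixel
    return yaw, pitch
```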
- The sensor 126 is a sensor configured to detect the direction of the head of the user 160 wearing the HMD 120. Examples of the sensor 126 include a magnetic sensor, an angular velocity sensor, an acceleration sensor, or a combination thereof. When the sensor 126 is a magnetic sensor, an angular velocity sensor, or an acceleration sensor, the sensor 126 is built into the HMD 120 and is configured to output a value (a magnetic, angular velocity, or acceleration value) based on the direction or the movement of the HMD 120. By processing the value output from the sensor 126 by an appropriate method, the direction of the head of the user 160 wearing the HMD 120 is calculated. The direction of the head of the user 160 can be used to change the display image of the display 122 so as to follow the movement of the head of the user 160 when the head is moved. When the display image of the display 122 is changed in accordance with the movement of the head of the user 160, the direction of the head of the user 160 gives a rough indication of the part of the display image that the user 160 is viewing with a relatively high probability.
- The sensor 126 may instead be a sensor provided outside of the HMD 120. For example, the sensor 126 may be an infrared sensor separated from the HMD 120. When an infrared reflecting marker formed on the surface of the HMD 120 is detected with use of the infrared sensor, the direction of the head of the user 160 wearing the HMD 120 can be identified.
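- The head-direction calculation can likewise be illustrated with a minimal sketch. The function below simply integrates angular-velocity samples into yaw and pitch; the sample format is an assumption, and a practical system would fuse magnetic or acceleration readings to correct the drift that pure integration accumulates.

```python
def integrate_head_direction(gyro_samples, dt):
    """Integrate angular-velocity readings into a head direction.

    gyro_samples: iterable of (yaw_rate, pitch_rate) pairs in rad/s,
    sampled every dt seconds. Returns (yaw, pitch) in radians relative
    to the starting pose. Illustrative only: gyro integration drifts,
    which is why HMDs typically also use magnetic/acceleration data.
    """
    yaw = pitch = 0.0
    for yaw_rate, pitch_rate in gyro_samples:
        yaw += yaw_rate * dt
        pitch += pitch_rate * dt
    return yaw, pitch
```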
- The image generating device 200 is a device configured to generate an image to be displayed on the HMD 120. The image generating device 200 includes at least a processor 202, a non-transitory memory 204, and a user input interface 208. The image generating device 200 may further include, for example, a network interface (not shown) configured to communicate with other devices via a network. The image generating device 200 may be achieved as, for example, a personal computer, a game console, a smart phone, or a tablet terminal.
- The memory 204 stores at least an operating system and an image generating program. The operating system is a computer program for controlling the entire operation of the image generating device 200. The image generating program is a computer program for the image generating device 200 to achieve the respective functions of the image generating processing described later. The memory 204 can further temporarily or permanently store data generated by the operation of the image generating device 200. Specific examples of the memory 204 include a read only memory (ROM), a random access memory (RAM), a hard disk, a flash memory, and an optical disc.
- The processor 202 is configured to read out a program stored in the memory 204 and to execute processing in accordance with the program. When the processor 202 executes the image generating program stored in the memory 204, the various functions of the image generating processing described later are achieved. The processor 202 includes at least a central processing unit (CPU) and a graphics processing unit (GPU).
- The user input interface 208 is configured to receive input for operating the image generating device 200 from the user of the image displaying system 100. Specific examples of the user input interface 208 include a game controller, a touch pad, a mouse, and a keyboard.
- FIG. 2 is a block diagram for illustrating a functional configuration of the image generating device 200 according to at least one embodiment of this disclosure. The image generating device 200 includes a storage unit 220 and a processing unit 230. The processing unit 230 further includes an image generating unit 231, a frequency statistics unit 232, a low frequency area identifying unit 233, an additional image generating unit 234, and an image outputting unit 235. In at least one embodiment, the storage unit 220 corresponds to the memory 204 illustrated in FIG. 1. The processing unit 230 and the respective units 231 to 235 included in the processing unit 230 represent the functions of the image generating processing according to this disclosure, which are achieved, in at least one embodiment, by the processor 202 illustrated in FIG. 1 reading out and executing the image generating program stored in the memory 204.
- The image generating unit 231 is configured to generate an image to be displayed on the HMD 120. For example, the image generating unit 231 is configured to acquire predetermined data from the storage unit 220 and to generate an image by computer graphics processing based on the acquired data. As at least one example, the image generating unit 231 may generate a virtual reality image through which the user 160 wearing the HMD 120 can recognize the virtual reality space of a computer game. The virtual reality image represents a sight that the user can see in the virtual reality space. For example, the virtual reality image to be generated by the image generating unit 231 includes characters that appear in the computer game, a landscape including buildings and trees, the interior of a room including furniture and walls, items on the ground, a part (hand or foot) of the body of an avatar that the user is operating, and an object (gun or sword) that the avatar is holding in its hand. Further, the image generating unit 231 may generate a computer graphics image that reproduces the real world based on actual geography data of the real world or the like. Further, the image to be generated by the image generating unit 231 may be, instead of one obtained by computer graphics processing, for example, a video taken from the perspective of the user 160 by an external camera mounted on the HMD 120.
- The image generating unit 231 may further change the image based on the output value from the sensor 126. For example, the image to be generated by the image generating unit 231 may be an image representing a state in which the field of view of the user in the virtual reality space transitions so as to follow the movement of the head of the user 160, as represented by the output value from the sensor 126.
- The image generated by the image generating unit 231 is output to the HMD 120 via the image outputting unit 235, to thereby be displayed on the display 122.
- The frequency statistics unit 232 is configured to create statistical data based on the frequency that the user 160 wearing the HMD 120 looks at each area of the image displayed on the display 122. The statistical data represents which areas are frequently looked at and which areas are not frequently looked at in the image displayed on the display 122. For example, the frequency statistics unit 232 is configured to create the statistical data of the frequency that the user 160 looks at each area of the image based on the line-of-sight direction of the user 160 detected by the ETD 124. Further, the frequency statistics unit 232 may create the statistical data based on the direction of the head of the user 160 detected by the sensor 126. Specific description is given with reference to FIG. 3 and FIG. 4.
- FIG. 3 is an illustration of a display image 520 to be displayed on the display 122 of the HMD 120. The display image 520 includes a plurality of partial regions 501. In the example of FIG. 3, each partial region 501 is a small image range obtained by vertically and laterally dividing the display image 520 like a grid. The user 160 wearing the HMD 120 gazes at a certain partial region 501 in the display image 520 at a certain moment, and then gazes at a different partial region 501 at a different moment. The line of sight of the user 160 moves back and forth between different partial regions 501 of the display image 520 as time elapses. The line-of-sight direction of the user 160, which changes repeatedly, is detected by the ETD 124 and input to the frequency statistics unit 232.
- Each of the partial regions 501 of FIG. 3 has a number from “0” to “5”. The number represents a frequency value based on how often the user gazes at that partial region 501. For example, a frequency value “0” represents that the user 160 has never looked at the partial region 501. Further, a frequency value “1” represents that the user 160 has looked at the partial region 501 at a low frequency, and a frequency value “5” represents that the user 160 has looked at the partial region 501 at the maximum frequency. The intermediate frequency values “2”, “3”, and “4” represent frequencies obtained by proportionally dividing the range between the frequency values “1” and “5” as appropriate. The frequency statistics unit 232 is configured to collect the line-of-sight directions of the user 160 input from the ETD 124 for a predetermined time period, and to assign a frequency value to each partial region 501 in accordance with the collection result. For example, the frequency statistics unit 232 is configured to assign the frequency value “5” to a partial region 501 that is classified into the group in which the line of sight of the user 160 stays for the longest time during the time period, and to assign the frequency value “4” to a partial region 501 that is classified into the group in which the line of sight of the user 160 stays for the next longest time. The other frequency values are assigned similarly.
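- A minimal sketch of this collection step, assuming a fixed grid and regularly sampled gaze positions (the disclosure prescribes neither), could accumulate dwell time per partial region 501 and then rescale the dwell times onto the “0” to “5” scale:

```python
import numpy as np

GRID_ROWS, GRID_COLS = 6, 8  # assumed grid size; FIG. 3 does not fix one

def accumulate_dwell(gaze_samples, image_w, image_h, dt):
    """Sum how long the line of sight stays in each partial region 501.

    gaze_samples: iterable of (x, y) gaze positions on the display
    image 520, sampled every dt seconds.
    """
    dwell = np.zeros((GRID_ROWS, GRID_COLS))
    for x, y in gaze_samples:
        row = min(int(y / image_h * GRID_ROWS), GRID_ROWS - 1)
        col = min(int(x / image_w * GRID_COLS), GRID_COLS - 1)
        dwell[row, col] += dt
    return dwell

def to_frequency_values(dwell):
    """Rescale dwell times onto the 0-5 scale of FIG. 3: never looked
    at -> 0, shortest nonzero dwell -> 1, longest dwell -> 5."""
    values = np.zeros(dwell.shape, dtype=int)
    looked = dwell > 0
    if looked.any():
        lo, hi = dwell[looked].min(), dwell[looked].max()
        span = max(hi - lo, 1e-9)
        values[looked] = 1 + np.round(4 * (dwell[looked] - lo) / span).astype(int)
    return values
```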
- FIG. 4 is an illustration of a region 540 that is wider than the display range 530 that can be displayed at once on the display 122 of the HMD 120. The region 540 represents, for example, the entire sight that the user can see around him/her in the virtual reality space of a computer game. The display range 530 represents the limited range that the user can view at a single time when turned toward a certain direction in the virtual reality space. The display range 530 corresponds to the range of the display image 520 of FIG. 3. For example, when the user 160 moves his/her head in various directions, the display range 530 moves in the region 540 so as to follow the movement of the head.
- As illustrated in FIG. 4, the region 540 includes a plurality of partial regions 501, similarly to the case described with reference to FIG. 3. The head of the user 160 wearing the HMD 120 is directed in a certain direction in a certain time zone. While the head of the user 160 is directed in that direction, frequency values are assigned, similarly to the case described with reference to FIG. 3, to the partial regions 501 included in the display range 530 at that time. Further, the head of the user 160 is directed in a different direction in a different time zone. While the head of the user 160 is directed in the different direction, frequency values are similarly assigned to the partial regions 501 included in the display range 530 at that time. When the display ranges 530 in the two time zones partially overlap with each other, for example, the frequency value is assigned to each partial region 501 based on the total time during which the line of sight of the user 160 stays in the overlapping partial region 501.
- In the example in which the display range 530 moves in the region 540 as in FIG. 4, the frequency value of each partial region 501 may be determined based on the direction of the head of the user 160. For example, when the user 160 moves his/her head, the sensor 126 detects the direction of the head of the user 160, and the image generating unit 231 generates an image in the display range 530 based on that direction. At this time, the actual line of sight of the user 160 may fall on arbitrary parts of the generated image, but in some instances the user 160 is looking at a predetermined fixed part of the display range 530 (for example, the center of the display range 530). The frequency statistics unit 232 assumes that the direction of the fixed part of the display range 530 is the line-of-sight direction of the user 160, to thereby determine the frequency value of each partial region 501. For example, the frequency statistics unit 232 tracks the state of the display range 530 moving in the region 540 for a predetermined time period, and assigns a large frequency value to a partial region 501 that overlaps with the center of the display range 530 for a long time and a small frequency value to a partial region 501 that overlaps with the center of the display range 530 for only a short time.
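- Under the fixed-part assumption above, the same dwell accumulation can be driven by head direction alone: each sample credits the partial region 501 of the region 540 that currently contains the center of the display range 530. A sketch (the coordinate convention is assumed):

```python
def accumulate_center_dwell(center_positions, region_w, region_h, dt, dwell):
    """Credit dwell time to whichever partial region of the wider
    region 540 contains the center of the display range 530.

    center_positions: iterable of (x, y) centers of the display range,
    sampled every dt seconds; dwell: a (rows, cols) numpy array that is
    accumulated in place and returned.
    """
    rows, cols = dwell.shape
    for x, y in center_positions:
        row = min(int(y / region_h * rows), rows - 1)
        col = min(int(x / region_w * cols), cols - 1)
        dwell[row, col] += dt
    return dwell
```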
- As described above, the frequency values of the respective partial regions 501 are statistically collected based on the line-of-sight direction or the head direction of the user 160. The collected frequency values of the respective partial regions 501 form the statistical data. The statistical data may be stored in the storage unit 220.
- The dotted lines (frame lines of the partial regions 501) and the numbers (frequency values) shown in FIG. 3 and FIG. 4 are presented for the description above and are not elements of the image. Therefore, neither the dotted lines nor the numbers are displayed on the display 122 of the HMD 120.
- The low frequency area identifying unit 233 is configured to identify a low frequency area based on the statistical data created by the frequency statistics unit 232. The low frequency area is a partial area, within the image displayed on the display 122 of the HMD 120, that is looked at by the user 160 at a low frequency. For example, the low frequency area identifying unit 233 is configured to compare the frequency value of each partial region 501 forming the statistical data with a predetermined threshold, and to determine, when the frequency value of a certain partial region 501 falls below the threshold, that the partial region 501 is a part of the low frequency area.
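The threshold test is simple enough to state directly in code. One possible reading, with the function name and the data layout (a mapping from region index to frequency value) assumed:

```python
def identify_low_frequency_area(frequency, threshold):
    """Return the set of partial regions 501 whose frequency value
    falls below the predetermined threshold."""
    return {region for region, value in frequency.items() if value < threshold}
```

With `threshold=2`, this reproduces the FIG. 5 example described next, in which the low frequency area collects the regions valued “0” or “1”.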
- FIG. 5 is an example of the low frequency area. This example corresponds to the example of the frequency values of the respective partial regions 501 illustrated in FIG. 3. A low frequency area 570 exemplified in FIG. 5 is an area formed of the partial regions 501 to which the frequency value “0” or “1” is assigned in FIG. 3. As another example, the low frequency area may include the partial regions 501 to which the frequency value “0”, “1”, or “2” is assigned.
- Further, the low frequency area identifying unit 233 may be configured to classify the low frequency area into a plurality of stages depending on the frequency value. For example, the low frequency area identifying unit 233 may be configured to set the partial regions 501 having a frequency value of “0” or “1” as a first low frequency area, and to set the partial regions 501 having a frequency value of “2” or “3” as a second low frequency area.
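In code, the staged classification amounts to binning regions between two thresholds, which is also how claim 5 phrases it. A minimal sketch, with threshold values matching the “0”–“1” / “2”–“3” example above:

```python
def classify_low_frequency_areas(frequency, first_threshold=2, second_threshold=4):
    """Split regions into a first low frequency area (values below the
    first threshold) and a second low frequency area (values from the
    first threshold up to, but excluding, the second threshold)."""
    first_area, second_area = set(), set()
    for region, value in frequency.items():
        if value < first_threshold:
            first_area.add(region)
        elif value < second_threshold:
            second_area.add(region)
    return first_area, second_area
```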
- The additional image generating unit 234 is configured to generate an additional image to be arranged in the low frequency area. The generated additional image is output to the HMD 120 via the image outputting unit 235, and is displayed on the display 122 in superimposition on the image output from the image generating unit 231. The additional image can be used for, for example, presenting advertisements in the virtual reality space, or displaying an enemy character or a useful item in a computer game. Because the additional image is displayed in the low frequency area, which the user 160 looks at only at a low frequency, the user 160 can visually recognize the image output from the image generating unit 231 largely unaffected by the additional image. Conversely, if a rule is adopted under which an additional image having a high value for the user 160 may be displayed in the low frequency area, the attention of the user 160 can be directed not only to the image output from the image generating unit 231 (the area other than the low frequency area), but also to the low frequency area. For example, in a computer game, when a high-value rare item, or an enemy character that provides a high score when defeated, is displayed in the low frequency area as the additional image, the game can be made more amusing.
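Placement can be as simple as choosing one region of the identified area and superimposing the additional image there. In the sketch below the region is chosen at random, which is an assumption; the disclosure only requires that the additional image be arranged somewhere in the low frequency area.

```python
import random

def place_additional_image(low_frequency_area, region_size):
    """Return the top-left pixel position of a randomly chosen partial
    region 501 in the low frequency area, at which the additional
    image can be superimposed on the displayed image."""
    col, row = random.choice(sorted(low_frequency_area))
    width, height = region_size
    return col * width, row * height
```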
- Further, the additional image generating unit 234 may dynamically change the additional image based on the current line-of-sight direction or the current head direction of the user 160. For example, first, a certain character (a ghost, or a mole in a whack-a-mole game) is displayed in the low frequency area as the additional image. The additional image generating unit 234 determines whether or not the user 160 intends to (or is about to) direct his/her line of sight (or head) toward the low frequency area, based on the input from the ETD 124 or the sensor 126. When the user 160 intends to direct his/her line of sight toward the low frequency area, the additional image generating unit 234 changes the additional image of the character to, for example, an additional image in which the character escapes from the line of sight of the user 160 (the ghost disappears from the field of view, or the mole hides in the ground). The degree to which the character escapes may be adjusted depending on how closely the line of sight of the user 160 approaches the low frequency area.
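One way to grade the escape behavior is to map the distance between the gaze and the low frequency area onto an escape degree between 0 and 1. The linear ramp and the `trigger_radius` parameter are illustrative choices, not taken from the disclosure:

```python
def escape_degree(gaze_xy, area_center_xy, trigger_radius):
    """Return 0.0 while the gaze is far from the low frequency area,
    rising linearly to 1.0 (fully escaped) as the gaze reaches the
    center of the area."""
    gx, gy = gaze_xy
    cx, cy = area_center_xy
    distance = ((gx - cx) ** 2 + (gy - cy) ** 2) ** 0.5
    return max(0.0, min(1.0, 1.0 - distance / trigger_radius))
```

The character's animation (the ghost fading out, the mole sinking into the ground) can then be keyed to this value frame by frame.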
- When the low frequency area is classified into a plurality of stages, the additional image generating unit 234 may further generate different additional images (a first additional image, a second additional image, and the like) for the respective classified low frequency areas (the first low frequency area, the second low frequency area, and the like). For example, an attribute value of the first additional image to be displayed in the first low frequency area differs from an attribute value of the second additional image to be displayed in the second low frequency area. As an example, when the first low frequency area corresponds to the frequency values “0” and “1”, and the second low frequency area corresponds to the frequency values “2” and “3”, a rare item with a higher value, or an enemy character that provides a higher score when defeated, may be displayed as the first additional image in the first low frequency area, which is looked at by the user 160 at the lower frequency. In this manner, the game can be made even more amusing.
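Pairing each staged area with an additional image of a different attribute value might look like the following sketch; the `AdditionalImage` type, the asset names, and the concrete values are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class AdditionalImage:
    sprite: str  # identifier of the image asset
    value: int   # attribute value, e.g. item rarity or defeat score

def additional_images_for_areas(first_area, second_area):
    """Assign the higher-value additional image to the first (least
    looked-at) low frequency area and a lower-value image to the
    second low frequency area."""
    return [
        (first_area, AdditionalImage(sprite="rare_item", value=100)),
        (second_area, AdditionalImage(sprite="common_item", value=10)),
    ]
```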
- FIG. 6 is a flow chart for illustrating a processing procedure of the image generating device 200 according to one embodiment of this disclosure. First, in Step S602, the image generating unit 231 of the image generating device 200 generates an image to be displayed on the HMD 120. Next, in Step S604, the image outputting unit 235 of the image generating device 200 outputs the image generated by the image generating unit 231 to the HMD 120. With this, the image is displayed on the display 122 of the HMD 120. Next, in Step S606, the frequency statistics unit 232 of the image generating device 200 creates statistical data based on the frequency at which the user 160 wearing the HMD 120 looks at each part of the image displayed on the display 122. For example, the frequency statistics unit 232 determines the frequency value of each partial region based on the line-of-sight direction of the user 160 detected by the ETD 124, or based on the direction of the head of the user 160 detected from the output value of the sensor 126. Next, in Step S608, the low frequency area identifying unit 233 of the image generating device 200 identifies, based on the statistical data created by the frequency statistics unit 232, the low frequency area that is looked at by the user 160 at a low frequency. Next, in Step S610, the additional image generating unit 234 of the image generating device 200 generates the additional image to be arranged in the low frequency area. Next, in Step S612, the image outputting unit 235 of the image generating device 200 outputs the additional image generated by the additional image generating unit 234 to the HMD 120. With this, the additional image is displayed in the low frequency area in superimposition on the image displayed on the display 122 of the HMD 120.
- While a description has been given above of an embodiment of this disclosure, this disclosure is not limited thereto, and various modifications can be made without departing from the spirit of this disclosure.
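For reference, the helper sketches above can be strung together to mirror steps S602 to S612. The `hmd` object and its methods (`render`, `display`, `gaze_sample`, `overlay`) are hypothetical stand-ins for the HMD 120 interfaces, not part of the disclosure:

```python
from collections import defaultdict

def image_generation_loop(hmd, dt, region_size, threshold):
    """One-iteration-per-frame sketch of the FIG. 6 procedure."""
    frequency = defaultdict(float)
    while True:
        image = hmd.render()                     # S602: generate the image
        hmd.display(image)                       # S604: output the image to the HMD
        frequency = accumulate_dwell_time(       # S606: update the statistical data
            [hmd.gaze_sample()], region_size, dt, frequency)
        low_area = identify_low_frequency_area(  # S608: identify the low frequency area
            frequency, threshold)
        if low_area:
            position = place_additional_image(low_area, region_size)  # S610
            hmd.overlay("additional_image", position)  # S612: superimpose and output
```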
Claims (7)
1. An image generating device, comprising:
a processor configured to execute instructions, wherein the processor is configured to:
generate an image to be displayed on a display;
create statistical data based on a frequency that a user looks at each area of the image displayed on the display;
identify an area in the image, which has a frequency below a threshold, as a low frequency area based on the statistical data;
generate an additional image to be arranged in the low frequency area in superimposition on the image; and
output the image and the additional image to the display,
wherein the display comprises a non-transmissive head mounted display, and
wherein the image comprises a virtual reality image to be presented to the user wearing the non-transmissive head mounted display.
2. An image generating device according to claim 1, wherein the processor is configured to calculate the frequency based on a line-of-sight direction of the user detected by a line-of-sight direction detector.
3. An image generating device according to claim 1, wherein the processor is configured to calculate the frequency based on output from a sensor configured to detect a direction of a head of the user.
4. An image generating device according to claim 2, wherein the processor is configured to dynamically change the additional image based on at least one of a current line-of-sight direction of the user detected by the line-of-sight direction detector or a current direction of a head of the user detected by a sensor.
5. An image generating device according to claim 1,
wherein the processor is configured to identify a first area in the image, which has a frequency below a first threshold, as a first low frequency area, and to identify a second area in the image, which has a frequency equal to or exceeding the first threshold but below a second threshold, as a second low frequency area, the second threshold being larger than the first threshold, and
to arrange a first additional image in the first low frequency area in superimposition on the image, and to arrange a second additional image, which is different in attribute value from the first additional image, in the second low frequency area in superimposition on the image.
6. An image generating method, which is to be executed by a computer, the image generating method comprising:
generating an image to be displayed on a display;
outputting the image to the display;
creating statistical data based on a frequency that a user looks at each area of the image displayed on the display;
identifying an area in the image, which has a frequency below a threshold, as a low frequency area based on the statistical data;
generating an additional image to be arranged in the low frequency area in superimposition on the image; and
outputting the additional image to the display,
wherein the display comprises a non-transmissive head mounted display, and
wherein the image comprises a virtual reality image to be presented to the user wearing the non-transmissive head mounted display.
7. A non-transitory computer readable medium for storing an image generating program for causing a computer to execute the procedures of:
generating an image to be displayed on a display;
outputting the image to the display;
creating statistical data based on a frequency that a user looks at each area of the image displayed on the display;
identifying an area in the image, which has a frequency below a threshold, as a low frequency area based on the statistical data;
generating an additional image to be arranged in the low frequency area in superimposition on the image; and
outputting the additional image to the display,
wherein the display comprises a non-transmissive head mounted display, and
wherein the image comprises a virtual reality image to be presented to the user wearing the non-transmissive head mounted display.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015191042A JP5913709B1 (en) | 2015-09-29 | 2015-09-29 | Image generation apparatus, image generation method, and image generation program |
JP2015-191042 | 2015-09-29 |
Publications (2)
Publication Number | Publication Date |
---|---|
US20170092006A1 (en) | 2017-03-30 |
US10008041B2 US10008041B2 (en) | 2018-06-26 |
Family
ID=55808292
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/271,188 (US10008041B2, Active) | Image generating device, image generating method, and image generating program | 2015-09-29 | 2016-09-20 |
Country Status (3)
Country | Link |
---|---|
US (1) | US10008041B2 (en) |
JP (1) | JP5913709B1 (en) |
WO (1) | WO2017056892A1 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10540825B2 (en) | 2016-05-02 | 2020-01-21 | Sony Interactive Entertainment Inc. | Image processing apparatus |
KR101844588B1 (en) * | 2017-02-08 | 2018-04-02 | 한림대학교 산학협력단 | Apparatus and method for providing image using indoor positioning |
JP6389305B1 (en) * | 2017-07-21 | 2018-09-12 | 株式会社コロプラ | Information processing method, computer, and program |
JP7277119B2 (en) * | 2017-11-30 | 2023-05-18 | 株式会社デジタルガレージ | Image processing system and image processing method |
JP2019023869A (en) * | 2018-08-03 | 2019-02-14 | 株式会社コロプラ | Information processing method, computer, and program |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090216341A1 (en) * | 2005-04-01 | 2009-08-27 | Abb Research Ltd. | Method and System for Providing a User Interface |
US20120054635A1 (en) * | 2010-08-25 | 2012-03-01 | Pantech Co., Ltd. | Terminal device to store object and attribute information and method therefor |
US8379053B1 (en) * | 2012-01-24 | 2013-02-19 | Google Inc. | Identification of areas of interest on a web page |
US20130241952A1 (en) * | 2012-03-15 | 2013-09-19 | Jason Richman | Systems and methods for delivery techniques of contextualized services on mobile devices |
US20150049002A1 (en) * | 2013-02-22 | 2015-02-19 | Sony Corporation | Head-mounted display and image display apparatus |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0761257A (en) * | 1993-08-26 | 1995-03-07 | Nissan Motor Co Ltd | Display device for vehicle |
JP4826506B2 (en) * | 2007-02-27 | 2011-11-30 | 日産自動車株式会社 | Gaze estimation device |
JP5465620B2 (en) | 2010-06-25 | 2014-04-09 | Kddi株式会社 | Video output apparatus, program and method for determining additional information area to be superimposed on video content |
JP2013255168A (en) * | 2012-06-08 | 2013-12-19 | Toyota Infotechnology Center Co Ltd | Imaging apparatus and imaging method |
US9639990B2 (en) * | 2013-10-03 | 2017-05-02 | Panasonic Intellectual Property Management Co., Ltd. | Display control apparatus, computer-implemented method, storage medium, and projection apparatus |
- 2015-09-29 JP JP2015191042A patent/JP5913709B1/en active Active
- 2016-09-07 WO PCT/JP2016/076307 patent/WO2017056892A1/en active Application Filing
- 2016-09-20 US US15/271,188 patent/US10008041B2/en active Active
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9857884B2 (en) | 2016-05-18 | 2018-01-02 | Colopl, Inc. | Visual-field information collection method and system for executing the visual-field information collection method |
US10488949B2 (en) | 2016-05-18 | 2019-11-26 | Colopl, Inc. | Visual-field information collection method and system for executing the visual-field information collection method |
US20190064199A1 (en) * | 2017-08-28 | 2019-02-28 | Otis Elevator Company | Hybrid altimeter for measuring vertical velocity |
Also Published As
Publication number | Publication date |
---|---|
JP5913709B1 (en) | 2016-04-27 |
WO2017056892A1 (en) | 2017-04-06 |
JP2017068411A (en) | 2017-04-06 |
US10008041B2 (en) | 2018-06-26 |
Similar Documents
Publication | Title |
---|---|
US10008041B2 (en) | Image generating device, image generating method, and image generating program |
CN109564472B (en) | Method, medium, and system for selecting an interaction method with a virtual object | |
US10198855B2 (en) | Method of providing virtual space, method of providing virtual experience, system and medium for implementing the methods | |
US9652047B2 (en) | Visual gestures for a head mounted device | |
US10088900B2 (en) | Information processing method and information processing system | |
US9773170B2 (en) | Pupil detection | |
CN105283824B (en) | With the virtual interacting of image projection | |
JP6097377B1 (en) | Image display method and program | |
JP6017008B1 (en) | Avatar display system, user terminal, and program | |
TW201541382A (en) | Display device viewer gaze attraction | |
CN106104423A (en) | Pose parameter is regulated | |
CN116710878A (en) | Context aware augmented reality system | |
JP5980404B1 (en) | Method of instructing operation to object in virtual space, and program | |
US12106426B2 (en) | Advertisement display system |
US11195320B2 (en) | Feed-forward collision avoidance for artificial reality environments | |
US10488949B2 (en) | Visual-field information collection method and system for executing the visual-field information collection method | |
CN108292168B (en) | Method and medium for indicating motion of object in virtual space | |
JP6707429B2 (en) | Avatar display system, user terminal, and program | |
US11474595B2 (en) | Display device and display device control method | |
JP6209252B1 (en) | Method for operating character in virtual space, program for causing computer to execute the method, and computer apparatus | |
JP2018010665A (en) | Method of giving operational instructions to objects in virtual space, and program | |
JP2019101468A (en) | Program for providing virtual experience, information processing apparatus, and information processing method | |
JP2017068818A (en) | Device, method, and program for forming images | |
JP2017097918A (en) | Image display method and program | |
JP2018045462A (en) | Display system, display method, and computer device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | AS | Assignment | Owner name: COLOPL, INC., JAPAN. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: INOMATA, ATSUSHI; REEL/FRAME: 039806/0149. Effective date: 20160826 |
 | STCF | Information on status: patent grant | Free format text: PATENTED CASE |
 | MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 4 |