WO2023053516A1 - Virtual space image generation device and method - Google Patents
Virtual space image generation device and method
- Publication number
- WO2023053516A1 (PCT/JP2022/012969, JP2022012969W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- virtual space
- viewpoint
- space image
- user
- visibility
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/003—Navigation within 3D models or images
Definitions
- The present invention relates to a virtual space image generation device and method.
- Patent Document 1 discloses a virtual space image provision method for providing a virtual space image, visually recognized by a user, to a head mounted display (HMD).
- In this method, the rotation direction and rotation speed of the HMD are obtained, and blurring is applied to the end regions on both sides of the virtual space image, in the on-screen direction corresponding to the rotation direction, with a range and intensity corresponding to the rotation speed, in order to suppress VR (Virtual Reality) sickness.
- The present invention has been made in view of the above points, and its object is to provide a virtual space image generation device and method capable of generating a virtual space image that realizes visibility close to the appearance of the real space when the user's viewpoint moves.
- One aspect of the present invention provides a virtual space image generation device that generates a virtual space image including a visibility changing region whose visibility changes based on movement of the user's viewpoint.
- This virtual space image generation device includes an image generation unit that, when the user's viewpoint moves, generates the virtual space image in which the transition completion time required for the visibility of the visibility changing region to transition between a first state and a second state different from the first state changes based on a predetermined condition.
- According to this aspect, the visibility of the visibility changing region transitions between the first and second states over a transition completion time that changes based on the predetermined condition, so it is possible to generate a virtual space image that achieves visibility close to the appearance of the real space when the user's viewpoint moves.
- FIG. 1 is a block diagram showing a schematic configuration of a driving simulator system to which a virtual space image generation device according to an embodiment of the invention is applied.
- FIG. 2 is a diagram showing an example of a virtual space image before blurring processing is performed in the embodiment.
- FIG. 3 is a diagram showing an example of a virtual space image after blurring has been performed outside the viewpoint peripheral region in the embodiment.
- FIG. 4 is a diagram showing an example of the user's viewpoint movement on the virtual space image of FIG. 3.
- FIG. 5 is a conceptual diagram, viewed from above, of the viewpoint movement of FIG. 4 in the virtual space.
- FIG. 6 is a diagram showing an example of a virtual space image after the transition completion time has elapsed since the user's viewpoint movement in the embodiment.
- FIG. 7 is a flow chart showing an example of a method of generating a virtual space image in the embodiment.
- FIG. 8 is a graph showing an example of the temporal change in visibility when the viewpoint moves in the receding direction in the embodiment.
- FIG. 9 is a graph exemplifying how the curve in the graph of FIG. 8 changes when the value of the time constant of the function representing its shape is changed.
- FIG. 10 is a graph showing an example of the temporal change in visibility when the viewpoint moves in the approaching direction in the embodiment.
- FIG. 11 is a graph exemplifying how the curve in the graph of FIG. 10 changes when the value of the time constant of the function representing its shape is changed.
- FIG. 1 is a block diagram showing a schematic configuration of a driving simulator system to which a virtual space image generating device according to one embodiment of the invention is applied.
- In FIG. 1, a driving simulator system 1 is used, for example, for evaluating the visibility of various objects in the development of vehicles such as automobiles and for the simulated experience of driving a vehicle.
- This driving simulator system 1 includes a virtual space image generation device 2 according to this embodiment, a sensor 3, and an image forming device 4.
- The virtual space image generation device 2 detects the motion of the user U based on the output signal from the sensor 3 and generates a virtual space image including a region whose visibility changes according to the detected motion of the user U.
- The virtual space image generation device 2 then presents the generated virtual space image to the user U via the image forming device 4, such as a head mounted display (HMD) worn on the head of the user U.
- The image forming apparatus 4 of this embodiment has left and right display units corresponding to the left and right eyes of the user, respectively.
- The image forming apparatus 4 causes the user to perceive a three-dimensional virtual space by displaying virtual space images having a parallax on the left and right display units.
- Different virtual space images may be displayed on the left and right display units, or a common virtual space image may be output to both display units.
- Optical shutters may be provided on the left and right display units to generate parallax between the virtual space images output from the respective display units.
- The image forming apparatus 4 is not limited to a configuration in which a display device that displays the virtual space image, such as an HMD, is mounted on the head of the user U; it may be an image display device such as a liquid crystal display placed in front of the user U. The image forming device 4 may also be an image projection device, such as a projector or a head-up display, that projects the virtual space image onto a predetermined projection surface (a screen, glass, or wall surface). In this case, in order to perceive the virtual space from the projected virtual space image, it is preferable to separately attach optical shutter devices to the left and right eyes of the user.
- The virtual space image generation device 2 includes, for example, a viewpoint detection unit 11, an input unit 12, a storage unit 13, an image generation unit 14, and a display control unit 15 as its functional blocks.
- The hardware configuration of the virtual space image generation device 2 is, for example, a computer system including a processor, memory, a user input interface, and a communication interface. In the virtual space image generation device 2, the functions of the viewpoint detection unit 11, the image generation unit 14, and the display control unit 15 are realized by the processor of the computer system reading and executing a program stored in the memory.
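For orientation, the cooperation of these functional blocks can be sketched in Python; this is only an illustrative skeleton assuming a simple per-frame update cycle, and every class and method name below is invented for illustration rather than taken from the patent.

```python
from dataclasses import dataclass


@dataclass
class Viewpoint:
    x: float        # horizontal coordinate on the image forming device 4
    y: float        # vertical coordinate on the image forming device 4
    depth_m: float  # depth of the fixated object in the virtual space [m]


class VirtualSpaceImageGenerationDevice:
    """Hypothetical wiring of the functional blocks 11, 14 and 15."""

    def __init__(self, viewpoint_detector, image_generator, display_controller):
        self.viewpoint_detector = viewpoint_detector    # viewpoint detection unit 11
        self.image_generator = image_generator          # image generation unit 14
        self.display_controller = display_controller    # display control unit 15

    def step(self, sensor_signal):
        """One update cycle: detect the viewpoint, update the image, display it."""
        viewpoint = self.viewpoint_detector.detect(sensor_signal)
        frame = self.image_generator.update(viewpoint)
        self.display_controller.show(frame)
```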
- The viewpoint detection unit 11 detects the viewpoint of the user U using the output signal of the sensor 3.
- The viewpoint of the user U is the point on the image forming apparatus 4 on which the line of sight of the user U is focused.
- As the sensor 3, for example, a line-of-sight sensor or the like incorporated in the HMD worn by the user U is used.
- The sensor 3 detects the eye movement of the user U, measures the direction of the line of sight, and outputs a signal indicating the line-of-sight direction to the viewpoint detection unit 11 via the communication interface of the computer system.
- The viewpoint detection unit 11 detects the position of the viewpoint of the user U on the image forming apparatus 4 (coordinates on a two-dimensional plane) based on the line-of-sight direction of the user U measured by the sensor 3, the positional relationship between the eyes of the user U and the image forming device 4, and positional information in the virtual space obtained from the image data of the virtual space stored in the storage unit 13.
- Such a viewpoint detection function of the viewpoint detection unit 11 is sometimes called eye tracking.
- The viewpoint detection unit 11 notifies the image generation unit 14 of the positional information of the detected viewpoint.
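As a rough illustration of the geometry involved in such eye tracking, the following sketch intersects a measured gaze ray with the display plane to obtain two-dimensional screen coordinates. It assumes a flat display surface and known eye/display geometry; the function name and parameters are hypothetical and not part of the patent.

```python
import numpy as np


def gaze_to_screen_point(gaze_dir, eye_pos, screen_origin,
                         screen_x_axis, screen_y_axis, screen_normal):
    """Intersect a gaze ray with the display plane and return 2-D coordinates.

    All vectors are 3-D numpy arrays in a common reference frame:
    `gaze_dir` is the measured line-of-sight direction, `eye_pos` the eye
    position, and the remaining arguments describe the display plane.
    """
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    denom = np.dot(gaze_dir, screen_normal)
    if abs(denom) < 1e-9:
        return None  # gaze is parallel to the display plane
    t = np.dot(screen_origin - eye_pos, screen_normal) / denom
    hit = eye_pos + t * gaze_dir              # 3-D intersection point
    rel = hit - screen_origin
    u = np.dot(rel, screen_x_axis)            # horizontal screen coordinate
    v = np.dot(rel, screen_y_axis)            # vertical screen coordinate
    return u, v
```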
- In addition to the viewpoint detection function, the virtual space image generation device 2 may have a head tracking function for detecting the movement of the head of the user U and a position tracking function for detecting the movement of the body of the user U.
- The detection results of head tracking and position tracking are also transmitted to the image generation unit 14 together with the detection result of the viewpoint detection unit 11.
- These detection results include, for example, information on the orientation of the head of the user U, and the direction of the line of sight of the user U may be estimated based on this information.
- The input unit 12 is realized by the user input interface of the computer system and has, for example, a keyboard, a mouse, an operation controller, and the like.
- The input unit 12 also has a receiving unit that receives information from the outside by wire or wirelessly, and functions as an external information input interface that receives information from an external computer.
- The input unit 12 is used to input, as the predetermined condition, conditions related to the environment of the virtual space (hereinafter referred to as "environmental conditions"), conditions related to the user U (hereinafter referred to as "user conditions"), and travel conditions of the vehicle in the virtual space (moving route, speed, and the like).
- The environmental conditions include the weather in the virtual space (sunny, cloudy, rainy, foggy, etc.), humidity, the driving environment (outdoors, indoors, tunnels, etc.), the state of the windshield, or combinations thereof.
- The user conditions include the age, gender, eyesight, eye health, degree of eye opening, and dominant eye of the user U, or a combination thereof.
- Information about the predetermined condition input via the input unit 12 is transmitted to the image generation unit 14 and stored in the storage unit 13.
- The user conditions may be obtained by conducting experiments in advance on a subject who is assumed to be the user U of the driving simulator system 1 and may be input to the virtual space image generation device 2 using the input unit 12.
- Alternatively, the user U may be photographed by a camera or the like separately provided in the virtual space image generation device 2, and the user conditions may be determined or detected based on the captured image of the user.
- The storage unit 13 is realized by a storage device (for example, a magnetic disk, an optical disk, a flash memory, or the like) connected to the computer system, and stores various setting information such as the environmental conditions, user conditions, and vehicle travel conditions described above.
- The storage unit 13 also stores image data of the virtual space including various objects. The various objects are the objects included in the scene that the user U can see from the driver's seat of the vehicle in the virtual space.
- The image generation unit 14 generates the virtual space image to be displayed on the image forming device 4 using the image data stored in the storage unit 13, the image data received from the input unit 12, and the various setting information. At this time, based on the viewpoint detected by the viewpoint detection unit 11, the image generation unit 14 generates a virtual space image in which the visibility within a predetermined visibility change region is in a first state and the visibility outside the visibility change region is in a second state different from the first state. That is, the virtual space image generated by the image generation unit 14 includes an image portion with relatively high visibility and an image portion with relatively low visibility.
- When the viewpoint of the user U moves, the image generation unit 14 updates the virtual space image so that the image portion located within the visibility change region transitions between a first state, in which its visibility is relatively higher than that of the image portion located outside the visibility change region, and a second state, in which its visibility is relatively lower than that of the image portion located outside the viewpoint peripheral region.
- The visibility change region includes not only the viewpoint peripheral region of the movement source and the viewpoint peripheral region of the movement destination, which will be described later, but also regions other than the viewpoint peripheral regions.
- The state of visibility is controlled, for example, by blurring the image displayed in the target region.
- Blur processing is processing that changes the amount of information in the image data so that the image appears blurred.
- In other words, the blurring process is image processing that reduces the amount of information that the user U can visually confirm.
- Specific examples of the blurring process include processing that lowers the resolution of the image (object) displayed in the target region, processing that reduces its display area in stages, and other processing that reduces the amount of information.
- A process of increasing the display area in stages and a process of decreasing the display area in stages may also be performed in order, or alternately, to easily reproduce an out-of-focus state.
- The first state, in which the visibility is relatively high, is, for example, an in-focus state before the blurring process is performed, and represents a state in which the amount of information that the user U can visually confirm from the image is large.
- The second state, in which the visibility is relatively low, is, for example, an out-of-focus, blurred state after the blurring process has been performed, and represents a state in which the amount of information that the user U can visually confirm from the image is small.
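One plausible realization of such region-dependent blur processing, sketched in Python with OpenCV, is shown below. The patent does not prescribe a specific blur algorithm, so the Gaussian kernel, the circular region shape, and the parameter names are assumptions made purely for illustration.

```python
import cv2
import numpy as np


def apply_peripheral_blur(image, viewpoint_xy, region_radius, blur_ratio):
    """Blur everything outside the viewpoint peripheral region.

    `image` is a colour frame of shape (H, W, 3); `blur_ratio` in [0, 1]
    plays the role of the blur amount in the text: 0 corresponds to the
    in-focus first state, 1 to the fully blurred second state.
    """
    h, w = image.shape[:2]
    # Kernel size grows with the blur ratio (must be odd and >= 1).
    k = max(1, int(blur_ratio * 30)) | 1
    blurred = cv2.GaussianBlur(image, (k, k), 0)

    # Circular mask: 1 inside the viewpoint peripheral region, 0 outside.
    yy, xx = np.mgrid[0:h, 0:w]
    cx, cy = viewpoint_xy
    inside = ((xx - cx) ** 2 + (yy - cy) ** 2) <= region_radius ** 2
    mask = inside.astype(np.float32)[..., None]

    # Keep the sharp image inside the region, the blurred image outside.
    return (image * mask + blurred * (1.0 - mask)).astype(image.dtype)
```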
- FIG. 2 shows an example of a virtual space image before blurring.
- This virtual space image is a virtual space image that is input to one of the left and right eyes.
- Another virtual space image (not shown) having a different parallax from the virtual space image in FIG. 2 is input to the other of the left and right eyes.
- the user U can perceive the virtual space from the virtual space images with different parallax input to the left and right eyes.
- FIG. 3 shows an example of the virtual space image after the blurring process has been performed outside the viewpoint peripheral region.
- the virtual space image generated by the image generating unit 14 expresses a scene in the virtual space that the user U can see from the driver's seat of the vehicle.
- the virtual space image includes, as objects representing the vehicle, the upper part of the steering wheel, the upper part of the dashboard, the right front pillar, the front edge of the roof, the rearview mirror, the right side mirror, and the like.
- the number "8" displayed at the bottom center of the virtual space image is an object for evaluating visibility near the upper end of the steering wheel.
- the virtual space image also includes roads, sidewalks, buildings, road signs (stop), etc. as objects representing stationary objects outside the vehicle.
- In FIG. 3, the viewpoint of the user U (indicated by the mark in the figure) is located at the position P on the number "8" displayed near the upper end of the steering wheel.
- Objects located inside the viewpoint peripheral region A enclosed by broken lines in the figure are in focus, and objects located outside the viewpoint peripheral region A are out of focus and blurred. That is, the virtual space image generated by the image generation unit 14 and subjected to blurring processing according to the position P of the viewpoint of the user U is in the first state, in which the visibility within the viewpoint peripheral region A is relatively high, and in the second state, in which the visibility outside the viewpoint peripheral region A is relatively low. Note that the mark indicating the viewpoint of the user U is not displayed in the actual virtual space image.
- The image generation unit 14 updates the blurred virtual space image when the viewpoint of the user U moves.
- This virtual space image update process can be executed for any movement of the viewpoint of the user U. For example, when the viewpoint position P of the user U moves to a different position within the viewpoint peripheral region A in FIG. 3, the virtual space image is updated accordingly. Further, for example, when the position P of the viewpoint of the user U has moved to a distant position outside the viewpoint peripheral region A, so that the viewpoint peripheral region of the movement destination is located outside the viewpoint peripheral region of the movement source, the image of the entire viewpoint peripheral region of the movement destination and the image of the entire viewpoint peripheral region of the movement source are updated.
- FIG. 4 shows an example of the movement of the user U's viewpoint.
- In FIG. 4, the viewpoint of the user U moves from the position Pn on the number "8" object (first object) displayed near the upper end of the steering wheel to a position Pf on the road sign object (second object) installed above the left sidewalk in front of the vehicle.
- The road sign object is positioned farther away in the depth direction in the virtual space than the object near the upper end of the steering wheel. Therefore, in the virtual space, the viewpoint of the user U moves to the upper left on the two-dimensional plane (on the virtual space image) spanned by the horizontal and vertical directions, and also moves farther away in the depth direction.
- The depth direction in the virtual space differs depending on the form of the image forming apparatus 4.
- Depending on the configuration, the depth direction may be a preset unique direction in the virtual space (for example, the front-rear direction), or a predetermined direction that changes relative to the position of the head.
- The depth direction may also be the direction in which the head faces, or the line-of-sight direction of the user U with respect to the viewpoint before the movement, that is, the direction connecting the eye E of the user U with the actual viewpoint before the movement or with the viewpoint in the virtual space before the movement.
- FIG. 5 is a conceptual diagram of the movement of the viewpoint of the user U in the virtual space as seen from above.
- the arrow Z direction indicates the depth direction of the virtual space (vehicle front-rear direction)
- the arrow X direction indicates the horizontal direction of the virtual space (vehicle width direction).
- the viewpoint of the user U moves from position Pn to position Pf on the image forming device 4 (on the virtual space image).
- the first object (number "8") displayed at the position Pn of the viewpoint of the movement source is located at a position Pn' apart from the image forming apparatus 4 by a distance Zn in the depth direction in the virtual space.
- the second object (road sign) displayed at the destination viewpoint position Pf is located at a position Pf' away from the image forming apparatus 4 by a distance Zf in the depth direction in the virtual space.
- The distance Zf is longer than the distance Zn by the distance ΔZ.
- In the physical space, the focus of the eye E of the user U is on the position Pn on the image forming device 4 at the movement source of the viewpoint, and on the position Pf on the image forming device 4 at the movement destination of the viewpoint.
- Therefore, the actual focal length of the eye E of the user U is the distance Fn from the eye E to the position Pn at the viewpoint movement source, and the distance Ff from the eye E to the position Pf at the viewpoint movement destination, as indicated by the solid arrows in FIG. 5.
- In FIG. 5, d represents the distance between the eye E of the user U and the image forming apparatus 4 in the depth direction.
- Since the user U changes the viewpoint by moving the line of sight, the distances Fn and Ff differ between the position Pn and the position Pf, but the amount of change Δd (not shown) in the distance before and after the movement is slight.
- On the other hand, the virtual focal length of the eye E of the user U is the distance Fn' from the eye E to the position Pn' at the viewpoint movement source, and the distance Ff' from the eye E to the position Pf' at the viewpoint movement destination, as indicated by the dotted arrows in FIG. 5. That is, in the virtual space, the position Pn' is located behind the position Pn by the distance Zn in the depth direction, and the position Pf' is located behind the position Pf by the distance Zf in the depth direction. The position Pf' is also located behind the position Pn' by the distance ΔZ in the depth direction. Furthermore, in FIG. 5, the distance ΔZ is much larger than the distance Δd.
- In this embodiment, the virtual space image to be displayed on the image forming apparatus 4 is generated (updated) so as to suppress the difference in appearance between the actual change in the focal length from Fn to Ff and the virtual change in the focal length from Fn' to Ff' accompanying the movement of the viewpoint of the user U.
- Specifically, the image generation unit 14 determines the moving direction and moving amount of the viewpoint on the image forming apparatus 4 (on the virtual space image) from the change in the viewpoint positions Pn and Pf (coordinates on a two-dimensional plane) detected by the viewpoint detection unit 11. Further, the image generation unit 14 extracts the depth information corresponding to the viewpoint positions Pn and Pf from the depth information defined for each pixel (or for each object) of the virtual space image, determines the positions Pn' and Pf' of the viewpoint in the virtual space, and determines whether the viewpoint is moving at least in the depth direction in the virtual space, that is, whether the viewpoint is moving in the receding direction or in the approaching direction in the virtual space. The image generation unit 14 then executes the process of updating the virtual space image when the viewpoint moves in the depth direction within the virtual space.
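A minimal sketch of this depth-based decision, assuming the renderer exposes a per-pixel depth map and that viewpoint coordinates are given in pixels, might look as follows; all names are illustrative only.

```python
def classify_depth_motion(depth_map, p_src, p_dst, eps=0.01):
    """Decide whether the viewpoint moved away or closer in the virtual space.

    `depth_map[y, x]` is the per-pixel depth (distance from the display into
    the virtual space, in metres) kept for the rendered frame; `p_src` and
    `p_dst` are the (x, y) screen coordinates of the viewpoint before and
    after the movement.
    """
    z_src = float(depth_map[p_src[1], p_src[0]])   # Zn in the text
    z_dst = float(depth_map[p_dst[1], p_dst[0]])   # Zf in the text
    if z_dst > z_src + eps:
        return "receding"      # viewpoint moved away in the depth direction
    if z_dst < z_src - eps:
        return "approaching"   # viewpoint moved closer in the depth direction
    return "lateral"           # no significant depth change
```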
- In the update process, when the viewpoint of the user U moves, the image generation unit 14 raises the visibility in the viewpoint peripheral region Af of the movement destination (a visibility change region) from the second state to the first state, and lowers the visibility in the viewpoint peripheral region An of the movement source (a visibility change region) from the first state to the second state.
- The transition completion time is the time from the completion of the viewpoint movement of the user U to the completion of the visibility transition.
- The visibility in the viewpoint peripheral region Af of the movement destination changes from the second state to the first state over a transition completion time that changes based on a predetermined condition.
- In parallel, a virtual space image is generated in which the visibility in the viewpoint peripheral region An of the movement source transitions from the first state to the second state.
- Accordingly, in the virtual space image immediately after the viewpoint movement of the user U, the visibility in the viewpoint peripheral region An of the movement source is still in the first state (in-focus state), and the visibility in the viewpoint peripheral region Af of the movement destination is still in the second state (out-of-focus, blurred state).
- The state of visibility (state of blur processing) in the virtual space image immediately after such a viewpoint movement is therefore the same as that in the virtual space image (before the movement of the viewpoint of the user U) shown in FIG. 3 described above.
- FIG. 6 shows an example of a virtual space image after the transition completion time has passed since the user U's viewpoint movement.
- As shown in FIG. 6, in the virtual space image for which the update processing (the first process and the second process) has been completed after the transition completion time has elapsed, the visibility in the viewpoint peripheral region An of the movement source has changed to the second state (out-of-focus, blurred state), and the visibility in the viewpoint peripheral region Af of the movement destination has changed to the first state (in-focus state).
- The image generation unit 14 may also change the virtual space image so as to follow the movement of the head and body of the user U when the detection results of the head tracking or position tracking described above are transmitted.
- For example, the image generation unit 14 changes the virtual space image according to the movement of the head of the user U detected by head tracking so that, when the user U turns to the left, the scene to the left of the user U in the virtual space is displayed.
- Further, the image generation unit 14 changes the virtual space image so that the field of view of the user U changes according to the current position of the user U detected by position tracking.
- the display control unit 15 ( FIG. 1 ) generates a control signal for causing the image forming device 4 to display the virtual space image generated by the image generating unit 14 and outputs the control signal to the image forming device 4 .
- the image forming apparatus 4 that receives the control signal from the display control unit 15 displays the virtual space image according to the control signal.
- FIG. 7 is a flow chart showing an example of a virtual space image generation method performed by the virtual space image generation device 2.
- First, in step S10, the viewpoint detection unit 11 uses the output signal of the sensor 3 to detect the position (coordinates on a two-dimensional plane) of the viewpoint of the user U on the image forming device 4.
- This viewpoint position detection processing by the viewpoint detection unit 11 is repeatedly executed at a predetermined cycle.
- The positional information of the viewpoint of the user U detected by the viewpoint detection unit 11 is transmitted to the image generation unit 14.
- Next, the image generation unit 14 that has received the viewpoint position information from the viewpoint detection unit 11 generates the virtual space image to be displayed on the image forming apparatus 4 using the image data stored in the storage unit 13 (or the image data received from the input unit 12) and the various setting information.
- At this time, the image generation unit 14 blurs the image portion located outside the viewpoint peripheral region A based on the position P of the viewpoint of the user U, as in the example shown in FIG. 3. As a result, a virtual space image is generated in which the visibility within the viewpoint peripheral region A is in the relatively high first state and the visibility outside the viewpoint peripheral region A is in the relatively low second state.
- In step S30, the image generation unit 14 executes processing for determining movement of the viewpoint of the user U based on the viewpoint position information transmitted from the viewpoint detection unit 11 at the predetermined cycle. In this determination processing, it is determined whether or not the viewpoint of the movement destination has moved at least in the depth direction with respect to the viewpoint before the movement. If the viewpoint has moved (YES), the process proceeds to the next step S40; if the viewpoint has not moved (NO), the process proceeds to step S50.
- In step S40, the image generation unit 14 performs the update processing of the virtual space image according to the movement of the viewpoint of the user U.
- The process then proceeds to step S50, in which the display control unit 15 controls the image forming apparatus 4 so as to display the virtual space image generated (or updated) by the image generation unit 14. When the virtual space image is displayed on the image forming device 4, the process returns to step S30 and the same processing is repeated.
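Put together, the flow of FIG. 7 could be approximated by a loop such as the following sketch; the object interfaces and the update period are assumptions for illustration, not details taken from the patent.

```python
import time


def generation_loop(viewpoint_detector, image_generator, display_controller,
                    period_s=1 / 90):
    """Rough rendition of FIG. 7 (steps S10 to S50); names are placeholders."""
    # S10: detect the initial viewpoint position on the display.
    viewpoint = viewpoint_detector.detect()
    # Generate the initial virtual space image, blurred outside the
    # viewpoint peripheral region.
    frame = image_generator.generate(viewpoint)
    while True:
        # S30: determine whether the viewpoint has moved, at least in the
        #      depth direction of the virtual space.
        new_viewpoint = viewpoint_detector.detect()
        if image_generator.moved_in_depth(viewpoint, new_viewpoint):
            # S40: update the image (the first and second processes below).
            frame = image_generator.update_for_movement(viewpoint, new_viewpoint)
        viewpoint = new_viewpoint
        # S50: display the generated (or updated) image, then repeat from S30.
        display_controller.show(frame)
        time.sleep(period_s)
```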
- As described above, when the viewpoint of the user U moves, the image generation unit 14 of this embodiment performs a first process of updating the virtual space image so that the visibility in the viewpoint peripheral region Af of the movement destination rises from the second state to the first state and so that the transition completion time required for the visibility to transition between the first state and the second state changes based on a predetermined condition, and a second process of updating the virtual space image so that the visibility in the viewpoint peripheral region An of the movement source falls from the first state to the second state.
- The transition of the visibility in the viewpoint peripheral region of the movement destination from the second state to the first state is realized by reducing the blurring applied to the image in that region. In other words, by reducing the blur amount (blur ratio) of the image subjected to the blurring process and bringing the image closer to an in-focus state, the visibility in the region is raised and the state is changed from the second state to the first state.
- The process of reducing the blur amount, which causes the visibility to transition from the second state to the first state, is performed over the transition completion time.
- This transition completion time includes a delay time from the completion of the viewpoint movement of the user U to the start of the blur reduction process, and a transition time from the start to the completion of the blur reduction process.
- On the other hand, the transition of the visibility in the viewpoint peripheral region of the movement source from the first state to the second state is realized by applying the blurring process to the image in that region. That is, by increasing the blur amount (blur ratio) of the image in the blurring process, the visibility in the region is lowered and the state is changed from the first state to the second state.
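The relationship between the first and second processes can be illustrated with a simple time-based sketch. A linear fade is used here purely for readability, whereas the embodiment shapes the fade with the curve of equation (1) discussed further below; the function and parameter names are hypothetical.

```python
def blur_amounts_after_movement(t, delay, transition):
    """Blur ratios for the destination and source viewpoint peripheral regions.

    `t` is the time elapsed since the viewpoint movement completed.
    First process: the destination region keeps its full blur during the
    delay time, then fades to sharp over the transition time.
    Second process: the source region is blurred right after the movement
    (treated here as instantaneous, a simplification of the short fall
    described in the text).
    """
    # Destination region: 1.0 = fully blurred (second state), 0.0 = in focus.
    if t <= delay:
        dst_blur = 1.0
    elif t >= delay + transition:
        dst_blur = 0.0
    else:
        dst_blur = 1.0 - (t - delay) / transition

    src_blur = 1.0  # source region blurred immediately after the movement
    return dst_blur, src_blur
```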
- the delay time and the transition time included in the transition completion time change according to the moving direction of the user U's viewpoint, which is one of the predetermined conditions.
- the moving direction of the viewpoint of the user U includes the vertical and horizontal directions on the image forming device 4 (on the virtual space image), the depth direction in the virtual space, and combinations of these directions.
- the delay time and the transition time change depending on whether the viewpoint of the user U moves away or approaches in the depth direction in the virtual space.
- FIG. 8 is a graph showing an example of changes in visibility over time when the viewpoint moves away.
- the upper graph in FIG. 8 corresponds to the destination viewpoint surrounding area Af
- the lower graph in FIG. 8 corresponds to the movement source viewpoint surrounding area An.
- the vertical axis of each graph represents the state of visibility V
- the horizontal axis represents time t.
- the state of the visibility V on the vertical axis increases as the distance from the intersection (origin) with the horizontal axis increases.
- the state of the visibility V corresponds to the amount of blurring (blurring rate) of the image in the blurring process, as described above. Therefore, the vertical axis of each graph in FIG. 8 also represents the amount of blurring of the image, and the amount of blurring decreases as the distance from the origin increases.
- the user U's viewpoint is at position Pn (on the first object with the number "8") at time t1, and moves to position Pf (on the second road sign object) at time t2.
- the visibility in the viewpoint surrounding area Af of the movement destination is in the second state V2, which is relatively low
- the visibility in the viewpoint surrounding area An of the movement source is in the first state V1, which is relatively high.
- the dashed line in each graph in FIG. 8 represents the temporal change in visibility corresponding to the blurring process applied to the virtual space image in the conventional technology as described above.
- the blurring process applied to the virtual space image is executed at a speed corresponding to the performance of hardware responsible for image processing. Therefore, the visibility (blur amount) transition between the first state V1 and the second state V2 is completed in a short period of time (time t1 to t2) substantially simultaneously with the movement of the user U's viewpoint.
- In contrast, in the present embodiment, the visibility in the viewpoint peripheral region Af of the movement destination gradually increases from the second state V2.
- Meanwhile, the visibility in the viewpoint peripheral region An of the movement source is immediately lowered from the first state V1 to the second state V2.
- Specifically, in the temporal change in visibility in the viewpoint peripheral region Af of the movement destination, the visibility is maintained in the second state V2 until a predetermined delay time L1 elapses after the viewpoint movement of the user U is completed, and starts to increase at time t3.
- After time t3, the visibility in the viewpoint peripheral region Af of the movement destination changes gradually with time, and rises to the first state V1 at time t4.
- That is, the process of reducing the blurring applied to the image in the viewpoint peripheral region Af of the movement destination is started after the delay time L1 has elapsed. Then, by gradually reducing the blur amount (blur ratio) of the image over time, the visibility in the viewpoint peripheral region Af of the movement destination is raised from the second state V2 (out-of-focus, blurred state) to the first state V1 (in-focus state).
- The transition time α1 from the start to the completion of this transition corresponds to the period from time t3 to time t4.
- The transition completion time T1 (time t2 to t4) required from the completion of the viewpoint movement of the user U to the completion of the visibility transition is the sum of the delay time L1 and the transition time α1.
- The state in which the image in the viewpoint peripheral region Af of the movement destination is in focus corresponds to the state, described with reference to FIG. 5, in which the focal point of the eye E of the user U coincides with the virtual position Pf' of the viewpoint of the movement destination. That is, the image processing that raises the visibility in the viewpoint peripheral region Af of the movement destination to the first state on the virtual space image corresponds to the action of the user U activating the focus adjustment function of the eye E in the real space to adjust the focal length to Ff'. Therefore, if the temporal change in visibility when the visibility is raised to the first state V1 is brought close to the temporal change in focal length when the focal length is adjusted to Ff' by the focus adjustment function of the eye E, it becomes possible to generate a virtual space image that achieves visibility close to how the scene looks in the real space.
- In FIG. 8, the curve C1 spanning the transition time α1 represents the temporal change in visibility when the visibility in the viewpoint peripheral region Af of the movement destination is raised to the first state V1. The shape of the curve C1 can be brought close to the temporal change of the focal length due to the focus adjustment function of the eye E, for example, by making it follow the function of equation (1).
- In equation (1), t represents the time [s] after the start of the blur reduction process.
- F represents the focal length [m] at time t.
- Do represents the diopter, that is, the reciprocal of the focal length at the start of the viewpoint movement.
- Dt represents the diopter, that is, the reciprocal of the focal length at the end of the viewpoint movement.
- e represents Napier's number (the base of the natural logarithm).
- τ represents a time constant. In the case of FIG. 8, where the viewpoint moves in the receding direction, Do means 1/Fn' and Dt means 1/Ff'.
- The focal length F in equation (1) corresponds to the state of visibility V at time t.
- The time constant τ in equation (1) is set according to the environmental conditions and user conditions described above, and the length of the transition time α1 (the shape of the curve C1) changes according to the time constant τ.
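The text defines the variables of equation (1) but does not reproduce the equation itself, so the sketch below assumes one common form consistent with those definitions: the diopter value moves exponentially from Do toward Dt with time constant τ, and F is its reciprocal. Both the assumed equation and the mapping from focal length to a blur ratio are illustrative guesses, not the patent's exact formula.

```python
import math


def focal_length_during_transition(t, f_start, f_end, tau):
    """Assumed form of equation (1): exponential approach in diopters.

    `f_start` and `f_end` are the virtual focal lengths before and after the
    movement (Fn' and Ff' for a receding movement); `tau` must be > 0.
    """
    d_start = 1.0 / f_start   # Do: diopter at the start of the movement
    d_end = 1.0 / f_end       # Dt: diopter at the end of the movement
    d_t = d_start + (d_end - d_start) * (1.0 - math.exp(-t / tau))
    return 1.0 / d_t          # F: focal length [m] at time t


def blur_ratio_from_focus(f_t, f_start, f_end):
    """Map the simulated focal length to a blur ratio in [0, 1].

    This mapping is not specified in the text and is only one plausible
    choice: 1 at the start of the transition, 0 once the simulated focal
    length reaches the destination distance.
    """
    err_now = abs(1.0 / f_t - 1.0 / f_end)
    err_start = abs(1.0 / f_start - 1.0 / f_end)
    return 0.0 if err_start == 0 else min(1.0, err_now / err_start)
```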
- FIG. 9 illustrates changes in the transition time α1 when the value of the time constant τ is changed.
- The lower graph in FIG. 8 shows the temporal change in visibility in the viewpoint peripheral region An of the movement source, in contrast to the temporal change in visibility in the viewpoint peripheral region Af of the movement destination described above.
- In the viewpoint peripheral region An of the movement source, the visibility starts to decrease from the first state V1 immediately after the viewpoint movement of the user U is completed, and falls to the second state V2 at time t3'. That is, the blurring process on the image in the viewpoint peripheral region An of the movement source is started immediately after the viewpoint movement of the user U is completed.
- As a result, the visibility in the viewpoint peripheral region An of the movement source is changed from the first state V1 (in-focus state) to the second state V2 (out-of-focus, blurred state) in a short time.
- FIG. 10 is a graph showing an example of changes in visibility over time when the viewpoint moves in the approaching direction.
- the upper graph in FIG. 10 corresponds to the destination viewpoint surrounding area An
- the lower graph in FIG. 10 corresponds to the movement source viewpoint surrounding area Af.
- the vertical axis of each graph represents the state of visibility V (blur amount)
- the horizontal axis represents time t.
- the state of the visibility V on the vertical axis increases with increasing distance from the intersection (origin) with the horizontal axis (the amount of blurring decreases with increasing distance from the origin).
- the user U's viewpoint is at position Pf (on the second object of the road sign) at time t1, and moves to position Pn (on the first object with the number "8") at time t2.
- the visibility in the viewpoint surrounding area An of the movement destination is in the second state V2, which is relatively low
- the visibility in the viewpoint surrounding area Af of the movement source is in the first state V1, which is relatively high.
- the broken line in each graph in FIG. 10 represents the temporal change in visibility corresponding to the blurring process applied to the virtual space image in the conventional technology, as in the case of FIG. 8 described above.
- the transition between the first state V1 and the second state V2 is completed in a short period of time (time t1 to t2).
- the visibility in the viewpoint peripheral region An of the movement destination gradually increases from the second state V2.
- the visibility in the viewpoint peripheral region Af of the movement source is immediately lowered from the first state V1, and the state is changed to the second state V2.
- Specifically, in the temporal change in visibility in the viewpoint peripheral region An of the movement destination, the visibility is maintained in the second state V2 until a predetermined delay time L2 elapses after the viewpoint movement of the user U is completed, and starts to increase at time t3.
- Here, the aforementioned delay time L1 for viewpoint movement in the receding direction is set longer than the delay time L2 for viewpoint movement in the approaching direction (L1 > L2).
- Preferably, the delay time L1 for viewpoint movement in the receding direction is at least twice the delay time L2 for viewpoint movement in the approaching direction.
- Experiments have shown that it is preferable to set the delay time L1 for viewpoint movement in the receding direction in the range from 0.05 seconds to 0.2 seconds (0.05 ≤ L1 ≤ 0.2), and to set the delay time L2 for viewpoint movement in the approaching direction in the range from 0 seconds to 0.05 seconds (0 ≤ L2 ≤ 0.05).
- After time t3, the visibility in the viewpoint peripheral region An of the movement destination changes gradually with time, and rises to the first state V1 at time t5.
- That is, the process of reducing the blurring applied to the image in the viewpoint peripheral region An of the movement destination is started after the delay time L2 has elapsed. Then, by gradually reducing the blur amount (blur ratio) of the image over time, the visibility in the viewpoint peripheral region An of the movement destination is raised from the second state V2 (out-of-focus, blurred state) to the first state V1 (in-focus state).
- The transition time α2 from the start to the completion of this transition corresponds to the period from time t3 to time t5.
- Here, the transition time α1 for viewpoint movement in the receding direction described above is set longer than the transition time α2 for viewpoint movement in the approaching direction (α1 > α2).
- The transition completion time T2 (time t2 to t5) required from the completion of the viewpoint movement of the user U to the completion of the visibility transition is the sum of the delay time L2 and the transition time α2.
- The shape of the curve C2 spanning the transition time α2 can be brought close to the temporal change of the focal length due to the focus adjustment function of the eye E, for example, by making it follow the function of equation (1) above.
- In this case, Do in equation (1) means 1/Ff' and Dt means 1/Fn'.
- The time constant τ is set according to the environmental conditions and user conditions described above, and the shape of the curve C2 (the length of the transition time α2) changes according to the time constant τ.
- FIG. 11 illustrates changes in the transition time α2 when the value of the time constant τ is changed.
- Experiments have confirmed that it is preferable to set the time constant τ for viewpoint movement in the receding direction in the range from 0.05 to 0.2 (0.05 ≤ τ ≤ 0.2) and to set the time constant τ for viewpoint movement in the approaching direction in the range from 0 to 0.1 (0 ≤ τ ≤ 0.1).
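For reference, the numerical ranges stated above can be collected as direction-dependent parameters. The midpoint values chosen below are arbitrary examples within those ranges; only the ranges themselves come from the text.

```python
# Direction-dependent transition parameters, as plausible defaults.
# Receding viewpoint movement: longer delay and slower transition;
# approaching movement: shorter delay and faster transition.
TRANSITION_PARAMS = {
    "receding": {
        "delay_s": 0.1,         # text: 0.05 s <= L1 <= 0.2 s
        "time_constant": 0.1,   # text: 0.05 <= tau <= 0.2
    },
    "approaching": {
        "delay_s": 0.03,        # text: 0 s <= L2 <= 0.05 s
        "time_constant": 0.05,  # text: 0 <= tau <= 0.1
    },
}
```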
- The lower graph in FIG. 10 shows the temporal change in visibility in the viewpoint peripheral region Af of the movement source, in contrast to the temporal change in visibility in the viewpoint peripheral region An of the movement destination described above.
- In the viewpoint peripheral region Af of the movement source, the visibility starts to decrease from the first state V1 immediately after the viewpoint movement of the user U is completed, and falls to the second state V2 at time t3'. That is, the blurring process on the image in the viewpoint peripheral region Af of the movement source is started immediately after the viewpoint movement of the user U is completed.
- As a result, the visibility in the viewpoint peripheral region Af of the movement source is changed from the first state V1 (in-focus state) to the second state V2 (out-of-focus, blurred state) in a short time.
- As described above, in the virtual space image generation device 2 of this embodiment, when the viewpoint of the user U moves, the image generation unit 14 generates a virtual space image in which the transition completion times T1 and T2 required for the visibility of the visibility change regions located around the viewpoints of the movement source and the movement destination to transition between the first state V1 and the second state V2 change based on predetermined conditions.
- As a result, the visibility of the visibility changing regions transitions between the first and second states V1 and V2 over the transition completion times T1 and T2 that change based on the predetermined conditions, so it is possible to generate a virtual space image that realizes visibility close to the appearance of the real space even when the viewpoint of the user U moves.
- Therefore, when the driving simulator system 1 is constructed by applying such a virtual space image generation device 2, various objects in the real space can be reproduced in the virtual space during vehicle development and the like, and the visibility of those objects can be evaluated accurately on the virtual space image.
- Further, when the driving simulator system 1 is used to simulate driving a vehicle, it is possible to provide the user U with a more realistic driving experience.
- The transition completion times T1 and T2 in the virtual space image generation device 2 of this embodiment include the delay times L1 and L2 and the transition times α1 and α2.
- By appropriately setting the delay times L1 and L2 included in the transition completion times T1 and T2, and starting the transition times α1 and α2 after the delay times L1 and L2 have elapsed, the visibility of the generated virtual space image can be brought closer to the appearance of the real space.
- In the virtual space image generation device 2 of this embodiment, a virtual space image is generated in which the transition times α1 and α2 change according to the moving direction of the viewpoint of the user U.
- In the real space, the focal length adjustment time (the time required for the eye to come into focus) varies depending on the direction of viewpoint movement, so the blurring of the region around the viewpoint also changes depending on the direction of viewpoint movement.
- Specifically, the virtual space image generation device 2 of this embodiment generates a virtual space image in which the transition time α1 for viewpoint movement in the receding direction is longer than the transition time α2 for viewpoint movement in the approaching direction.
- This reflects the fact that the focal length adjustment time is longer when the viewpoint moves away than when the viewpoint moves closer.
- By changing the transition times α1 and α2 in the processing of the virtual space image in accordance with such changes in the focal length of the eye, the visibility of the virtual space image can be brought closer to the appearance of the real space.
- Furthermore, the virtual space image generation device 2 of this embodiment generates a virtual space image in which the delay time L1 for viewpoint movement in the receding direction is at least twice as long as the delay time L2 for viewpoint movement in the approaching direction, and in which the time constant τ of the function that determines the transition time α1 for viewpoint movement in the receding direction is at least twice the time constant τ of the function that determines the transition time α2 for viewpoint movement in the approaching direction.
- In particular, by setting the delay time L1 in the range from 0.05 seconds to 0.2 seconds and the time constant τ in the range from 0.05 to 0.2 for viewpoint movement in the receding direction, and by setting the delay time L2 in the range from 0 seconds to 0.05 seconds and the time constant τ in the range from 0 to 0.1 for viewpoint movement in the approaching direction, the visibility of the virtual space image can be reliably brought closer to the appearance of the real space.
- In addition, the virtual space image generation device 2 of this embodiment generates a virtual space image in which the transition completion time changes based on conditions regarding the environment of the virtual space (environmental conditions) and conditions regarding the user U (user conditions).
- In the real space, the blurring of the region around the viewpoint when the viewpoint moves depends on the environment (weather, humidity, driving environment, windshield conditions, and the like) and on the state of the user (age, gender, visual acuity, eye health, degree of eye opening, dominant eye, and the like).
- Therefore, in this embodiment, a virtual space image is generated in which the transition completion time changes based on these predetermined conditions, so that the visibility of the virtual space image is brought closer to the appearance of the real space.
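How such environmental and user conditions might feed into the time constant can be sketched as follows. The specific adjustment rules below are invented for illustration, since the text only states that the transition completion time depends on these conditions.

```python
def adjust_time_constant(base_tau, environment, user):
    """Illustrative adjustment of the time constant by environment and user.

    `environment` and `user` are plain dictionaries of condition values; the
    multipliers are assumptions, not values taken from the patent.
    """
    tau = base_tau
    if environment.get("weather") in ("rainy", "foggy"):
        tau *= 1.2          # assume focus settles more slowly in poor weather
    if user.get("age", 30) >= 60:
        tau *= 1.5          # assume slower accommodation for older users
    return tau
```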
- The present invention is not limited to the above-described embodiment, and various modifications and changes are possible based on the technical idea of the present invention.
- In the above embodiment, an example was described in which the viewpoint of the user U moves in the depth direction (the receding direction or the approaching direction) in the virtual space.
- However, the virtual space image generation technique according to the present invention can also be applied when the viewpoint movement does not involve movement in the depth direction, for example, when the viewpoint of the user U moves between the left and right taillights of a preceding vehicle displayed in the virtual space image; applying the technique in such cases likewise makes it possible to realize a virtual space image close to the appearance of the real space.
- In the above embodiment, an example was described in which the visibility in the viewpoint peripheral region of the movement destination is maintained in the second state during the period from the completion of the viewpoint movement of the user U until the delay times L1 and L2 elapse.
- However, a virtual space image whose visibility is slightly increased during the delay times L1 and L2 may also be generated.
- Further, in the above embodiment, an example was described in which the temporal change in visibility during the transition time follows a function such as equation (1); however, the visibility may be changed in accordance with a different temporal pattern.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- Computer Hardware Design (AREA)
- Human Computer Interaction (AREA)
- Geometry (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Computing Systems (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
FIG. 1 is a block diagram showing a schematic configuration of a driving simulator system to which a virtual space image generation device according to one embodiment of the present invention is applied.
In FIG. 1, a driving simulator system 1 is used, for example, for evaluating the visibility of various objects in the development of vehicles such as automobiles and for the simulated experience of driving a vehicle. This driving simulator system 1 includes a virtual space image generation device 2 according to this embodiment, a sensor 3, and an image forming device 4.
FIG. 7 is a flow chart showing an example of a method of generating a virtual space image by the virtual space image generation device 2.
In the generation device 2 of this embodiment, first, in step S10 of FIG. 7, the viewpoint detection unit 11 uses the output signal of the sensor 3 to detect the position (coordinates on a two-dimensional plane) of the viewpoint of the user U on the image forming device 4. This viewpoint position detection processing by the viewpoint detection unit 11 is repeatedly executed at a predetermined cycle. The positional information of the viewpoint of the user U detected by the viewpoint detection unit 11 is transmitted to the image generation unit 14.
As described above, when the viewpoint of the user U moves, the image generation unit 14 according to this embodiment performs a first process of updating the virtual space image so that the visibility in the viewpoint peripheral region Af of the movement destination rises from the second state to the first state and the transition completion time required for the visibility to transition between the first state and the second state changes based on a predetermined condition, and a second process of updating the virtual space image so that the visibility in the viewpoint peripheral region An of the movement source falls from the first state to the second state.
In equation (1) above, t represents the time [s] after the start of the process that reduces the blurring. F represents the focal length [m] at time t. Do represents the diopter, that is, the reciprocal of the focal length at the start of the viewpoint movement. Dt represents the diopter, that is, the reciprocal of the focal length at the end of the viewpoint movement. e represents Napier's number (the base of the natural logarithm). τ represents a time constant. When the viewpoint moves in the receding direction as in FIG. 8, Do means 1/Fn' and Dt means 1/Ff'.
2 … Virtual space image generation device
3 … Sensor
4 … Image forming device
11 … Viewpoint detection unit
12 … Input unit
13 … Storage unit
14 … Image generation unit
15 … Display control unit
A, An, Af … Viewpoint peripheral region
F, Fn, Fn', Ff, Ff' … Focal length
L1, L2 … Delay time
P, Pn, Pf … Viewpoint position
Pn', Pf' … Viewpoint position in the virtual space
T1, T2 … Transition completion time
U … User
V1 … First state
V2 … Second state
α1, α2 … Transition time
τ … Time constant
Claims (11)
- A virtual space image generation device that generates a virtual space image including a visibility changing region whose visibility changes based on movement of a user's viewpoint, the device comprising an image generation unit that generates the virtual space image in which, when the viewpoint of the user moves, a transition completion time required for the visibility of the visibility changing region to transition between a first state and a second state different from the first state changes based on a predetermined condition.
- The virtual space image generation device according to claim 1, wherein the transition completion time includes a delay time from the completion of the viewpoint movement of the user to the start of the transition, and a transition time from the start of the transition to its completion.
- The virtual space image generation device according to claim 2, wherein the image generation unit is configured to generate the virtual space image in which, with the direction in which the viewpoint of the user moves as the predetermined condition, the transition time changes according to the movement direction of the viewpoint.
- The virtual space image generation device according to claim 3, wherein the image generation unit is configured to generate the virtual space image in which the transition time when the movement direction of the viewpoint is a receding direction is longer than the transition time when the movement direction of the viewpoint is an approaching direction.
- The virtual space image generation device according to claim 4, wherein the image generation unit is configured to generate the virtual space image in which, with a change in the focal length of the user's eye corresponding to a change in the virtual position of the user's viewpoint in the virtual space as the predetermined condition, the transition time changes according to the change in the focal length.
- The virtual space image generation device according to claim 5, wherein the image generation unit is configured to generate the virtual space image in which the delay time for viewpoint movement in the receding direction is at least twice as long as the delay time for viewpoint movement in the approaching direction, and the time constant of the function that determines the transition time for viewpoint movement in the receding direction is at least twice the time constant of the function that determines the transition time for viewpoint movement in the approaching direction.
- The virtual space image generation device according to claim 6, wherein, for viewpoint movement in the receding direction, the delay time is set in a range from 0.05 seconds to 0.2 seconds and the time constant is set in a range from 0.05 to 0.2, and, for viewpoint movement in the approaching direction, the delay time is set in a range from 0 seconds to 0.05 seconds and the time constant is set in a range from 0 to 0.1.
- The virtual space image generation device according to claim 1, wherein the image generation unit is configured to generate the virtual space image in which, with a condition relating to the environment of the virtual space as the predetermined condition, the transition completion time changes based on the condition relating to the environment.
- The virtual space image generation device according to claim 1, wherein the image generation unit is configured to generate the virtual space image in which, with a condition relating to the user as the predetermined condition, the transition completion time changes based on the condition relating to the user.
- The virtual space image generation device according to claim 1, wherein the image generation unit is configured to generate the virtual space image in which the transition completion time changes based on the predetermined condition when the viewpoint of the user moves and the viewpoint peripheral region of the movement destination is located outside the viewpoint peripheral region of the movement source.
- A virtual space image generation method for generating a virtual space image including a visibility changing region whose visibility changes based on movement of a user's viewpoint, the method comprising generating the virtual space image in which, when the viewpoint of the user moves, a transition completion time required for the visibility of the visibility changing region to transition between a first state and a second state different from the first state changes based on a predetermined condition.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020247007964A KR20240047407A (ko) | 2021-09-30 | 2022-03-22 | 가상 공간 화상 생성 장치 및 방법 |
CN202280064034.XA CN117980961A (zh) | 2021-09-30 | 2022-03-22 | 虚拟空间图像生成装置和方法 |
EP22875399.2A EP4411661A1 (en) | 2021-09-30 | 2022-03-22 | Virtual space image generation device and method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021-161258 | 2021-09-30 | ||
JP2021161258A JP2023050903A (ja) | 2021-09-30 | 2021-09-30 | 仮想空間画像生成装置および方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023053516A1 true WO2023053516A1 (ja) | 2023-04-06 |
Family
ID=85782153
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/012969 WO2023053516A1 (ja) | 2021-09-30 | 2022-03-22 | 仮想空間画像生成装置および方法 |
Country Status (5)
Country | Link |
---|---|
EP (1) | EP4411661A1 (ja) |
JP (1) | JP2023050903A (ja) |
KR (1) | KR20240047407A (ja) |
CN (1) | CN117980961A (ja) |
WO (1) | WO2023053516A1 (ja) |
2021
- 2021-09-30 JP JP2021161258A patent/JP2023050903A/ja active Pending
2022
- 2022-03-22 WO PCT/JP2022/012969 patent/WO2023053516A1/ja active Application Filing
- 2022-03-22 KR KR1020247007964A patent/KR20240047407A/ko unknown
- 2022-03-22 EP EP22875399.2A patent/EP4411661A1/en active Pending
- 2022-03-22 CN CN202280064034.XA patent/CN117980961A/zh active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017058493A (ja) * | 2015-09-16 | 2017-03-23 | 株式会社コロプラ | 仮想現実空間映像表示方法、及び、プログラム |
JP2017138701A (ja) | 2016-02-02 | 2017-08-10 | 株式会社コロプラ | 仮想空間画像提供方法、及びそのプログラム |
WO2020049838A1 (ja) * | 2018-09-07 | 2020-03-12 | ソニー株式会社 | 情報処理装置、情報処理方法及びプログラム |
US20210096644A1 (en) * | 2019-09-26 | 2021-04-01 | Apple Inc. | Gaze-independent dithering for dynamically foveated displays |
Also Published As
Publication number | Publication date |
---|---|
JP2023050903A (ja) | 2023-04-11 |
EP4411661A1 (en) | 2024-08-07 |
KR20240047407A (ko) | 2024-04-12 |
CN117980961A (zh) | 2024-05-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6138634B2 (ja) | ヘッドアップディスプレイ装置 | |
US9771083B2 (en) | Cognitive displays | |
US20100073773A1 (en) | Display system for vehicle and display method | |
CN111095363B (zh) | 显示系统和显示方法 | |
JP2018154154A (ja) | 車両の表示システム及び車両の表示システムの制御方法 | |
WO2023053515A1 (ja) | 仮想空間画像生成装置および方法 | |
CN113924518A (zh) | 控制机动车的增强现实平视显示器装置的显示内容 | |
JPWO2020105685A1 (ja) | 表示制御装置、方法、及びコンピュータ・プログラム | |
WO2023053516A1 (ja) | 仮想空間画像生成装置および方法 | |
JP6196840B2 (ja) | ヘッドアップディスプレイ装置 | |
JPWO2018030320A1 (ja) | 車両用表示装置 | |
KR102384426B1 (ko) | 운전 시 눈부심 방지 장치 및 방법 | |
JP2007280203A (ja) | 情報提示装置、自動車、及び情報提示方法 | |
JP2021135933A (ja) | 表示方法、表示装置及び表示システム | |
JP2021056358A (ja) | ヘッドアップディスプレイ装置 | |
WO2021200913A1 (ja) | 表示制御装置、画像表示装置、及び方法 | |
WO2023053517A1 (ja) | 視認性情報取得装置の制御方法および視認性情報取得装置 | |
JP2021160409A (ja) | 表示制御装置、画像表示装置、及び方法 | |
WO2021065700A1 (ja) | 表示制御装置、ヘッドアップディスプレイ装置、及び方法 | |
JP2022113292A (ja) | 表示制御装置、ヘッドアップディスプレイ装置、及び表示制御方法 | |
WO2020045330A1 (ja) | 車両用表示装置、方法、及びコンピュータ・プログラム | |
EP3306373A1 (en) | Method and device to render 3d content on a head-up display | |
JP2022077138A (ja) | 表示制御装置、ヘッドアップディスプレイ装置、及び表示制御方法 | |
JP2023034268A (ja) | 表示制御方法及び表示制御装置 | |
JP2022057051A (ja) | 表示制御装置、虚像表示装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22875399 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20247007964 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202280064034.X Country of ref document: CN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18695067 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202417025782 Country of ref document: IN |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2022875399 Country of ref document: EP Effective date: 20240430 |