WO2019142364A1 - Display control device, display control system, and display control method - Google Patents

Display control device, display control system, and display control method

Info

Publication number
WO2019142364A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
vehicle
display control
information
display
Prior art date
Application number
PCT/JP2018/001815
Other languages
English (en)
Japanese (ja)
Inventor
聖崇 加藤
Original Assignee
三菱電機株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 三菱電機株式会社
Priority to PCT/JP2018/001815
Publication of WO2019142364A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics

Definitions

  • the present invention relates to a display control device, a display control system, and a display control method.
  • HUD: head-up display.
  • A user viewing video on a head-up display may experience so-called video sickness.
  • Symptoms of video sickness include, for example, nausea, dizziness, headache, and eye strain.
  • AR-HUD: a head-up display that supports so-called AR (Augmented Reality) display.
  • In an AR-HUD, videos corresponding to various objects (hereinafter referred to as "objects") present around the moving object are displayed so as to be superimposed at positions near the respective objects in the user's field of view.
  • the positional relationship between the moving object and each object changes as the moving object moves.
  • the number of objects included in the field of view of the user, the range occupied by the object in the field of view of the user, and the like change.
  • the number of images displayed on the AR-HUD, the size of the area in which images on the AR-HUD are displayed, and the like also change.
  • The greater the number of videos displayed on the AR-HUD, the more likely video sickness is to occur.
  • Likewise, the larger the area of the AR-HUD in which video is displayed, the more likely video sickness is to occur. That is, in an AR-HUD, the likelihood of video sickness varies depending on the number of objects included in the user's field of view, the range occupied by those objects in the field of view, and the like.
  • The image display device described in Patent Document 1 determines the likelihood of video sickness using the result of analyzing the contents of the video; that is, it does not use information on objects when making this determination. For this reason, when the image display device of Patent Document 1 is applied to an AR-HUD, there is a problem that the accuracy of determining susceptibility to video sickness is low.
  • The present invention has been made to solve the above problems, and an object of the present invention is to estimate with high accuracy the presence or absence of video sickness in an AR-HUD.
  • A display control device according to the present invention includes an object information acquisition unit that acquires object information including information on one or more objects existing around a moving object; a video sickness estimation unit that uses the object information to execute a process of estimating the presence or absence of video sickness caused by a first video group including one or more videos corresponding to the one or more objects; and a display control unit that uses the object information to execute control for causing a head-up display to display a second video group including at least some of the one or more videos. The display control unit changes the display mode of the second video group according to the result of the estimation process by the video sickness estimation unit.
  • FIG. 3A is an explanatory view showing an example of an object.
  • FIG. 3B is an explanatory view showing an example of a displayable area, an object area, and an object group area.
  • FIG. 4 is an explanatory view showing an example of object information.
  • FIG. 5A is a block diagram showing a hardware configuration of a control device including the display control device according to Embodiment 1 of the present invention.
  • FIG. 5B is a block diagram showing another hardware configuration of a control device including the display control device according to Embodiment 1 of the present invention.
  • FIG. 6 is a flowchart showing the operation of the control device including the display control device according to Embodiment 1 of the present invention.
  • FIG. 9A is an explanatory view showing an example of a displayable area, a display target area, and a first image group.
  • FIG. 9B is an explanatory diagram showing an example of a state in which the second image group is displayed on the head-up display.
  • FIG. 10A is an explanatory view showing another example of the displayable area, the display target area, and the first image group.
  • FIG. 10B is an explanatory view showing another example of the state in which the second image group is displayed on the head-up display.
  • FIG. 11A is an explanatory view showing another example of the displayable area, the display target area, and the first image group.
  • FIG. 11B is an explanatory view showing another example of the state in which the second image group is displayed on the head-up display.
  • FIG. 12A is an explanatory view showing another example of the displayable area, the display target area, and the first image group.
  • FIG. 12B is an explanatory view showing another example of the state in which the second image group is displayed on the head-up display.
  • FIG. 13A is an explanatory view showing another example of the object.
  • FIG. 13B is an explanatory view showing another example of the displayable area and the first image group.
  • FIG. 13C is an explanatory view showing another example of the state in which the second image group is displayed on the head-up display.
  • FIG. 14A is an explanatory view showing another example of the object.
  • FIG. 14B is an explanatory view showing another example of the displayable area and the first image group.
  • FIG. 14C is an explanatory view showing another example of the state in which the second image group is displayed on the head-up display.
  • FIG. 16A is a block diagram showing a system configuration of a display control system according to Embodiment 1 of the present invention.
  • FIG. 16B is a block diagram showing another system configuration of the display control system according to Embodiment 1 of the present invention.
  • FIG. 16C is a block diagram showing another system configuration of the display control system according to Embodiment 1 of the present invention.
  • FIG. 16D is a block diagram showing another system configuration of the display control system according to Embodiment 1 of the present invention.
  • FIG. 17 is a block diagram showing a state in which a control device including the display control device according to Embodiment 2 of the present invention is provided in a vehicle.
  • FIG. 1 is a block diagram showing a state in which a control device including a display control device according to the first embodiment is provided in a vehicle.
  • FIG. 2 is an explanatory view showing a state in which a control device including the display control device according to the first embodiment is provided in a vehicle.
  • the display control apparatus 100 according to the first embodiment will be described with reference to FIGS. 1 and 2.
  • the vehicle 1 has a head-up display 2.
  • The head-up display 2 is configured as, for example, a windshield-type AR-HUD.
  • the head-up display device 3 is provided on the dashboard of the vehicle 1.
  • the head-up display device 3 has a display for displaying a video for AR display, and an optical system for projecting visible light corresponding to the video displayed on the display onto the windshield 4.
  • The display is configured of, for example, a liquid crystal display (LCD) or an organic electroluminescence display (OLED), or a projector such as a DLP (registered trademark) projector or a laser projector.
  • the optical system is configured of, for example, any two or more of a concave mirror, a convex mirror, and a plane mirror.
  • Visible light reflected by the windshield 4 is incident on the eye E of the driver of the vehicle 1 (hereinafter sometimes simply referred to as the "driver"), whereby a virtual image VI corresponding to the video for AR display is viewed by the driver.
  • OP1 indicates the optical path of visible light corresponding to the image for AR display
  • OP2 indicates the optical path of the visible light perceived by the driver
  • P indicates the position of the virtual image VI perceived by the driver.
  • the image displayed on the head-up display 2 includes, for example, an image emphasizing a white line on the road on which the vehicle 1 is traveling.
  • This image is displayed, for example, in a state of being superimposed on a position near the white line in the driver's field of vision.
  • This video is for presenting the presence of the white line to the driver of the vehicle 1 to prevent the vehicle 1 from deviating from the lane in which the vehicle is traveling.
  • the image displayed on the head-up display 2 includes, for example, an image emphasizing an obstacle present in a lane in which the vehicle 1 is traveling. This image is displayed, for example, in a state of being superimposed at a position near the obstacle in the driver's field of vision. This image is for urging the driver of the vehicle 1 to pay attention to the obstacle to prevent the vehicle 1 from colliding with the obstacle.
  • the image displayed on the head-up display 2 includes, for example, an image showing a traveling route being guided by a navigation system (not shown) for the vehicle 1.
  • This image includes, for example, an arrow-shaped image indicating the direction in which the vehicle 1 should travel, and an image for guiding the lane in which the vehicle 1 should travel.
  • The video displayed on the head-up display 2 includes, for example, a video emphasizing another vehicle traveling in front of the vehicle 1 (hereinafter referred to as the "preceding vehicle"). More specifically, when the vehicle 1 is traveling under so-called "adaptive cruise control", it includes a video emphasizing the preceding vehicle to be followed by the vehicle 1. This video is for making the driver of the vehicle 1 recognize the preceding vehicle to be followed.
  • the video displayed on the head-up display 2 includes, for example, a video indicating the inter-vehicle distance between the vehicle 1 and the preceding vehicle.
  • This video is displayed, for example, superimposed on the lane between the vehicle 1 and the preceding vehicle in the driver's field of view. This video is for making the driver of the vehicle 1 recognize the inter-vehicle distance between the vehicle 1 and the preceding vehicle.
  • the video displayed on the head-up display 2 includes, for example, a video indicating information on a building included in the front scenery of the vehicle 1.
  • the vehicle 1 has a camera 5 for imaging outside the vehicle and a sensor 6 for obstacle detection.
  • the camera 5 is configured by, for example, a so-called "front camera”.
  • The sensor 6 is configured by, for example, at least one of a millimeter-wave radar sensor, a LiDAR sensor, or an ultrasonic sensor provided at the front end of the vehicle 1.
  • the object information generation unit 21 executes an image recognition process on an image captured by the camera 5.
  • The object information generation unit 21 detects various objects existing around the vehicle 1 (more specifically, in front of the vehicle 1), that is, the objects, using the result of the image recognition process and the detection values from the sensor 6.
  • the object information generation unit 21 generates information on the detected object (hereinafter referred to as “object information”) using the result of the image recognition process and the detection value by the sensor 6.
  • Hereinafter, the number of objects detected by the object information generation unit 21 is denoted by "N"; N is an integer of 1 or more.
  • the objects to be detected by the object information generation unit 21 are, for example, obstacles, white lines on roads, signs, traffic lights, buildings, and the like.
  • the obstacles include, for example, other vehicles or pedestrians.
  • the white line of the road includes, for example, a center line or a boundary between a road and a roadside.
  • the signs include, for example, guide signs or traffic signs.
  • the building includes, for example, a gas station.
  • FIG. 3A shows an example of a state in which the front scenery of the vehicle 1 is viewed through the windshield 4.
  • In the example of FIG. 3A, it is assumed that seven objects O1 to O7 are detected by the object information generation unit 21.
  • Objects O1 to O3 correspond to other vehicles, object O4 corresponds to a guide sign, object O5 corresponds to a traffic sign, and objects O6 and O7 correspond to the center line.
  • Hereinafter, the area A1 in which the head-up display 2 (more specifically, the display in the head-up display device 3) can display video is referred to as the "displayable area".
  • an area corresponding to each object detected by the object information generation unit 21 is referred to as an "object area”.
  • an area including all object areas is referred to as an “object group area”.
  • the shape of the displayable area A1 is a shape corresponding to the shape of the windshield 4 and is, for example, a rectangular shape.
  • the shape of each object area is, for example, rectangular.
  • the shape of the object group area is, for example, rectangular.
  • FIG. 3B shows an example of the displayable area A1, the seven object areas OA1 to OA7 corresponding to the seven objects O1 to O7, the horizontal width Rh of the object group area, and the vertical width Rv of the object group area.
  • The horizontal width Rh of the object group area is a value corresponding to the distance between the left end of the object area OA1 and the right end of the object area OA5.
  • The vertical width Rv of the object group area is a value corresponding to the distance between the upper end of the object area OA4 and the lower end of the object area OA6.
  • FIG. 4 shows an example of the object information generated by the object information generation unit 21 in the state shown in FIG. 3A.
  • The object information includes the number of objects detected by the object information generation unit 21 (that is, N), an identifier assigned to each object ("ID" in the figure), the type corresponding to each object, the position of each object, and video data ("option" in the figure) representing the video corresponding to each object.
  • "p1, p2, p3, p4" indicate the position coordinates of the four corners of each of the seven object areas OA1 to OA7 in the displayable area A1.
  • "z001" to "z007" indicate the distances between the vehicle 1 and the seven objects O1 to O7, respectively. These distances are, for example, calculated by the object information generation unit 21 using the detection values from the sensor 6. Alternatively, for example, these distances are measured by the object information generation unit 21 from the image captured by the camera 5 using a method such as the TOF (Time of Flight) method or triangulation (so-called "stereo vision").
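  • As a concrete picture of this per-object record, the following is a minimal Python sketch; the field names (object_id, kind, corners, distance_m, video_data) are hypothetical, since the patent prescribes no particular data format:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DetectedObject:
    # One row of the object information of FIG. 4 (field names hypothetical).
    object_id: int                             # identifier ("ID" in the figure)
    kind: str                                  # type, e.g. "vehicle", "guide_sign", "center_line"
    corners: Tuple[Tuple[float, float], ...]   # p1..p4: corners of the object area within A1
    distance_m: float                          # distance from vehicle 1 ("z001".."z007")
    video_data: bytes                          # data for the corresponding video ("option" in the figure)

# The object information as a whole carries the detected objects;
# N is simply the length of this list.
object_info: List[DetectedObject] = []
N = len(object_info)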
  • the object information acquisition unit 31 acquires object information generated by the object information generation unit 21.
  • the object information acquisition unit 31 outputs the acquired object information to the video sickness estimation unit 32 and the display control unit 33.
  • Hereinafter, the video group including the videos corresponding to the N objects detected by the object information generation unit 21, that is, the video group subject to the estimation process by the video sickness estimation unit 32, is referred to as the "first video group".
  • The video sickness estimation unit 32 uses the object information output from the object information acquisition unit 31 to execute a process of estimating the presence or absence of video sickness caused by the first video group if the first video group were displayed on the head-up display 2.
  • A specific example of the estimation process by the video sickness estimation unit 32 will be described later with reference to the flowchart of FIG. 7.
  • The display control unit 33 uses the object information output from the object information acquisition unit 31 to execute control for causing the head-up display 2 to display a video group including at least some of the videos corresponding to the N objects detected by the object information generation unit 21 (hereinafter referred to as the "second video group").
  • Hereinafter, the number of objects corresponding to the videos included in the second video group is denoted by N′.
  • In addition, the area A2 in the displayable area A1 in which the second video group is to be displayed is referred to as the "display target area".
  • The display control unit 33 changes the display mode of the second video group according to the result of the estimation process by the video sickness estimation unit 32. Specifically, for example, the number N′ of objects corresponding to the videos included in the second video group, the horizontal width Rh′ of the display target area A2, and the vertical width Rv′ of the display target area A2 are varied according to the result of the estimation process. A specific example of control by the display control unit 33 will be described later with reference to the flowchart of FIG. 8.
  • the object information acquisition unit 31, the motion sickness estimation unit 32, and the display control unit 33 constitute a main part of the display control apparatus 100. Further, the object information generation unit 21, the object information acquisition unit 31, the motion sickness estimation unit 32, and the display control unit 33 constitute a main part of the control device 7.
  • the control device 7 is configured by a computer, and the computer has a processor 41 and a memory 42.
  • the memory 42 stores programs for causing the computer to function as an object information generation unit 21, an object information acquisition unit 31, a motion sickness estimation unit 32, and a display control unit 33.
  • the processor 41 reads out and executes the program stored in the memory 42, whereby the functions of the object information generation unit 21, the object information acquisition unit 31, the video sickness estimation unit 32, and the display control unit 33 are realized.
  • The processor 41 is, for example, a central processing unit (CPU), a graphics processing unit (GPU), a microprocessor, a microcontroller, or a digital signal processor (DSP).
  • The memory 42 is, for example, a semiconductor memory such as a random access memory (RAM), a read only memory (ROM), a flash memory, an erasable programmable read only memory (EPROM), or an electrically erasable programmable read only memory (EEPROM); alternatively, an optical disk or a magneto-optical disk is used.
  • the functions of the object information generation unit 21, the object information acquisition unit 31, the video sickness estimation unit 32, and the display control unit 33 may be realized by a dedicated processing circuit 43.
  • The processing circuit 43 is, for example, an application specific integrated circuit (ASIC), a programmable logic device (PLD), a field-programmable gate array (FPGA), a system-on-a-chip (SoC), or a system LSI (Large-Scale Integration).
  • Alternatively, some of the functions of the object information generation unit 21, the object information acquisition unit 31, the video sickness estimation unit 32, and the display control unit 33 may be realized by the processor 41 and the memory 42, and the remaining functions may be realized by the processing circuit 43.
  • First, in step ST1, the object information acquisition unit 31 acquires the object information generated by the object information generation unit 21.
  • the object information acquisition unit 31 outputs the acquired object information to the video sickness estimation unit 32 and the display control unit 33.
  • Next, in step ST2, the video sickness estimation unit 32 uses the object information output from the object information acquisition unit 31 to execute a process of estimating the presence or absence of video sickness caused by the first video group if the first video group were displayed on the head-up display 2. A specific example of the estimation process in step ST2 will be described later with reference to the flowchart of FIG. 7.
  • Next, in step ST3, the display control unit 33 executes control for causing the head-up display 2 to display the second video group using the object information output from the object information acquisition unit 31.
  • the display control unit 33 is configured to make the display mode of the second video group different according to the result of the estimation process by the video sickness estimation unit 32.
  • A specific example of the control in step ST3 will be described later with reference to the flowchart of FIG. 8.
  • First, in step ST11, the video sickness estimation unit 32 sets a threshold Rh_th to be compared with the horizontal width Rh of the object group area, a threshold Rv_th to be compared with the vertical width Rv of the object group area, and a threshold N_th to be compared with the number N of objects.
  • The threshold Rh_th is set to, for example, a value equal to the horizontal width of the displayable area A1, half the horizontal width of the displayable area A1, or a quarter of the horizontal width of the displayable area A1.
  • Similarly, the threshold Rv_th is set to, for example, a value equal to the vertical width of the displayable area A1, half the vertical width of the displayable area A1, or a quarter of the vertical width of the displayable area A1.
  • the threshold N_th is set to, for example, an integer of 1 or more.
  • Next, in step ST12, the video sickness estimation unit 32 calculates the horizontal width Rh of the object group area and the vertical width Rv of the object group area using the object information output from the object information acquisition unit 31.
  • Next, in step ST13, the video sickness estimation unit 32 compares Rh calculated in step ST12 with Rh_th set in step ST11. If Rh is equal to or less than Rh_th ("NO" in step ST13), the video sickness estimation unit 32 sets Rh′ to the same value as Rh in step ST14. On the other hand, if Rh exceeds Rh_th ("YES" in step ST13), the video sickness estimation unit 32 sets Rh′ to the same value as Rh_th in step ST15.
  • Next, in step ST16, the video sickness estimation unit 32 compares Rv calculated in step ST12 with Rv_th set in step ST11. If Rv is equal to or less than Rv_th ("NO" in step ST16), the video sickness estimation unit 32 sets Rv′ to the same value as Rv in step ST17. On the other hand, if Rv exceeds Rv_th ("YES" in step ST16), the video sickness estimation unit 32 sets Rv′ to the same value as Rv_th in step ST18.
  • Next, in step ST19, the video sickness estimation unit 32 compares N indicated by the object information with N_th set in step ST11. If N is equal to or less than N_th ("NO" in step ST19), the video sickness estimation unit 32 sets N′ to the same value as N in step ST20. On the other hand, if N exceeds N_th ("YES" in step ST19), the video sickness estimation unit 32 sets N′ to the same value as N_th in step ST21.
  • Finally, the video sickness estimation unit 32 outputs the value of Rh′ set in step ST14 or ST15, the value of Rv′ set in step ST17 or ST18, and the value of N′ set in step ST20 or ST21 to the display control unit 33.
  • When Rh′ = Rh, Rv′ = Rv, and N′ = N, the values output by the video sickness estimation unit 32 indicate that the result of the estimation process is "absence" of video sickness caused by the first video group. Otherwise, the output values indicate that the result of the estimation process is "presence" of video sickness caused by the first video group.
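  • In code form, steps ST13 to ST21 amount to clamping Rh, Rv, and N to their thresholds. The following is a minimal Python sketch under that reading; the explicit boolean flag is an addition for illustration, since the flowchart encodes the result implicitly in whether any value was clamped:

```python
def estimate_video_sickness(rh: float, rv: float, n: int,
                            rh_th: float, rv_th: float, n_th: int):
    """Sketch of steps ST13 to ST21 of FIG. 7."""
    rh_p = min(rh, rh_th)   # steps ST13-ST15: Rh' = Rh or Rh_th
    rv_p = min(rv, rv_th)   # steps ST16-ST18: Rv' = Rv or Rv_th
    n_p = min(n, n_th)      # steps ST19-ST21: N' = N or N_th
    # "Presence" of video sickness corresponds to any value being clamped.
    sickness_expected = (rh_p, rv_p, n_p) != (rh, rv, n)
    return rh_p, rv_p, n_p, sickness_expected
```

  • The worked examples below (FIGS. 9 to 12) follow directly from this clamping.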
  • Next, a specific example of the control by the display control unit 33 in step ST3 will be described with reference to the flowchart of FIG. 8.
  • First, in step ST31, the display control unit 33 acquires the values of Rh′, Rv′, and N′ output from the video sickness estimation unit 32.
  • Next, in step ST32, the display control unit 33 sets a display target area A2 with a size of Rh′ × Rv′.
  • At this time, the display control unit 33 sets the position of the display target area A2 in the displayable area A1. Specifically, for example, the display control unit 33 sets the position coordinates of the central portion C of the display target area A2 in the displayable area A1. The position of the display target area A2 in the displayable area A1 may also be set by an operation input to an operation input device (not shown). That is, the position of the display target area A2 in the displayable area A1 can be set to an arbitrary position.
  • Next, in step ST33, when N′ < N, the display control unit 33 selects N′ objects based on priorities according to the types of the individual objects. These priorities may be preset in the display control unit 33, or may be set by an operation input to an operation input device (not shown).
  • For example, the priorities are set in descending order: objects corresponding to other vehicles, objects corresponding to the center line, objects corresponding to traffic signs, and objects corresponding to guide signs. That is, the priority of objects corresponding to other vehicles is set to the highest, and the priority of objects corresponding to guide signs is set to the lowest. A sketch of this selection follows.
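  • The following is a minimal Python sketch of the selection in step ST33, reusing the DetectedObject record sketched above; the numeric ranks and the kind labels are hypothetical and mirror the example priorities (other vehicles highest, guide signs lowest):

```python
# Smaller rank = higher priority; labels follow the example above.
PRIORITY = {"vehicle": 0, "center_line": 1, "traffic_sign": 2, "guide_sign": 3}

def select_objects(objects, n_p):
    """Keep the N' highest-priority objects (step ST33).

    Unknown kinds are ranked last; ties keep their original order,
    since Python's sort is stable."""
    ranked = sorted(objects, key=lambda o: PRIORITY.get(o.kind, len(PRIORITY)))
    return ranked[:n_p]
```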
  • Next, in step ST34, the display control unit 33 generates the videos corresponding to the objects selected in step ST33, using the video data in the object information output from the object information acquisition unit 31.
  • That is, the display control unit 33 generates N′ videos corresponding one-to-one to the N′ objects.
  • Each of the N′ videos is, for example, a dotted-line rectangular frame video.
  • The size of each of the N′ videos is, for example, equal to the size of the object area of the corresponding object among the N′ objects.
  • Each of the N′ videos is a video for marking the corresponding one of the N′ objects.
  • Next, in step ST35, the display control unit 33 executes control for causing the head-up display 2 to display, among the videos generated in step ST34 (that is, the second video group), those located within the display target area A2 set in step ST32.
  • Information indicating an estimated value of the position of the driver's eyes in real space is stored in advance in the display control unit 33. This estimated value is estimated based on, for example, the position of the driver's seat in the cabin of the vehicle 1.
  • The display control unit 33 uses the object information output by the object information acquisition unit 31 to calculate the position of each of the N′ objects in real space.
  • Based on the positional relationship between the driver's eyes and each of the N′ objects, the display control unit 33 sets the position of each of the N′ videos in the displayable area A1 so that, in the driver's field of view, each of the N′ objects is marked by the corresponding one of the N′ videos.
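  • The positioning in step ST35 amounts to projecting each object onto the display plane along the eye-object line of sight. The following is a minimal Python sketch under the simplifying assumption of a flat virtual-image plane a fixed distance ahead of the eye; the patent only states that the positions are set from the eye-object geometry:

```python
def project_to_display(eye, obj, plane_depth):
    """Intersect the ray from the driver's eye to an object with a
    vertical plane plane_depth metres ahead of the eye, yielding the
    (x, y) position at which the marking video should be drawn so
    that it overlaps the object in the driver's field of view.
    eye and obj are (x, y, z) points; z points ahead of the vehicle,
    and the object is assumed to be ahead of the eye (obj z > eye z)."""
    ex, ey, ez = eye
    ox, oy, oz = obj
    t = plane_depth / (oz - ez)  # ray parameter at the plane
    return ex + t * (ox - ex), ey + t * (oy - ey)
```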
  • FIG. 9A shows an example of the first video group in this case.
  • The first video group includes seven videos V1 to V7 corresponding one-to-one to the seven objects O1 to O7.
  • Each of the seven videos V1 to V7 is a dotted-line rectangular frame video.
  • The size of each of the seven videos V1 to V7 is equal to the size of the object area of the corresponding one of the seven objects O1 to O7.
  • Each of the seven videos V1 to V7 is a video for marking the corresponding one of the seven objects O1 to O7.
  • It is assumed that the threshold Rh_th is set to the same value as the horizontal width of the displayable area A1, and that the threshold Rv_th is set to the same value as the vertical width of the displayable area A1 (step ST11).
  • FIG. 9A shows an example of the display target area A2 in this case.
  • In step ST32, for example, the display target area A2 shown in FIG. 9A is set.
  • In step ST34, all seven videos V1 to V7 are generated.
  • In step ST35, among the seven videos V1 to V7, those located within the display target area A2 (that is, the second video group) are displayed on the head-up display 2.
  • As a result, the second video group displayed on the head-up display 2 is the same as the first video group shown in FIG. 9A (see FIG. 9B).
  • FIG. 10A shows an example of the first video group in this case.
  • The first video group includes seven videos V1 to V7 corresponding one-to-one to the seven objects O1 to O7.
  • It is assumed that the threshold Rh_th is set to half the horizontal width of the displayable area A1, the threshold Rv_th is set to half the vertical width of the displayable area A1, and the threshold N_th is set to 8 (step ST11). In this case, since Rh > Rh_th, Rh′ is set to Rh_th (step ST15); since Rv > Rv_th, Rv′ is set to Rv_th (step ST18); and since N ≤ N_th, N′ is set to 7 (step ST20).
  • FIG. 10A shows an example of the display target area A2 in this case.
  • In step ST32, for example, the display target area A2 shown in FIG. 10A is set.
  • In step ST34, all seven videos V1 to V7 are generated.
  • In step ST35, among the seven videos V1 to V7, those located within the display target area A2 (that is, the second video group) are displayed on the head-up display 2.
  • As a result, as shown in FIG. 10B, the second video group displayed on the head-up display 2 is different from the first video group shown in FIG. 10A.
  • FIG. 11A shows an example of the first video group in this case.
  • The first video group includes seven videos V1 to V7 corresponding one-to-one to the seven objects O1 to O7.
  • It is assumed that the threshold Rh_th is set to half the horizontal width of the displayable area A1, the threshold Rv_th is set to half the vertical width of the displayable area A1, and the threshold N_th is set to 3 (step ST11). In this case, since Rh > Rh_th, Rh′ is set to Rh_th (step ST15); since Rv > Rv_th, Rv′ is set to Rv_th (step ST18); and since N > N_th, N′ is set to 3 (step ST21).
  • FIG. 11A shows an example of the display target area A2 in this case.
  • In step ST32, for example, the display target area A2 shown in FIG. 11A is set.
  • the display target area A2 shown in FIG. 11A is different from the display target area A2 shown in FIG. 10A in the position coordinates of the central portion C.
  • In step ST33, the display control unit 33 selects three of the seven objects O1 to O7.
  • For example, the display control unit 33 selects the three objects O1 to O3 corresponding to other vehicles based on the priorities.
  • In step ST34, of the seven videos V1 to V7, the three videos V1 to V3 are generated.
  • In step ST35, among the three videos V1 to V3, those located within the display target area A2 (that is, the second video group) are displayed on the head-up display 2. That is, the videos V6 and V7 for marking the center line are excluded from the display targets of the head-up display 2 even though they would be located within the display target area A2.
  • As a result, as shown in FIG. 11B, the second video group displayed on the head-up display 2 is different from the first video group shown in FIG. 11A.
  • FIG. 12A shows an example of the first video group in this case.
  • The first video group includes seven videos V1 to V7 corresponding one-to-one to the seven objects O1 to O7.
  • It is assumed that the threshold Rh_th is set to half the horizontal width of the displayable area A1, the threshold Rv_th is set to a quarter of the vertical width of the displayable area A1, and the threshold N_th is set to 8 (step ST11). In this case, since Rh > Rh_th, Rh′ is set to Rh_th (step ST15); since Rv > Rv_th, Rv′ is set to Rv_th (step ST18); and since N ≤ N_th, N′ is set to 7 (step ST20).
  • FIG. 12A shows an example of the display target area A2 in this case.
  • In step ST32, for example, the display target area A2 shown in FIG. 12A is set.
  • In step ST34, all seven videos V1 to V7 are generated.
  • In step ST35, among the seven videos V1 to V7, those located within the display target area A2 (that is, the second video group) are displayed on the head-up display 2.
  • As a result, as shown in FIG. 12B, the second video group displayed on the head-up display 2 is different from the first video group shown in FIG. 12A.
  • The object information may also indicate the motion (more specifically, the moving speed and the moving amount) of the video corresponding to each object.
  • In this case, the video sickness estimation unit 32 may estimate the presence or absence of video sickness caused by the first video group by comparing the moving speed and moving amount of each video with predetermined thresholds. The video sickness estimation unit 32 may also set the values of Rh′, Rv′, and N′ according to the result of this comparison.
  • The video sickness estimation unit 32 may store the set thresholds.
  • The video sickness estimation unit 32 may then set new thresholds based on the stored thresholds in step ST11 from the next time onward. Further, the thresholds in the video sickness estimation unit 32 may be set to different values for each driver.
  • the threshold value in the motion sickness estimation unit 32 may be set by the operation input to the operation input device (not shown). This operation may be performed by the driver of the vehicle 1 or by a passenger of the vehicle 1.
  • The priorities used when N′ < N are not limited to the above specific example.
  • The priority of each object may be set arbitrarily; for example, a higher priority may be set for an object of higher importance to the driver of the vehicle 1.
  • The video for marking an individual object is not limited to a dotted-line rectangular frame video.
  • The video for marking an individual object may be any video that can be drawn by existing CG (Computer Graphics). For example, it may be a linear video along the edges of the individual object, a video in which the area surrounded by such lines is filled with a predetermined color, or a video using α blending. It may also be a video including text or an icon.
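  • For reference, α blending mixes each marking-video pixel with the background pixel as out = α·src + (1 − α)·dst. A one-line Python sketch of this standard CG operation (not specific to this patent):

```python
def alpha_blend(src, dst, alpha):
    """Blend a marking-video pixel over the scene pixel, alpha in [0, 1].
    src and dst are per-channel tuples, e.g. (r, g, b)."""
    return tuple(alpha * s + (1.0 - alpha) * d for s, d in zip(src, dst))
```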
  • the video corresponding to each object is not limited to the video for marking.
  • Referring to FIG. 13 and FIG. 14, other examples of the video corresponding to each object will be described.
  • For example, as shown in FIG. 13A, it is assumed that eight objects O1 to O8 are detected by the object information generation unit 21.
  • The object O8 corresponds to the lane to which the vehicle 1 is being guided by a navigation system (not shown), that is, the lane in which the vehicle 1 should travel.
  • In this case, the first video group includes eight videos V1 to V8 corresponding one-to-one to the eight objects O1 to O8 (see FIG. 13B).
  • The video V8 corresponding to the object O8 may be a video for guiding the lane (more specifically, triangular videos arranged along the lane).
  • FIG. 13C shows an example of the second video group displayed on the head-up display 2 in this case.
  • Alternatively, for example, as shown in FIG. 14A, it is assumed that eight objects O1, O2, O4 to O7, O9, and O10 are detected by the object information generation unit 21.
  • The object O9 corresponds to another vehicle.
  • The object O10 corresponds to a building existing in front of the vehicle 1, more specifically, a gas station.
  • In this case, the first video group includes eight videos V1, V2, V4 to V7, V9, and V10 corresponding one-to-one to the eight objects O1, O2, O4 to O7, O9, and O10 (see FIG. 14B).
  • The video V10 corresponding to the object O10 may be a video containing text indicating information about the building (more specifically, text indicating that the building is a gas station).
  • FIG. 14C shows an example of the second video group displayed on the head-up display 2 in this case.
  • The display mode changed by the display control unit 33 may be any mode related to the likelihood of video sickness in the AR-HUD; it is not limited to the number N′ of objects corresponding to the videos included in the second video group and the size of the display target area A2 (Rh′ × Rv′).
  • the display control unit 33 may change the position coordinates of the central portion C of the display target area A2 according to the result of the estimation process by the video sickness estimation unit 32.
  • For example, the display control unit 33 may vary the motion (more specifically, the moving speed and the moving amount) of the individual videos included in the second video group according to the result of the estimation process by the video sickness estimation unit 32.
  • That is, the display control unit 33 may vary at least one of the number N′ of objects corresponding to the videos included in the second video group, the motion (more specifically, the moving speed and moving amount) of the individual videos included in the second video group, the size of the display target area A2 (Rh′ × Rv′), or the position coordinates of the central portion C of the display target area A2, according to the result of the estimation process by the video sickness estimation unit 32.
  • The display control unit 33 may also measure the amount of solar radiation in the cabin of the vehicle 1 using an illuminance sensor (not shown) provided in the vehicle 1, and may adjust the brightness and contrast ratio of the video displayed on the head-up display 2 (that is, the videos included in the second video group) according to the measured amount of solar radiation. Thereby, the legibility of the video displayed on the head-up display 2 can be improved.
  • The main part of the display control system 200 may be configured by the object information acquisition unit 31, the video sickness estimation unit 32, and the display control unit 33.
  • In this case, each of the object information acquisition unit 31, the video sickness estimation unit 32, and the display control unit 33 may be provided in any of an on-vehicle information device 51 that can be mounted on the vehicle 1, a portable information terminal 52 such as a smartphone that can be brought into the vehicle 1, or a server device 53 capable of communicating with the on-vehicle information device 51 or the portable information terminal 52.
  • Each of FIGS. 16A to 16D shows a system configuration of the main part of the display control system 200.
  • any function of the display control system 200 may be realized by cooperation of any two or more of the in-vehicle information device 51, the portable information terminal 52, and the server device 53.
  • the head-up display 2 is not limited to the windshield type, and may be a combiner type.
  • the combiner usually occupies a smaller area in the driver's field of view than the windshield.
  • For this reason, a combiner-type AR-HUD is less likely to cause video sickness than a windshield-type AR-HUD. The display control device 100 and the display control system 200 are therefore particularly suitable for controlling a windshield-type AR-HUD.
  • the head-up display 2 may be provided on a moving body different from the vehicle 1.
  • the head-up display 2 may be provided on any moving object such as a car, a rail car, an aircraft or a ship.
  • As described above, the display control device 100 according to Embodiment 1 includes the object information acquisition unit 31 that acquires object information including information on one or more objects existing around the moving object (vehicle 1), the video sickness estimation unit 32 that uses the object information to execute a process of estimating the presence or absence of video sickness caused by the first video group including one or more videos corresponding to the one or more objects, and the display control unit 33 that uses the object information to execute control for causing the head-up display 2 to display the second video group including at least some of the one or more videos.
  • The display control unit 33 varies the display mode of the second video group according to the result of the estimation process by the video sickness estimation unit 32. By using the object information, the presence or absence of video sickness in the AR-HUD can be estimated with high accuracy. Further, by varying the display mode of the second video group, the occurrence of video sickness can be suppressed while the AR display by the head-up display 2 continues.
  • The display control unit 33 varies, according to the result of the estimation process by the video sickness estimation unit 32, at least one of the number of videos included in the second video group, the motion of each video included in the second video group, or the area of the head-up display 2 in which the second video group is displayed (display target area A2). This makes it possible to suppress the occurrence of video sickness.
  • The display control system 200 according to Embodiment 1 includes the object information acquisition unit 31 that acquires object information including information on one or more objects existing around the moving object (vehicle 1), the video sickness estimation unit 32 that uses the object information to execute a process of estimating the presence or absence of video sickness caused by the first video group including one or more videos corresponding to the one or more objects, and the display control unit 33 that uses the object information to execute control for causing the head-up display 2 to display the second video group including at least some of the one or more videos. The display control unit 33 varies the display mode of the second video group according to the result of the estimation process by the video sickness estimation unit 32. Thereby, the same effects as those of the display control device 100 described above can be obtained.
  • The display control method according to Embodiment 1 includes step ST1 in which the object information acquisition unit 31 acquires object information including information on one or more objects existing around the moving object (vehicle 1), step ST2 in which the video sickness estimation unit 32 uses the object information to execute a process of estimating the presence or absence of video sickness caused by the first video group including one or more videos corresponding to the one or more objects, and step ST3 in which the display control unit 33 uses the object information to execute control for causing the head-up display 2 to display the second video group including at least some of the one or more videos. The display control unit 33 changes the display mode of the second video group according to the result of the estimation process by the video sickness estimation unit 32.
  • Thereby, the same effects as those of the display control device 100 described above can be obtained.
  • FIG. 17 is a block diagram showing a state where a control device including a display control device according to Embodiment 2 is provided in a vehicle.
  • The display control device 100a according to Embodiment 2 will be described with reference to FIG. 17. In FIG. 17, the same blocks as those shown in FIG. 1 are denoted by the same reference signs, and description thereof is omitted.
  • the vehicle 1 has a camera 8 for imaging in the passenger compartment.
  • the camera 8 is configured by, for example, a visible light camera or an infrared camera.
  • the camera 8 is disposed in the front of the vehicle compartment of the vehicle 1 and captures an image of a range including the face of the driver sitting in the driver's seat.
  • the vehicle 1 has a sensor 9.
  • the sensor 9 is configured by a contact or non-contact type biometric sensor.
  • When the sensor 9 is configured as a contact-type biometric sensor, the sensor 9 is provided on, for example, the steering wheel or the driver's seat of the vehicle 1.
  • When the sensor 9 is configured as a non-contact biometric sensor, the sensor 9 is disposed in the cabin of the vehicle 1.
  • the driver information generation unit 22 executes an image recognition process on an image captured by the camera 8.
  • The driver information generation unit 22 generates information on the driver of the vehicle 1 (hereinafter referred to as "driver information") using at least one of the result of the image recognition process or the detection values from the sensor 9.
  • The driver information generated using the result of the image recognition process indicates, for example, at least one of the position of the driver's head, the position of the driver's face, the driver's viewpoint movement amount, the driver's complexion, the driver's degree of eye opening, or the driver's number of blinks.
  • The position of the driver's head and the position of the driver's face are measured, for example, by a method such as the TOF method or the triangulation method.
  • The driver information generated using the detection values from the sensor 9 is, for example, information indicating at least one of the driver's heart rate, pulse rate, blood pressure, body temperature, amount of sweat, or brain waves, that is, biological information.
  • the driver information acquisition unit 34 acquires the driver information generated by the driver information generation unit 22.
  • the driver information acquisition unit 34 outputs the acquired driver information to the video sickness estimation unit 32a.
  • The video sickness estimation unit 32a uses the object information output by the object information acquisition unit 31 and the driver information output by the driver information acquisition unit 34 to execute a process of estimating the presence or absence of video sickness caused by the first video group if the first video group were displayed on the head-up display 2. More specifically, the video sickness estimation unit 32a executes the same process as that described with reference to FIG. 7 in Embodiment 1, but uses the driver information when setting the thresholds Rh_th, Rv_th, and N_th.
  • That is, the video sickness estimation unit 32a determines, using the driver information, whether the driver of the vehicle 1 is in poor physical condition. When it is determined that the driver is in poor physical condition, the video sickness estimation unit 32a sets at least one of the thresholds Rh_th, Rv_th, and N_th to a lower value (for example, 0.5 times the value) than when it is determined that the driver is not in poor physical condition.
  • For example, it is assumed that the driver information indicates the heart rate of the driver.
  • a range of values (hereinafter referred to as a “reference range”, for example, a range of 50 to 90 bpm) including the heart rate (for example, 65 bpm) of the driver at normal times is set in the video sickness estimating unit 32a.
  • The video sickness estimation unit 32a determines whether the heart rate indicated by the driver information is within the reference range.
  • When the heart rate indicated by the driver information is within the reference range, the video sickness estimation unit 32a sets the threshold Rv_th to half the vertical width of the displayable area A1. On the other hand, when the heart rate indicated by the driver information is outside the reference range, the video sickness estimation unit 32a sets the threshold Rv_th to a quarter of the vertical width of the displayable area A1.
  • As the threshold Rh_th decreases, the horizontal width Rh′ of the display target area A2 when Rh > Rh_th also decreases. As the threshold Rv_th decreases, the vertical width Rv′ of the display target area A2 when Rv > Rv_th also decreases. As the threshold N_th decreases, the number N′ of objects corresponding to the videos included in the second video group when N > N_th also decreases. As a result, the occurrence of video sickness can be suppressed even in a state where video sickness is likely to occur due to the driver's poor physical condition.
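  • This use of the driver information can thus be pictured as a pre-scaling of the thresholds before the comparison of FIG. 7. The following is a minimal Python sketch, assuming the heart-rate example above; the 0.5 factor mirrors the "0.5 times the value" mentioned earlier, and scaling all three thresholds is only one of the options the text allows:

```python
def adjust_thresholds(rh_th, rv_th, n_th, heart_rate,
                      ref_range=(50.0, 90.0), factor=0.5):
    """Scale the thresholds down when the driver's heart rate falls
    outside the reference range (poor physical condition assumed),
    so that the display is restricted sooner (step ST11 of FIG. 7)."""
    lo, hi = ref_range
    if not lo <= heart_rate <= hi:
        return rh_th * factor, rv_th * factor, max(1, int(n_th * factor))
    return rh_th, rv_th, n_th
```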
  • The object information acquisition unit 31, the video sickness estimation unit 32a, the display control unit 33, and the driver information acquisition unit 34 constitute the main part of the display control device 100a.
  • Further, the object information generation unit 21, the driver information generation unit 22, the object information acquisition unit 31, the video sickness estimation unit 32a, the display control unit 33, and the driver information acquisition unit 34 constitute the main part of the control device 7a.
  • The hardware configuration of the main part of the control device 7a is the same as that described with reference to FIG. 5 in Embodiment 1. That is, the functions of the driver information generation unit 22, the video sickness estimation unit 32a, and the driver information acquisition unit 34 may be realized by the processor 41 and the memory 42, or may be realized by the processing circuit 43.
  • First, in step ST1, the object information acquisition unit 31 acquires the object information generated by the object information generation unit 21.
  • the object information acquisition unit 31 outputs the acquired object information to the video sickness estimation unit 32a and the display control unit 33.
  • Next, in step ST4, the driver information acquisition unit 34 acquires the driver information generated by the driver information generation unit 22.
  • the driver information acquisition unit 34 outputs the acquired driver information to the video sickness estimation unit 32a.
  • Next, in step ST2a, the video sickness estimation unit 32a uses the object information output by the object information acquisition unit 31 and the driver information output by the driver information acquisition unit 34 to execute a process of estimating the presence or absence of video sickness caused by the first video group if the first video group were displayed on the head-up display 2.
  • The specific example of the estimation process in step ST2a is the same as that described with reference to the flowchart of FIG. 7 in Embodiment 1. However, when setting the thresholds Rh_th, Rv_th, and N_th (that is, when executing the process corresponding to step ST11 shown in FIG. 7), the video sickness estimation unit 32a uses the driver information as described above.
  • Next, in step ST3, the display control unit 33 executes control for causing the head-up display 2 to display the second video group using the object information output from the object information acquisition unit 31.
  • the display control unit 33 is configured to make the display mode of the second image group different according to the result of the estimation process by the video sickness estimation unit 32a.
  • The specific example of the control in step ST3 is the same as that described with reference to the flowchart of FIG. 8 in Embodiment 1.
  • When the heart rate indicated by the driver information is outside the reference range, the video sickness estimation unit 32a may set the threshold Rh_th to a lower value (for example, 0.5 times the value) instead of, or in addition to, setting the threshold Rv_th to a lower value, compared with the case where the heart rate indicated by the driver information is within the reference range.
  • The magnification at this time is not limited to 0.5 times and may be any magnification.
  • Similarly, when the heart rate indicated by the driver information is outside the reference range, the video sickness estimation unit 32a may set the threshold N_th to a lower value than when the heart rate is within the reference range.
  • When the driver information indicates the position of the driver's head or the position of the driver's face, the display control unit 33 may calculate the position of the driver's eyes in real space using these pieces of information. When setting the position of each of the N′ videos in the displayable area A1, the display control unit 33 may use the calculated position of the driver's eyes instead of the information stored in advance (that is, the information indicating the estimated value of the position of the driver's eyes in real space).
  • When the driver information indicates the driver's blood pressure, the video sickness estimation unit 32a may determine whether the blood pressure indicated by the driver information is within a reference range, as in the above example relating to the heart rate. If the blood pressure indicated by the driver information is outside the reference range, the video sickness estimation unit 32a may set at least one of the thresholds Rh_th, Rv_th, and N_th to a lower value than when the blood pressure indicated by the driver information is within the reference range.
  • When the driver information indicates the driver's pulse rate, the video sickness estimation unit 32a may determine whether the pulse rate indicated by the driver information is within a reference range, as in the above example relating to the heart rate.
  • If the pulse rate indicated by the driver information is outside the reference range, the video sickness estimation unit 32a may set at least one of the thresholds Rh_th, Rv_th, and N_th to a lower value than when the pulse rate indicated by the driver information is within the reference range.
  • When the driver information indicates the driver's complexion, the video sickness estimation unit 32a may compare the complexion indicated by the driver information with the driver's complexion at normal times. If the comparison indicates that the driver's face is flushed or pale compared with normal times, the video sickness estimation unit 32a may set at least one of the thresholds Rh_th, Rv_th, and N_th to a lower value.
  • When the driver information indicates the driver's viewpoint movement amount, the video sickness estimation unit 32a may compare the viewpoint movement amount indicated by the driver information with the driver's viewpoint movement amount at normal times, as in the above example relating to the complexion.
  • When the driver information indicates the driver's degree of eye opening, the video sickness estimation unit 32a may compare the degree of eye opening indicated by the driver information with the driver's degree of eye opening at normal times, as in the above example relating to the complexion.
  • That is, the driver information may include any information that can be generated using at least one of the result of the image recognition process on the image captured by the camera 8 or the detection values from the sensor 9, and that makes it possible to determine whether the driver of the vehicle 1 is in poor physical condition.
  • The video sickness estimation unit 32a may use the driver information in any way in the process of estimating the presence or absence of video sickness caused by the first video group; the method of using the driver information is not limited to the above specific examples.
  • The display control device 100a can adopt various modifications similar to those described in Embodiment 1, that is, various modifications similar to those of the display control device 100.
  • The main part of the display control system 200a may be configured by the object information acquisition unit 31, the video sickness estimation unit 32a, the display control unit 33, and the driver information acquisition unit 34.
  • The system configuration of the main part of the display control system 200a is the same as that described with reference to FIG. 16 in Embodiment 1. That is, any function of the display control system 200a may be realized by cooperation of any two or more of the in-vehicle information device 51, the portable information terminal 52, and the server device 53.
  • as described above, the display control device 100a includes the driver information acquisition unit 34 for acquiring driver information including information on the driver of the vehicle 1, and the video sickness estimation unit 32a estimates the presence or absence of video sickness using the object information and the driver information.
  • thereby, when estimating the presence or absence of the occurrence of video sickness, the driver's physical condition, for example, can be considered.
  • as a result, even in a state in which video sickness is likely to occur due to the driver's poor physical condition, it is possible to suppress the occurrence of video sickness.
  • FIG. 20 is a block diagram showing a state where a control device including a display control device according to Embodiment 3 is provided in a vehicle.
  • the display control device 100b according to the third embodiment will be described with reference to FIG. 20. In FIG. 20, blocks similar to the blocks shown in FIG. 1 are assigned the same reference numerals, and descriptions thereof are omitted.
  • the vehicle information generation unit 23 is connected to the in-vehicle network 10.
  • the vehicle information generation unit 23 acquires, via the in-vehicle network 10, information output by various systems (for example, a car navigation system) connected to the in-vehicle network 10 or by various ECUs (Electronic Control Units) connected to the in-vehicle network 10.
  • the vehicle information generation unit 23 generates information on the vehicle 1 (hereinafter referred to as “vehicle information”) using the acquired information.
  • the vehicle information includes, for example, at least one of information indicating the position of the vehicle 1, information indicating the traveling direction of the vehicle 1, information indicating the traveling speed of the vehicle 1, information indicating the acceleration of the vehicle 1, information indicating the vibration frequency of the vehicle 1, time information, information related to various warnings, information related to various control signals (such as a wiper on/off signal, a light lighting signal, a parking signal, and a back signal), or navigation information (such as congestion information, information indicating a facility name, information for guidance, and information indicating a route to a guidance target).
  • the vehicle information acquisition unit 35 acquires vehicle information generated by the vehicle information generation unit 23.
  • the vehicle information acquisition unit 35 outputs the acquired vehicle information to the video sickness estimation unit 32b.
  • the video sickness estimation unit 32b uses the object information output by the object information acquisition unit 31 and the vehicle information output by the vehicle information acquisition unit 35 to estimate the presence or absence of the occurrence of video sickness due to the first video group when the first video group is temporarily displayed on the head-up display 2. More specifically, the video sickness estimation unit 32b executes the same process as that described with reference to FIG. 7 in the first embodiment, and uses the vehicle information to set the thresholds Rh_th, Rv_th, and N_th.
  • the video sickness estimation unit 32b determines, using the vehicle information, whether the driving environment of the vehicle 1 is a driving environment in which video sickness is likely to occur. When it is determined that the driving environment of the vehicle 1 is a driving environment in which video sickness is likely to occur, the video sickness estimation unit 32b sets at least one of the thresholds Rh_th, Rv_th, and N_th to a lower value (for example, 0.5 times the value) than when it determines otherwise.
  • as a specific example, the video sickness estimation unit 32b uses the vehicle information to determine whether the driving environment on the road on which the vehicle 1 is traveling is a driving environment in which video sickness is likely to occur (for example, an environment on a rough road, an environment on a sharp curve, an environment in which rapid acceleration is taking place, or an environment in which rapid deceleration is taking place). When it is determined to be such an environment, the video sickness estimation unit 32b sets at least one of the thresholds Rh_th, Rv_th, and N_th to a lower value (for example, 0.5 times the value) than when it determines otherwise.
  • as another specific example, the video sickness estimation unit 32b uses the vehicle information to calculate the amount of change in the traveling speed of the vehicle 1 within a predetermined time. When the calculated amount of change exceeds a predetermined amount (for example, ±30 kilometers per hour), the video sickness estimation unit 32b sets at least one of the thresholds Rh_th, Rv_th, and N_th to a lower value (for example, 0.5 times the value) than when the calculated amount of change is equal to or less than the predetermined amount.
  • as another specific example, the video sickness estimation unit 32b uses the vehicle information to determine whether the driving environment on the road on which the vehicle 1 is to travel is a driving environment in which video sickness is likely to occur (for example, an environment in which curves continue on a mountain road, or an environment in which acceleration and deceleration continue on a slope). When it is determined to be such an environment, the video sickness estimation unit 32b sets at least one of the thresholds Rh_th, Rv_th, and N_th to a lower value (for example, 0.5 times the value) than when it determines otherwise.
  • as another specific example, the video sickness estimation unit 32b uses the vehicle information to determine whether the curvature of a curve on which the vehicle 1 is to travel exceeds a predetermined value. If the curvature is greater than the predetermined value, the video sickness estimation unit 32b sets at least one of the thresholds Rh_th, Rv_th, and N_th to a lower value (for example, 0.5 times the value) than when the curvature is equal to or less than the predetermined value.
  • as another specific example, the video sickness estimation unit 32b uses the vehicle information to determine whether the driving environment of the vehicle 1 is a driving environment in which video sickness is likely to occur (such as an environment in which a plurality of warnings are output during night travel). When it is determined to be such an environment, the video sickness estimation unit 32b sets at least one of the thresholds Rh_th, Rv_th, and N_th to a lower value (for example, 0.5 times the value) than when it determines otherwise.
  • as the threshold Rh_th decreases, the horizontal width Rh′ of the display target area A2 in the case of Rh > Rh_th also decreases. As the threshold Rv_th decreases, the vertical width Rv′ of the display target area A2 in the case of Rv > Rv_th also decreases. As the threshold N_th decreases, the number N′ of objects corresponding to the images included in the second image group in the case of N > N_th also decreases. As a result, it is possible to suppress the occurrence of video sickness even in a driving environment in which video sickness is likely to occur.
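  • a minimal sketch (in Python) of the vehicle-information-based adjustment above, reusing the Thresholds class from the earlier sketch: the recent speed history and the curvature of the upcoming curve are checked, and the thresholds are scaled down when either indicates a sickness-prone driving environment. The window contents, the curvature limit, and the uniform 0.5 scaling are assumptions of this sketch.

```python
def adjust_for_vehicle(base: Thresholds, recent_speeds_kmh: list[float],
                       upcoming_curvature: float,
                       curvature_limit: float = 0.01) -> Thresholds:
    """Lower the thresholds when the speed change within the recent
    window exceeds 30 km/h (the example amount in the text) or when the
    upcoming curve is sharper than an assumed limit (in 1/m)."""
    speed_change = (max(recent_speeds_kmh) - min(recent_speeds_kmh)
                    if recent_speeds_kmh else 0.0)
    if speed_change > 30.0 or upcoming_curvature > curvature_limit:
        return Thresholds(base.rh_th * 0.5, base.rv_th * 0.5,
                          max(1, int(base.n_th * 0.5)))
    return base
```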
  • the object information acquisition unit 31, the video sickness estimation unit 32b, the display control unit 33, and the vehicle information acquisition unit 35 constitute a main part of the display control device 100b. Further, the object information generation unit 21, the vehicle information generation unit 23, the object information acquisition unit 31, the video sickness estimation unit 32b, the display control unit 33, and the vehicle information acquisition unit 35 constitute a main part of the control device 7b.
  • the hardware configuration of the main part of the control device 7b is the same as that described in the first embodiment with reference to FIG. That is, the functions of the vehicle information generation unit 23, the video sickness estimation unit 32b, and the vehicle information acquisition unit 35 may be realized by the processor 41 and the memory 42, or may be realized by the processing circuit 43.
  • in step ST1, the object information acquisition unit 31 acquires the object information generated by the object information generation unit 21.
  • the object information acquisition unit 31 outputs the acquired object information to the video sickness estimation unit 32b and the display control unit 33.
  • in step ST5, the vehicle information acquisition unit 35 acquires the vehicle information generated by the vehicle information generation unit 23.
  • the vehicle information acquisition unit 35 outputs the acquired vehicle information to the video sickness estimation unit 32b.
  • in step ST2b, the video sickness estimation unit 32b uses the object information output by the object information acquisition unit 31 and the vehicle information output by the vehicle information acquisition unit 35 to execute a process of estimating the presence or absence of video sickness due to the first video group when the first video group is temporarily displayed on the head-up display 2.
  • the specific example of the estimation process in step ST2b is the same as that described in the first embodiment with reference to the flowchart of FIG. 7. However, when setting the thresholds Rh_th, Rv_th, and N_th (that is, when executing the process corresponding to the process of step ST11 shown in FIG. 7), the video sickness estimation unit 32b uses the vehicle information as described above.
  • in step ST3, the display control unit 33 executes control to display the second image group on the head-up display 2 using the object information output from the object information acquisition unit 31.
  • the display control unit 33 is configured to make the display mode of the second video group different according to the result of the estimation process by the video sickness estimation unit 32b.
  • the specific example of control in step ST3 is the same as that described in the first embodiment with reference to the flowchart of FIG.
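  • a minimal sketch (in Python) of one pass through steps ST1, ST5, ST2b, and ST3 described above; the estimator and display_ctrl objects stand in for the video sickness estimation unit 32b and the display control unit 33, and their interfaces are assumptions of this sketch, not the embodiment's actual API.

```python
def display_cycle(object_info_gen, vehicle_info_gen, estimator, display_ctrl):
    """One control cycle of the third embodiment, under assumed interfaces."""
    # ST1: acquire object information and fan it out.
    object_info = object_info_gen.generate()
    # ST5: acquire vehicle information.
    vehicle_info = vehicle_info_gen.generate()
    # ST2b: estimate whether the first video group would cause video
    # sickness, using the vehicle information to set the thresholds.
    sickness_expected = estimator.estimate(object_info, vehicle_info)
    # ST3: display the second video group, varying its display mode
    # according to the estimation result.
    display_ctrl.show_second_group(object_info, restrict=sickness_expected)
```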
  • the vehicle information may be any information regarding the vehicle 1 that can be acquired via the in-vehicle network 10; the content of the vehicle information is not limited to the above specific example.
  • the video sickness estimation unit 32b may be anything that estimates the presence or absence of video sickness due to the first video group using the object information and the vehicle information (more specifically, anything that uses the vehicle information for setting the thresholds Rh_th, Rv_th, and N_th); the content of the estimation process by the video sickness estimation unit 32b is not limited to the above specific example.
  • the display control device 100b can adopt various modifications similar to those described in the first embodiment, that is, various modifications similar to those of the display control device 100.
  • the main part of the display control system 200b may be configured by the object information acquisition unit 31, the video sickness estimation unit 32b, the display control unit 33, and the vehicle information acquisition unit 35.
  • the system configuration of the main part of the display control system 200b is the same as that described in the first embodiment with reference to FIG. That is, any function of the display control system 200b may be realized by cooperation of any two or more of the in-vehicle information device 51, the portable information terminal 52, and the server device 53.
  • the display control device 100b may have a driver information acquisition unit 34 similar to that of the display control device 100a of the second embodiment.
  • the video sickness estimation unit 32b may estimate the presence or absence of the occurrence of video sickness due to the first video group using the object information output by the object information acquisition unit 31, the driver information output by the driver information acquisition unit 34, and the vehicle information output by the vehicle information acquisition unit 35. More specifically, the video sickness estimation unit 32b may use the driver information and the vehicle information for setting the thresholds Rh_th, Rv_th, and N_th. The same applies to the display control system 200b.
  • as described above, the display control device 100b includes the vehicle information acquisition unit 35 that acquires vehicle information including information related to the vehicle 1, and the video sickness estimation unit 32b estimates the presence or absence of video sickness using the object information and the vehicle information. Thereby, when estimating the presence or absence of the occurrence of video sickness, the driving state of the vehicle 1 can be considered, for example. As a result, it is possible to suppress the occurrence of video sickness even in a driving environment in which video sickness is likely to occur.
  • FIG. 23 is a block diagram showing a state in which a control device including a display control device according to Embodiment 4 is provided in a vehicle.
  • the display control device 100c according to the fourth embodiment will be described with reference to FIG. 23.
  • in FIG. 23, the same blocks as the blocks shown in FIG. 1 are assigned the same reference numerals, and descriptions thereof will be omitted.
  • the vehicle 1 has a communication device 11.
  • the communication device 11 includes, for example, a transmitter and a receiver for Internet connection, a transmitter and a receiver for inter-vehicle communication, or a transmitter and a receiver for road-to-vehicle communication.
  • the outside environment information generation unit 24 generates information on the environment outside the vehicle 1 (hereinafter referred to as "outside environment information") using information that the communication device 11 receives from a server device, another vehicle, or a roadside device (none of which are shown).
  • the outside environment information indicates, for example, at least one of the weather around the vehicle 1, the temperature around the vehicle 1, the humidity around the vehicle 1, or the degree of congestion of the road around the vehicle 1.
  • the outside environment information acquisition unit 36 acquires outside environment information generated by the outside environment information generation unit 24.
  • the outside environment information acquisition unit 36 outputs the acquired outside environment information to the video sickness estimation unit 32c.
  • the video sickness estimation unit 32c uses the object information output by the object information acquisition unit 31 and the outside environment information output by the outside environment information acquisition unit 36 to estimate the presence or absence of the occurrence of video sickness due to the first video group when the first video group is temporarily displayed on the head-up display 2. More specifically, the video sickness estimation unit 32c executes the same process as that described with reference to FIG. 7 in the first embodiment, and uses the outside environment information to set the thresholds Rh_th, Rv_th, and N_th.
  • the video sickness estimation unit 32c determines, using the outside environment information, whether the driving environment of the vehicle 1 is a driving environment in which video sickness is likely to occur. When it is determined that the driving environment of the vehicle 1 is a driving environment in which video sickness is likely to occur, the video sickness estimation unit 32c sets at least one of the thresholds Rh_th, Rv_th, and N_th to a lower value (for example, 0.5 times the value) than when it determines otherwise.
  • as a specific example, the video sickness estimation unit 32c uses the outside environment information to determine whether the driving environment of the vehicle 1 is a driving environment in which video sickness is likely to occur (for example, an environment in which many warnings are output due to bad weather such as heavy rain or heavy snow). When it is determined to be such an environment, the video sickness estimation unit 32c sets at least one of the thresholds Rh_th, Rv_th, and N_th to a lower value (for example, 0.5 times the value) than when it determines otherwise.
  • as another specific example, the video sickness estimation unit 32c uses the outside environment information to determine whether the driving environment of the vehicle 1 is a driving environment in which video sickness is likely to occur (for example, an environment in which many other vehicles exist around the vehicle 1 at an intersection and the number N of objects detected by the object information generation unit 21 increases). When it is determined to be such an environment, the video sickness estimation unit 32c sets at least one of the thresholds Rh_th, Rv_th, and N_th to a lower value (for example, 0.5 times the value) than when it determines otherwise.
  • as yet another specific example, the video sickness estimation unit 32c uses the outside environment information to determine whether the driving environment of the vehicle 1 is a driving environment in which video sickness is likely to occur. When it is determined to be such an environment, the video sickness estimation unit 32c sets at least one of the thresholds Rh_th, Rv_th, and N_th to a lower value (for example, 0.5 times the value) than when it determines otherwise.
  • as the threshold Rh_th decreases, the horizontal width Rh′ of the display target area A2 in the case of Rh > Rh_th also decreases. As the threshold Rv_th decreases, the vertical width Rv′ of the display target area A2 in the case of Rv > Rv_th also decreases. As the threshold N_th decreases, the number N′ of objects corresponding to the images included in the second image group in the case of N > N_th also decreases. As a result, it is possible to suppress the occurrence of video sickness even in a driving environment in which video sickness is likely to occur.
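  • a minimal sketch (in Python) of the outside-environment-based adjustment above, again reusing the Thresholds class from the earlier sketch: bad weather or a crowded surrounding raises the estimated sickness risk, so the thresholds are scaled down. The weather category names and the crowd limit are assumptions of this sketch.

```python
def adjust_for_environment(base: Thresholds, weather: str,
                           nearby_vehicle_count: int,
                           crowd_limit: int = 10) -> Thresholds:
    """Lower the thresholds in bad weather or when many surrounding
    vehicles would inflate the number N of detected objects."""
    bad_weather = weather in ("heavy_rain", "heavy_snow")
    crowded = nearby_vehicle_count >= crowd_limit
    if bad_weather or crowded:
        return Thresholds(base.rh_th * 0.5, base.rv_th * 0.5,
                          max(1, int(base.n_th * 0.5)))
    return base
```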
  • the object information acquisition unit 31, the video sickness estimation unit 32c, the display control unit 33, and the outside environment information acquisition unit 36 constitute a main part of the display control device 100c. Further, the object information generation unit 21, the outside environment information generation unit 24, the object information acquisition unit 31, the video sickness estimation unit 32c, the display control unit 33, and the outside environment information acquisition unit 36 constitute a main part of the control device 7c.
  • the hardware configuration of the main part of the control device 7c is the same as that described in the first embodiment with reference to FIG. That is, the functions of the outside environment information generation unit 24, the video sickness estimation unit 32c, and the outside environment information acquisition unit 36 may be realized by the processor 41 and the memory 42, or may be realized by the processing circuit 43.
  • in step ST1, the object information acquisition unit 31 acquires the object information generated by the object information generation unit 21.
  • the object information acquisition unit 31 outputs the acquired object information to the video sickness estimation unit 32c and the display control unit 33.
  • in step ST6, the outside environment information acquisition unit 36 acquires the outside environment information generated by the outside environment information generation unit 24.
  • the outside environment information acquisition unit 36 outputs the acquired outside environment information to the video sickness estimation unit 32c.
  • in step ST2c, the video sickness estimation unit 32c uses the object information output by the object information acquisition unit 31 and the outside environment information output by the outside environment information acquisition unit 36 to execute a process of estimating the presence or absence of video sickness due to the first video group when the first video group is temporarily displayed on the head-up display 2.
  • the specific example of the estimation process in step ST2c is the same as that described in the first embodiment with reference to the flowchart of FIG. 7. However, when setting the thresholds Rh_th, Rv_th, and N_th (that is, when executing the process corresponding to the process of step ST11 shown in FIG. 7), the video sickness estimation unit 32c uses the outside environment information as described above.
  • in step ST3, the display control unit 33 executes control to display the second image group on the head-up display 2 using the object information output from the object information acquisition unit 31.
  • the display control unit 33 is configured to make the display mode of the second video group different according to the result of the estimation process by the video sickness estimation unit 32c.
  • the specific example of control in step ST3 is the same as that described in the first embodiment with reference to the flowchart of FIG.
  • the outside environment information may be any information regarding the environment outside the vehicle 1 that can be received by the communication device 11; the content of the outside environment information is not limited to the above specific example.
  • the video sickness estimation unit 32c may be anything that estimates the presence or absence of video sickness due to the first video group using the object information and the outside environment information (more specifically, anything that uses the outside environment information for setting the thresholds Rh_th, Rv_th, and N_th); the content of the estimation process by the video sickness estimation unit 32c is not limited to the above specific example.
  • the display control device 100c can adopt various modifications similar to those described in the first embodiment, that is, various modifications similar to those of the display control device 100.
  • the main part of the display control system 200c may be configured by the object information acquisition unit 31, the video sickness estimation unit 32c, the display control unit 33, and the outside environment information acquisition unit 36.
  • the system configuration of the main part of the display control system 200c is the same as that described in the first embodiment with reference to FIG. That is, any function of the display control system 200c may be realized by cooperation of any two or more of the in-vehicle information device 51, the portable information terminal 52, and the server device 53.
  • the display control device 100c may have a driver information acquisition unit 34 similar to that of the display control device 100a of the second embodiment.
  • the video sickness estimation unit 32c may estimate the presence or absence of video sickness due to the first video group using the object information output by the object information acquisition unit 31, the driver information output by the driver information acquisition unit 34, and the outside environment information output by the outside environment information acquisition unit 36. More specifically, the video sickness estimation unit 32c may use the driver information and the outside environment information for setting the thresholds Rh_th, Rv_th, and N_th. The same applies to the display control system 200c.
  • the display control device 100c may have the same vehicle information acquisition unit 35 as the display control device 100b of the third embodiment.
  • the video sickness estimation unit 32c may estimate the presence or absence of the occurrence of video sickness due to the first video group using the object information output by the object information acquisition unit 31, the vehicle information output by the vehicle information acquisition unit 35, and the outside environment information output by the outside environment information acquisition unit 36. More specifically, the video sickness estimation unit 32c may use the vehicle information and the outside environment information for setting the thresholds Rh_th, Rv_th, and N_th. The same applies to the display control system 200c.
  • the display control device 100c may have both a driver information acquisition unit 34 similar to that of the display control device 100a of the second embodiment and a vehicle information acquisition unit 35 similar to that of the display control device 100b of the third embodiment.
  • the video sickness estimation unit 32c may estimate the presence or absence of the occurrence of video sickness due to the first video group using the object information output by the object information acquisition unit 31, the driver information output by the driver information acquisition unit 34, the vehicle information output by the vehicle information acquisition unit 35, and the outside environment information output by the outside environment information acquisition unit 36. More specifically, the video sickness estimation unit 32c may use the driver information, the vehicle information, and the outside environment information for setting the thresholds Rh_th, Rv_th, and N_th. The same applies to the display control system 200c.
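  • a minimal sketch (in Python) of how the three adjustments from the earlier sketches could be composed when driver information, vehicle information, and outside environment information are all available; applying them in sequence lets multiple risk factors compound into stricter thresholds. The keyword-argument interface and the example values are assumptions of this sketch.

```python
def combined_thresholds(base: Thresholds, *, heart_rate: float,
                        blood_pressure: float, recent_speeds_kmh: list[float],
                        upcoming_curvature: float, weather: str,
                        nearby_vehicle_count: int) -> Thresholds:
    """Chain the driver, vehicle, and environment adjustments so that
    each risk factor can further lower the thresholds."""
    t = adjust_for_driver(base, heart_rate, blood_pressure)
    t = adjust_for_vehicle(t, recent_speeds_kmh, upcoming_curvature)
    t = adjust_for_environment(t, weather, nearby_vehicle_count)
    return t

# Example: a raised heart rate and a 35 km/h speed change both apply,
# so the nominal thresholds are scaled down twice.
final = combined_thresholds(Thresholds(0.4, 0.3, 5), heart_rate=110.0,
                            blood_pressure=120.0,
                            recent_speeds_kmh=[40.0, 75.0],
                            upcoming_curvature=0.002, weather="clear",
                            nearby_vehicle_count=3)
```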
  • as described above, the display control device 100c includes the outside environment information acquisition unit 36 that acquires outside environment information including information related to the environment outside the vehicle 1, and the video sickness estimation unit 32c estimates the presence or absence of video sickness using the object information and the outside environment information.
  • thereby, when estimating the presence or absence of the occurrence of video sickness, the driving environment of the vehicle 1 can be considered, for example.
  • as a result, even in a driving environment in which video sickness is likely to occur, the occurrence of video sickness can be suppressed.
  • it should be noted that, within the scope of the present invention, the embodiments may be freely combined, any component of each embodiment may be modified, and any component may be omitted from each embodiment.
  • the display control device of the present invention can be used, for example, to control a windshield AR-HUD.

Abstract

A display control device (100) comprises: an object information acquisition unit (31) for acquiring object information including information relating to one or more objects present around a vehicle (1); a video sickness estimation unit (32) for executing, using the object information, a process for estimating whether video sickness would be caused by a first video group including one or more videos corresponding to the one or more objects; and a display control unit (33) for executing, using the object information, control for causing a head-up display (2) to display a second video group including at least a part of the one or more videos. The display control unit (33) varies the display mode of the second video group according to a result of the estimation process performed by the video sickness estimation unit (32).
PCT/JP2018/001815 2018-01-22 2018-01-22 Dispositif de commande d'affichage, système de commande d'affichage et procédé de commande d'affichage WO2019142364A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/001815 WO2019142364A1 (fr) 2018-01-22 2018-01-22 Dispositif de commande d'affichage, système de commande d'affichage et procédé de commande d'affichage

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/001815 WO2019142364A1 (fr) 2018-01-22 2018-01-22 Dispositif de commande d'affichage, système de commande d'affichage et procédé de commande d'affichage

Publications (1)

Publication Number Publication Date
WO2019142364A1 true WO2019142364A1 (fr) 2019-07-25

Family

ID=67300966

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/001815 WO2019142364A1 (fr) 2018-01-22 2018-01-22 Dispositif de commande d'affichage, système de commande d'affichage et procédé de commande d'affichage

Country Status (1)

Country Link
WO (1) WO2019142364A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010066042A (ja) * 2008-09-09 2010-03-25 Toshiba Corp 画像照射システムおよび画像照射方法
JP2015182755A (ja) * 2014-03-26 2015-10-22 三菱電機株式会社 移動体機器制御装置、携帯端末、移動体機器制御システムおよび移動体機器制御方法
JP2017058493A (ja) * 2015-09-16 2017-03-23 株式会社コロプラ 仮想現実空間映像表示方法、及び、プログラム
JP2017211916A (ja) * 2016-05-27 2017-11-30 京セラ株式会社 携帯電子機器、携帯電子機器制御方法及び携帯電子機器制御プログラム

Similar Documents

Publication Publication Date Title
US10857942B2 (en) Image generating device, image generating method, and program
US8536995B2 (en) Information display apparatus and information display method
US9760782B2 (en) Method for representing objects surrounding a vehicle on the display of a display device
KR101855940B1 (ko) 차량용 증강현실 제공 장치 및 그 제어방법
US9267808B2 (en) Visual guidance system
US10647201B2 (en) Drive assist device and drive assist method
JP2016500352A (ja) 車両のためのシステム
US9836814B2 (en) Display control apparatus and method for stepwise deforming of presentation image radially by increasing display ratio
US11803053B2 (en) Display control device and non-transitory tangible computer-readable medium therefor
US9849835B2 (en) Operating a head-up display of a vehicle and image determining system for the head-up display
JP7255608B2 (ja) 表示制御装置、方法、及びコンピュータ・プログラム
KR20170083798A (ko) 헤드업 디스플레이 장치 및 그 제어방법
JP6186905B2 (ja) 車載表示装置およびプログラム
JP7459883B2 (ja) 表示制御装置、ヘッドアップディスプレイ装置、及び方法
JP2020112698A (ja) 表示制御装置、表示装置、表示システム、移動体、プログラム、画像生成方法
WO2022230995A1 (fr) Dispositif de commande d'affichage, dispositif d'affichage tête haute et procédé de commande d'affichage
WO2019142364A1 (fr) Dispositif de commande d'affichage, système de commande d'affichage et procédé de commande d'affichage
US11412205B2 (en) Vehicle display device
WO2021132259A1 (fr) Appareil et procédé d'affichage et programme
WO2020158601A1 (fr) Dispositif, procédé et programme informatique de commande d'affichage
JP2020158014A (ja) ヘッドアップディスプレイ装置、表示制御装置、及び表示制御プログラム
US20240042857A1 (en) Vehicle display system, vehicle display method, and computer-readable non-transitory storage medium storing vehicle display program
JP7434894B2 (ja) 車両用表示装置
WO2021132250A1 (fr) Dispositif d'affichage embarqué dans un véhicule et programme
WO2023145852A1 (fr) Dispositif de commande d'affichage, système d'affichage et procédé de commande d'affichage

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18900759

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18900759

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP