WO2019142364A1 - Display control device, display control system, and display control method

Info

Publication number
WO2019142364A1
Authority
WO
WIPO (PCT)
Prior art keywords
video, vehicle, display control, information, display
Prior art date
Application number
PCT/JP2018/001815
Other languages
French (fr)
Japanese (ja)
Inventor
聖崇 加藤
Original Assignee
Mitsubishi Electric Corporation
Priority date
Filing date
Publication date
Application filed by Mitsubishi Electric Corporation
Priority to PCT/JP2018/001815
Publication of WO2019142364A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 — Manipulating 3D models or images for computer graphics

Definitions

  • the present invention relates to a display control device, a display control system, and a display control method.
  • a head-up display (HUD) mounted on a moving object can cause so-called "video sickness" in the user.
  • symptoms of video sickness include, for example, nausea, dizziness, headache, and eye strain.
  • an AR-HUD is a head-up display that supports so-called AR (Augmented Reality) display.
  • in an AR-HUD, videos are displayed so that the video corresponding to each of various objects present around the moving object (hereinafter referred to as "objects") is superimposed at a position near that object in the user's field of view.
  • the positional relationship between the moving object and each object changes as the moving object moves.
  • the number of objects included in the field of view of the user, the range occupied by the object in the field of view of the user, and the like change.
  • the number of images displayed on the AR-HUD, the size of the area in which images on the AR-HUD are displayed, and the like also change.
  • the greater the number of videos displayed on the AR-HUD, the more likely video sickness is to occur.
  • likewise, the larger the area in the AR-HUD in which videos are displayed, the more likely video sickness is to occur. That is, in an AR-HUD, the likelihood of video sickness varies depending on the number of objects included in the user's field of view, the range occupied by those objects in the user's field of view, and the like.
  • the image display device described in Patent Document 1 determines the likelihood of video sickness using an analysis of the contents of the video. That is, it does not use information on objects when making this determination. For this reason, when the image display device described in Patent Document 1 is applied to an AR-HUD, the accuracy of determining the likelihood of video sickness is low.
  • the present invention has been made to solve the above problem, and an object of the present invention is to estimate, with high accuracy, whether video sickness will occur in an AR-HUD.
  • a display control apparatus according to the present invention includes: an object information acquisition unit that acquires object information on one or more objects existing around a mobile object; a video sickness estimation unit that uses the object information to execute processing to estimate the presence or absence of video sickness due to a first video group including one or more videos corresponding to the one or more objects; and a display control unit that executes control to cause a head-up display to display a second video group including at least some of the one or more videos, the display control unit changing the display mode of the second video group according to the result of the estimation processing by the video sickness estimation unit.
  • FIG. 3A is an explanatory view showing an example of an object.
  • FIG. 3B is an explanatory view showing an example of a displayable area, an object area, and an object group area.
  • FIG. 4 is an explanatory view showing an example of object information.
  • FIG. 5A is a block diagram showing a hardware configuration of a control device including the display control device according to Embodiment 1 of the present invention.
  • FIG. 5B is a block diagram showing another hardware configuration of a control device including the display control device according to Embodiment 1 of the present invention.
  • A flowchart shows the operation of the control device including the display control device according to Embodiment 1 of the present invention.
  • FIG. 9A is an explanatory view showing an example of a displayable area, a display target area, and a first image group.
  • FIG. 9B is an explanatory diagram showing an example of a state in which the second image group is displayed on the head-up display.
  • FIG. 10A is an explanatory view showing another example of the displayable area, the display target area, and the first image group.
  • FIG. 10B is an explanatory view showing another example of the state in which the second image group is displayed on the head-up display.
  • FIG. 11A is an explanatory view showing another example of the displayable area, the display target area, and the first image group.
  • FIG. 11B is an explanatory view showing another example of the state in which the second image group is displayed on the head-up display.
  • FIG. 12A is an explanatory view showing another example of the displayable area, the display target area, and the first image group.
  • FIG. 12B is an explanatory view showing another example of the state in which the second image group is displayed on the head-up display.
  • FIG. 13A is an explanatory view showing another example of the object.
  • FIG. 13B is an explanatory view showing another example of the displayable area and the first image group.
  • FIG. 13C is an explanatory view showing another example of the state in which the second image group is displayed on the head-up display.
  • FIG. 14A is an explanatory view showing another example of the object.
  • FIG. 14B is an explanatory view showing another example of the displayable area and the first image group.
  • FIG. 14C is an explanatory view showing another example of the state in which the second image group is displayed on the head-up display.
  • FIG. 16A is a block diagram showing a system configuration of a display control system according to Embodiment 1 of the present invention.
  • FIG. 16B is a block diagram showing another system configuration of the display control system according to Embodiment 1 of the present invention.
  • FIG. 16C is a block diagram showing another system configuration of the display control system according to Embodiment 1 of the present invention.
  • FIG. 16D is a block diagram showing another system configuration of the display control system according to Embodiment 1 of the present invention.
  • A block diagram shows the state in which a control device including the display control device according to Embodiment 2 of the present invention is provided in a vehicle.
  • FIG. 1 is a block diagram showing a state in which a control device including a display control device according to the first embodiment is provided in a vehicle.
  • FIG. 2 is an explanatory view showing a state in which a control device including the display control device according to the first embodiment is provided in a vehicle.
  • the display control apparatus 100 according to the first embodiment will be described with reference to FIGS. 1 and 2.
  • the vehicle 1 has a head-up display 2.
  • the head-up display 2 is configured as, for example, a windshield-type AR-HUD.
  • the head-up display device 3 is provided on the dashboard of the vehicle 1.
  • the head-up display device 3 has a display for displaying a video for AR display, and an optical system for projecting visible light corresponding to the video displayed on the display onto the windshield 4.
  • the display is configured of, for example, a liquid crystal display (LCD) or an organic electroluminescence display (OLED), or a projector such as a DLP (registered trademark) projector or a laser projector.
  • the optical system is configured of, for example, any two or more of a concave mirror, a convex mirror, and a plane mirror.
  • visible light reflected by the windshield 4 is incident on the eyes E of the driver of the vehicle 1 (hereinafter sometimes simply referred to as the "driver"), whereby the driver views a virtual image VI corresponding to the video for AR display.
  • OP1 indicates the optical path of visible light corresponding to the image for AR display
  • OP2 indicates the optical path of the visible light perceived by the driver
  • P indicates the position of the virtual image VI perceived by the driver.
  • the image displayed on the head-up display 2 includes, for example, an image emphasizing a white line on the road on which the vehicle 1 is traveling.
  • This image is displayed, for example, in a state of being superimposed on a position near the white line in the driver's field of vision.
  • This video is for presenting the presence of the white line to the driver of the vehicle 1 to prevent the vehicle 1 from deviating from the lane in which the vehicle is traveling.
  • the image displayed on the head-up display 2 includes, for example, an image emphasizing an obstacle present in a lane in which the vehicle 1 is traveling. This image is displayed, for example, in a state of being superimposed at a position near the obstacle in the driver's field of vision. This image is for urging the driver of the vehicle 1 to pay attention to the obstacle to prevent the vehicle 1 from colliding with the obstacle.
  • the image displayed on the head-up display 2 includes, for example, an image showing a traveling route being guided by a navigation system (not shown) for the vehicle 1.
  • This image includes, for example, an arrow-shaped image indicating the direction in which the vehicle 1 should travel, and an image for guiding the lane in which the vehicle 1 should travel.
  • the image displayed on the head-up display 2 includes, for example, an image emphasizing another vehicle (hereinafter referred to as “front vehicle”) traveling in front of the vehicle 1. More specifically, when the vehicle 1 is traveling by so-called “adaptive cruise control", it includes an image emphasizing a preceding vehicle to be followed by the vehicle 1. This image is for making the driver of the vehicle 1 recognize the preceding vehicle to be followed.
  • the video displayed on the head-up display 2 includes, for example, a video indicating the inter-vehicle distance between the vehicle 1 and the preceding vehicle.
  • This image is displayed, for example, in a state of being superimposed on the lane between the vehicle 1 and the preceding vehicle in the field of view of the driver. This image is for causing the driver of the vehicle 1 to recognize the inter-vehicle distance between the vehicle 1 and the vehicle in front.
  • the video displayed on the head-up display 2 includes, for example, a video indicating information on a building included in the front scenery of the vehicle 1.
  • the vehicle 1 has a camera 5 for imaging outside the vehicle and a sensor 6 for obstacle detection.
  • the camera 5 is configured by, for example, a so-called "front camera”.
  • the sensor 6 is configured by, for example, at least one of a millimeter wave radar sensor, a LiDAR sensor, or an ultrasonic sensor provided at the front end of the vehicle 1.
  • the object information generation unit 21 executes an image recognition process on an image captured by the camera 5.
  • the object information generation unit 21 detects various objects existing around the vehicle 1 (more specifically, in front of the vehicle 1), that is, objects, using the result of the image recognition process and the detection values from the sensor 6.
  • the object information generation unit 21 generates information on the detected object (hereinafter referred to as “object information”) using the result of the image recognition process and the detection value by the sensor 6.
  • the number of objects detected by the object information generation unit 21 may be described as “N”. That is, N is an integer of 1 or more.
  • the objects to be detected by the object information generation unit 21 are, for example, obstacles, white lines on roads, signs, traffic lights, buildings, and the like.
  • the obstacles include, for example, other vehicles or pedestrians.
  • the white line of the road includes, for example, a center line or a boundary between a road and a roadside.
  • the signs include, for example, guide signs or traffic signs.
  • the building includes, for example, a gas station.
  • FIG. 3A shows an example of a state in which the front scenery of the vehicle 1 is viewed through the windshield 4.
  • in the example of FIG. 3A, it is assumed that seven objects O 1 to O 7 are detected by the object information generation unit 21.
  • objects O 1 to O 3 correspond to other vehicles, object O 4 corresponds to a guide sign, object O 5 corresponds to a traffic sign, and objects O 6 and O 7 correspond to center lines.
  • an area A1 in which the head-up display 2 (more specifically, the display in the head-up display device 3) can display video will be referred to as the "displayable area".
  • an area corresponding to each object detected by the object information generation unit 21 is referred to as an "object area”.
  • an area including all object areas is referred to as an “object group area”.
  • the shape of the displayable area A1 is a shape corresponding to the shape of the windshield 4 and is, for example, a rectangular shape.
  • the shape of each object area is, for example, rectangular.
  • the shape of the object group area is, for example, rectangular.
  • FIG. 3B shows an example of the displayable area A1, the seven object areas OA 1 to OA 7 corresponding to the seven objects O 1 to O 7, the horizontal width Rh of the object group area, and the vertical width Rv of the object group area.
  • the horizontal width Rh of the object group area is a value corresponding to the distance between the left end of the object area OA 1 and the right end of the object area OA 5.
  • the vertical width Rv of the object group area is a value corresponding to the distance between the upper end of the object area OA 4 and the lower end of the object area OA 6.
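As a concrete illustration, the object group area described above is simply the bounding box of all object areas. A minimal sketch in Python (the coordinate convention and function name are ours, not the patent's):

```python
def object_group_area(object_areas):
    """Compute the bounding box of all object areas.

    Each object area is given as (left, top, right, bottom) in
    displayable-area coordinates; returns the (left, top, right, bottom)
    of the object group area plus its horizontal width Rh and vertical
    width Rv.
    """
    left = min(a[0] for a in object_areas)
    top = min(a[1] for a in object_areas)
    right = max(a[2] for a in object_areas)
    bottom = max(a[3] for a in object_areas)
    rh = right - left   # horizontal width Rh of the object group area
    rv = bottom - top   # vertical width Rv of the object group area
    return (left, top, right, bottom), rh, rv
```

With the areas of FIG. 3B, Rh would span from the left end of OA 1 to the right end of OA 5, and Rv from the upper end of OA 4 to the lower end of OA 6, as described above.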
  • FIG. 4 shows an example of object information generated by the object information generation unit 21 in the state shown in FIG. 3A.
  • the object information includes the number of objects detected by the object information generation unit 21 (that is, N), an identifier assigned to each object ("ID" in the figure), the type of each object, the position of each object, and video data ("option" in the figure) representing the video corresponding to each object.
  • “p1, p2, p3, p4” indicate position coordinates of four corners of each of the seven object areas OA 1 to OA 7 in the displayable area A1.
  • "z001" to "z007" indicate the distances between the vehicle 1 and the seven objects O 1 to O 7, respectively. These distances are calculated, for example, by the object information generation unit 21 using the detection values of the sensor 6. Alternatively, these distances are measured by the object information generation unit 21 from images captured by the camera 5, using a method such as the TOF (Time of Flight) method or triangulation (so-called "stereo vision").
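The fields described above (N, the identifier, the type, the corner positions p1 to p4, the distance, and the video data) can be sketched as a simple record; all field names here are illustrative, not from the patent:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DetectedObject:
    """One object entry in the object information (names are illustrative)."""
    obj_id: str                           # identifier ("ID" in FIG. 4)
    obj_type: str                         # e.g. "vehicle", "guide_sign", "center_line"
    corners: Tuple[Tuple[int, int], ...]  # p1..p4: four corners of the object area
    distance: float                       # distance from vehicle 1 (e.g. "z001")
    video_data: bytes                     # video data ("option" in FIG. 4)

@dataclass
class ObjectInformation:
    """Object information passed from the generation unit to the acquisition unit."""
    objects: List[DetectedObject] = field(default_factory=list)

    @property
    def n(self) -> int:
        """N: the number of detected objects."""
        return len(self.objects)
```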
  • the object information acquisition unit 31 acquires object information generated by the object information generation unit 21.
  • the object information acquisition unit 31 outputs the acquired object information to the video sickness estimation unit 32 and the display control unit 33.
  • a video group including videos corresponding to the N objects detected by the object information generation unit 21, that is, a video group to be subjected to estimation processing by the video sickness estimation unit 32 will be referred to as a “first video group”.
  • the video sickness estimation unit 32 uses the object information output from the object information acquisition unit 31 to execute processing to estimate the presence or absence of video sickness due to the first video group, assuming that the first video group were displayed on the head-up display 2.
  • a specific example of the estimation process by the video sickness estimation unit 32 will be described later with reference to the flowchart of FIG.
  • the display control unit 33 uses the object information output from the object information acquisition unit 31 to execute control to cause the head-up display 2 to display a video group including at least some of the videos corresponding to the N objects detected by the object information generation unit 21 (hereinafter referred to as the "second video group").
  • hereinafter, the number of objects corresponding to the videos included in the second video group is denoted by "N ′".
  • also, the area A2 in the displayable area A1 in which the second video group is to be displayed is referred to as the "display target area".
  • the display control unit 33 changes the display mode of the second video group according to the result of the estimation process by the video sickness estimation unit 32. Specifically, for example, the number N ′ of objects corresponding to the videos included in the second video group, the horizontal width Rh ′ of the display target area A2, and the vertical width Rv ′ of the display target area A2 are varied according to the result of the estimation process. A specific example of control by the display control unit 33 will be described later with reference to the flowchart of FIG.
  • the object information acquisition unit 31, the video sickness estimation unit 32, and the display control unit 33 constitute a main part of the display control apparatus 100. Further, the object information generation unit 21, the object information acquisition unit 31, the video sickness estimation unit 32, and the display control unit 33 constitute a main part of the control device 7.
  • the control device 7 is configured by a computer, and the computer has a processor 41 and a memory 42.
  • the memory 42 stores programs for causing the computer to function as the object information generation unit 21, the object information acquisition unit 31, the video sickness estimation unit 32, and the display control unit 33.
  • the processor 41 reads out and executes the program stored in the memory 42, whereby the functions of the object information generation unit 21, the object information acquisition unit 31, the video sickness estimation unit 32, and the display control unit 33 are realized.
  • the processor 41 is, for example, a central processing unit (CPU), a graphics processing unit (GPU), a microprocessor, a microcontroller, or a digital signal processor (DSP).
  • the memory 42 is, for example, a semiconductor memory such as a random access memory (RAM), a read only memory (ROM), a flash memory, an erasable programmable read only memory (EPROM), or an electrically erasable programmable read only memory (EEPROM), or an optical disk or a magneto-optical disk.
  • the functions of the object information generation unit 21, the object information acquisition unit 31, the video sickness estimation unit 32, and the display control unit 33 may be realized by a dedicated processing circuit 43.
  • the processing circuit 43 is, for example, an application specific integrated circuit (ASIC), a programmable logic device (PLD), a field-programmable gate array (FPGA), a system-on-a-chip (SoC), or a system LSI (Large-Scale Integration).
  • alternatively, some of the functions of the object information generation unit 21, the object information acquisition unit 31, the video sickness estimation unit 32, and the display control unit 33 may be realized by the processor 41 and the memory 42, and the remaining functions may be realized by the processing circuit 43.
  • in step ST1, the object information acquisition unit 31 acquires object information generated by the object information generation unit 21.
  • the object information acquisition unit 31 outputs the acquired object information to the video sickness estimation unit 32 and the display control unit 33.
  • in step ST2, the video sickness estimating unit 32 uses the object information output from the object information acquiring unit 31 to execute processing to estimate the presence or absence of video sickness due to the first video group, assuming that the first video group were displayed on the head-up display 2. A specific example of the estimation process in step ST2 will be described later with reference to the flowchart of FIG.
  • in step ST3, the display control unit 33 executes control to display the second video group on the head-up display 2 using the object information output from the object information acquisition unit 31.
  • the display control unit 33 is configured to make the display mode of the second video group different according to the result of the estimation process by the video sickness estimation unit 32.
  • a specific example of control in step ST3 will be described later with reference to the flowchart of FIG.
  • in step ST11, the video sickness estimating unit 32 sets a threshold Rh_th to be compared against the horizontal width Rh of the object group area, a threshold Rv_th to be compared against the vertical width Rv of the object group area, and a threshold N_th to be compared against the number N of objects.
  • the threshold value Rh_th is set to, for example, a value equal to the horizontal width of the displayable area A1, a half of the horizontal width of the displayable area A1, or a quarter of the horizontal width of the displayable area A1.
  • the threshold value Rv_th is set to, for example, a value equal to the vertical width of the displayable area A1, a half of the vertical width of the displayable area A1, or a quarter of the vertical width of the displayable area A1.
  • the threshold N_th is set to, for example, an integer of 1 or more.
  • in step ST12, the video sickness estimating unit 32 calculates the horizontal width Rh of the object group area and the vertical width Rv of the object group area using the object information output from the object information acquisition unit 31.
  • in step ST13, the video sickness estimating unit 32 compares Rh calculated in step ST12 with Rh_th set in step ST11. If Rh is equal to or less than Rh_th ("NO" in step ST13), the video sickness estimating unit 32 sets Rh ′ to the same value as Rh in step ST14. On the other hand, if Rh exceeds Rh_th ("YES" in step ST13), the video sickness estimating unit 32 sets Rh ′ to the same value as Rh_th in step ST15.
  • in step ST16, the video sickness estimating unit 32 compares Rv calculated in step ST12 with Rv_th set in step ST11. If Rv is equal to or less than Rv_th ("NO" in step ST16), the video sickness estimating unit 32 sets Rv ′ to the same value as Rv in step ST17. On the other hand, if Rv exceeds Rv_th ("YES" in step ST16), the video sickness estimating unit 32 sets Rv ′ to the same value as Rv_th in step ST18.
  • in step ST19, the video sickness estimating unit 32 compares N indicated by the object information with N_th set in step ST11. If N is equal to or less than N_th ("NO" in step ST19), the video sickness estimating unit 32 sets N ′ to the same value as N in step ST20. On the other hand, if N exceeds N_th ("YES" in step ST19), the video sickness estimation unit 32 sets N ′ to the same value as N_th in step ST21.
  • the video sickness estimating unit 32 outputs the value of Rh ′ set in step ST14 or ST15, the value of Rv ′ set in step ST17 or ST18, and the value of N ′ set in step ST20 or ST21 to the display control unit 33.
  • when Rh ′, Rv ′, and N ′ are set to the same values as Rh, Rv, and N, the values output by the video sickness estimation unit 32 indicate that the result of the estimation process is the "absence" of video sickness due to the first video group. Otherwise, the output values indicate that the result of the estimation process is the "presence" of video sickness due to the first video group.
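Steps ST12 to ST21 amount to clamping Rh, Rv, and N to their respective thresholds. A hedged Python sketch (the function name and return convention are ours):

```python
def estimate_video_sickness(rh, rv, n, rh_th, rv_th, n_th):
    """Clamp Rh, Rv and N to their thresholds (steps ST13-ST21).

    Returns (rh2, rv2, n2, present) where `present` is True when any
    value had to be clamped, i.e. the estimation result is "presence"
    of video sickness due to the first video group.
    """
    rh2 = rh if rh <= rh_th else rh_th   # steps ST13-ST15
    rv2 = rv if rv <= rv_th else rv_th   # steps ST16-ST18
    n2 = n if n <= n_th else n_th        # steps ST19-ST21
    present = (rh2, rv2, n2) != (rh, rv, n)
    return rh2, rv2, n2, present
```

For instance, with Rh exceeding Rh_th but Rv and N within their thresholds, only Rh ′ is clamped and the result is "presence".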
  • next, a specific example of control by the display control unit 33 in step ST3 will be described with reference to the flowchart in FIG.
  • in step ST31, the display control unit 33 acquires the values of Rh ′, Rv ′, and N ′ output from the video sickness estimation unit 32.
  • in step ST32, the display control unit 33 sets a display target area A2 with a size of Rh ′ × Rv ′.
  • at this time, the display control unit 33 sets the position of the display target area A2 in the displayable area A1. Specifically, for example, the display control unit 33 sets the position coordinates of the center C of the display target area A2 in the displayable area A1. Alternatively, the position of the display target area A2 in the displayable area A1 may be set by an operation input to an operation input device (not shown). That is, the position of the display target area A2 in the displayable area A1 can be set to an arbitrary position.
  • in step ST33, the display control unit 33 selects N ′ objects based on priorities according to the types of the individual objects. These priorities may be preset in the display control unit 33, or may be set by an operation input to an operation input device (not shown).
  • for example, the priority is set to decrease in the order of objects corresponding to other vehicles, objects corresponding to center lines, objects corresponding to traffic signs, and objects corresponding to guide signs. That is, the priority of objects corresponding to other vehicles is the highest, and the priority of objects corresponding to guide signs is the lowest.
  • in step ST34, the display control unit 33 generates the videos corresponding to the objects selected in step ST33, using the video data in the object information output from the object information acquisition unit 31.
  • the display control unit 33 generates N 'pieces of video corresponding to N' pieces of objects one by one.
  • each of the N ′ videos is, for example, a dotted rectangular frame image.
  • the size of each of the N 'images is, for example, equal to the size of the object area of the corresponding object among the N' objects.
  • Each of the N 'images is an image for marking a corresponding one of the N' objects.
  • in step ST35, the display control unit 33 executes control to cause the head-up display 2 to display, of the videos generated in step ST34, those located within the display target area A2 set in step ST32 (that is, the second video group).
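Steps ST33 to ST35 can be sketched as: pick the N ′ highest-priority objects, generate one marking frame per selected object, and keep only the frames located within the display target area A2. The priority table and data layout below are assumptions for illustration, not the patent's data format:

```python
# Assumed priority table: a lower value means a higher priority
PRIORITY = {"vehicle": 0, "center_line": 1, "traffic_sign": 2, "guide_sign": 3}

def select_objects(objects, n_prime):
    """Step ST33: select the N' highest-priority objects."""
    ranked = sorted(objects, key=lambda o: PRIORITY.get(o["type"], 99))
    return ranked[:n_prime]

def second_video_group(objects, n_prime, area_a2):
    """Steps ST34-ST35: generate one dotted-frame video per selected
    object (the frame has the size of the object area), then keep only
    the videos located within the display target area A2."""
    left, top, right, bottom = area_a2
    videos = [{"id": o["id"], "rect": o["rect"]}
              for o in select_objects(objects, n_prime)]
    return [v for v in videos
            if v["rect"][0] >= left and v["rect"][1] >= top
            and v["rect"][2] <= right and v["rect"][3] <= bottom]
```

With this sketch, a high-priority object whose area falls outside A2 is generated but not displayed, matching the behavior described for step ST35.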
  • information indicating an estimated value of the position of the driver's eyes in real space is stored in advance in the display control unit 33. This estimated value is, for example, based on the position of the driver's seat in the cabin of the vehicle 1.
  • the display control unit 33 uses the object information output by the object information acquisition unit 31 to calculate the position of each of N ′ objects in the real space.
  • based on the positional relationship between the driver's eyes and each of the N ′ objects, the display control unit 33 sets the position of each of the N ′ videos in the displayable area A1 so that, in the driver's field of view, each of the N ′ objects is marked by the corresponding one of the N ′ videos.
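Positioning each video amounts to intersecting the ray from the driver's eye to the object with the display (virtual image) plane. A minimal sketch under simplified geometry, namely a flat plane perpendicular to the forward axis, which is our assumption and not the patent's actual optics:

```python
def project_to_display_plane(eye, obj, plane_dist):
    """Intersect the eye-to-object ray with a plane `plane_dist` ahead
    of the eye along the z (forward) axis.

    `eye` and `obj` are (x, y, z) positions in real space; returns the
    (x, y) at which the marking video should appear so that it overlaps
    the object as seen from the eye.
    """
    dx, dy, dz = obj[0] - eye[0], obj[1] - eye[1], obj[2] - eye[2]
    t = plane_dist / dz            # ray parameter where the z advance equals plane_dist
    return eye[0] + t * dx, eye[1] + t * dy
```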
  • FIG. 9A shows an example of the first image group in this case.
  • the first image group includes seven images V 1 to V 7 corresponding to seven objects O 1 to O 7 .
  • each of the seven videos V 1 to V 7 is a dotted rectangular frame image.
  • the size of each of the seven images V 1 to V 7 is equal to the size of the object area of the corresponding one of the seven objects O 1 to O 7 .
  • Each of the seven images V 1 to V 7 is an image for marking the corresponding one of the seven objects O 1 to O 7 .
  • in this example, the threshold Rh_th is set to the same value as the horizontal width of the displayable area A1, and the threshold Rv_th is set to the same value as the vertical width of the displayable area A1.
  • FIG. 9A shows an example of the display target area A2 in this case.
  • in step ST32, for example, the display target area A2 shown in FIG. 9A is set.
  • in step ST34, all seven videos V 1 to V 7 are generated.
  • in step ST35, the head-up display 2 displays, of the seven videos V 1 to V 7, those located within the display target area A2 (that is, the second video group).
  • the second image group displayed on the head-up display 2 is the same as the first image group shown in FIG. 9A.
  • FIG. 10A shows an example of the first image group in this case.
  • the first image group includes seven images V 1 to V 7 corresponding to seven objects O 1 to O 7 .
  • in this example, the threshold Rh_th is set to a half of the horizontal width of the displayable area A1, the threshold Rv_th is set to a half of the vertical width of the displayable area A1, and the threshold N_th is set to 8 (step ST11).
  • since Rh > Rh_th, Rh ′ is set to Rh_th (step ST15); since Rv > Rv_th, Rv ′ is set to Rv_th (step ST18); and since N ≤ N_th, N ′ is set to 7 (step ST20).
  • FIG. 10A shows an example of the display target area A2 in this case.
  • in step ST32, for example, the display target area A2 shown in FIG. 10A is set.
  • in step ST34, all seven videos V 1 to V 7 are generated.
  • in step ST35, the head-up display 2 displays, of the seven videos V 1 to V 7, those located within the display target area A2 (that is, the second video group).
  • the second image group displayed on the head-up display 2 is different from the first image group shown in FIG. 10A.
  • FIG. 11A shows an example of the first image group in this case.
  • the first image group includes seven images V 1 to V 7 corresponding to seven objects O 1 to O 7 .
  • in this example, the threshold Rh_th is set to a half of the horizontal width of the displayable area A1, the threshold Rv_th is set to a half of the vertical width of the displayable area A1, and the threshold N_th is set to 3 (step ST11).
  • since Rh > Rh_th, Rh ′ is set to Rh_th (step ST15); since Rv > Rv_th, Rv ′ is set to Rv_th (step ST18); and since N > N_th, N ′ is set to 3 (step ST21).
  • FIG. 11A shows an example of the display target area A2 in this case.
  • in step ST32, for example, the display target area A2 shown in FIG. 11A is set.
  • the display target area A2 shown in FIG. 11A is different from the display target area A2 shown in FIG. 10A in the position coordinates of the central portion C.
  • in step ST33, the display control unit 33 selects any three objects among the seven objects O 1 to O 7.
  • the display control unit 33 selects three objects O 1 to O 3 corresponding to other vehicles based on the priority.
  • step ST34 the seven three video V 1 ⁇ V 3 in the video V 1 ⁇ V 7 are generated.
  • the head up display 2 displays the one in the display target area A2 of the three videos V 1 to V 3 (that is, the second video group). That is, the images V 6 and V 7 for marking the center line are excluded from the display targets of the head-up display 2 while being located within the display target area A 2.
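The two-stage filtering described above (priority-based selection of N′ objects in step ST33, then restriction to the display target area A2) could be sketched as follows; the priority values, dictionary field names, and the one-dimensional area test are simplifications assumed for illustration:

```python
def select_videos(objs, n_limit, in_area):
    """Step ST33: keep the n_limit highest-priority objects; then keep
    only those whose video lies inside the display target area A2."""
    chosen = sorted(objs, key=lambda o: o["priority"], reverse=True)[:n_limit]
    return [o["id"] for o in chosen if in_area(o)]

objs = [
    {"id": "V1", "priority": 3, "x": 0.2},  # other vehicles: high priority
    {"id": "V2", "priority": 3, "x": 0.4},
    {"id": "V3", "priority": 3, "x": 0.6},
    {"id": "V6", "priority": 1, "x": 0.5},  # center-line marking: low priority
    {"id": "V7", "priority": 1, "x": 0.7},  # excluded even though inside A2
]
in_a2 = lambda o: 0.0 <= o["x"] <= 1.0
print(select_videos(objs, 3, in_a2))  # → ['V1', 'V2', 'V3']
```

Python's `sorted` is stable, so objects with equal priority keep their original order.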
  • the second image group displayed on the head-up display 2 is different from the first image group shown in FIG. 11A.
  • FIG. 12A shows an example of the first image group in this case.
  • the first image group includes seven images V 1 to V 7 corresponding to seven objects O 1 to O 7 .
  • the threshold Rh_th is set to a half of the horizontal width of the displayable area A1
  • the threshold Rv_th is set to a quarter of the vertical width of the displayable area A1
  • the threshold N_th is set to 8 (step ST11).
  • since Rh > Rh_th, Rh′ is set to Rh_th (step ST15).
  • since Rv > Rv_th, Rv′ is set to Rv_th (step ST18).
  • since N ≤ N_th, N′ is set to N = 7 (step ST20).
  • FIG. 12A shows an example of the display target area A2 in this case.
  • in step ST32, for example, the display target area A2 shown in FIG. 12A is set.
  • in step ST34, all seven images V1 to V7 are generated.
  • the head-up display 2 displays those of the seven images V1 to V7 that lie within the display target area A2 (that is, the second image group).
  • the second image group displayed on the head-up display 2 is different from the first image group shown in FIG. 12A.
  • the object information may indicate the movement (more specifically, the movement speed and movement amount) of the video corresponding to each object.
  • the video sickness estimating unit 32 may estimate the presence or absence of video sickness due to the first video group by comparing the moving speed and the moving amount of each video with predetermined threshold values. Also, the video sickness estimation unit 32 may set the values of Rh′, Rv′, and N′ according to the result of the comparison.
  • the video sickness estimating unit 32 may store the set threshold.
  • the video sickness estimating unit 32 may set a new threshold value in step ST11, from the next execution onward, based on the stored threshold value. Further, the threshold value in the video sickness estimation unit 32 may be set to a different value for each driver.
  • the threshold value in the video sickness estimation unit 32 may also be set by an operation input to an operation input device (not shown). This operation may be performed by the driver of the vehicle 1 or by a passenger of the vehicle 1.
  • the priority used when N′ < N is not limited to the above specific example.
  • the priority of each object may be set in any manner. For example, the priority may be set higher for an object that is more important to the driver of the vehicle 1.
  • the video for marking individual objects is not limited to the video in the form of a rectangular frame by dotted lines.
  • the image for marking individual objects may be any image as long as it can be drawn by existing CG (Computer Graphics). For example, a linear image along the edge of an individual object, an image obtained by filling the area surrounded by the lines with a predetermined color, or an image using ⁇ blending may be used. Also, it may be a video including text or an icon.
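As a reference for the α blending mentioned above, standard alpha compositing combines the marking color with the background scene per channel. This is a generic sketch of the well-known formula, not an implementation taken from this document:

```python
def alpha_blend(fg, bg, alpha):
    """Composite a marking color over the background:
    out = alpha * fg + (1 - alpha) * bg, per RGB channel."""
    return tuple(round(alpha * f + (1 - alpha) * b) for f, b in zip(fg, bg))

# A half-transparent red marking over a gray background pixel.
print(alpha_blend((255, 0, 0), (128, 128, 128), 0.5))  # → (192, 64, 64)
```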
  • the video corresponding to each object is not limited to the video for marking.
  • referring to FIG. 13 and FIG. 14, other examples of the video corresponding to each object will be described.
  • for example, as shown in FIG. 13A, it is assumed that eight objects O1 to O8 are detected by the object information generation unit 21.
  • the object O 8 corresponds to the lane to be guided by the navigation system (not shown) for the vehicle 1, that is, the lane in which the vehicle 1 should travel.
  • the first image group includes eight images V 1 to V 8 corresponding to eight objects O 1 to O 8 in a one-to-one manner.
  • the image V8 corresponding to the object O8 may be an image for guiding the lane (more specifically, an image composed of triangular figures arranged along the lane).
  • FIG. 13B shows an example of a second image group displayed on the head-up display 2 in this case.
  • next, as shown in FIG. 14A, it is assumed that eight objects O1 to O2, O4 to O7, and O9 to O10 are detected by the object information generation unit 21.
  • the object O 9 corresponds to another vehicle.
  • the object O 10 corresponds to a building existing in front of the vehicle 1, more specifically, a gas station.
  • the first image group includes eight images V1 to V2, V4 to V7, and V9 to V10 corresponding one-to-one to the eight objects O1 to O2, O4 to O7, and O9 to O10.
  • the image V10 corresponding to the object O10 may be an image containing text indicating information about the building (more specifically, text indicating that the building is a gas station).
  • FIG. 14B shows an example of a second image group displayed on the head-up display 2 in this case.
  • the display mode changed by the display control unit 33 may be any mode related to how easily video sickness occurs in the AR-HUD; it is not limited to the number N′ of objects corresponding to the images included in the second image group and the size (Rh′ × Rv′) of the display target area A2.
  • the display control unit 33 may change the position coordinates of the central portion C of the display target area A2 according to the result of the estimation process by the video sickness estimation unit 32.
  • for example, the display control unit 33 may make the motions (more specifically, the moving speeds and moving amounts) of the individual videos included in the second video group different according to the result of the estimation process by the video sickness estimating unit 32.
  • that is, the display control unit 33 may make different, according to the result of the estimation process by the video sickness estimation unit 32, at least one of: the number N′ of objects corresponding to the images included in the second image group, the motions (more specifically, the moving speeds and moving amounts) of the individual images included in the second image group, the size (Rh′ × Rv′) of the display target area A2, or the position coordinates of the central portion C of the display target area A2.
  • the display control unit 33 may measure the amount of solar radiation in the compartment of the vehicle 1 using an illuminance sensor (not shown) provided in the vehicle 1. The display control unit 33 may adjust the brightness and the contrast ratio of the video displayed on the head-up display 2 (that is, the videos included in the second video group) in accordance with the measured amount of solar radiation. Thereby, the legibility of the video displayed on the head-up display 2 can be improved.
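One simple way to realize such an adjustment is a linear mapping from measured illuminance to a brightness level, clamped to the display's range. The bounds and sample illuminance values below are illustrative assumptions, not values from the patent:

```python
def adjust_brightness(illuminance, lo=100.0, hi=10000.0,
                      b_min=0.2, b_max=1.0):
    """Map the measured solar radiation (illuminance) linearly onto a
    display brightness level, clamped to [b_min, b_max]."""
    t = (illuminance - lo) / (hi - lo)
    t = max(0.0, min(1.0, t))
    return b_min + t * (b_max - b_min)

print(adjust_brightness(100.0))    # dim cabin: minimum brightness
print(adjust_brightness(10000.0))  # direct sunlight: maximum brightness
```

The same mapping could drive the contrast ratio; in practice the curve would be tuned to the particular display hardware.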
  • the main part of the display control system 200 may be configured by the object information acquisition unit 31, the video sickness estimation unit 32, and the display control unit 33.
  • each of the object information acquisition unit 31, the video sickness estimation unit 32, and the display control unit 33 may be provided in any of: an on-vehicle information device 51 mountable on the vehicle 1, a portable information terminal 52 such as a smartphone that can be brought into the vehicle 1, or a server device 53 capable of communicating with the on-vehicle information device 51 or the portable information terminal 52.
  • each of FIGS. 16A to 16D shows a system configuration of the main part of the display control system 200.
  • any function of the display control system 200 may be realized by cooperation of any two or more of the in-vehicle information device 51, the portable information terminal 52, and the server device 53.
  • the head-up display 2 is not limited to the windshield type, and may be a combiner type.
  • the combiner usually occupies a smaller area in the driver's field of view than the windshield.
  • the combiner type AR-HUD is therefore less likely to cause video sickness than the windshield type AR-HUD. For this reason, it is particularly advantageous to use the display control device 100 and the display control system 200 for controlling a windshield type AR-HUD.
  • the head-up display 2 may be provided on a moving body different from the vehicle 1.
  • the head-up display 2 may be provided on any moving object such as a car, a rail car, an aircraft or a ship.
  • as described above, the display control device 100 includes: the object information acquisition unit 31 that acquires object information including information on one or more objects existing around the moving body (vehicle 1); the video sickness estimation unit 32 that executes, using the object information, a process of estimating the presence or absence of video sickness due to the first video group including one or more videos corresponding to the one or more objects; and the display control unit 33 that executes, using the object information, control to cause the head-up display 2 to display a second video group including at least some of the one or more videos.
  • the display control unit 33 makes the display mode of the second image group different according to the result of the estimation process by the video sickness estimation unit 32. By using the object information, the presence or absence of video sickness in the AR-HUD can be estimated with high accuracy. Further, by making the display mode of the second video group different, the occurrence of video sickness can be suppressed while the AR display by the head-up display 2 continues.
  • the display control unit 33 makes different, according to the result of the estimation process by the video sickness estimation unit 32, at least one of: the number of videos included in the second video group, the motion of each video included in the second video group, or the area of the head-up display 2 in which the second video group is displayed (display target area A2). This makes it possible to suppress the occurrence of video sickness.
  • the display control system 200 includes: the object information acquisition unit 31 that acquires object information including information on one or more objects existing around the moving body (vehicle 1); the video sickness estimation unit 32 that executes, using the object information, a process of estimating the presence or absence of video sickness due to the first video group including one or more videos corresponding to the one or more objects; and the display control unit 33 that executes, using the object information, control to cause the head-up display 2 to display a second video group including at least some of the one or more videos. The display control unit 33 makes the display mode of the second video group different according to the result of the estimation process by the video sickness estimation unit 32. Thereby, the same effects as those of the display control device 100 described above can be obtained.
  • the display control method includes: step ST1 in which the object information acquisition unit 31 acquires object information including information on one or more objects existing around the moving body (vehicle 1); step ST2 in which the video sickness estimation unit 32 executes, using the object information, a process of estimating the presence or absence of video sickness due to the first video group including one or more videos corresponding to the one or more objects; and step ST3 in which the display control unit 33 executes, using the object information, control to cause the head-up display 2 to display a second video group including at least some of the one or more videos. The display control unit 33 makes the display mode of the second image group different according to the result of the estimation process by the video sickness estimation unit 32.
  • thereby, the same effects as those of the display control device 100 described above can be obtained.
  • FIG. 17 is a block diagram showing a state where a control device including a display control device according to Embodiment 2 is provided in a vehicle.
  • the display control device 100a according to Embodiment 2 will be described with reference to FIG. 17. In FIG. 17, blocks that are the same as those already shown are denoted by the same reference numerals, and their description is omitted.
  • the vehicle 1 has a camera 8 for imaging in the passenger compartment.
  • the camera 8 is configured by, for example, a visible light camera or an infrared camera.
  • the camera 8 is disposed in the front of the vehicle compartment of the vehicle 1 and captures an image of a range including the face of the driver sitting in the driver's seat.
  • the vehicle 1 has a sensor 9.
  • the sensor 9 is configured by a contact or non-contact type biometric sensor.
  • when the sensor 9 is a contact-type biometric sensor, the sensor 9 is provided on the steering wheel, the driver's seat, or the like of the vehicle 1.
  • when the sensor 9 is a non-contact biometric sensor, the sensor 9 is disposed in the cabin of the vehicle 1.
  • the driver information generation unit 22 executes an image recognition process on an image captured by the camera 8.
  • the driver information generation unit 22 generates information on the driver of the vehicle 1 (hereinafter referred to as “driver information”) using at least one of the result of the image recognition process or the detection value from the sensor 9.
  • the driver information generated using the result of the image recognition process indicates, for example, at least one of: the position of the driver's head, the position of the driver's face, the driver's viewpoint movement amount, the driver's complexion, the driver's degree of eye opening, or the driver's number of blinks.
  • the position of the driver's head and the position of the driver's face are measured by, for example, a TOF (Time of Flight) method or a triangulation method.
  • the driver information generated using the detection value from the sensor 9 is, for example, information indicating at least one of: the driver's heart rate, pulse rate, blood pressure, body temperature, amount of sweat, or brain waves; that is, biological information.
  • the driver information acquisition unit 34 acquires the driver information generated by the driver information generation unit 22.
  • the driver information acquisition unit 34 outputs the acquired driver information to the video sickness estimation unit 32a.
  • the video sickness estimating unit 32a estimates, using the object information output by the object information acquiring unit 31 and the driver information output by the driver information acquiring unit 34, whether video sickness would occur due to the first video group if the first video group were displayed on the head-up display 2. More specifically, the video sickness estimation unit 32a executes the same process as that described with reference to FIG. 7 in Embodiment 1, but uses the driver information when setting the thresholds Rh_th, Rv_th, and N_th.
  • the video sickness estimation unit 32a determines, using the driver information, whether the driver of the vehicle 1 is in poor physical condition. When it determines that the driver is in poor physical condition, the video sickness estimating unit 32a sets at least one of the thresholds Rh_th, Rv_th, and N_th to a lower value (for example, 0.5 times the value) than when it determines that the driver is not.
  • the driver information indicates the heart rate of the driver.
  • a range of values (hereinafter referred to as a “reference range”, for example, a range of 50 to 90 bpm) including the heart rate (for example, 65 bpm) of the driver at normal times is set in the video sickness estimating unit 32a.
  • the motion sickness estimation unit 32a determines whether the heart rate indicated by the driver information is within the reference range.
  • when the heart rate indicated by the driver information is within the reference range, the video sickness estimating unit 32a sets the threshold Rv_th to half the vertical width of the displayable area A1. On the other hand, when the heart rate indicated by the driver information is outside the reference range, the video sickness estimating unit 32a sets the threshold Rv_th to one quarter of the vertical width of the displayable area A1.
  • as the threshold Rh_th decreases, the horizontal width Rh′ of the display target area A2 in the case of Rh > Rh_th also decreases. As the threshold Rv_th decreases, the vertical width Rv′ of the display target area A2 in the case of Rv > Rv_th also decreases. As the threshold N_th decreases, the number N′ of objects corresponding to the images included in the second image group in the case of N > N_th also decreases. As a result, the occurrence of video sickness can be suppressed even in a state where video sickness easily occurs due to the driver's poor physical condition.
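The threshold lowering described here can be sketched as follows. The reference range and the 0.5-times scaling follow the examples in the text, while the function name, the sample threshold values, and the integer flooring of N_th are assumptions made for illustration:

```python
def adjust_thresholds(rh_th, rv_th, n_th, heart_rate,
                      ref_range=(50.0, 90.0), factor=0.5):
    """Lower the thresholds when the driver's heart rate lies outside
    the reference range (poor physical condition assumed)."""
    lo, hi = ref_range
    if lo <= heart_rate <= hi:
        return rh_th, rv_th, n_th
    # 0.5-times values; keep at least one displayable object
    return rh_th * factor, rv_th * factor, max(1, int(n_th * factor))

print(adjust_thresholds(100, 50, 8, heart_rate=65))   # → (100, 50, 8)
print(adjust_thresholds(100, 50, 8, heart_rate=120))  # → (50.0, 25.0, 4)
```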
  • the object information acquisition unit 31, the video sickness estimation unit 32a, the display control unit 33, and the driver information acquisition unit 34 constitute the main part of the display control device 100a.
  • the object information generation unit 21, the driver information generation unit 22, the object information acquisition unit 31, the video sickness estimation unit 32a, the display control unit 33, and the driver information acquisition unit 34 constitute the main part of the control device 7a.
  • the hardware configuration of the main part of the control device 7a is the same as that described in Embodiment 1. That is, the functions of the driver information generation unit 22, the video sickness estimation unit 32a, and the driver information acquisition unit 34 may be realized by the processor 41 and the memory 42, or may be realized by the processing circuit 43.
  • step ST1 the object information acquisition unit 31 acquires object information generated by the object information generation unit 21.
  • the object information acquisition unit 31 outputs the acquired object information to the video sickness estimation unit 32a and the display control unit 33.
  • step ST4 the driver information acquisition unit 34 acquires the driver information generated by the driver information generation unit 22.
  • the driver information acquisition unit 34 outputs the acquired driver information to the video sickness estimation unit 32a.
  • in step ST2a, the video sickness estimating unit 32a executes, using the object information output by the object information acquiring unit 31 and the driver information output by the driver information acquiring unit 34, a process of estimating whether video sickness would occur due to the first video group if the first video group were displayed on the head-up display 2.
  • the specific example of the estimation process in step ST2a is the same as that described in Embodiment 1 with reference to the flowchart of FIG. 7. However, when setting the thresholds Rh_th, Rv_th, and N_th (that is, when executing the process corresponding to step ST11 shown in FIG. 7), the video sickness estimating unit 32a uses the driver information as described above.
  • step ST3 the display control unit 33 executes control to display the second image group on the head-up display 2 using the object information output from the object information acquisition unit 31.
  • the display control unit 33 is configured to make the display mode of the second image group different according to the result of the estimation process by the video sickness estimation unit 32a.
  • the specific example of control in step ST3 is the same as that described in the first embodiment with reference to the flowchart of FIG.
  • when the heart rate indicated by the driver information is outside the reference range, the video sickness estimating unit 32a may set the threshold Rh_th to a low value (for example, 0.5 times the value) instead of, or in addition to, setting the threshold Rv_th to a low value.
  • the magnification at this time is not limited to 0.5 times, and may be any magnification.
  • similarly, the video sickness estimating unit 32a may set the threshold N_th to a lower value when the heart rate indicated by the driver information is outside the reference range than when it is within the reference range.
  • when the driver information indicates the position of the driver's head or the position of the driver's face, the display control unit 33 may calculate the position of the driver's eyes in real space using these pieces of information. When setting the position of each of the N′ images within the displayable area A1, the display control unit 33 may use the calculated position of the driver's eyes instead of the position indicated by information stored in advance (that is, information indicating an estimated value of the position of the driver's eyes in real space).
  • in the same manner as the above example relating to the heart rate, the video sickness estimating unit 32a may determine whether the blood pressure indicated by the driver information is within a reference range. If the blood pressure indicated by the driver information is outside the reference range, the video sickness estimation unit 32a may set at least one of the thresholds Rh_th, Rv_th, and N_th to a lower value than when the blood pressure is within the reference range.
  • likewise, the video sickness estimating unit 32a may determine whether the pulse rate indicated by the driver information is within a reference range.
  • if the pulse rate indicated by the driver information is outside the reference range, the video sickness estimating unit 32a may set at least one of the thresholds Rh_th, Rv_th, and N_th to a lower value than when the pulse rate is within the reference range.
  • the video sickness estimating unit 32a may also compare the complexion indicated by the driver information with the driver's complexion in normal times. If the comparison indicates that the driver's face is flushed or pale compared with the normal complexion, the video sickness estimating unit 32a may set at least one of the thresholds Rh_th, Rv_th, and N_th to a low value.
  • in the same manner as the above example relating to the complexion, the video sickness estimation unit 32a may compare the viewpoint movement amount indicated by the driver information with the driver's viewpoint movement amount in normal times.
  • similarly, the video sickness estimating unit 32a may compare the degree of eye opening indicated by the driver information with the driver's degree of eye opening in normal times.
  • the driver information may include any information that can be generated using at least one of the result of the image recognition process on the image captured by the camera 8 or the detection value from the sensor 9, and that makes it possible to determine whether the driver of the vehicle 1 is in poor physical condition.
  • the video sickness estimation unit 32a may use the driver information in any manner in the process of estimating the presence or absence of video sickness due to the first video group; the method of using the driver information in the process is not limited to the above specific examples.
  • the display control device 100a can adopt various modifications similar to those described in Embodiment 1, that is, various modifications similar to those of the display control device 100.
  • the main part of the display control system 200a may be configured by the object information acquisition unit 31, the video sickness estimation unit 32a, the display control unit 33, and the driver information acquisition unit 34.
  • the system configuration of the main part of the display control system 200a is the same as that described in Embodiment 1 with reference to FIGS. 16A to 16D. That is, any function of the display control system 200a may be realized by cooperation of any two or more of the in-vehicle information device 51, the portable information terminal 52, and the server device 53.
  • as described above, the display control device 100a includes the driver information acquisition unit 34 that acquires driver information including information on the driver of the vehicle 1, and the video sickness estimation unit 32a estimates the presence or absence of video sickness using the object information and the driver information.
  • thereby, the driver's physical condition, for example, can be taken into account in the estimation.
  • as a result, the occurrence of video sickness can be suppressed even in a state in which video sickness is likely to occur due to the driver's poor physical condition.
  • FIG. 20 is a block diagram showing a state where a control device including a display control device according to Embodiment 3 is provided in a vehicle.
  • the display control device 100b according to Embodiment 3 will be described with reference to FIG. 20. In FIG. 20, blocks that are similar to those already shown are denoted by the same reference numerals, and their description is omitted.
  • the vehicle information generation unit 23 is connected to the in-vehicle network 10.
  • via the in-vehicle network 10, the vehicle information generation unit 23 acquires information output by various systems connected to the in-vehicle network 10 (for example, a car navigation system) or by various ECUs (Electronic Control Units) connected to the in-vehicle network 10.
  • the vehicle information generation unit 23 generates information on the vehicle 1 (hereinafter referred to as “vehicle information”) using the acquired information.
  • the vehicle information includes, for example, at least one of: information indicating the position of the vehicle 1, information indicating the traveling direction of the vehicle 1, information indicating the traveling speed of the vehicle 1, information indicating the acceleration of the vehicle 1, information indicating the vibration frequency of the vehicle 1, time information, information related to various warnings, information related to various control signals (such as a wiper on/off signal, a light lighting signal, a parking signal, and a back signal), or navigation information (such as congestion information, information indicating facility names, guidance information, and information indicating the route to be guided).
  • the vehicle information acquisition unit 35 acquires vehicle information generated by the vehicle information generation unit 23.
  • the vehicle information acquisition unit 35 outputs the acquired vehicle information to the video sickness estimation unit 32 b.
  • the video sickness estimation unit 32b estimates, using the object information output by the object information acquisition unit 31 and the vehicle information output by the vehicle information acquisition unit 35, the presence or absence of video sickness due to the first video group. More specifically, the video sickness estimation unit 32b executes the same process as that described with reference to FIG. 7 in Embodiment 1, but uses the vehicle information when setting the thresholds Rh_th, Rv_th, and N_th.
  • the video sickness estimation unit 32b determines, using the vehicle information, whether the driving environment of the vehicle 1 is one in which video sickness easily occurs. When it determines that it is, the video sickness estimating unit 32b sets at least one of the thresholds Rh_th, Rv_th, and N_th to a lower value (for example, 0.5 times the value) than when it determines that it is not.
  • for example, the video sickness estimating unit 32b uses the vehicle information to determine whether the driving environment on the road on which the vehicle 1 is traveling is one in which video sickness easily occurs (for example, a rough road, a sharp curve, rapid acceleration, or rapid deceleration).
  • if so, the video sickness estimating unit 32b sets at least one of the thresholds Rh_th, Rv_th, and N_th to a lower value (for example, 0.5 times the value) than otherwise.
  • as another example, the video sickness estimating unit 32b uses the vehicle information to calculate the amount of change in the traveling speed of the vehicle 1 within a predetermined time. When the calculated amount of change exceeds a predetermined amount (for example, ±30 kilometers per hour), the video sickness estimation unit 32b sets at least one of the thresholds Rh_th, Rv_th, and N_th to a lower value (for example, 0.5 times the value) than when the calculated amount of change is equal to or less than the predetermined amount.
  • as yet another example, the video sickness estimation unit 32b uses the vehicle information to determine whether the driving environment on the road on which the vehicle 1 is about to travel is one in which video sickness easily occurs (for example, an environment in which curves continue on a mountain road, or in which acceleration and deceleration continue on a slope).
  • if so, the video sickness estimating unit 32b sets at least one of the thresholds Rh_th, Rv_th, and N_th to a lower value (for example, 0.5 times the value) than otherwise.
  • alternatively, the video sickness estimating unit 32b may determine, using the vehicle information, whether the curvature of a curve on which the vehicle 1 is about to travel exceeds a predetermined value. If the curvature exceeds the predetermined value, the video sickness estimation unit 32b sets at least one of the thresholds Rh_th, Rv_th, and N_th to a lower value (for example, 0.5 times the value) than when the curvature is equal to or less than the predetermined value.
  • in addition, the video sickness estimating unit 32b may use the vehicle information to determine whether the driving environment of the vehicle 1 is one in which video sickness easily occurs (such as an environment in which a plurality of warnings are output during night driving).
  • if so, the video sickness estimating unit 32b sets at least one of the thresholds Rh_th, Rv_th, and N_th to a lower value (for example, 0.5 times the value) than otherwise.
  • as the threshold Rh_th decreases, the horizontal width Rh′ of the display target area A2 in the case of Rh > Rh_th also decreases. As the threshold Rv_th decreases, the vertical width Rv′ of the display target area A2 in the case of Rv > Rv_th also decreases. As the threshold N_th decreases, the number N′ of objects corresponding to the images included in the second image group in the case of N > N_th also decreases. As a result, the occurrence of video sickness can be suppressed even in a driving environment in which video sickness easily occurs.
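The driving-environment checks above share one pattern: detect a condition (a large speed change within a time window, or a sharp curve ahead) and scale the thresholds by, for example, 0.5. A combined sketch follows; only the 30 km/h change amount comes from the text, while the curvature limit, the window handling, and all names are illustrative assumptions:

```python
def environment_scale(speeds_kmh, curvature, *,
                      speed_change_limit=30.0, curvature_limit=0.05):
    """Return the factor applied to Rh_th, Rv_th, and N_th:
    0.5 in an environment where video sickness easily occurs,
    1.0 otherwise."""
    d_speed = max(speeds_kmh) - min(speeds_kmh)  # change within the window
    if d_speed > speed_change_limit or curvature > curvature_limit:
        return 0.5
    return 1.0

print(environment_scale([60, 95], curvature=0.01))  # → 0.5 (rapid acceleration)
print(environment_scale([60, 62], curvature=0.01))  # → 1.0 (calm environment)
```

Other triggers from the text (rough roads, warning bursts at night) could be folded in as additional boolean conditions on the same factor.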
  • The object information acquisition unit 31, the video sickness estimation unit 32b, the display control unit 33, and the vehicle information acquisition unit 35 constitute a main part of the display control device 100b. Further, the object information generation unit 21, the vehicle information generation unit 23, the object information acquisition unit 31, the video sickness estimation unit 32b, the display control unit 33, and the vehicle information acquisition unit 35 constitute a main part of the control device 7b.
  • The hardware configuration of the main part of the control device 7b is the same as that described in the first embodiment with reference to FIG. 5. That is, the functions of the vehicle information generation unit 23, the video sickness estimation unit 32b, and the vehicle information acquisition unit 35 may be realized by the processor 41 and the memory 42, or may be realized by the processing circuit 43.
  • In step ST1, the object information acquisition unit 31 acquires the object information generated by the object information generation unit 21.
  • The object information acquisition unit 31 outputs the acquired object information to the video sickness estimation unit 32b and the display control unit 33.
  • In step ST5, the vehicle information acquisition unit 35 acquires the vehicle information generated by the vehicle information generation unit 23.
  • The vehicle information acquisition unit 35 outputs the acquired vehicle information to the video sickness estimation unit 32b.
  • In step ST2b, the video sickness estimating unit 32b uses the object information output by the object information acquiring unit 31 and the vehicle information output by the vehicle information acquiring unit 35 to execute a process of estimating the presence or absence of video sickness due to the first video group if the first video group were displayed on the head-up display 2.
  • A specific example of the estimation process in step ST2b is the same as that described in the first embodiment with reference to the flowchart of FIG. 7. However, when setting the thresholds Rh_th, Rv_th, and N_th (that is, when executing the process corresponding to step ST11 shown in FIG. 7), the video sickness estimating unit 32b uses the vehicle information as described above.
  • In step ST3, the display control unit 33 executes control to display the second video group on the head-up display 2 using the object information output by the object information acquisition unit 31.
  • At this time, the display control unit 33 varies the display mode of the second video group according to the result of the estimation process by the video sickness estimation unit 32b.
  • A specific example of the control in step ST3 is the same as that described in the first embodiment with reference to the flowchart of FIG. 8.
  • The vehicle information may be any information regarding the vehicle 1 that can be acquired via the in-vehicle network 10; the content of the vehicle information is not limited to the above specific example.
  • The video sickness estimation unit 32b may be any unit that estimates the presence or absence of video sickness due to the first video group using the object information and the vehicle information (more specifically, any unit that uses the vehicle information for setting the thresholds Rh_th, Rv_th, and N_th); the content of the estimation process by the video sickness estimation unit 32b is not limited to the above specific example.
  • The display control device 100b can adopt various modifications similar to those described in the first embodiment, that is, various modifications similar to those of the display control device 100.
  • the main part of the display control system 200b may be configured by the object information acquisition unit 31, the video sickness estimation unit 32b, the display control unit 33, and the vehicle information acquisition unit 35.
  • The system configuration of the main part of the display control system 200b is the same as that described in the first embodiment with reference to FIG. 16. That is, any function of the display control system 200b may be realized by cooperation of any two or more of the in-vehicle information device 51, the portable information terminal 52, and the server device 53.
  • The display control device 100b may include a driver information acquisition unit 34 similar to that of the display control device 100a of the second embodiment.
  • In that case, the video sickness estimation unit 32b may estimate the presence or absence of video sickness due to the first video group using the object information output by the object information acquisition unit 31, the driver information output by the driver information acquisition unit 34, and the vehicle information output by the vehicle information acquisition unit 35. More specifically, the video sickness estimation unit 32b may use the driver information and the vehicle information for setting the thresholds Rh_th, Rv_th, and N_th. The same applies to the display control system 200b.
  • As described above, the display control device 100b includes the vehicle information acquisition unit 35 that acquires vehicle information including information related to the vehicle 1, and the video sickness estimation unit 32b estimates the presence or absence of video sickness using the object information and the vehicle information. Thereby, the state of the vehicle 1 can be taken into account, for example, when estimating the presence or absence of the occurrence of video sickness. As a result, the occurrence of video sickness can be suppressed.
  • FIG. 23 is a block diagram showing a state in which a control device including a display control device according to Embodiment 4 is provided in a vehicle.
  • The display control device 100c according to the fourth embodiment will be described with reference to FIG. 23.
  • In FIG. 23, the same blocks as those shown in FIG. 1 are assigned the same reference numerals, and descriptions thereof are omitted.
  • the vehicle 1 has a communication device 11.
  • the communication device 11 includes, for example, a transmitter and a receiver for Internet connection, a transmitter and a receiver for inter-vehicle communication, or a transmitter and a receiver for road-to-vehicle communication.
  • The outside environment information generation unit 24 generates information on the environment outside the vehicle 1 (hereinafter referred to as "outside environment information") using information received by the communication device 11 from a server device, another vehicle, a roadside device, or the like (none of which are shown).
  • the outside environment information indicates, for example, at least one of the weather around the vehicle 1, the temperature around the vehicle 1, the humidity around the vehicle 1, or the degree of congestion of the road around the vehicle 1.
  • the outside environment information acquisition unit 36 acquires outside environment information generated by the outside environment information generation unit 24.
  • the outside environment information acquisition unit 36 outputs the acquired outside environment information to the video sickness estimation unit 32c.
  • The video sickness estimating unit 32c uses the object information output by the object information acquiring unit 31 and the outside environment information output by the outside environment information acquiring unit 36 to estimate the presence or absence of video sickness due to the first video group if the first video group were displayed on the head-up display 2.
  • More specifically, the video sickness estimation unit 32c executes the same process as that described with reference to FIG. 7 in the first embodiment, but uses the outside environment information for setting the thresholds Rh_th, Rv_th, and N_th.
  • That is, the video sickness estimation unit 32c uses the outside environment information to determine whether the driving environment of the vehicle 1 is one that is likely to cause video sickness. When it determines that the driving environment of the vehicle 1 is likely to cause video sickness, the video sickness estimating unit 32c sets at least one of the thresholds Rh_th, Rv_th, and N_th to a lower value than otherwise (for example, 0.5 times the value).
  • As an example, the video sickness estimating unit 32c uses the outside environment information to determine whether the driving environment of the vehicle 1 is one that is likely to cause video sickness (for example, an environment in which many warnings are output due to bad weather such as heavy rain or heavy snow).
  • When it determines that this is the case, the video sickness estimating unit 32c sets at least one of the thresholds Rh_th, Rv_th, and N_th to a lower value than otherwise (for example, 0.5 times the value).
  • As another example, the video sickness estimating unit 32c uses the outside environment information to determine whether the driving environment of the vehicle 1 is one that is likely to cause video sickness (for example, an environment in which many other vehicles exist around the vehicle 1 at an intersection and the number N of objects detected by the object information generation unit 21 increases). When it determines that the driving environment of the vehicle 1 is likely to cause video sickness, the video sickness estimating unit 32c sets at least one of the thresholds Rh_th, Rv_th, and N_th to a lower value than otherwise (for example, 0.5 times the value).
  • As another example, the video sickness estimating unit 32c uses the outside environment information to determine whether the driving environment of the vehicle 1 is one that is likely to cause video sickness. When it determines that this is the case, the video sickness estimating unit 32c sets at least one of the thresholds Rh_th, Rv_th, and N_th to a lower value than otherwise (for example, 0.5 times the value).
  • As the threshold value Rh_th decreases, the horizontal width Rh′ of the display target area A2 in the case of Rh > Rh_th also decreases. As the threshold value Rv_th decreases, the vertical width Rv′ of the display target area A2 in the case of Rv > Rv_th also decreases. As the threshold value N_th decreases, the number N′ of objects corresponding to the videos included in the second video group in the case of N > N_th also decreases. As a result, the occurrence of video sickness can be suppressed even in a driving environment in which video sickness is likely to occur.
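The effect of lowering the thresholds on what is actually displayed can be sketched as follows. This is a minimal illustration, not the patent's implementation: the clipping rule Rh′ = min(Rh, Rh_th) is an assumption consistent with the statement that Rh′, Rv′, and N′ decrease along with the thresholds.

```python
# Illustrative sketch: the display target area A2 and the number of displayed
# videos shrink as the thresholds decrease. The min() clipping rule is an
# assumption; the description only states that Rh', Rv', and N' decrease
# together with Rh_th, Rv_th, and N_th.

def clip_second_video_group(rh, rv, n, rh_th, rv_th, n_th):
    """Return (Rh', Rv', N') for the second video group."""
    return min(rh, rh_th), min(rv, rv_th), min(n, n_th)
```

Under this rule, halving the thresholds in a sickness-prone environment halves the clipped width, height, and video count whenever the first video group exceeds them, so fewer and more compactly placed videos reach the driver's view.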
  • The object information acquisition unit 31, the video sickness estimation unit 32c, the display control unit 33, and the outside environment information acquisition unit 36 constitute a main part of the display control device 100c. Further, the object information generation unit 21, the outside environment information generation unit 24, the object information acquisition unit 31, the video sickness estimation unit 32c, the display control unit 33, and the outside environment information acquisition unit 36 constitute a main part of the control device 7c.
  • The hardware configuration of the main part of the control device 7c is the same as that described in the first embodiment with reference to FIG. 5. That is, the functions of the outside environment information generation unit 24, the video sickness estimation unit 32c, and the outside environment information acquisition unit 36 may be realized by the processor 41 and the memory 42, or may be realized by the processing circuit 43.
  • In step ST1, the object information acquisition unit 31 acquires the object information generated by the object information generation unit 21.
  • The object information acquisition unit 31 outputs the acquired object information to the video sickness estimation unit 32c and the display control unit 33.
  • In step ST6, the outside environment information acquisition unit 36 acquires the outside environment information generated by the outside environment information generation unit 24.
  • The outside environment information acquisition unit 36 outputs the acquired outside environment information to the video sickness estimation unit 32c.
  • In step ST2c, the video sickness estimating unit 32c uses the object information output by the object information acquiring unit 31 and the outside environment information output by the outside environment information acquiring unit 36 to execute a process of estimating the presence or absence of video sickness due to the first video group if the first video group were displayed on the head-up display 2.
  • A specific example of the estimation process in step ST2c is the same as that described in the first embodiment with reference to the flowchart of FIG. 7. However, when setting the thresholds Rh_th, Rv_th, and N_th (that is, when executing the process corresponding to step ST11 shown in FIG. 7), the video sickness estimating unit 32c uses the outside environment information as described above.
  • In step ST3, the display control unit 33 executes control to display the second video group on the head-up display 2 using the object information output by the object information acquisition unit 31.
  • At this time, the display control unit 33 varies the display mode of the second video group according to the result of the estimation process by the video sickness estimation unit 32c.
  • A specific example of the control in step ST3 is the same as that described in the first embodiment with reference to the flowchart of FIG. 8.
  • The outside environment information may be any information regarding the environment outside the vehicle 1 that can be received by the communication device 11; the content of the outside environment information is not limited to the above specific example.
  • The video sickness estimation unit 32c may be any unit that estimates the presence or absence of video sickness due to the first video group using the object information and the outside environment information (more specifically, any unit that uses the outside environment information for setting the thresholds Rh_th, Rv_th, and N_th); the content of the estimation process by the video sickness estimation unit 32c is not limited to the above specific example.
  • The display control device 100c can adopt various modifications similar to those described in the first embodiment, that is, various modifications similar to those of the display control device 100.
  • The main part of the display control system 200c may be configured by the object information acquisition unit 31, the video sickness estimation unit 32c, the display control unit 33, and the outside environment information acquisition unit 36.
  • The system configuration of the main part of the display control system 200c is the same as that described in the first embodiment with reference to FIG. 16. That is, any function of the display control system 200c may be realized by cooperation of any two or more of the in-vehicle information device 51, the portable information terminal 52, and the server device 53.
  • The display control device 100c may include a driver information acquisition unit 34 similar to that of the display control device 100a of the second embodiment.
  • In that case, the video sickness estimation unit 32c may estimate the presence or absence of video sickness due to the first video group using the object information output by the object information acquisition unit 31, the driver information output by the driver information acquisition unit 34, and the outside environment information output by the outside environment information acquisition unit 36. More specifically, the video sickness estimating unit 32c may use the driver information and the outside environment information for setting the thresholds Rh_th, Rv_th, and N_th. The same applies to the display control system 200c.
  • The display control device 100c may include a vehicle information acquisition unit 35 similar to that of the display control device 100b of the third embodiment.
  • In that case, the video sickness estimation unit 32c may estimate the presence or absence of video sickness due to the first video group using the object information output by the object information acquisition unit 31, the vehicle information output by the vehicle information acquisition unit 35, and the outside environment information output by the outside environment information acquisition unit 36. More specifically, the video sickness estimating unit 32c may use the vehicle information and the outside environment information for setting the thresholds Rh_th, Rv_th, and N_th. The same applies to the display control system 200c.
  • The display control device 100c may include both a driver information acquisition unit 34 similar to that of the display control device 100a of the second embodiment and a vehicle information acquisition unit 35 similar to that of the display control device 100b of the third embodiment.
  • In that case, the video sickness estimation unit 32c may estimate the presence or absence of video sickness due to the first video group using the object information output by the object information acquisition unit 31, the driver information output by the driver information acquisition unit 34, the vehicle information output by the vehicle information acquisition unit 35, and the outside environment information output by the outside environment information acquisition unit 36. More specifically, the video sickness estimating unit 32c may use the driver information, the vehicle information, and the outside environment information for setting the thresholds Rh_th, Rv_th, and N_th. The same applies to the display control system 200c.
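The combined estimation using driver, vehicle, and outside environment information can be sketched as follows. All field names here are illustrative assumptions; the sketch only shows the idea that any one information source flagging a sickness-prone situation triggers the same threshold reduction.

```python
# Hypothetical sketch of the combined threshold setting in the video
# sickness estimation unit 32c, using driver, vehicle, and outside
# environment information together. All field names are assumptions.

REDUCTION_FACTOR = 0.5  # "for example, 0.5 times the value"

def set_thresholds_32c(base, driver_info=None, vehicle_info=None, env_info=None):
    """base: {"Rh_th": ..., "Rv_th": ..., "N_th": ...}; returns a new dict
    with lowered thresholds when any source flags a sickness-prone situation."""
    prone = False
    if driver_info is not None:
        prone |= driver_info.get("fatigued", False)          # assumed driver flag
    if vehicle_info is not None:
        prone |= vehicle_info.get("continuous_curves", False)
    if env_info is not None:
        prone |= (env_info.get("bad_weather", False)
                  or env_info.get("crowded_intersection", False))
    if prone:
        return {k: v * REDUCTION_FACTOR for k, v in base.items()}
    return dict(base)
```

Each information source is optional, mirroring the variants above in which the display control device has only some of the acquisition units 34, 35, and 36.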
  • As described above, the display control device 100c includes the outside environment information acquisition unit 36 that acquires outside environment information including information related to the environment outside the vehicle 1, and the video sickness estimation unit 32c estimates the presence or absence of video sickness using the object information and the outside environment information.
  • Thereby, the driving environment of the vehicle 1 can be taken into account, for example, when estimating the presence or absence of video sickness.
  • As a result, the occurrence of video sickness can be suppressed.
  • The present invention allows free combination of the embodiments, modification of any component of each embodiment, and omission of any component of each embodiment.
  • The display control device of the present invention can be used, for example, to control a windshield-type AR-HUD.

Abstract

Provided is a display control device (100) comprising: an object information acquisition part (31) for acquiring object information including information relating to one or more objects present around a vehicle (1); a visually-induced motion sickness inference part (32) for, using the object information, executing a process for inferring whether visually-induced motion sickness has been induced by a first video group including one or more videos corresponding to the one or more objects; and a display control part (33) for, using the object information, executing control for causing a heads-up display (2) to display a second video group including at least a portion of the one or more videos. The display control part (33) varies the display state of the second video group according to a result of the inference process performed by the visually-induced motion sickness inference part (32).

Description

Display control device, display control system, and display control method
 The present invention relates to a display control device, a display control system, and a display control method.
 Conventionally, head-up displays (hereinafter sometimes referred to as "HUD") for moving objects have been developed. In addition, a technology has been developed that determines the likelihood of occurrence of so-called "video sickness" by analyzing the content of the video displayed on the HUD (see, for example, Patent Document 1). Symptoms of video sickness include, for example, nausea, dizziness, headache, and eye strain.
Patent Document 1: JP 2006-40056 A
 In recent years, HUDs compatible with so-called AR (Augmented Reality) display (hereinafter sometimes referred to as "AR-HUD") have been developed. In an AR-HUD, videos corresponding to various things present around the moving object (hereinafter referred to as "objects") are displayed so as to be superimposed at positions near the respective objects in the user's field of view.
 Usually, the positional relationship between the moving object and each object changes as the moving object moves. As a result, the number of objects included in the user's field of view, the range occupied by each object in the user's field of view, and the like change. Consequently, the number of videos displayed on the AR-HUD, the size of the area in which videos are displayed on the AR-HUD, and the like also change.
 Here, the greater the number of videos displayed on the AR-HUD, the more likely video sickness is to occur. Likewise, the larger the area in which videos are displayed on the AR-HUD, the more likely video sickness is to occur. That is, in an AR-HUD, the likelihood of video sickness varies depending on the number of objects included in the user's field of view, the range occupied by the objects in the user's field of view, and the like.
 On the other hand, the image display device described in Patent Document 1 determines the likelihood of video sickness using an analysis result of the content of the video. That is, the image display device described in Patent Document 1 does not use information on objects when determining the likelihood of video sickness. For this reason, if the image display device described in Patent Document 1 were used for an AR-HUD, the accuracy of determining the likelihood of video sickness would be low.
 The present invention has been made to solve the above problems, and an object of the present invention is to estimate with high accuracy the presence or absence of the occurrence of video sickness in an AR-HUD.
 A display control device according to the present invention includes: an object information acquisition unit that acquires object information including information on one or more objects present around a moving object; a video sickness estimation unit that executes, using the object information, a process of estimating the presence or absence of video sickness due to a first video group including one or more videos corresponding to the one or more objects; and a display control unit that executes, using the object information, control to display on a head-up display a second video group including at least some of the one or more videos, wherein the display control unit varies the display mode of the second video group according to the result of the estimation process by the video sickness estimation unit.
 According to the present invention, configured as described above, the presence or absence of the occurrence of video sickness in an AR-HUD can be estimated with high accuracy.
FIG. 1 is a block diagram showing a state in which a control device including a display control device according to Embodiment 1 of the present invention is provided in a vehicle.
FIG. 2 is an explanatory view showing a state in which a control device including a display control device according to Embodiment 1 of the present invention is provided in a vehicle.
FIG. 3A is an explanatory view showing an example of an object. FIG. 3B is an explanatory view showing an example of a displayable area, an object area, and an object group area.
FIG. 4 is an explanatory view showing an example of object information.
FIG. 5A is a block diagram showing a hardware configuration of a control device including the display control device according to Embodiment 1 of the present invention. FIG. 5B is a block diagram showing another hardware configuration of a control device including the display control device according to Embodiment 1 of the present invention.
FIG. 6 is a flowchart showing the operation of the display control device according to Embodiment 1 of the present invention.
FIG. 7 is a flowchart showing the detailed operation of the video sickness estimation unit of the display control device according to Embodiment 1 of the present invention.
FIG. 8 is a flowchart showing the detailed operation of the display control unit of the display control device according to Embodiment 1 of the present invention.
FIG. 9A is an explanatory view showing an example of a displayable area, a display target area, and a first image group. FIG. 9B is an explanatory view showing an example of a state in which the second image group is displayed on the head-up display.
FIG. 10A is an explanatory view showing another example of the displayable area, the display target area, and the first image group. FIG. 10B is an explanatory view showing another example of the state in which the second image group is displayed on the head-up display.
FIG. 11A is an explanatory view showing another example of the displayable area, the display target area, and the first image group. FIG. 11B is an explanatory view showing another example of the state in which the second image group is displayed on the head-up display.
FIG. 12A is an explanatory view showing another example of the displayable area, the display target area, and the first image group. FIG. 12B is an explanatory view showing another example of the state in which the second image group is displayed on the head-up display.
FIG. 13A is an explanatory view showing another example of the object. FIG. 13B is an explanatory view showing another example of the displayable area and the first image group. FIG. 13C is an explanatory view showing another example of the state in which the second image group is displayed on the head-up display.
FIG. 14A is an explanatory view showing another example of the object. FIG. 14B is an explanatory view showing another example of the displayable area and the first image group. FIG. 14C is an explanatory view showing another example of the state in which the second image group is displayed on the head-up display.
FIG. 15 is a block diagram showing a main part of a display control system according to Embodiment 1 of the present invention.
FIG. 16A is a block diagram showing a system configuration of the display control system according to Embodiment 1 of the present invention. FIG. 16B is a block diagram showing another system configuration of the display control system according to Embodiment 1 of the present invention. FIG. 16C is a block diagram showing another system configuration of the display control system according to Embodiment 1 of the present invention. FIG. 16D is a block diagram showing another system configuration of the display control system according to Embodiment 1 of the present invention.
FIG. 17 is a block diagram showing a state in which a control device including a display control device according to Embodiment 2 of the present invention is provided in a vehicle.
FIG. 18 is a flowchart showing the operation of the display control device according to Embodiment 2 of the present invention.
FIG. 19 is a block diagram showing a main part of a display control system according to Embodiment 2 of the present invention.
FIG. 20 is a block diagram showing a state in which a control device including a display control device according to Embodiment 3 of the present invention is provided in a vehicle.
FIG. 21 is a flowchart showing the operation of the display control device according to Embodiment 3 of the present invention.
FIG. 22 is a block diagram showing a main part of a display control system according to Embodiment 3 of the present invention.
FIG. 23 is a block diagram showing a state in which a control device including a display control device according to Embodiment 4 of the present invention is provided in a vehicle.
FIG. 24 is a flowchart showing the operation of the display control device according to Embodiment 4 of the present invention.
FIG. 25 is a block diagram showing a main part of a display control system according to Embodiment 4 of the present invention.
 Hereinafter, in order to describe the present invention in more detail, modes for carrying out the present invention will be described with reference to the accompanying drawings.
Embodiment 1.
 FIG. 1 is a block diagram showing a state in which a control device including the display control device according to the first embodiment is provided in a vehicle. FIG. 2 is an explanatory view showing a state in which a control device including the display control device according to the first embodiment is provided in a vehicle. The display control device 100 according to the first embodiment will be described with reference to FIG. 1 and FIG. 2.
As shown in FIG. 1, the vehicle 1 has a head-up display 2. The head-up display 2 is configured as, for example, a windshield-type AR-HUD.
That is, a head-up display device 3 is provided on the dashboard of the vehicle 1. The head-up display device 3 has a display that displays a video for AR display and an optical system that projects visible light corresponding to the video displayed on the display onto a windshield 4. The display is configured of, for example, a display such as an LCD (Liquid Crystal Display) or an OLED (Organic Electro-Luminescence Display), or a projector such as a DLP (registered trademark) or a laser projector. The optical system is configured of, for example, any two or more of a concave mirror, a convex mirror, and a plane mirror.
As shown in FIG. 2, the visible light reflected by the windshield 4 enters an eye E of the driver of the vehicle 1 (hereinafter sometimes simply referred to as the "driver"), whereby a virtual image VI corresponding to the video for AR display is visually recognized by the driver. In the figure, OP1 indicates the optical path of the visible light corresponding to the video for AR display, OP2 indicates the optical path of the visible light as perceived by the driver, and P indicates the position of the virtual image VI as perceived by the driver.
The video displayed on the head-up display 2 includes, for example, a video emphasizing a white line on the road on which the vehicle 1 is traveling. This video is displayed, for example, so as to be superimposed at a position near the white line in the driver's field of view. This video presents the presence of the white line to the driver of the vehicle 1 to prevent the vehicle 1 from deviating from the lane in which it is traveling.
Further, the video displayed on the head-up display 2 includes, for example, a video emphasizing an obstacle present in the lane in which the vehicle 1 is traveling. This video is displayed, for example, so as to be superimposed at a position near the obstacle in the driver's field of view. This video urges the driver of the vehicle 1 to pay attention to the obstacle, to prevent the vehicle 1 from colliding with it.
Further, the video displayed on the head-up display 2 includes, for example, a video showing a travel route being guided by a navigation system (not shown) for the vehicle 1. This video includes, for example, an arrow-shaped video indicating the direction in which the vehicle 1 should travel and a video guiding the lane in which the vehicle 1 should travel.
Further, the video displayed on the head-up display 2 includes, for example, a video emphasizing another vehicle traveling ahead of the vehicle 1 (hereinafter referred to as the "preceding vehicle"). More specifically, when the vehicle 1 is traveling under so-called "adaptive cruise control," it includes a video emphasizing the preceding vehicle to be followed by the vehicle 1. This video makes the driver of the vehicle 1 recognize the preceding vehicle to be followed.
Further, the video displayed on the head-up display 2 includes, for example, a video indicating the inter-vehicle distance between the vehicle 1 and the preceding vehicle. This video is displayed, for example, so as to be superimposed on the lane between the vehicle 1 and the preceding vehicle in the driver's field of view. This video makes the driver of the vehicle 1 recognize the inter-vehicle distance between the vehicle 1 and the preceding vehicle.
Further, the video displayed on the head-up display 2 includes, for example, a video indicating information on a building included in the scenery ahead of the vehicle 1.
The vehicle 1 has a camera 5 for capturing images outside the vehicle and a sensor 6 for obstacle detection. The camera 5 is configured of, for example, a so-called "front camera." The sensor 6 is configured of, for example, at least one of a millimeter-wave radar sensor, a lidar sensor, or an ultrasonic sensor provided at the front end of the vehicle 1.
The object information generation unit 21 executes image recognition processing on images captured by the camera 5. Using the result of the image recognition processing and the detection values of the sensor 6, the object information generation unit 21 executes processing to detect the various things present around the vehicle 1 (more specifically, ahead of the vehicle 1), that is, objects. Using the result of the image recognition processing and the detection values of the sensor 6, the object information generation unit 21 generates information on the detected objects (hereinafter referred to as "object information"). Hereinafter, the number of objects detected by the object information generation unit 21 may be denoted "N." That is, N is an integer of 1 or more.
The objects to be detected by the object information generation unit 21 are, for example, obstacles, white lines on roads, signs, traffic lights, and buildings. The obstacles include, for example, other vehicles and pedestrians. The white lines on roads include, for example, center lines and boundary lines between the roadway and the roadside. The signs include, for example, guide signs and traffic signs. The buildings include, for example, gas stations.
Here, specific examples of the object information will be described with reference to FIGS. 3 and 4.
FIG. 3A shows an example of the scenery ahead of the vehicle 1 viewed through the windshield 4. As shown in FIG. 3A, it is assumed that seven objects O1 to O7 have been detected by the object information generation unit 21. The objects O1 to O3 correspond to other vehicles, the object O4 corresponds to a guide sign, the object O5 corresponds to a traffic sign, and the objects O6 and O7 correspond to the center line.
Hereinafter, the area A1 of the head-up display 2 (more specifically, of the display in the head-up display device 3) in which video can be displayed is referred to as the "displayable area." Of the displayable area A1, an area corresponding to an individual object detected by the object information generation unit 21 is referred to as an "object area." Of the displayable area A1, the area including all the object areas is referred to as the "object group area." The shape of the displayable area A1 corresponds to the shape of the windshield 4 and is, for example, rectangular. The shape of each object area is, for example, rectangular. The shape of the object group area is, for example, rectangular.
FIG. 3B shows an example of the displayable area A1, seven object areas OA1 to OA7 corresponding to the seven objects O1 to O7, a horizontal width Rh of the object group area, and a vertical width Rv of the object group area. In the example shown in FIG. 3B, the horizontal width Rh of the object group area is a value corresponding to the distance between the left end of the object area OA1 and the right end of the object area OA5. The vertical width Rv of the object group area is a value corresponding to the distance between the upper end of the object area OA4 and the lower end of the object area OA6.
FIG. 4 shows an example of the object information generated by the object information generation unit 21 in the state shown in FIG. 3. As shown in FIG. 4, the object information includes the number of objects detected by the object information generation unit 21 (that is, N), an identifier assigned to each object ("ID" in the figure), the kind of thing each object corresponds to, the position of each object, and video data representing the video corresponding to each object ("option" in the figure).
In the figure, "p1, p2, p3, p4" indicate the position coordinates of the four corners of each of the seven object areas OA1 to OA7 in the displayable area A1. "z001" to "z007" indicate the distances between the vehicle 1 and the seven objects O1 to O7, respectively. These distances are, for example, calculated by the object information generation unit 21 using the detection values of the sensor 6. Alternatively, these distances are, for example, measured by the object information generation unit 21 from images captured by the camera 5 using a method such as the TOF (Time of Flight) method or triangulation (so-called "stereo vision").
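The object information of FIG. 4 can be sketched as a simple data structure. The following is a hypothetical illustration in Python; all field names, type choices, and example values are assumptions for illustration, not the patent's notation:

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    obj_id: int      # identifier assigned to the object ("ID" in FIG. 4)
    kind: str        # kind of thing the object corresponds to (vehicle, sign, ...)
    corners: tuple   # (p1, p2, p3, p4): corner coordinates of the object area in A1
    distance: float  # distance from the vehicle to the object (e.g. "z001")

# Object information for a scene like FIG. 3: the count N plus per-object records.
object_info = {
    "N": 7,
    "objects": [
        DetectedObject(1, "other vehicle", ((10, 40), (60, 40), (10, 80), (60, 80)), 35.0),
        DetectedObject(4, "guide sign", ((70, 5), (120, 5), (70, 30), (120, 30)), 80.0),
        # ... one record per detected object
    ],
}
```

The video data ("option" in FIG. 4) could be carried as an additional field per record; it is omitted here for brevity.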
The object information acquisition unit 31 acquires the object information generated by the object information generation unit 21 and outputs the acquired object information to the video sickness estimation unit 32 and the display control unit 33.
Hereinafter, the video group including the videos corresponding to the N objects detected by the object information generation unit 21, that is, the video group subject to the estimation processing by the video sickness estimation unit 32, is referred to as the "first video group." Using the object information output from the object information acquisition unit 31, the video sickness estimation unit 32 executes processing to estimate whether video sickness would occur due to the first video group if the first video group were displayed on the head-up display 2. A specific example of the estimation processing by the video sickness estimation unit 32 will be described later with reference to the flowchart of FIG. 7.
Using the object information output from the object information acquisition unit 31, the display control unit 33 executes control to cause the head-up display 2 to display a video group (hereinafter referred to as the "second video group") including at least some of the videos corresponding to the N objects detected by the object information generation unit 21. Hereinafter, the number of objects corresponding to the videos included in the second video group may be denoted "N'." That is, N' is an integer of 1 or more and N or less. In addition, the area A2 of the displayable area A1 in which the second video group is to be displayed is referred to as the "display target area."
Here, the display control unit 33 varies the display mode of the second video group according to the result of the estimation processing by the video sickness estimation unit 32. Specifically, for example, the number N' of objects corresponding to the videos included in the second video group, the horizontal width Rh' of the display target area A2, and the vertical width Rv' of the display target area A2 vary according to the result of the estimation processing by the video sickness estimation unit 32. A specific example of the control by the display control unit 33 will be described later with reference to the flowchart of FIG. 8.
The object information acquisition unit 31, the video sickness estimation unit 32, and the display control unit 33 constitute the main part of the display control device 100. The object information generation unit 21, the object information acquisition unit 31, the video sickness estimation unit 32, and the display control unit 33 constitute the main part of the control device 7.
Next, the hardware configuration of the main part of the control device 7 will be described with reference to FIG. 5.
As shown in FIG. 5A, the control device 7 is configured of a computer, and the computer has a processor 41 and a memory 42. The memory 42 stores programs for causing the computer to function as the object information generation unit 21, the object information acquisition unit 31, the video sickness estimation unit 32, and the display control unit 33. The processor 41 reads out and executes the programs stored in the memory 42, whereby the functions of the object information generation unit 21, the object information acquisition unit 31, the video sickness estimation unit 32, and the display control unit 33 are realized.
The processor 41 uses, for example, a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), a microprocessor, a microcontroller, or a DSP (Digital Signal Processor). The memory 42 uses, for example, a semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, an EPROM (Erasable Programmable Read Only Memory), or an EEPROM (Electrically Erasable Programmable Read-Only Memory), a magnetic disk, an optical disk, or a magneto-optical disk.
Alternatively, as shown in FIG. 5B, the functions of the object information generation unit 21, the object information acquisition unit 31, the video sickness estimation unit 32, and the display control unit 33 may be realized by a dedicated processing circuit 43. The processing circuit 43 uses, for example, an ASIC (Application Specific Integrated Circuit), a PLD (Programmable Logic Device), an FPGA (Field-Programmable Gate Array), an SoC (System-on-a-Chip), or a system LSI (Large-Scale Integration).
Alternatively, some of the functions of the object information generation unit 21, the object information acquisition unit 31, the video sickness estimation unit 32, and the display control unit 33 may be realized by the processor 41 and the memory 42, and the remaining functions may be realized by the processing circuit 43.
Next, the operation of the display control device 100 will be described with reference to the flowchart of FIG. 6.
First, in step ST1, the object information acquisition unit 31 acquires the object information generated by the object information generation unit 21 and outputs the acquired object information to the video sickness estimation unit 32 and the display control unit 33.
Next, in step ST2, using the object information output from the object information acquisition unit 31, the video sickness estimation unit 32 executes processing to estimate whether video sickness would occur due to the first video group if the first video group were displayed on the head-up display 2. A specific example of the estimation processing in step ST2 will be described later with reference to the flowchart of FIG. 7.
Next, in step ST3, using the object information output from the object information acquisition unit 31, the display control unit 33 executes control to display the second video group on the head-up display 2. At this time, the display control unit 33 varies the display mode of the second video group according to the result of the estimation processing by the video sickness estimation unit 32. A specific example of the control in step ST3 will be described later with reference to the flowchart of FIG. 8.
Next, a specific example of the estimation processing by the video sickness estimation unit 32 in step ST2 will be described with reference to the flowchart of FIG. 7.
First, in step ST11, the video sickness estimation unit 32 sets a threshold Rh_th to be compared against the horizontal width Rh of the object group area, a threshold Rv_th to be compared against the vertical width Rv of the object group area, and a threshold N_th to be compared against the number N of objects.
The threshold Rh_th is set to, for example, the same value as the horizontal width of the displayable area A1, one half of that width, or one quarter of that width. The threshold Rv_th is set to, for example, the same value as the vertical width of the displayable area A1, one half of that width, or one quarter of that width. The threshold N_th is set to, for example, an integer of 1 or more.
Next, in step ST12, using the object information output from the object information acquisition unit 31, the video sickness estimation unit 32 calculates the horizontal width Rh of the object group area and the vertical width Rv of the object group area.
Next, in step ST13, the video sickness estimation unit 32 compares the Rh calculated in step ST12 with the Rh_th set in step ST11. If Rh is equal to or less than Rh_th ("NO" in step ST13), the video sickness estimation unit 32 sets Rh' to the same value as Rh in step ST14. If, on the other hand, Rh exceeds Rh_th ("YES" in step ST13), the video sickness estimation unit 32 sets Rh' to the same value as Rh_th in step ST15.
Next, in step ST16, the video sickness estimation unit 32 compares the Rv calculated in step ST12 with the Rv_th set in step ST11. If Rv is equal to or less than Rv_th ("NO" in step ST16), the video sickness estimation unit 32 sets Rv' to the same value as Rv in step ST17. If, on the other hand, Rv exceeds Rv_th ("YES" in step ST16), the video sickness estimation unit 32 sets Rv' to the same value as Rv_th in step ST18.
Next, in step ST19, the video sickness estimation unit 32 compares the N indicated by the object information with the N_th set in step ST11. If N is equal to or less than N_th ("NO" in step ST19), the video sickness estimation unit 32 sets N' to the same value as N in step ST20. If, on the other hand, N exceeds N_th ("YES" in step ST19), the video sickness estimation unit 32 sets N' to the same value as N_th in step ST21.
Next, in step ST22, the video sickness estimation unit 32 outputs to the display control unit 33 the value of Rh' set in step ST14 or ST15, the value of Rv' set in step ST17 or ST18, and the value of N' set in step ST20 or ST21.
That is, when Rh' = Rh, Rv' = Rv, and N' = N, the values output by the video sickness estimation unit 32 indicate that the result of the estimation processing is that video sickness due to the first video group would not occur. Otherwise, the values output by the video sickness estimation unit 32 indicate that the result of the estimation processing is that video sickness due to the first video group would occur.
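The estimation of steps ST11 to ST22 amounts to clamping Rh, Rv, and N against their thresholds. A minimal sketch in Python, under the assumption that object areas are given as axis-aligned rectangles (function and variable names are illustrative, not the patent's notation):

```python
def estimate_video_sickness(object_areas, n, rh_th, rv_th, n_th):
    """Return (rh_p, rv_p, n_p, sickness) per steps ST12 to ST22.

    object_areas: list of (left, top, right, bottom) object areas in A1.
    n: number of detected objects N; rh_th, rv_th, n_th: the thresholds of ST11.
    """
    # ST12: the object group area is the bounding box of all object areas.
    rh = max(a[2] for a in object_areas) - min(a[0] for a in object_areas)
    rv = max(a[3] for a in object_areas) - min(a[1] for a in object_areas)
    # ST13-ST21: each quantity is kept as-is if within its threshold,
    # otherwise replaced by the threshold (i.e. clamped).
    rh_p = min(rh, rh_th)
    rv_p = min(rv, rv_th)
    n_p = min(n, n_th)
    # Video sickness is estimated to occur if any value had to be clamped.
    sickness = (rh_p, rv_p, n_p) != (rh, rv, n)
    return rh_p, rv_p, n_p, sickness
```

For example, with object areas spanning 120 units horizontally and a threshold Rh_th of 100, Rh' is clamped to 100 and the result indicates that video sickness would occur.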
Next, a specific example of the control by the display control unit 33 in step ST3 will be described with reference to the flowchart of FIG. 8.
First, in step ST31, the display control unit 33 acquires the values of Rh', Rv', and N' output from the video sickness estimation unit 32.
Next, in step ST32, the display control unit 33 sets a display target area A2 with a size of Rh' × Rv'.
At this time, the display control unit 33 may set the position of the display target area A2 within the displayable area A1. Specifically, for example, the display control unit 33 may set the position coordinates of the center C of the display target area A2 within the displayable area A1. Alternatively, the position of the display target area A2 within the displayable area A1 may be set by an operation input to an operation input device (not shown). That is, the position of the display target area A2 within the displayable area A1 can be set to an arbitrary position.
Next, in step ST33, using the object information output from the object information acquisition unit 31, the display control unit 33 selects N' objects out of the N objects. That is, when N' = N, all of the N objects detected by the object information generation unit 21 are selected. When N' < N, on the other hand, some of the N objects detected by the object information generation unit 21 are selected.
Here, when N' < N, the display control unit 33 may select the N' objects based on priorities according to the kinds of things the individual objects correspond to. These priorities may be preset in the display control unit 33 or may be set by an operation input to an operation input device (not shown).
Specifically, for example, the priorities are set so as to decrease in the order of objects corresponding to other vehicles, objects corresponding to the center line, objects corresponding to road signs, and objects corresponding to guide signs. That is, objects corresponding to other vehicles have the highest priority, and objects corresponding to guide signs have the lowest priority.
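The priority-based selection of step ST33 can be sketched as a sort followed by truncation. The priority table and all names below are illustrative assumptions, not the patent's notation:

```python
# Lower rank value = higher priority: other vehicles first, guide signs last.
PRIORITY = {"other vehicle": 0, "center line": 1, "road sign": 2, "guide sign": 3}

def select_objects(objects, n_p):
    """Select N' objects from the N detected ones, preferring higher-priority kinds.

    objects: list of (obj_id, kind) tuples; n_p: the number N' to keep.
    Unknown kinds sort after all known ones.
    """
    ranked = sorted(objects, key=lambda o: PRIORITY.get(o[1], len(PRIORITY)))
    return ranked[:n_p]
```

For example, with three detected objects and N' = 2, the guide sign is dropped first: `select_objects([(1, "other vehicle"), (4, "guide sign"), (6, "center line")], 2)` keeps the other vehicle and the center line. Python's sort is stable, so objects of equal priority keep their detection order.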
Next, in step ST34, using the video data in the object information output from the object information acquisition unit 31, the display control unit 33 generates the videos corresponding to the objects selected in step ST33.
Specifically, for example, the display control unit 33 generates N' videos corresponding one-to-one to the N' objects. Each of the N' videos is, for example, a dotted rectangular frame. The size of each of the N' videos is, for example, equal to the size of the object area of the corresponding one of the N' objects. Each of the N' videos is a video for marking the corresponding one of the N' objects.
Next, in step ST35, the display control unit 33 executes control to display on the head-up display 2 those of the videos generated in step ST34 (that is, the second video group) that lie within the display target area A2 set in step ST32.
Here, information indicating an estimated value of the position of the driver's eyes in real space is stored in advance in the display control unit 33. This estimated value is, for example, estimated based on the position of the driver's seat in the cabin of the vehicle 1. Using the object information output from the object information acquisition unit 31, the display control unit 33 calculates the position of each of the N' objects in real space. Based on the positional relationship between the driver's eyes and each of the N' objects, the display control unit 33 sets the position of each of the N' videos in the displayable area A1 so that, in the driver's field of view, each of the N' objects is marked by the corresponding one of the N' videos.
By the control of step ST35, the second video group is displayed on the head-up display 2. That is, when the display target area A2 includes the entire object group area and N' = N, the second video group displayed on the head-up display 2 is identical to the first video group. Otherwise, the second video group displayed on the head-up display 2 differs from the first video group.
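The filtering in step ST35 — displaying only those marking videos that fall within the display target area A2 — can be sketched as a simple rectangle-containment test (a hypothetical illustration; names and the coordinate convention are assumptions):

```python
def videos_in_target_area(videos, a2):
    """Keep only the videos whose rectangle lies entirely inside the display target area A2.

    videos: list of (left, top, right, bottom) rectangles in displayable-area coordinates.
    a2: the display target area as a (left, top, right, bottom) rectangle.
    """
    left, top, right, bottom = a2
    return [
        v for v in videos
        if v[0] >= left and v[1] >= top and v[2] <= right and v[3] <= bottom
    ]
```

When A2 covers the whole object group area, every video passes the test and the second video group equals the first, matching the case described above.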
Next, an example of the first video group and of the second video group displayed on the head-up display 2 will be described with reference to FIGS. 3 and 9.
As shown in FIG. 3A, it is assumed that seven objects O1 to O7 have been detected by the object information generation unit 21. FIG. 9A shows an example of the first video group in this case. As shown in FIG. 9A, the first video group includes seven videos V1 to V7 corresponding to the seven objects O1 to O7.
Each of the seven videos V1 to V7 is a dotted rectangular frame. The size of each of the seven videos V1 to V7 is equal to the size of the object area of the corresponding one of the seven objects O1 to O7. Each of the seven videos V1 to V7 is a video for marking the corresponding one of the seven objects O1 to O7.
 これに対して、閾値Rh_thが表示可能領域A1の横幅と同一の値に設定されて、かつ、閾値Rv_thが表示可能領域A1の縦幅と同一の値に設定されて、かつ、N_th=8に設定されたものとする(ステップST11)。この場合、Rh≦Rh_thであるためRh’=Rhに設定されて(ステップST14)、かつ、Rv≦Rv_thであるためRv’=Rvに設定されて(ステップST17)、かつ、N≦N_thであるためN’=N=7に設定される(ステップST20)。 In contrast, threshold Rh_th is set to the same value as the horizontal width of displayable area A1, and threshold Rv_th is set to the same value as the vertical width of displayable area A1, and N_th = 8. It is assumed that it has been set (step ST11). In this case, since Rh ≦ Rh_th, Rh ′ = Rh is set (step ST14), and since Rv ≦ Rv_th, Rv ′ = Rv is set (step ST17), and N ≦ N_th. Therefore, N '= N = 7 is set (step ST20).
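The threshold comparisons of steps ST14 to ST21 amount to clamping the object-group width Rh, height Rv, and object count N to their respective thresholds. A minimal sketch of that logic follows; the function and variable names are assumed for illustration and do not appear in the description.

```python
def clamp_display_parameters(rh, rv, n, rh_th, rv_th, n_th):
    """Derive Rh', Rv', and N' from the object group area and object count.
    Each value passes through unchanged when at or below its threshold
    (steps ST14, ST17, ST20) and is clamped to the threshold otherwise
    (steps ST15, ST18, ST21)."""
    rh_p = rh if rh <= rh_th else rh_th  # Rh'
    rv_p = rv if rv <= rv_th else rv_th  # Rv'
    n_p = n if n <= n_th else n_th       # N'
    return rh_p, rv_p, n_p
```

With thresholds equal to the full displayable area and N_th = 8, the seven detected objects pass through unchanged, matching the FIG. 9 example.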
 FIG. 9A shows an example of the display target area A2 in this case. In step ST32, the display target area A2 shown in FIG. 9A, for example, is set. Furthermore, since N' = N, all seven objects O1 to O7 are selected in step ST33. As a result, all seven images V1 to V7 are generated in step ST34.
 Through the control of step ST35, as shown in FIG. 9B, those of the seven images V1 to V7 (that is, the second image group) that lie within the display target area A2 are displayed on the head-up display 2. In this case, the second image group displayed on the head-up display 2 is identical to the first image group shown in FIG. 9A.
 Next, another example of the first image group and of the second image group displayed on the head-up display 2 will be described with reference to FIGS. 3 and 10.
 As shown in FIG. 3A, assume that seven objects O1 to O7 have been detected by the object information generation unit 21. FIG. 10A shows an example of the first image group in this case. As shown in FIG. 10A, the first image group includes seven images V1 to V7 corresponding to the seven objects O1 to O7.
 Meanwhile, assume that the threshold Rh_th is set to half the horizontal width of the displayable area A1, the threshold Rv_th is set to half the vertical width of the displayable area A1, and N_th = 8 (step ST11). In this case, since Rh > Rh_th, Rh' = Rh_th is set (step ST15); since Rv > Rv_th, Rv' = Rv_th is set (step ST18); and since N ≤ N_th, N' = N = 7 is set (step ST20).
 FIG. 10A shows an example of the display target area A2 in this case. In step ST32, the display target area A2 shown in FIG. 10A, for example, is set. Furthermore, since N' = N, all seven objects O1 to O7 are selected in step ST33. As a result, all seven images V1 to V7 are generated in step ST34.
 Through the control of step ST35, as shown in FIG. 10B, those of the seven images V1 to V7 (that is, the second image group) that lie within the display target area A2 are displayed on the head-up display 2. In this case, the second image group displayed on the head-up display 2 differs from the first image group shown in FIG. 10A.
 Next, another example of the first image group and of the second image group displayed on the head-up display 2 will be described with reference to FIGS. 3 and 11.
 As shown in FIG. 3A, assume that seven objects O1 to O7 have been detected by the object information generation unit 21. FIG. 11A shows an example of the first image group in this case. As shown in FIG. 11A, the first image group includes seven images V1 to V7 corresponding to the seven objects O1 to O7.
 Meanwhile, assume that the threshold Rh_th is set to half the horizontal width of the displayable area A1, the threshold Rv_th is set to half the vertical width of the displayable area A1, and N_th = 3 (step ST11). In this case, since Rh > Rh_th, Rh' = Rh_th is set (step ST15); since Rv > Rv_th, Rv' = Rv_th is set (step ST18); and since N > N_th, N' = N_th = 3 is set (step ST21).
 FIG. 11A shows an example of the display target area A2 in this case. In step ST32, the display target area A2 shown in FIG. 11A, for example, is set. The display target area A2 shown in FIG. 11A differs from that shown in FIG. 10A in the position coordinates of its central portion C.
 Furthermore, since N' < N, in step ST33 the display control unit 33 selects three of the seven objects O1 to O7. For example, based on the priorities, the display control unit 33 selects the three objects O1 to O3 corresponding to other vehicles. As a result, in step ST34, only three of the seven images, V1 to V3, are generated.
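The priority-based selection in step ST33 can be sketched as picking the N' highest-priority objects. A minimal illustration follows, assuming each object is paired with a numeric priority where larger means more important; this representation is an assumption, since the description does not fix one.

```python
def select_objects(objects, n_prime):
    """Select the N' highest-priority objects when N' < N (step ST33).
    `objects` is a list of (object_id, priority) pairs; higher priority wins.
    Python's sort is stable, so ties keep their original order."""
    ranked = sorted(objects, key=lambda o: o[1], reverse=True)
    return [obj_id for obj_id, _ in ranked[:n_prime]]
```

With the other-vehicle objects given the highest priorities, the selection reproduces the FIG. 11 example in which O1 to O3 are kept and the center-line objects are dropped.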
 Through the control of step ST35, as shown in FIG. 11B, those of the three images V1 to V3 (that is, the second image group) that lie within the display target area A2 are displayed on the head-up display 2. That is, the marking images V6 and V7 for the center line are excluded from display on the head-up display 2 even though they would lie within the display target area A2. In this case, the second image group displayed on the head-up display 2 differs from the first image group shown in FIG. 11A.
 Next, another example of the first image group and of the second image group displayed on the head-up display 2 will be described with reference to FIGS. 3 and 12.
 As shown in FIG. 3A, assume that seven objects O1 to O7 have been detected by the object information generation unit 21. FIG. 12A shows an example of the first image group in this case. As shown in FIG. 12A, the first image group includes seven images V1 to V7 corresponding to the seven objects O1 to O7.
 Meanwhile, assume that the threshold Rh_th is set to half the horizontal width of the displayable area A1, the threshold Rv_th is set to a quarter of the vertical width of the displayable area A1, and N_th = 8 (step ST11). In this case, since Rh > Rh_th, Rh' = Rh_th is set (step ST15); since Rv > Rv_th, Rv' = Rv_th is set (step ST18); and since N ≤ N_th, N' = N = 7 is set (step ST20).
 FIG. 12A shows an example of the display target area A2 in this case. In step ST32, the display target area A2 shown in FIG. 12A, for example, is set. Furthermore, since N' = N, all seven objects O1 to O7 are selected in step ST33. As a result, all seven images V1 to V7 are generated in step ST34.
 Through the control of step ST35, as shown in FIG. 12B, those of the seven images V1 to V7 (that is, the second image group) that lie within the display target area A2 are displayed on the head-up display 2. In this case, the second image group displayed on the head-up display 2 differs from the first image group shown in FIG. 12A.
 As described above, the number N' of objects corresponding to the images included in the second image group varies according to the result of the estimation processing by the video sickness estimation unit 32. More specifically, when the number N of objects detected by the object information generation unit 21 is large (N > N_th), the estimation result indicates that the first image group would cause video sickness, and the number N' of objects corresponding to the images included in the second image group is reduced (N' = N_th). This suppresses the occurrence of video sickness in the driver of the vehicle 1 and also reduces the driver's eye strain.
 Likewise, the size (Rh' × Rv') of the display target area A2 varies according to the result of the estimation processing by the video sickness estimation unit 32. More specifically, when the horizontal width Rh of the object group area is large (Rh > Rh_th), the estimation result indicates that the first image group would cause video sickness, and the horizontal width Rh' of the display target area A2 is reduced (Rh' = Rh_th). Similarly, when the vertical width Rv of the object group area is large (Rv > Rv_th), the estimation result indicates that the first image group would cause video sickness, and the vertical width Rv' of the display target area A2 is reduced (Rv' = Rv_th). This suppresses the occurrence of video sickness in the driver of the vehicle 1 and also reduces the driver's eye strain.
 Note that the object information may indicate the motion (more specifically, the movement speed and movement amount) of the image corresponding to each object. The video sickness estimation unit 32 may estimate the presence or absence of video sickness due to the first image group by comparing the movement speed and movement amount of these images with predetermined thresholds, and may set the values of Rh', Rv', and N' according to the result of that comparison.
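Such a motion-based variant of the estimation could be sketched as follows. The per-image thresholds and the any-image-exceeds rule are assumptions for illustration; the description leaves the exact form of the comparison unspecified.

```python
def motion_causes_sickness(speeds, amounts, speed_th, amount_th):
    """Estimate whether the first image group may cause video sickness by
    comparing each image's movement speed and movement amount with
    predetermined thresholds. Here, sickness is predicted as soon as any
    image exceeds either threshold (an assumed decision rule)."""
    return any(s > speed_th or a > amount_th
               for s, a in zip(speeds, amounts))
```

A True result would then drive the same kind of reduction of Rh', Rv', and N' as the size- and count-based comparisons above.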
 When the video sickness estimation unit 32 sets a threshold in step ST11, it may store the set threshold and, in subsequent executions of step ST11, set a new threshold based on the stored one. The thresholds in the video sickness estimation unit 32 may also be set to different values for each individual driver.
 The thresholds in the video sickness estimation unit 32 may also be set by an operation input to an operation input device (not shown). This operation may be performed by the driver of the vehicle 1 or by a passenger of the vehicle 1.
 The priorities used when N' < N are not limited to the specific example above. The priority of each object may be set in any manner; for example, objects of higher importance to the driver of the vehicle 1 may be assigned higher priorities.
 The marking image for an individual object is not limited to a dotted-line rectangular frame. It may be any image that can be drawn by existing CG (Computer Graphics): for example, a linear image along the edge of the object, an image in which the area enclosed by such a line is filled with a predetermined color, or an image using alpha blending. It may also be an image including text, an icon, or the like.
 Moreover, the image corresponding to an individual object is not limited to a marking image. Other examples of images corresponding to individual objects will be described below with reference to FIGS. 13 and 14.
 For example, as shown in FIG. 13A, assume that eight objects O1 to O8 have been detected by the object information generation unit 21. The object O8 corresponds to the lane to which a navigation system (not shown) for the vehicle 1 guides the driver, that is, the lane in which the vehicle 1 should travel. As shown in FIG. 13B, the first image group includes eight images V1 to V8 corresponding one-to-one to the eight objects O1 to O8. The image V8 corresponding to the object O8 may be an image that guides the driver along the lane (more specifically, triangular images arranged along the lane). FIG. 13B shows an example of the second image group displayed on the head-up display 2 in this case.
 Alternatively, for example, as shown in FIG. 14A, assume that eight objects O1 to O2, O4 to O7, and O9 to O10 have been detected by the object information generation unit 21. The object O9 corresponds to another vehicle. The object O10 corresponds to a building in front of the vehicle 1, more specifically a gas station. As shown in FIG. 14B, the first image group includes eight images V1 to V2, V4 to V7, and V9 to V10 corresponding one-to-one to these eight objects. The image V10 corresponding to the object O10 may be an image including text indicating information about the building (more specifically, text indicating that the building is a gas station). FIG. 14B shows an example of the second image group displayed on the head-up display 2 in this case.
 The display mode changed by the display control unit 33 may be anything related to how easily the AR-HUD induces video sickness, and is not limited to the number N' of objects corresponding to the images included in the second image group and the size (Rh' × Rv') of the display target area A2. For example, according to the result of the estimation processing by the video sickness estimation unit 32, the display control unit 33 may vary the position coordinates of the central portion C of the display target area A2, or may vary the motion (more specifically, the movement speed and movement amount) of the individual images included in the second image group. That is, according to the result of the estimation processing by the video sickness estimation unit 32, the display control unit 33 may vary at least one of: the number N' of objects corresponding to the images included in the second image group, the motion (more specifically, the movement speed and movement amount) of the individual images included in the second image group, the size (Rh' × Rv') of the display target area A2, or the position coordinates of the central portion C of the display target area A2.
 The display control unit 33 may also measure the amount of solar radiation in the cabin of the vehicle 1 using an illuminance sensor (not shown) provided in the vehicle 1, and may adjust the brightness, contrast ratio, and the like of the images displayed on the head-up display 2 (that is, the images included in the second image group) according to the measured amount of solar radiation. This improves the visibility of the images displayed on the head-up display 2.
 Furthermore, as shown in FIG. 15, the main part of a display control system 200 may be constituted by the object information acquisition unit 31, the video sickness estimation unit 32, and the display control unit 33. In this case, each of the object information acquisition unit 31, the video sickness estimation unit 32, and the display control unit 33 may be provided in any of: an in-vehicle information device 51 mountable in the vehicle 1; a portable information terminal 52, such as a smartphone, that can be brought into the vehicle 1; or a server device 53 capable of communicating with the in-vehicle information device 51 or the portable information terminal 52.
 Each of FIGS. 16A to 16D shows a system configuration of the main part of the display control system 200. As shown in FIGS. 16A to 16D, the functions of the display control system 200 may be realized by the cooperation of any two or more of the in-vehicle information device 51, the portable information terminal 52, and the server device 53.
 The head-up display 2 is not limited to the windshield type and may be of the combiner type. However, a combiner usually occupies a smaller portion of the driver's field of view than a windshield, so a combiner-type AR-HUD is less likely to cause video sickness than a windshield-type AR-HUD. The display control device 100 and the display control system 200 are therefore particularly suitable for controlling a windshield-type AR-HUD.
 The head-up display 2 may also be provided in a moving object other than the vehicle 1. The head-up display 2 may be provided in any moving object, such as an automobile, a railway vehicle, an aircraft, or a ship.
 As described above, the display control device 100 of the first embodiment includes: the object information acquisition unit 31 that acquires object information including information on one or more objects present around a moving object (the vehicle 1); the video sickness estimation unit 32 that uses the object information to execute processing for estimating whether a first image group including one or more images corresponding to the one or more objects would cause video sickness; and the display control unit 33 that uses the object information to execute control for displaying, on the head-up display 2, a second image group including at least some of the one or more images, the display control unit 33 varying the display mode of the second image group according to the result of the estimation processing by the video sickness estimation unit 32. Using the object information makes it possible to estimate with high accuracy whether the AR-HUD would cause video sickness, and varying the display mode of the second image group makes it possible to suppress the occurrence of video sickness while continuing the AR display by the head-up display 2.
 Furthermore, according to the result of the estimation processing by the video sickness estimation unit 32, the display control unit 33 varies at least one of: the number of images included in the second image group, the motion of the individual images included in the second image group, or the area of the head-up display 2 in which the second image group is displayed (the display target area A2). This suppresses the occurrence of video sickness.
 The display control system 200 of the first embodiment likewise includes: the object information acquisition unit 31 that acquires object information including information on one or more objects present around a moving object (the vehicle 1); the video sickness estimation unit 32 that uses the object information to execute processing for estimating whether a first image group including one or more images corresponding to the one or more objects would cause video sickness; and the display control unit 33 that uses the object information to execute control for displaying, on the head-up display 2, a second image group including at least some of the one or more images, the display control unit 33 varying the display mode of the second image group according to the result of the estimation processing by the video sickness estimation unit 32. This provides the same effects as those of the display control device 100 described above.
 The display control method of the first embodiment includes: step ST1, in which the object information acquisition unit 31 acquires object information including information on one or more objects present around a moving object (the vehicle 1); step ST2, in which the video sickness estimation unit 32 uses the object information to execute processing for estimating whether a first image group including one or more images corresponding to the one or more objects would cause video sickness; and step ST3, in which the display control unit 33 uses the object information to execute control for displaying, on the head-up display 2, a second image group including at least some of the one or more images, the display control unit 33 varying the display mode of the second image group according to the result of the estimation processing by the video sickness estimation unit 32. This provides the same effects as those of the display control device 100 described above.
Second Embodiment

 FIG. 17 is a block diagram showing a state in which a control device including a display control device according to the second embodiment is provided in a vehicle. The display control device 100a of the second embodiment will be described with reference to FIG. 17. In FIG. 17, blocks identical to those shown in FIG. 1 are denoted by the same reference signs, and their description is omitted.
 The vehicle 1 has a camera 8 for imaging the vehicle cabin. The camera 8 is constituted by, for example, a visible-light camera or an infrared camera. The camera 8 is disposed in the front part of the cabin of the vehicle 1 and captures a range including the face of the driver seated in the driver's seat.
 The vehicle 1 also has a sensor 9, constituted by a contact or non-contact biometric sensor. When the sensor 9 is a contact biometric sensor, it is provided on the steering wheel, the driver's seat, or the like of the vehicle 1. When the sensor 9 is a non-contact biometric sensor, it is disposed in the cabin of the vehicle 1.
 The driver information generation unit 22 executes image recognition processing on the images captured by the camera 8, and generates information on the driver of the vehicle 1 (hereinafter referred to as "driver information") using at least one of the result of the image recognition processing or the detection values of the sensor 9.
 The driver information generated using the result of the image recognition processing indicates, for example, at least one of: the position of the driver's head, the position of the driver's face, the driver's viewpoint movement amount, the driver's complexion, the driver's eye-opening degree, or the number of times the driver blinks. The positions of the driver's head and face are measured by, for example, a time-of-flight (TOF) method or a triangulation method.
 The driver information generated using the detection values of the sensor 9 is biometric information indicating, for example, at least one of: the driver's heart rate, pulse, blood pressure, body temperature, amount of perspiration, or brain waves.
 運転者情報取得部34は、運転者情報生成部22により生成された運転者情報を取得するものである。運転者情報取得部34は、当該取得された運転者情報を映像酔い推定部32aに出力するものである。 The driver information acquisition unit 34 acquires the driver information generated by the driver information generation unit 22. The driver information acquisition unit 34 outputs the acquired driver information to the video sickness estimation unit 32a.
 The video sickness estimation unit 32a estimates, using the object information output by the object information acquisition unit 31 and the driver information output by the driver information acquisition unit 34, whether the first video group would cause video sickness if it were displayed on the head-up display 2. More specifically, the video sickness estimation unit 32a executes processing similar to that described with reference to FIG. 7 in Embodiment 1, and uses the driver information when setting the thresholds Rh_th, Rv_th and N_th.
 That is, the video sickness estimation unit 32a uses the driver information to determine whether the driver of the vehicle 1 is in poor physical condition. When it is determined that the driver is in poor physical condition, the video sickness estimation unit 32a sets at least one of the thresholds Rh_th, Rv_th and N_th to a lower value (for example, 0.5 times the value) than when it is determined that the driver is not in poor physical condition.
 For example, assume that the driver information indicates the driver's heart rate. A range of values (hereinafter referred to as the "reference range"; for example, a range of 50 to 90 bpm) including the driver's heart rate at normal times (for example, 65 bpm) is set in the video sickness estimation unit 32a. The video sickness estimation unit 32a determines whether the heart rate indicated by the driver information is within the reference range.
 When the heart rate indicated by the driver information is within the reference range, the video sickness estimation unit 32a sets the threshold Rv_th to one half of the vertical width of the displayable area A1. On the other hand, when the heart rate indicated by the driver information is outside the reference range, the video sickness estimation unit 32a sets the threshold Rv_th to one quarter of the vertical width of the displayable area A1.
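 As a rough sketch of the heart-rate-based setting described above, the following Python function returns Rv_th from the heart rate indicated by the driver information and the vertical width of the displayable area A1. The function name and signature are hypothetical and not part of the specification; only the reference range (50 to 90 bpm) and the 1/2 and 1/4 fractions follow the example in the text.

```python
def set_rv_threshold(heart_rate_bpm, a1_height, ref_range=(50, 90)):
    """Return the threshold Rv_th for the displayable area A1.

    Within the reference range: half the vertical width of A1.
    Outside the reference range (poor physical condition suspected):
    one quarter of the vertical width of A1, the stricter setting.
    """
    lo, hi = ref_range
    if lo <= heart_rate_bpm <= hi:
        return a1_height / 2  # normal condition
    return a1_height / 4      # poor condition suspected
```

 With the example values from the text, a driver at 65 bpm and an A1 vertical width of 100 units would give Rv_th = 50, while a heart rate of 110 bpm would tighten it to 25.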
 As the threshold Rh_th decreases, the horizontal width Rh' of the display target area A2 when Rh > Rh_th also decreases. As the threshold Rv_th decreases, the vertical width Rv' of the display target area A2 when Rv > Rv_th also decreases. As the threshold N_th decreases, the number N' of objects corresponding to the videos included in the second video group when N > N_th also decreases. As a result, the occurrence of video sickness can be suppressed even in a state where video sickness is likely to occur due to the driver's poor physical condition.
 The object information acquisition unit 31, the video sickness estimation unit 32a, the display control unit 33 and the driver information acquisition unit 34 constitute the main part of the display control device 100a. In addition, the object information generation unit 21, the driver information generation unit 22, the object information acquisition unit 31, the video sickness estimation unit 32a, the display control unit 33 and the driver information acquisition unit 34 constitute the main part of the control device 7a.
 Since the hardware configuration of the main part of the control device 7a is similar to that described with reference to FIG. 5 in Embodiment 1, illustration and description thereof are omitted. That is, the functions of the driver information generation unit 22, the video sickness estimation unit 32a and the driver information acquisition unit 34 may be implemented by the processor 41 and the memory 42, or may be implemented by the processing circuit 43.
 Next, the operation of the display control device 100a will be described with reference to the flowchart of FIG. 18.
 First, in step ST1, the object information acquisition unit 31 acquires the object information generated by the object information generation unit 21. The object information acquisition unit 31 outputs the acquired object information to the video sickness estimation unit 32a and the display control unit 33.
 Next, in step ST4, the driver information acquisition unit 34 acquires the driver information generated by the driver information generation unit 22. The driver information acquisition unit 34 outputs the acquired driver information to the video sickness estimation unit 32a.
 Next, in step ST2a, the video sickness estimation unit 32a executes, using the object information output by the object information acquisition unit 31 and the driver information output by the driver information acquisition unit 34, a process of estimating whether the first video group would cause video sickness if it were displayed on the head-up display 2.
 Since a specific example of the estimation process in step ST2a is similar to that described with reference to the flowchart of FIG. 7 in Embodiment 1, illustration and description thereof are omitted. However, when setting the thresholds Rh_th, Rv_th and N_th (that is, when executing the process corresponding to step ST11 shown in FIG. 7), the video sickness estimation unit 32a uses the driver information as described above.
 Next, in step ST3, the display control unit 33 executes control to display the second video group on the head-up display 2 using the object information output by the object information acquisition unit 31. At this time, the display control unit 33 varies the display mode of the second video group according to the result of the estimation process by the video sickness estimation unit 32a. Since a specific example of the control in step ST3 is similar to that described with reference to the flowchart of FIG. 8 in Embodiment 1, illustration and description thereof are omitted.
 Note that, when the heart rate indicated by the driver information is outside the reference range, the video sickness estimation unit 32a may set the threshold Rh_th to a lower value (for example, 0.5 times the value), instead of or in addition to setting the threshold Rv_th to a lower value (for example, 0.5 times the value), compared with when the heart rate indicated by the driver information is within the reference range. The multiplier at this time is not limited to 0.5 and may be any value.
 In addition, when the heart rate indicated by the driver information is outside the reference range, the video sickness estimation unit 32a may set the threshold N_th to a lower value than when the heart rate indicated by the driver information is within the reference range.
 In addition, when the driver information indicates the position of the driver's head or the position of the driver's eyes, the display control unit 33 may calculate the position of the driver's eyes in real space using this information. When setting the position of each of the N' videos in the displayable area A1, the display control unit 33 may use the calculated position of the driver's eyes instead of the position of the driver's eyes indicated by information stored in advance (that is, information indicating an estimated value of the position of the driver's eyes in real space).
 In addition, when the driver information indicates the driver's blood pressure, the video sickness estimation unit 32a may determine whether the blood pressure indicated by the driver information is within a reference range, as in the above example relating to the heart rate. When the blood pressure indicated by the driver information is outside the reference range, the video sickness estimation unit 32a may set at least one of the thresholds Rh_th, Rv_th and N_th to a lower value than when the blood pressure indicated by the driver information is within the reference range.
 In addition, when the driver information indicates the driver's pulse, the video sickness estimation unit 32a may determine whether the pulse indicated by the driver information is within a reference range, as in the above example relating to the heart rate. When the pulse indicated by the driver information is outside the reference range, the video sickness estimation unit 32a may set at least one of the thresholds Rh_th, Rv_th and N_th to a lower value than when the pulse indicated by the driver information is within the reference range.
 In addition, when the driver information indicates the driver's complexion, the video sickness estimation unit 32a may compare the complexion indicated by the driver information with the driver's complexion at normal times. When the complexion indicated by the driver information is flushed or pale compared with the driver's complexion at normal times, the video sickness estimation unit 32a may set at least one of the thresholds Rh_th, Rv_th and N_th to a lower value than otherwise.
 In addition, when the driver information indicates the driver's viewpoint movement amount, the video sickness estimation unit 32a may compare the viewpoint movement amount indicated by the driver information with the driver's viewpoint movement amount at normal times, as in the above example relating to the complexion.
 In addition, when the driver information indicates the driver's degree of eye opening, the video sickness estimation unit 32a may compare the degree of eye opening indicated by the driver information with the driver's degree of eye opening at normal times, as in the above example relating to the complexion.
 The content of the driver information is not limited to the above specific examples. The driver information may include information of any content as long as it can be generated using at least one of the result of image recognition processing on the image captured by the camera 8 or the value detected by the sensor 9, and makes it possible to determine whether the driver of the vehicle 1 is in poor physical condition.
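 Since the driver information may carry any subset of the signals discussed above (heart rate, blood pressure, pulse and so on), one possible way to fold them into a single poor-condition judgment is to compare each available signal against its own reference range, as sketched below. All names and reference ranges here are illustrative assumptions, not values taken from the specification.

```python
# Illustrative reference ranges for signals that may appear in the
# driver information; any signal may be absent from a given sample.
REFERENCE_RANGES = {
    "heart_rate_bpm": (50, 90),
    "systolic_bp_mmhg": (90, 140),
    "pulse_bpm": (50, 90),
}

def driver_in_poor_condition(driver_info):
    """Return True when any available signal is outside its range."""
    for key, (lo, hi) in REFERENCE_RANGES.items():
        value = driver_info.get(key)
        if value is not None and not (lo <= value <= hi):
            return True
    return False
```

 A judgment of poor condition would then trigger the threshold reduction (for example, 0.5 times) described above.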
 In addition, the video sickness estimation unit 32a only needs to use the driver information in the process of estimating whether the first video group will cause video sickness, and the manner of using the driver information in that process is not limited to the above specific examples.
 In addition, the display control device 100a can adopt various modifications similar to those described in Embodiment 1, that is, various modifications similar to those of the display control device 100.
 In addition, as shown in FIG. 19, the main part of a display control system 200a may be constituted by the object information acquisition unit 31, the video sickness estimation unit 32a, the display control unit 33 and the driver information acquisition unit 34. Since the system configuration of the main part of the display control system 200a is similar to that described with reference to FIG. 16 in Embodiment 1, illustration and description thereof are omitted. That is, it suffices that the functions of the display control system 200a are realized by cooperation of any two or more of the in-vehicle information device 51, the portable information terminal 52 and the server device 53.
 As described above, the display control device 100a of Embodiment 2 includes the driver information acquisition unit 34 that acquires driver information including information on the driver of the vehicle 1, and the video sickness estimation unit 32a estimates whether video sickness will occur using the object information and the driver information. Thus, when estimating whether video sickness will occur, the driver's physical condition, for example, can be taken into consideration. As a result, the accuracy of estimating whether video sickness will occur can be further improved. In addition, the occurrence of video sickness can be suppressed even in a state where video sickness is likely to occur due to the driver's poor physical condition.
Embodiment 3.
 FIG. 20 is a block diagram showing a state in which a control device including a display control device according to Embodiment 3 is provided in a vehicle. The display control device 100b according to Embodiment 3 will be described with reference to FIG. 20. In FIG. 20, blocks similar to those shown in FIG. 1 are denoted by the same reference numerals, and description thereof is omitted.
 The vehicle information generation unit 23 is connected to the in-vehicle network 10. Via the in-vehicle network 10, the vehicle information generation unit 23 acquires information output by various systems connected to the in-vehicle network 10 (for example, a car navigation system) or by various ECUs (Electronic Control Units) connected to the in-vehicle network 10. Using the acquired information, the vehicle information generation unit 23 generates information on the vehicle 1 (hereinafter referred to as "vehicle information").
 The vehicle information includes, for example, at least one of: information indicating the position of the vehicle 1, information indicating the traveling direction of the vehicle 1, information indicating the traveling speed of the vehicle 1, information indicating the acceleration of the vehicle 1, information indicating the vibration frequency of the vehicle 1, information indicating the time, information on various warnings, information on various control signals (such as a wiper on/off signal, a light-on signal, a parking signal and a reverse signal), or navigation information (such as congestion information, information indicating facility names, information indicating guidance to be provided and information indicating the route to be guided).
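 The vehicle information enumerated above could be represented, for instance, as a simple record in which every field is optional; the field names below are illustrative only and do not appear in the specification.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class VehicleInfo:
    """Illustrative container for the vehicle information of vehicle 1."""
    position: Optional[tuple] = None     # e.g. (latitude, longitude)
    heading_deg: Optional[float] = None  # traveling direction
    speed_kmh: Optional[float] = None    # traveling speed
    acceleration: Optional[float] = None
    vibration_hz: Optional[float] = None
    time: Optional[str] = None
    warnings: list = field(default_factory=list)
    control_signals: dict = field(default_factory=dict)  # wiper, lights, ...
    navigation: dict = field(default_factory=dict)       # congestion, route, ...
```

 A generator on the in-vehicle network would populate only the fields it can observe, leaving the rest at their defaults.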
 The vehicle information acquisition unit 35 acquires the vehicle information generated by the vehicle information generation unit 23. The vehicle information acquisition unit 35 outputs the acquired vehicle information to the video sickness estimation unit 32b.
 The video sickness estimation unit 32b estimates, using the object information output by the object information acquisition unit 31 and the vehicle information output by the vehicle information acquisition unit 35, whether the first video group would cause video sickness if it were displayed on the head-up display 2. More specifically, the video sickness estimation unit 32b executes processing similar to that described with reference to FIG. 7 in Embodiment 1, and uses the vehicle information when setting the thresholds Rh_th, Rv_th and N_th.
 That is, the video sickness estimation unit 32b uses the vehicle information to determine whether the driving environment of the vehicle 1 is one in which video sickness is likely to occur. When it is determined that the driving environment of the vehicle 1 is one in which video sickness is likely to occur, the video sickness estimation unit 32b sets at least one of the thresholds Rh_th, Rv_th and N_th to a lower value (for example, 0.5 times the value) than otherwise.
 For example, when information indicating the traveling speed of the vehicle 1, information indicating the acceleration of the vehicle 1 and information indicating the vibration frequency of the vehicle 1 are included in the vehicle information, the video sickness estimation unit 32b uses the vehicle information to determine whether the driving environment on the road on which the vehicle 1 is traveling is one in which video sickness is likely to occur (for example, an environment in which the vehicle is traveling on a bumpy road, traveling through a sharp curve, accelerating rapidly, or decelerating rapidly). When it is determined that the driving environment on that road is one in which video sickness is likely to occur, the video sickness estimation unit 32b sets at least one of the thresholds Rh_th, Rv_th and N_th to a lower value (for example, 0.5 times the value) than otherwise.
 Alternatively, for example, when information indicating the traveling speed of the vehicle 1 and information indicating the acceleration of the vehicle 1 are included in the vehicle information, the video sickness estimation unit 32b uses the vehicle information to calculate the amount of change in the traveling speed of the vehicle 1 within a predetermined time. When the calculated amount of change exceeds a predetermined amount (for example, ±30 kilometers per hour), the video sickness estimation unit 32b sets at least one of the thresholds Rh_th, Rv_th and N_th to a lower value (for example, 0.5 times the value) than when the calculated amount of change is equal to or less than the predetermined amount.
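 The speed-change check just described can be sketched as follows: given a short history of (time, speed) samples, the environment is flagged as sickness-prone when the spread of traveling speeds within the window exceeds the predetermined amount (±30 km/h in the example). The function name, the sample format and the use of the max-minus-min spread as the "amount of change" are assumptions for illustration.

```python
def speed_change_exceeds(samples, window_s=5.0, limit_kmh=30.0):
    """samples: list of (time_s, speed_kmh) tuples, oldest first.

    Returns True when the change in traveling speed within the last
    window_s seconds exceeds limit_kmh in magnitude.
    """
    if not samples:
        return False
    t_now, _ = samples[-1]
    recent = [v for t, v in samples if t_now - t <= window_s]
    return max(recent) - min(recent) > limit_kmh
```

 A True result would lower at least one of the thresholds Rh_th, Rv_th and N_th as described in the text.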
 Alternatively, for example, when navigation information is included in the vehicle information, the video sickness estimation unit 32b uses the vehicle information to determine whether the driving environment on the road on which the vehicle 1 is scheduled to travel is one in which video sickness is likely to occur (for example, an environment with a succession of curves on a mountain road, or an environment with repeated acceleration and deceleration on a slope). When it is determined that the driving environment on that road is one in which video sickness is likely to occur, the video sickness estimation unit 32b sets at least one of the thresholds Rh_th, Rv_th and N_th to a lower value (for example, 0.5 times the value) than otherwise.
 Alternatively, for example, when navigation information is included in the vehicle information, the video sickness estimation unit 32b uses the vehicle information to determine whether the curvature of a curve on which the vehicle 1 is scheduled to travel exceeds a predetermined value. When the curvature exceeds the predetermined value, the video sickness estimation unit 32b sets at least one of the thresholds Rh_th, Rv_th and N_th to a lower value (for example, 0.5 times the value) than when the curvature is equal to or less than the predetermined value.
 Alternatively, for example, when information on the light-on signal, information indicating the time and information on various warnings are included in the vehicle information, the video sickness estimation unit 32b uses the vehicle information to determine whether the driving environment of the vehicle 1 is one in which video sickness is likely to occur (for example, an environment in which a plurality of warnings are output during nighttime driving). When it is determined that the driving environment of the vehicle 1 is one in which video sickness is likely to occur, the video sickness estimation unit 32b sets at least one of the thresholds Rh_th, Rv_th and N_th to a lower value (for example, 0.5 times the value) than otherwise.
 As the threshold Rh_th decreases, the horizontal width Rh' of the display target area A2 when Rh > Rh_th also decreases. As the threshold Rv_th decreases, the vertical width Rv' of the display target area A2 when Rv > Rv_th also decreases. As the threshold N_th decreases, the number N' of objects corresponding to the videos included in the second video group when N > N_th also decreases. As a result, the occurrence of video sickness can be suppressed even in a driving environment in which video sickness is likely to occur.
 The object information acquisition unit 31, the video sickness estimation unit 32b, the display control unit 33 and the vehicle information acquisition unit 35 constitute the main part of the display control device 100b. In addition, the object information generation unit 21, the vehicle information generation unit 23, the object information acquisition unit 31, the video sickness estimation unit 32b, the display control unit 33 and the vehicle information acquisition unit 35 constitute the main part of the control device 7b.
 Since the hardware configuration of the main part of the control device 7b is similar to that described with reference to FIG. 5 in Embodiment 1, illustration and description thereof are omitted. That is, the functions of the vehicle information generation unit 23, the video sickness estimation unit 32b and the vehicle information acquisition unit 35 may be implemented by the processor 41 and the memory 42, or may be implemented by the processing circuit 43.
 Next, the operation of the display control device 100b will be described with reference to the flowchart of FIG. 21.
 First, in step ST1, the object information acquisition unit 31 acquires the object information generated by the object information generation unit 21. The object information acquisition unit 31 outputs the acquired object information to the video sickness estimation unit 32b and the display control unit 33.
 Next, in step ST5, the vehicle information acquisition unit 35 acquires the vehicle information generated by the vehicle information generation unit 23. The vehicle information acquisition unit 35 outputs the acquired vehicle information to the video sickness estimation unit 32b.
 Next, in step ST2b, the video sickness estimation unit 32b executes, using the object information output by the object information acquisition unit 31 and the vehicle information output by the vehicle information acquisition unit 35, a process of estimating whether the first video group would cause video sickness if it were displayed on the head-up display 2.
 Since a specific example of the estimation process in step ST2b is similar to that described with reference to the flowchart of FIG. 7 in Embodiment 1, illustration and description thereof are omitted. However, when setting the thresholds Rh_th, Rv_th and N_th (that is, when executing the process corresponding to step ST11 shown in FIG. 7), the video sickness estimation unit 32b uses the vehicle information as described above.
 Next, in step ST3, the display control unit 33 executes control to display the second video group on the head-up display 2 using the object information output by the object information acquisition unit 31. At this time, the display control unit 33 varies the display mode of the second video group according to the result of the estimation process by the video sickness estimation unit 32b. Since a specific example of the control in step ST3 is similar to that described with reference to the flowchart of FIG. 8 in Embodiment 1, illustration and description thereof are omitted.
 Note that the vehicle information only needs to be information on the vehicle 1 that can be acquired via the in-vehicle network 10, and the content of the vehicle information is not limited to the above specific examples. In addition, the video sickness estimation unit 32b only needs to estimate whether the first video group will cause video sickness using the object information and the vehicle information (more specifically, to use the vehicle information when setting the thresholds Rh_th, Rv_th and N_th), and the content of the estimation process by the video sickness estimation unit 32b is not limited to the above specific examples.
 In addition, the display control device 100b can adopt various modifications similar to those described in Embodiment 1, that is, various modifications similar to those of the display control device 100.
 In addition, as shown in FIG. 22, the main part of a display control system 200b may be constituted by the object information acquisition unit 31, the video sickness estimation unit 32b, the display control unit 33 and the vehicle information acquisition unit 35. Since the system configuration of the main part of the display control system 200b is similar to that described with reference to FIG. 16 in Embodiment 1, illustration and description thereof are omitted. That is, it suffices that the functions of the display control system 200b are realized by cooperation of any two or more of the in-vehicle information device 51, the portable information terminal 52 and the server device 53.
 In addition, the display control device 100b may include a driver information acquisition unit 34 similar to that of the display control device 100a of Embodiment 2. In this case, the video sickness estimation unit 32b may estimate whether the first video group will cause video sickness using the object information output by the object information acquisition unit 31, the driver information output by the driver information acquisition unit 34 and the vehicle information output by the vehicle information acquisition unit 35. More specifically, the video sickness estimation unit 32b may use both the driver information and the vehicle information when setting the thresholds Rh_th, Rv_th and N_th. The same applies to the display control system 200b.
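 When both the driver information and the vehicle information are used to set the thresholds, one simple composition consistent with the 0.5-times examples of Embodiments 2 and 3 is to apply each reduction factor independently, as in the sketch below. The helper name and the multiplicative composition are assumptions for illustration; as the text notes, the multiplier itself is not limited to 0.5.

```python
def combined_threshold(base_th, driver_poor, env_sickness_prone,
                       factor=0.5):
    """Scale a base threshold (Rh_th, Rv_th or N_th).

    Each condition that holds multiplies the threshold by `factor`,
    so a poor-condition driver in a sickness-prone driving
    environment gets the strictest (lowest) threshold.
    """
    th = base_th
    if driver_poor:
        th *= factor
    if env_sickness_prone:
        th *= factor
    return th
```

 For example, a base threshold of 100 would become 50 for a driver in poor condition, and 25 if the driving environment is also prone to video sickness.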
 As described above, the display control device 100b of Embodiment 3 includes the vehicle information acquisition unit 35 that acquires vehicle information including information on the vehicle 1, and the video sickness estimation unit 32b estimates whether video sickness will occur using the object information and the vehicle information. Thus, when estimating whether video sickness will occur, the driving environment of the vehicle 1, for example, can be taken into consideration. As a result, the accuracy of estimating whether video sickness will occur can be further improved. In addition, the occurrence of video sickness can be suppressed even in a driving environment in which video sickness is likely to occur.
Fourth Embodiment

 FIG. 23 is a block diagram showing a state in which a control device including the display control device according to the fourth embodiment is provided in a vehicle. The display control device 100c of the fourth embodiment will be described with reference to FIG. 23. In FIG. 23, blocks that are the same as those shown in FIG. 1 are assigned the same reference numerals, and their descriptions are omitted.
 The vehicle 1 has a communication device 11. The communication device 11 includes, for example, a transmitter and a receiver for Internet connection, a transmitter and a receiver for vehicle-to-vehicle communication, or a transmitter and a receiver for road-to-vehicle communication.
 The outside environment information generation unit 24 generates information on the environment outside the vehicle 1 (hereinafter referred to as "outside environment information") using information that the communication device 11 has received from a server device, another vehicle, a roadside unit, or the like (none of which are shown). The outside environment information indicates, for example, at least one of the weather around the vehicle 1, the temperature around the vehicle 1, the humidity around the vehicle 1, and the degree of road congestion around the vehicle 1.
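The kinds of information listed above can be sketched as a simple data record. This is an illustrative assumption only: the embodiment specifies the categories of information (weather, temperature, humidity, road congestion) but not any field names or encodings, so every identifier below is hypothetical.

```python
from dataclasses import dataclass


@dataclass
class OutsideEnvironmentInfo:
    """Environment outside vehicle 1, built from data received by communication
    device 11. All field names and value encodings are hypothetical; the
    description only lists the categories of information."""
    weather: str          # e.g. "clear", "heavy_rain", "heavy_snow"
    temperature_c: float  # air temperature around the vehicle
    humidity_pct: float   # relative humidity around the vehicle
    congestion: float     # degree of road congestion, 0.0 (empty) .. 1.0 (jammed)

    def is_bad_weather(self) -> bool:
        # Heavy rain or heavy snow is given later in the text as an example of
        # a driving environment in which video sickness is likely to occur.
        return self.weather in ("heavy_rain", "heavy_snow")


# Example: environment information for a heavy-rain situation.
info = OutsideEnvironmentInfo("heavy_rain", 12.0, 95.0, 0.7)
print(info.is_bad_weather())  # True
```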
 The outside environment information acquisition unit 36 acquires the outside environment information generated by the outside environment information generation unit 24, and outputs the acquired outside environment information to the video sickness estimation unit 32c.
 The video sickness estimation unit 32c uses the object information output by the object information acquisition unit 31 and the outside environment information output by the outside environment information acquisition unit 36 to estimate whether video sickness would occur due to the first video group if the first video group were displayed on the head-up display 2. More specifically, the video sickness estimation unit 32c executes the same processing as that described with reference to FIG. 7 in the first embodiment, and uses the outside environment information for setting the thresholds Rh_th, Rv_th, and N_th.
 That is, the video sickness estimation unit 32c uses the outside environment information to determine whether the driving environment of the vehicle 1 is one in which video sickness is likely to occur. When the driving environment of the vehicle 1 is determined to be such an environment, the video sickness estimation unit 32c sets at least one of the thresholds Rh_th, Rv_th, and N_th to a lower value (for example, 0.5 times the value) than it otherwise would.
 For example, when the outside environment information includes information indicating the weather around the vehicle 1, the video sickness estimation unit 32c uses the outside environment information to determine whether the driving environment of the vehicle 1 is one in which video sickness is likely to occur (for example, an environment in which many warnings are output due to bad weather such as heavy rain or heavy snow). When the driving environment of the vehicle 1 is determined to be such an environment, the video sickness estimation unit 32c sets at least one of the thresholds Rh_th, Rv_th, and N_th to a lower value (for example, 0.5 times the value) than it otherwise would.
 Alternatively, for example, when the outside environment information includes information indicating the degree of road congestion around the vehicle 1, the video sickness estimation unit 32c uses the outside environment information to determine whether the driving environment of the vehicle 1 is one in which video sickness is likely to occur (for example, an environment in which many other vehicles are present around the vehicle 1 at an intersection, so that the number N of objects detected by the object information generation unit 21 becomes large). When the driving environment of the vehicle 1 is determined to be such an environment, the video sickness estimation unit 32c sets at least one of the thresholds Rh_th, Rv_th, and N_th to a lower value (for example, 0.5 times the value) than it otherwise would.
 Alternatively, for example, when the outside environment information includes information indicating the weather around the vehicle 1, the video sickness estimation unit 32c uses the outside environment information to determine whether the driving environment of the vehicle 1 is one in which video sickness is likely to occur (for example, an environment in which the difference between the luminance of the video displayed on the head-up display 2 and the brightness of the forward scenery seen through the windshield 4 becomes large, or an environment in which the contrast ratio between them becomes high). When the driving environment of the vehicle 1 is determined to be such an environment, the video sickness estimation unit 32c sets at least one of the thresholds Rh_th, Rv_th, and N_th to a lower value (for example, 0.5 times the value) than it otherwise would.
 A lower threshold Rh_th also reduces the width Rh' of the display target area A2 in the case where Rh > Rh_th. A lower threshold Rv_th also reduces the height Rv' of the display target area A2 in the case where Rv > Rv_th. A lower threshold N_th also reduces the number N' of objects corresponding to the videos included in the second video group in the case where N > N_th. As a result, the occurrence of video sickness can be suppressed even in a driving environment in which video sickness is likely to occur.
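The threshold lowering and its downstream effect can be sketched as follows. This is a minimal illustration under stated assumptions: the factor 0.5 is the example value given in the text, the function names and base threshold values are invented for illustration, and clipping the display quantities exactly to the threshold is an assumption — the description only states that Rh', Rv', and N' become smaller as the thresholds decrease.

```python
def adjust_thresholds(base, sickness_prone_env, factor=0.5):
    """Lower the thresholds Rh_th, Rv_th, N_th when the driving environment is
    judged likely to cause video sickness (0.5x is the example factor given in
    the description); otherwise return the base thresholds unchanged."""
    if not sickness_prone_env:
        return dict(base)
    return {name: value * factor for name, value in base.items()}


def clip_display(rh, rv, n, th):
    """Hypothetical clipping rule: when a measured quantity exceeds its
    threshold, the corresponding display quantity is reduced to the threshold.
    A smaller Rh_th thus shrinks the width Rh' of display target area A2, and
    likewise for the height Rv' and the object count N'."""
    rh_out = min(rh, th["Rh_th"])    # Rh' in the case where Rh > Rh_th
    rv_out = min(rv, th["Rv_th"])    # Rv' in the case where Rv > Rv_th
    n_out = min(n, int(th["N_th"]))  # N' in the case where N > N_th
    return rh_out, rv_out, n_out


base = {"Rh_th": 40.0, "Rv_th": 20.0, "N_th": 6}
th = adjust_thresholds(base, sickness_prone_env=True)  # halves every threshold
print(clip_display(rh=35.0, rv=25.0, n=8, th=th))      # (20.0, 10.0, 3)
```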
 The object information acquisition unit 31, the video sickness estimation unit 32c, the display control unit 33, and the outside environment information acquisition unit 36 constitute the main part of the display control device 100c. The object information generation unit 21, the outside environment information generation unit 24, the object information acquisition unit 31, the video sickness estimation unit 32c, the display control unit 33, and the outside environment information acquisition unit 36 constitute the main part of the control device 7c.
 The hardware configuration of the main part of the control device 7c is the same as that described with reference to FIG. 5 in the first embodiment, and its illustration and description are therefore omitted. That is, the functions of the outside environment information generation unit 24, the video sickness estimation unit 32c, and the outside environment information acquisition unit 36 may be realized by the processor 41 and the memory 42, or may be realized by the processing circuit 43.
 Next, the operation of the display control device 100c will be described with reference to the flowchart of FIG. 24.
 First, in step ST1, the object information acquisition unit 31 acquires the object information generated by the object information generation unit 21, and outputs the acquired object information to the video sickness estimation unit 32c and the display control unit 33.
 Next, in step ST6, the outside environment information acquisition unit 36 acquires the outside environment information generated by the outside environment information generation unit 24, and outputs the acquired outside environment information to the video sickness estimation unit 32c.
 Next, in step ST2c, the video sickness estimation unit 32c uses the object information output by the object information acquisition unit 31 and the outside environment information output by the outside environment information acquisition unit 36 to execute a process of estimating whether video sickness would occur due to the first video group if the first video group were displayed on the head-up display 2.
 A specific example of the estimation process in step ST2c is the same as that described with reference to the flowchart of FIG. 7 in the first embodiment, and its illustration and description are therefore omitted. However, when setting the thresholds Rh_th, Rv_th, and N_th (that is, when executing the process corresponding to step ST11 shown in FIG. 7), the video sickness estimation unit 32c uses the outside environment information as described above.
 Next, in step ST3, the display control unit 33 executes control to display the second video group on the head-up display 2 using the object information output by the object information acquisition unit 31. At this time, the display control unit 33 varies the display mode of the second video group according to the result of the estimation process by the video sickness estimation unit 32c. A specific example of the control in step ST3 is the same as that described with reference to the flowchart of FIG. 8 in the first embodiment, and its illustration and description are therefore omitted.
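The processing order of the flowchart of FIG. 24 (steps ST1, ST6, ST2c, ST3) can be sketched as a single update cycle. Every interface below is a hypothetical simplification: the embodiment does not define these function signatures, and the toy estimation and rendering rules stand in for the FIG. 7 and FIG. 8 processing.

```python
def display_update_cycle(object_info_gen, env_info_gen, estimate_sickness, render):
    """One cycle of the flowchart of FIG. 24 (hypothetical simplification).

    ST1:  acquire object information.
    ST6:  acquire outside environment information.
    ST2c: estimate whether the first video group would cause video sickness,
          using both the object information and the environment information.
    ST3:  display the second video group, varying its display mode according
          to the estimation result."""
    object_info = object_info_gen()                      # ST1
    env_info = env_info_gen()                            # ST6
    sickness = estimate_sickness(object_info, env_info)  # ST2c
    return render(object_info, sickness)                 # ST3


# Toy stand-ins for the generation, estimation, and display-control units.
result = display_update_cycle(
    object_info_gen=lambda: ["car", "pedestrian", "sign"],
    env_info_gen=lambda: {"weather": "heavy_rain"},
    estimate_sickness=lambda obj, env: env["weather"] == "heavy_rain" and len(obj) >= 3,
    render=lambda obj, sick: obj[:1] if sick else obj,  # fewer videos when sickness is likely
)
print(result)  # ['car']
```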
 The outside environment information may be any information on the environment outside the vehicle 1 that can be received by the communication device 11, and its content is not limited to the above specific examples. Likewise, the video sickness estimation unit 32c need only estimate whether video sickness occurs due to the first video group using the object information and the outside environment information (more specifically, it need only use the outside environment information for setting the thresholds Rh_th, Rv_th, and N_th), and the content of its estimation process is not limited to the above specific examples.
 The display control device 100c can also adopt the various modifications described in the first embodiment, that is, the same various modifications as the display control device 100.
 As shown in FIG. 25, the object information acquisition unit 31, the video sickness estimation unit 32c, the display control unit 33, and the outside environment information acquisition unit 36 may constitute the main part of a display control system 200c. The system configuration of the main part of the display control system 200c is the same as that described with reference to FIG. 16 in the first embodiment, and its illustration and description are therefore omitted. That is, it suffices that the functions of the display control system 200c are realized by cooperation of any two or more of the in-vehicle information device 51, the portable information terminal 52, and the server device 53.
 The display control device 100c may also include a driver information acquisition unit 34 similar to that of the display control device 100a of the second embodiment. In this case, the video sickness estimation unit 32c may estimate whether video sickness occurs due to the first video group using the object information output by the object information acquisition unit 31, the driver information output by the driver information acquisition unit 34, and the outside environment information output by the outside environment information acquisition unit 36. More specifically, the video sickness estimation unit 32c may use the driver information and the outside environment information for setting the thresholds Rh_th, Rv_th, and N_th. The same applies to the display control system 200c.
 The display control device 100c may also include a vehicle information acquisition unit 35 similar to that of the display control device 100b of the third embodiment. In this case, the video sickness estimation unit 32c may estimate whether video sickness occurs due to the first video group using the object information output by the object information acquisition unit 31, the vehicle information output by the vehicle information acquisition unit 35, and the outside environment information output by the outside environment information acquisition unit 36. More specifically, the video sickness estimation unit 32c may use the vehicle information and the outside environment information for setting the thresholds Rh_th, Rv_th, and N_th. The same applies to the display control system 200c.
 The display control device 100c may also include both a driver information acquisition unit 34 similar to that of the display control device 100a of the second embodiment and a vehicle information acquisition unit 35 similar to that of the display control device 100b of the third embodiment. In this case, the video sickness estimation unit 32c may estimate whether video sickness occurs due to the first video group using the object information output by the object information acquisition unit 31, the driver information output by the driver information acquisition unit 34, the vehicle information output by the vehicle information acquisition unit 35, and the outside environment information output by the outside environment information acquisition unit 36. More specifically, the video sickness estimation unit 32c may use the driver information, the vehicle information, and the outside environment information for setting the thresholds Rh_th, Rv_th, and N_th. The same applies to the display control system 200c.
 As described above, the display control device 100c of the fourth embodiment includes the outside environment information acquisition unit 36 that acquires outside environment information including information on the environment outside the vehicle 1, and the video sickness estimation unit 32c estimates whether video sickness occurs using the object information and the outside environment information. Consequently, the driving environment of the vehicle 1, for example, can be taken into account when estimating whether video sickness occurs. As a result, the accuracy of this estimation can be further improved, and the occurrence of video sickness can be suppressed even in a driving environment in which video sickness is likely to occur.
 Within the scope of the invention, the embodiments may be freely combined, any component of any embodiment may be modified, and any component of any embodiment may be omitted.
 The display control device of the present invention can be used, for example, to control a windshield-type AR-HUD.
Reference Signs List: 1 vehicle, 2 head-up display, 3 head-up display device, 4 windshield, 5 camera, 6 sensor, 7, 7a, 7b, 7c control device, 8 camera, 9 sensor, 10 in-vehicle network, 11 communication device, 21 object information generation unit, 22 driver information generation unit, 23 vehicle information generation unit, 24 outside environment information generation unit, 31 object information acquisition unit, 32, 32a, 32b, 32c video sickness estimation unit, 33 display control unit, 34 driver information acquisition unit, 35 vehicle information acquisition unit, 36 outside environment information acquisition unit, 41 processor, 42 memory, 43 processing circuit, 51 in-vehicle information device, 52 portable information terminal, 53 server device, 100, 100a, 100b, 100c display control device, 200, 200a, 200b, 200c display control system.

Claims (13)

  1.  A display control device comprising:
      an object information acquisition unit that acquires object information including information on one or more objects present around a moving body;
      a video sickness estimation unit that executes, using the object information, a process of estimating whether video sickness occurs due to a first video group including one or more videos corresponding to the one or more objects; and
      a display control unit that executes, using the object information, control to display on a head-up display a second video group including at least some of the one or more videos,
      wherein the display control unit varies a display mode of the second video group according to a result of the estimation process by the video sickness estimation unit.
  2.  The display control device according to claim 1, wherein the display control unit varies, according to the result of the estimation process by the video sickness estimation unit, at least one of the number of videos included in the second video group, the motion of each video included in the second video group, and the area of the head-up display in which the second video group is displayed.
  3.  The display control device according to claim 1 or claim 2, wherein the moving body is a vehicle.
  4.  The display control device according to claim 3, further comprising a driver information acquisition unit that acquires driver information including information on a driver of the vehicle,
      wherein the video sickness estimation unit executes the process of estimating whether video sickness occurs using the object information and the driver information.
  5.  The display control device according to claim 4, wherein the driver information indicates at least one of a position of the driver's head, an amount of viewpoint movement of the driver, a facial color of the driver, a degree of eye opening of the driver, and a number of blinks of the driver.
  6.  The display control device according to claim 4, wherein the driver information indicates at least one of a heart rate of the driver and a blood pressure of the driver.
  7.  The display control device according to claim 3, further comprising a vehicle information acquisition unit that acquires vehicle information including information on the vehicle,
      wherein the video sickness estimation unit executes the process of estimating whether video sickness occurs using the object information and the vehicle information.
  8.  The display control device according to claim 7, wherein the vehicle information indicates at least one of a traveling speed of the vehicle, an acceleration of the vehicle, and a vibration frequency of the vehicle.
  9.  The display control device according to claim 3, further comprising an outside environment information acquisition unit that acquires outside environment information including information on an environment outside the vehicle,
      wherein the video sickness estimation unit executes the process of estimating whether video sickness occurs using the object information and the outside environment information.
  10.  The display control device according to claim 9, wherein the outside environment information indicates at least one of weather around the vehicle, temperature around the vehicle, humidity around the vehicle, and a degree of road congestion around the vehicle.
  11.  The display control device according to claim 3, wherein the head-up display is a windshield type.
  12.  A display control system comprising:
      an object information acquisition unit that acquires object information including information on one or more objects present around a moving body;
      a video sickness estimation unit that executes, using the object information, a process of estimating whether video sickness occurs due to a first video group including one or more videos corresponding to the one or more objects; and
      a display control unit that executes, using the object information, control to display on a head-up display a second video group including at least some of the one or more videos,
      wherein the display control unit varies a display mode of the second video group according to a result of the estimation process by the video sickness estimation unit.
  13.  A display control method comprising:
      a step in which an object information acquisition unit acquires object information including information on one or more objects present around a moving body;
      a step in which a video sickness estimation unit executes, using the object information, a process of estimating whether video sickness occurs due to a first video group including one or more videos corresponding to the one or more objects; and
      a step in which a display control unit executes, using the object information, control to display on a head-up display a second video group including at least some of the one or more videos,
      wherein the display control unit varies a display mode of the second video group according to a result of the estimation process by the video sickness estimation unit.
PCT/JP2018/001815 2018-01-22 2018-01-22 Display control device, display control system, and display control method WO2019142364A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/001815 WO2019142364A1 (en) 2018-01-22 2018-01-22 Display control device, display control system, and display control method

Publications (1)

Publication Number Publication Date
WO2019142364A1 WO2019142364A1 (en) 2019-07-25

Family ID=67300966


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010066042A (en) * 2008-09-09 2010-03-25 Toshiba Corp Image irradiating system and image irradiating method
JP2015182755A (en) * 2014-03-26 2015-10-22 三菱電機株式会社 Movable body equipment control device, portable terminal, movable body equipment control system, and movable body equipment control method
JP2017058493A (en) * 2015-09-16 2017-03-23 株式会社コロプラ Virtual reality space video display method and program
JP2017211916A (en) * 2016-05-27 2017-11-30 京セラ株式会社 Portable electronic apparatus, method for controlling portable electronic apparatus, and program for controlling portable electronic apparatus



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18900759; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 18900759; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: JP)