WO2016088227A1 - Video display device and method - Google Patents

Video display device and method

Info

Publication number
WO2016088227A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
information
visual field
specific object
display
Prior art date
Application number
PCT/JP2014/082025
Other languages
French (fr)
Japanese (ja)
Inventor
Kunikazu Onishi (大西 邦一)
Original Assignee
Hitachi Maxell, Ltd. (日立マクセル株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Maxell, Ltd.
Priority to PCT/JP2014/082025
Publication of WO2016088227A1

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00: Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/001: using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; projection systems; display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
    • G09G5/00: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36: characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/37: Details of the operation on graphic patterns
    • G09G5/377: for mixing or overlaying two or more graphic patterns

Definitions

  • The present invention relates to a video display apparatus and method, and more particularly to a technique for a device that is mounted on a user's head and displays predetermined video information in front of the user's eyes.
  • In recent years, information video display systems using so-called head mounted display devices (hereinafter referred to as HMD devices) of the goggle or glasses type worn on the user's head have been spreading rapidly.
  • Among these, a so-called see-through type HMD device lets the user directly view the external scene through a translucent display screen while a predetermined information image is superimposed on that scene. Because related information can be displayed in addition to the directly viewed external scene, its range of applications is wide.
  • Patent Document 1 describes "an external-light-transmissive head-mounted display comprising an imaging device that electronically captures an object in the line-of-sight direction, and image recognition means that recognizes the object from this electronic data and displays information about the object stored in advance in storage means, wherein template data corresponding to the electronic data captured by the imaging device can be registered in the imaging device in advance for use in the image recognition."
  • That is, the HMD device described in Patent Document 1 performs additional display by identifying the type of a specific object captured by a camera.
  • However, when the user moves the viewpoint, the position of the user's visual field region changes before and after the movement, so the positional relationship between the additional display and the target object changes within the visual field region and visibility decreases.
  • This viewpoint movement of the user is not considered at all in Patent Document 1.
  • The present invention has been made to solve the above problem, and its object is to provide a technique that maintains the visibility of the additional display even when the user moves the viewpoint.
  • To achieve this object, the present invention photographs the user's external scene to generate video information, extracts from the video information a specific object that is a target for providing information to the user, detects the user's line-of-sight direction to generate line-of-sight information, and displays video information on a translucent display screen through which the user can directly view the external scene. From the line-of-sight information, it calculates a direct visual field region, which is the visual field when the user views the external scene through the display screen, and a photographing visual field region, which is the visual field when the external scene is photographed, and extracts from an additional information storage unit the additional information associated with the extracted specific object.
  • The invention is characterized by a configuration in which the relative positional relationship between the photographing visual field region and the direct visual field region is calculated, and the extracted additional information is displayed on the display screen so as to follow the movement of the direct visual field region using that relative positional relationship.
  • FIG. 1 Schematic diagram showing an example of the HMD system in the first embodiment
  • FIG. 2 Perspective view showing an outline of an external configuration example of the HMD device according to the first embodiment
  • FIG. 3 Diagram showing the hardware configuration of the danger avoidance warning device 5
  • FIG. 4 Schematic block diagram showing the electronic circuit configuration of the danger avoidance warning device
  • FIG. 5 Flowchart showing the flow of processing of the driving support system according to the first embodiment
  • FIG. 6 Diagrams for explaining the risk determination process, in which (a) is a schematic of the image captured by the outside-scene electronic camera 3 at the moment a pedestrian is about to run out from a side road ahead of the host vehicle
  • Schematic diagram showing a case where the display visual field direction and the driver's direct visual field direction deviate, in which (a) shows the state before correction of the deviation and (b) the state after correction
  • Diagram showing an additional display example when a monitoring object is present in the driver's visual field
  • Diagram showing another additional display example
  • Schematic block diagram showing the electronic circuit configuration of the driving support device according to the second embodiment
  • Flowchart showing the flow of operation of the second embodiment
  • The first embodiment applies the present invention to a driving support system that supports a driver driving a vehicle using an HMD device, with danger avoidance warning items additionally displayed.
  • a schematic configuration of the present embodiment will be described with reference to FIGS. 1 and 2.
  • FIG. 1 is a schematic diagram showing an example of the HMD system in the first embodiment.
  • FIG. 2 is a perspective view showing an outline of an external configuration example of an HMD device that is a main part of the HMD system according to the first embodiment and has a function of a video display device.
  • the driving support system 10 shown in FIG. 1 includes the HMD device 1 that is mounted on the head of the driver 20 driving the automobile 50.
  • The HMD device 1 is connected to a danger avoidance warning device 5.
  • The danger avoidance warning device 5, which is a device separate from the HMD device 1, is illustrated with a wired connection, but it may instead be connected wirelessly or configured integrally with the HMD device 1.
  • The HMD device 1 is a so-called see-through type HMD device that includes a transflective display screen 2 with a function of displaying video within the visual field of the driver 20 while the outside world in front of the driver's eyes remains visible.
  • The HMD device 1 includes the display screen 2, a mounting body 1a that holds the display screen 2 and is worn on the user's head, an electronic camera 3 for photographing the outside scene, and an eyeball camera comprising an electronic camera that images the eyeball of the driver 20.
  • By wearing the mounting body 1a on the head, the display screen 2 is kept positioned in front of the eyes.
  • The outside-scene electronic camera 3 is installed so as to photograph the external scene from almost the same direction as the line of sight of the driver 20 wearing the HMD device 1.
  • The outside-scene electronic camera 3 may be provided with a normal visible-light photographing function, or with a non-visible-light photographing function such as infrared.
  • The outside-scene electronic camera 3 may also be provided with a distance measuring unit for measuring the distance between the camera and the photographed subject, such as an autofocus function or an infrared distance measuring sensor.
  • the eyeball camera is, for example, a small electronic camera arranged at a position that does not obstruct the field of view of the driver 20.
  • The eyeball camera images the eyeball of the driver 20 obliquely from the front, and the line-of-sight direction of the driver 20 is detected from the pupil position or eye movement in the eyeball image.
  • The eyeball image is output to the user line-of-sight direction calculation unit 106 of the danger avoidance warning device 5, described later, where a line-of-sight direction calculation process is executed.
  • The line-of-sight detection device is not limited to the configuration comprising the eyeball camera and the user line-of-sight direction calculation unit; any configuration may be used as long as it can detect the line-of-sight direction of the driver 20 or the visual field region visually recognized by the driver 20.
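The patent deliberately leaves the line-of-sight calculation method open ("any configuration may be used"). As one concrete illustration, the following Python sketch uses a minimal dark-pupil approach: threshold the eyeball image, take the centroid of the dark pixels as the pupil center, and map its offset from a calibrated straight-ahead position to gaze angles. The threshold, degrees-per-pixel scale, and calibration center are assumed illustrative values, not taken from the patent.

```python
import numpy as np

def pupil_center(eye_image, threshold=40):
    """Return the (x, y) centroid of the darkest pixels (the pupil)."""
    ys, xs = np.nonzero(eye_image < threshold)
    if xs.size == 0:
        raise ValueError("no pupil-dark pixels found")
    return xs.mean(), ys.mean()

def gaze_angles(center, calib_center, deg_per_px=(0.5, 0.5)):
    """Map pupil displacement from the calibrated straight-ahead
    position to horizontal/vertical gaze angles in degrees."""
    dx = center[0] - calib_center[0]
    dy = center[1] - calib_center[1]
    return dx * deg_per_px[0], dy * deg_per_px[1]

# Synthetic 100x100 eye image: bright sclera, dark pupil blob near (60, 50)
img = np.full((100, 100), 200, dtype=np.uint8)
img[45:55, 55:65] = 10
cx, cy = pupil_center(img)
yaw, pitch = gaze_angles((cx, cy), calib_center=(50.0, 50.0))
```

A real implementation would also handle eyelid occlusion and per-driver calibration, which is why the patent keeps the detection configuration generic.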
  • FIG. 3 is a diagram illustrating a hardware configuration of the danger avoidance warning device 5.
  • The danger avoidance warning device 5 includes a CPU (Central Processing Unit) 51, a RAM (Random Access Memory) 52, a ROM (Read Only Memory) 53, an HDD (Hard Disk Drive) 54, an I/F 55, and a bus 58.
  • the CPU 51, RAM 52, ROM 53, HDD 54, and I / F 55 are connected to each other via a bus 58.
  • the danger avoidance warning device 5 is connected to each of the display screen 2, the outside scene photographing electronic camera 3, and the eyeball camera via the I / F 55.
  • FIG. 4 is a schematic block diagram showing the electronic circuit configuration of the danger avoidance warning device 5. The function of each block in this diagram is described below.
  • The danger avoidance warning device 5 includes a main control unit 100, a specific object extraction unit 101, a specific object related information detection unit 102, a photographing visual field region calculation unit 103, a monitoring object determination unit 104, a monitoring object risk degree determination unit 105, a user line-of-sight direction calculation unit 106, a user visual field region calculation unit 107, a display video control unit 108, a memory 109, a graphic memory 110, an own-vehicle related information acquisition unit 112, a map data memory 113, and a GPS (Global Positioning System) device communication antenna 60.
  • The user line-of-sight direction calculation unit 106, the user visual field region calculation unit 107, the display video control unit 108, and the own-vehicle related information acquisition unit 112 are realized by cooperation between software implementing the functions of these blocks and the hardware shown in FIG. 3.
  • Each of the memory 109, the graphic memory 110, and the map data memory 113 is configured by the RAM 52 and / or the ROM 53.
  • The specific object extraction unit 101 is connected to the outside-scene electronic camera 3.
  • The specific object extraction unit 101 extracts video information of specific objects on or along the road, such as moving persons, cars, traffic lights, railroad crossings, road signs, and lanes, from the outside-scene image captured by the outside-scene electronic camera 3, and identifies the type of each extracted object by collating it with the specific-object collation video information stored in advance in the memory 109.
  • The outside-scene electronic camera 3 is also connected, in addition to the specific object extraction unit 101, to a specific object related information detection unit 102, which detects related information such as the moving direction and moving speed of a specific object when it is moving, and to a photographing visual field region calculation unit 103, which detects the photographing visual field of the outside-scene electronic camera 3.
  • Based on the information acquired by the specific object extraction unit 101 and the specific object related information detection unit 102, the monitoring object determination unit 104 determines whether the specific object is a monitoring object, that is, one that carries a risk of endangering the travel of the vehicle in the future and therefore needs to be monitored continuously.
  • The monitoring object risk degree determination unit 105 comprehensively determines the risk level of the monitoring object with respect to the vehicle from the information on the monitoring object obtained from the above units, the own-vehicle related information acquired by the own-vehicle related information acquisition unit 112 (such as the vehicle position, traveling speed, and traveling direction), and the map data around the vehicle position stored in the map data memory 113, and decides whether to display a danger avoidance warning to the driver.
  • the GPS device communication antenna 60 is connected to the own vehicle related information acquisition unit 112.
  • The vehicle 50 is equipped with a GPS device (not shown) that receives positioning radio waves from GPS satellites to detect the vehicle position, and GPS data is received from this device via the GPS device communication antenna 60.
  • The configuration for acquiring the vehicle-related information is not limited to this; an inertial measurement unit, a vehicle traveling speed detection device mounted on the automobile 50, and the like may be used in combination.
  • As described above, the monitoring object risk degree determination unit 105 determines whether to display a danger avoidance warning; it therefore corresponds to an additional display determination unit.
  • the display video control unit 108 performs control to select and extract a predetermined danger avoidance warning item from the graphic memory 110 and to superimpose it on a predetermined position on the display screen 2 of the HMD device 1.
  • Each displayed danger avoidance warning item concerns either a monitoring object within the visual field that the driver 20 views directly through the display screen 2 of the HMD device 1 or a monitoring object outside that visual field, and it is preferable that the driver 20 can recognize this correctly.
  • For this purpose, the danger avoidance warning item superimposed on the display screen 2 of the HMD device 1 must be displayed at the correct position: in the vicinity of the monitoring object that the driver 20 is directly viewing, or at a display position associated with that object.
  • To this end, the device includes the user line-of-sight direction calculation unit 106, which calculates the line-of-sight direction of the user (the driver 20) from the output of the eyeball camera serving as the line-of-sight detection device of the HMD device 1, and the user visual field region calculation unit 107, which uses the line-of-sight information to detect the visual field region the driver views directly through the display screen 2.
  • The relative positional relationship between the driver's direct visual field region obtained by the user visual field region calculation unit 107 and the photographing visual field region of the outside-scene electronic camera 3 obtained by the photographing visual field region calculation unit 103 is constantly monitored, and the display video control unit 108 calculates the correct display position of each danger avoidance warning item from this monitoring data. A specific example of the display position calculation flow is described in detail later.
  • FIG. 5 is a flowchart showing a process flow of the driving support system according to the first embodiment.
  • First, the main power supplies of the HMD device 1 and the danger avoidance warning device 5 are turned on.
  • The user visual field detection process of steps S01 to S03 and the danger avoidance warning process of steps S11 to S20 then start in parallel.
  • The user visual field detection process and the danger avoidance warning process are described below in this order.
  • The line-of-sight direction detection device 4 captures an image of the eye of the user, that is, the driver 20 (an eyeball image) (step S01) and outputs it to the user line-of-sight direction calculation unit 106.
  • The user line-of-sight direction calculation unit 106 performs, for example, pupil detection on the eyeball image, detects the user's line-of-sight direction from the position and orientation of the pupil (step S02), and outputs the line-of-sight direction to the user visual field region calculation unit 107.
  • The user visual field region calculation unit 107 detects the visual field region of the driver 20 based on the line-of-sight direction information (step S03). For example, the intersection of the vector indicating the line-of-sight direction with the display screen 2 is taken as the user's visual field center, and a predetermined in-plane range of the display screen 2 around that center is defined as the user visual field region. Thereafter, the process proceeds to step S19.
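The intersection computed in step S03 is a standard ray-plane intersection. The sketch below (Python with NumPy) finds the visual field center on the screen plane and derives a rectangular visual field region around it; the eye position, screen distance, gaze vector, and region half-sizes are assumed illustrative values.

```python
import numpy as np

def field_center(eye_pos, gaze_dir, screen_point, screen_normal):
    """Intersect the gaze ray with the display screen plane."""
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    denom = np.dot(gaze_dir, screen_normal)
    if abs(denom) < 1e-9:
        raise ValueError("gaze is parallel to the screen")
    t = np.dot(screen_point - eye_pos, screen_normal) / denom
    return eye_pos + t * gaze_dir

def field_region(center, half_width, half_height):
    """Axis-aligned rectangle on the screen around the field center."""
    cx, cy = center[0], center[1]
    return (cx - half_width, cy - half_height, cx + half_width, cy + half_height)

# Screen plane z = 0.05 m in front of the eye, facing the eye
eye = np.array([0.0, 0.0, 0.0])
gaze = np.array([0.2, 0.0, 1.0])          # looking slightly to the right
p = field_center(eye, gaze, np.array([0.0, 0.0, 0.05]), np.array([0.0, 0.0, 1.0]))
region = field_region(p, half_width=0.02, half_height=0.015)
```

With the gaze tilted right, the field center lands right of the screen's optical center, which is exactly the situation the later origin-shift correction addresses.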
  • In parallel, the outside-scene electronic camera 3 captures an image of the outside scene (hereinafter referred to as the "outside-scene image") from substantially the same viewpoint as the driver 20 (step S11) and outputs it to the specific object extraction unit 101, the specific object related information detection unit 102, and the photographing visual field region calculation unit 103.
  • The photographing visual field region calculation unit 103 detects the region captured as the outside-scene image (hereinafter referred to as the "photographing visual field region") based on the outside-scene image (step S12), and then proceeds to step S18.
  • The specific object extraction unit 101 extracts video information of specific objects on the road, such as moving persons, cars, traffic lights, railroad crossings, road signs, and lanes, from the outside-scene image, and identifies the type of each extracted object by collating it with the specific-object collation video information stored in the memory in advance. The specific object extraction unit 101 outputs the extracted specific objects to the specific object related information detection unit 102.
  • The specific object related information detection unit 102 detects related information for the specific object acquired from the specific object extraction unit 101, for example its moving direction and moving speed (step S13).
  • The related information may be detected, for example, from the amount of change between successive frames in the region where the specific object is captured, or from the output value of a distance measuring unit that measures the distance to the specific object.
  • The extracted specific object and its related information are output to the monitoring object determination unit 104.
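The frame-difference approach mentioned above can be sketched as follows: the displacement of the object's region centroid between two frames, combined with a pixel-to-meter scale (which could come from the distance measuring unit), gives a velocity estimate. The frame interval and scale factor below are assumed illustrative values, not figures from the patent.

```python
def centroid(box):
    """Center of an (x1, y1, x2, y2) bounding box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def object_velocity(c_prev, c_curr, dt, meters_per_px):
    """Estimate an object's velocity (vx, vy) in m/s from its region
    centroids in two successive frames taken dt seconds apart."""
    vx = (c_curr[0] - c_prev[0]) * meters_per_px / dt
    vy = (c_curr[1] - c_prev[1]) * meters_per_px / dt
    return vx, vy

# A pedestrian's bounding box shifts 30 px between frames 0.1 s apart;
# at this range one pixel is assumed to span about 0.01 m.
prev_box = (100, 200, 140, 280)
curr_box = (130, 200, 170, 280)
vx, vy = object_velocity(centroid(prev_box), centroid(curr_box),
                         dt=0.1, meters_per_px=0.01)
```

Here the estimate comes out to 3 m/s horizontally, a plausible running-out speed, which the monitoring object determination unit would then test against the host vehicle's path.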
  • The monitoring object determination unit 104 determines whether the specific object extracted and identified in step S13 is a monitoring object carrying a risk of danger to the travel of the host vehicle (step S14). If the determination result is "Yes", the process proceeds to the next step; if "No", it returns to steps S01 and S11. The determination is made, for example, based on the moving direction and speed of the object: when the vector indicating the object's moving direction (whose length indicates its moving speed) intersects the moving-direction vector of the host vehicle, it is determined that the host vehicle and the object may interfere.
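The vector-intersection criterion above can be sketched as a 2D segment intersection between the paths the object and the host vehicle would sweep over a short look-ahead horizon. The horizon length and all coordinates are illustrative assumptions.

```python
def segments_intersect(p, p2, q, q2):
    """True if segment p->p2 properly intersects segment q->q2
    (2D orientation test via cross products)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1 = cross(q, q2, p)
    d2 = cross(q, q2, p2)
    d3 = cross(p, p2, q)
    d4 = cross(p, p2, q2)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def may_interfere(obj_pos, obj_vel, car_pos, car_vel, horizon=5.0):
    """Do the projected paths over the next `horizon` seconds cross?"""
    obj_end = (obj_pos[0] + obj_vel[0] * horizon, obj_pos[1] + obj_vel[1] * horizon)
    car_end = (car_pos[0] + car_vel[0] * horizon, car_pos[1] + car_vel[1] * horizon)
    return segments_intersect(obj_pos, obj_end, car_pos, car_end)

# Pedestrian crossing from the left while the car drives straight ahead
crossing = may_interfere((-5.0, 20.0), (1.5, 0.0), (0.0, 0.0), (0.0, 10.0))
# Same pedestrian walking away from the road instead
diverging = may_interfere((-5.0, 20.0), (-1.5, 0.0), (0.0, 0.0), (0.0, 10.0))
```

The first case flags a possible interference (the paths cross ahead of the car); the second does not, so the object would not be promoted to a monitoring object on this criterion alone.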
  • The monitoring object determination unit 104 continues to acquire related information, such as the relative position, relative speed, and relative moving direction, of the specific object determined to be a monitoring object in step S14 (step S15), and outputs it to the monitoring object risk degree determination unit 105.
  • The monitoring object risk degree determination unit 105 determines the risk level the monitoring object poses to the vehicle from the related information of the monitoring object, and decides whether to display a predetermined danger avoidance warning item (step S16). If the determination result is "Yes", the process proceeds to the next step S17; if "No", it returns to steps S01 and S11. Specific processing contents of these steps are described later.
  • The monitoring object risk degree determination unit 105 then determines the type of danger avoidance warning item to be displayed (step S17).
  • For example, even for the same monitoring object, the monitoring object risk degree determination unit 105 may display a danger avoidance warning item that alerts the user more strongly as the distance to the host vehicle decreases.
  • The display video control unit 108 determines the display position within the display screen 2 based on the user's visual field information and the photographing visual field information (step S18), and additionally displays the warning item on the display screen 2 (step S19). If the main power supply of the driving support system 10 is on, processing of the driving support system 10 continues (step S20/Yes) and the flow returns to steps S01 and S11; when the main power is turned off, processing of the driving support system 10 ends (step S20/No).
  • FIG. 6(a) is a schematic of the image captured by the outside-scene electronic camera 3 at the moment a pedestrian is about to run out from the side of the road, and FIG. 6(b) is an overhead schematic view of the situation in FIG. 6(a).
  • First, the specific object extraction unit 101 extracts and identifies the image of the pedestrian 31a, who is about to run out, from the outside-scene image captured in the photographing visual field region 11 in the forward direction of the host vehicle.
  • Next, the specific object related information detection unit 102 detects related information concerning the pedestrian 31a.
  • The monitoring object determination unit 104 then determines that the pedestrian is a monitoring object and enters a mode for monitoring its movement. When the pedestrian 31a approaches the front of the host vehicle in subsequent captured frames, the following processing is performed.
  • A pedestrian movement vector u is calculated from points 41a and 41b, which correspond to the position coordinates of the pedestrians 31a and 31b on the map data 40 around the host vehicle stored in the map data memory 113, and a movement vector v of the point 42 corresponding to the host vehicle is calculated on the map data 40 from the vehicle-related information.
  • From these vectors, the collision risk between the target pedestrian and the host vehicle is calculated, and whether the danger avoidance warning item is displayed on the HMD screen is decided according to the collision risk.
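One common way to turn the two map-plane movement vectors u and v into a collision-risk number is the closest point of approach: how near the pedestrian's point and the vehicle's point will get if both keep their current velocities. The patent does not prescribe this particular formula, and the positions below are assumed for illustration.

```python
import numpy as np

def closest_approach(p_ped, u, p_car, v):
    """Time (s, clamped to the future) and distance (m) of closest
    approach for two points moving with constant velocities u and v."""
    dp = np.asarray(p_ped, float) - np.asarray(p_car, float)
    dv = np.asarray(u, float) - np.asarray(v, float)
    dv2 = np.dot(dv, dv)
    t = 0.0 if dv2 == 0 else max(0.0, -np.dot(dp, dv) / dv2)
    dist = np.linalg.norm(dp + t * dv)
    return t, dist

# Pedestrian 10 m ahead and 4 m to the left, stepping right at 1 m/s;
# host vehicle driving straight ahead at 10 m/s.
t, d = closest_approach((-4.0, 10.0), (1.0, 0.0), (0.0, 0.0), (0.0, 10.0))
```

Here the two paths come within about 3 m of each other roughly a second ahead; a risk degree determination unit could map such a small miss distance at short time-to-approach to a high warning level.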
  • From the viewpoint of improving visibility, each danger avoidance warning item superimposed on the display screen 2 of the HMD device 1 should be superimposed as accurately as possible at a predetermined position associated with the corresponding monitoring object, whether the driver 20 sees that object directly through the display screen 2 or it lies outside the field of view.
  • For this to hold, the visual field direction of the image superimposed on the display screen 2 must coincide completely with the visual field direction of the outside scene the driver views directly.
  • However, the visual field region directly seen by the driver 20 wearing the HMD device 1 changes constantly according to the line-of-sight direction in which the driver 20 is gazing.
  • Meanwhile, the outside-scene electronic camera 3 fixed to the HMD device 1 always points in the direction the HMD device 1 is facing, that is, the direction in which the head of the driver 20 is facing.
  • The visual field direction of the photographing visual field region formed from the captured video of the outside-scene electronic camera 3 therefore easily deviates from the driver 20's own direct visual field direction. More specifically, the visual field of the photographing region and the visual field of the display image captured with it differ depending on the position on the display screen 2 where the image is displayed.
  • As for the difference between the photographing visual field and the display image visual field: because the outside-scene electronic camera 3 is fixed to the HMD device 1, the offset between the center point of the photographing visual field region and the center point of the display region of the display screen 2 is fixed, and the correction amount for this error can be obtained from the geometric positional relationship. Using this correction amount, the center point of the photographing visual field region and the center point of the display screen 2 are treated as coincident. In the following description, the visual field of the display image and the photographing region visual field are therefore assumed to be identical. The principle by which a positional deviation arises between the visual field of the display image (photographing region visual field) and the direct visual field is now described with reference to FIG. 7.
  • FIG. 7 is a diagram showing the principle of occurrence of positional deviation between the visual field of the display image (imaging area visual field) and the direct visual field of view.
  • FIG. 7 shows a state where the driver 20 is facing the front.
  • That the driver 20 visually recognizes a monitoring object, for example a pedestrian 31, means that visible light reflected by the surface of the pedestrian 31 enters each of the left eye 21L and the right eye 21R and reaches and forms an image on each retina.
  • In the upper part of FIG. 7, the path of this incident light is indicated by the symbol ⁇.
  • The midpoint of the intersections of the display screen 2 with the paths ⁇ of visible light from the pedestrian 31 incident on the left and right eyes is indicated by Q.
  • the lower part of FIG. 7 shows a state where the driver 20 is facing right.
  • Q′ represents the midpoint of the intersections of the display screen 2 with the paths ⁇′ of the incident light when the visible light reflected by the surface of the pedestrian 31 enters each of the left eye 21L and the right eye 21R in this state.
  • The midpoint Q′ differs in position from the midpoint Q. That is, even if the relative positions of the HMD device 1 and the pedestrian 31 are unchanged, when the line of sight of the driver 20 shifts (from the upper part to the lower part of FIG. 7), the visual field of the display image of the pedestrian 31 on the display screen 2 and the direct visual field of the driver 20 deviate from each other.
  • Here, the midpoint P of the intersections of the display screen 2 with the line-of-sight vectors of the driver 20's left and right eyes is defined as the center of the display visual field.
  • In the upper part of FIG. 7 the point P is the center of the display visual field, and in the lower part the point P′ is the center of the display visual field. The point P′ lies at a position displaced by Δx in the horizontal direction from the point P on the surface of the display screen 2.
  • A horizontal cross-section has been described here as an example, but the line of sight also changes in the vertical direction; in that case the displacement from the point P on the surface of the display screen 2 is Δy in the vertical direction.
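The deviation between the directly viewed point Q and the gaze-determined field center P can be reproduced with simple ray geometry in the head frame. The eye separation, screen distance, gaze angle, and pedestrian position below are assumed illustrative values.

```python
import numpy as np

def screen_midpoint(eyes, ray_dirs, screen_z):
    """Midpoint of the intersections of one ray per eye with the
    head-fixed screen plane z = screen_z."""
    pts = []
    for eye, d in zip(eyes, ray_dirs):
        t = (screen_z - eye[2]) / d[2]
        pts.append(eye + t * d)
    return np.mean(pts, axis=0)

eyes = [np.array([-0.03, 0.0, 0.0]), np.array([0.03, 0.0, 0.0])]
screen_z = 0.05                            # screen 5 cm in front of the eyes
ped = np.array([2.0, 0.0, 10.0])           # pedestrian position, head frame

# Q: where the pedestrian's reflected light crosses the screen
Q = screen_midpoint(eyes, [ped - e for e in eyes], screen_z)

# P / P': display field center when gazing straight ahead vs. 20° right
straight = [np.array([0.0, 0.0, 1.0])] * 2
a = np.deg2rad(20)
right = [np.array([np.sin(a), 0.0, np.cos(a)])] * 2
P = screen_midpoint(eyes, straight, screen_z)
Pp = screen_midpoint(eyes, right, screen_z)
dx = Pp[0] - P[0]                          # the horizontal shift Δx
```

Q stays fixed as long as the head and pedestrian do not move, while P slides with the gaze, so the display field center and the directly viewed image separate by Δx exactly as described.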
  • To correct this deviation, a dynamic display visual field direction correction process is performed as follows. First, the photographing field of view of the outside-scene electronic camera 3 is set sufficiently wider than the visual field the driver 20 sees directly through the display screen 2. Then the dynamic correction of the display visual field direction described with reference to FIGS. 8 and 9 is performed.
  • FIG. 8 is a schematic diagram showing an example of the relationship between the display field of view of the HMD device 1 and the direct visual field of view of the driver 20.
  • a region surrounded by a broken-line square frame in FIG. 8 is the entire region of the outside scene video, and coincides with the photographing field region 11 of the outside scene photographing electronic camera 3.
  • An area surrounded by a solid square frame is a display visual field area 12 of the HMD device 1, and an area surrounded by an elliptical frame is a direct visual field area 13 of the driver 20.
  • a point O and a virtual XY coordinate axis with this point O as the origin are set.
  • the display position (coordinates) of each display item is calculated with reference to this XY coordinate axis.
  • the origin O and the XY coordinate axes are virtually set in the processing step S18 in the flowchart of FIG. 4 and are not actually displayed on the display screen 2.
  • the center point P of the visual field region is obtained from the direct visual field region 13 of the HMD user, that is, the driver 20, determined via the eyeball camera, the user's line-of-sight direction calculation unit 106, and the user visual field region calculation unit 107, and its coordinate position on the XY coordinates set in the display visual field region 12 is calculated.
  • the coordinates of the point P are (0, 0), that is, the center point P of the direct visual field region 13 of the driver 20 coincides with the origin O of the display visual field region 12.
  • the display visual field direction of the HMD device 1 and the direct visual field direction of the driver coincide with each other.
  • FIG. 9 is a schematic diagram showing a case where the display visual field direction of the HMD device 1 is shifted from the direct visual field direction of the driver.
  • the coordinates of the center point P of the area 13 are coordinates (Xp, Yp) deviating from the origin O of the display visual field area 12.
  • the system automatically shifts the origin O of the display visual field region 12 to the position O′ coinciding with the point P, and sets new coordinate axes X′-Y′ with this O′ as the origin.
  • the display image control unit 108 adjusts the placement of the origin and the coordinate axes of the display visual field region 12 so that the center point P of the driver's direct visual field region 13 always coincides with the origin of the display visual field region 12.
  • the display position (coordinates) of each display item is calculated using the new XY coordinate axes after the adjustment.
  • each display item can be accurately displayed at a position corresponding to the target object that the driver directly views.
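The origin-shifting computation described above reduces, in its simplest form, to re-expressing each item's coordinates relative to the gaze center. The following sketch is illustrative only (the patent does not disclose an implementation; function and variable names are assumptions):

```python
def correct_display_position(item_xy, gaze_center_xy):
    """Re-express a display item's coordinates relative to the new
    origin O' placed at the driver's gaze center P = (Xp, Yp).

    item_xy        -- (x, y) of the item in the original display-field
                      coordinates (origin O at the screen center)
    gaze_center_xy -- (Xp, Yp) of the gaze center P in the same frame
    Returns the item's coordinates in the shifted X'-Y' frame.
    """
    x, y = item_xy
    xp, yp = gaze_center_xy
    return (x - xp, y - yp)

# When the gaze center coincides with O, i.e. (Xp, Yp) = (0, 0),
# item positions are unchanged:
assert correct_display_position((10.0, -5.0), (0.0, 0.0)) == (10.0, -5.0)

# When the gaze has drifted to P = (3, 2), every item shifts by (-3, -2)
# so that it stays aligned with the directly viewed object:
assert correct_display_position((10.0, -5.0), (3.0, 2.0)) == (7.0, -7.0)
```

In this linear model, re-anchoring all items to O′ is what keeps each display item over its target object as the driver's viewpoint moves.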
  • FIG. 10 is a diagram showing examples of additional display when a monitoring object is present in the driver's visual field, in which (a) shows an example of additional display for a pedestrian jumping out and (b) shows an example of additional display for a red traffic light.
  • FIG. 10A is an example in which the pedestrian described in FIG. 6 is about to jump out from the side road in the forward direction of the host vehicle.
  • danger avoidance warning items are superimposed, such as a frame line 201 (drawn with a broken line) displayed so as to surround the pedestrian, and a warning sentence 202, "Jump out caution!", displayed near the center of the field of view.
  • FIG. 10B is an example in which the traffic light in front of the host vehicle is a red signal.
  • danger avoidance warning items are superimposed, such as a frame line 203 (drawn with a broken line) displayed so as to surround the red traffic light ahead, and a warning sentence 204, "Signal light red light stop!", displayed near the center of the visual field.
  • FIGS. 10(a) and 10(b) show examples of danger avoidance warning items for danger monitoring objects within the direct visual field region 13 of the driver 20. In the present invention, however, by making the field of view of the outside scene photographing electronic camera 3 sufficiently wider than the driver's direct visual field region 13, it is also possible to display a danger avoidance warning item for a monitoring object outside the driver's visual field and thereby alert the driver.
  • FIG. 11 is a schematic view showing such a display example.
  • the pedestrian is about to jump out from the left outside of the direct visual field area 13 of the driver 20 in the forward direction of the vehicle.
  • the danger avoidance warning device 5 of the present invention quickly identifies the pedestrian as a monitoring object from the captured image of the outside scene photographing electronic camera 3, determines from the pedestrian's movement that there is a risk of collision, and displays danger avoidance warning items near the center of the driver's direct visual field area 13, such as an alerting arrow 205 and a warning sentence 206, "Jump out from the left!".
  • the example of FIG. 11 shows a pedestrian jumping out, but the invention is not limited to this; similar danger avoidance warning items can be displayed even when, for example, a red traffic light as in FIG. 10(b) lies outside the direct visual field region 13 of the driver 20.
  • monitoring objects are not limited to jumping-out pedestrians and traffic lights; any object that may pose a danger risk in vehicle driving can be used, such as a vehicle cutting in, road signs, railroad crossings, road steps, and lanes.
  • a more advanced vehicle driving support system can be realized by selecting and displaying appropriate danger avoidance warning items from information such as the type, position, and movement of each monitored object.
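As a rough illustration of how such a system might choose between the frame-style warning of FIG. 10 and the off-screen arrow of FIG. 11, the direct visual field region 13 can be modeled as an ellipse and the monitored object tested against it. This is a hypothetical sketch, not the patented method; all names are assumptions:

```python
import math

def choose_warning_item(obj_xy, field_half_axes):
    """Pick a danger-avoidance warning item for a monitored object.

    obj_xy          -- (x, y) of the object, relative to the center of
                       the driver's direct visual field region
    field_half_axes -- (a, b) half-axes of the ellipse-shaped direct
                       visual field region
    Returns ('frame', None) when the object is directly visible, or
    ('arrow', angle_deg) pointing toward an object outside the region,
    as in the "jump out from the left" example.
    """
    x, y = obj_xy
    a, b = field_half_axes
    inside = (x / a) ** 2 + (y / b) ** 2 <= 1.0
    if inside:
        return ('frame', None)          # e.g. broken-line frame 201/203
    return ('arrow', math.degrees(math.atan2(y, x)))  # e.g. arrow 205

# Object near the field center -> surround it with a frame:
assert choose_warning_item((0.2, 0.1), (1.0, 0.6))[0] == 'frame'
# Object far to the left, outside the field -> arrow pointing left:
kind, angle = choose_warning_item((-2.0, 0.0), (1.0, 0.6))
assert kind == 'arrow' and abs(angle - 180.0) < 1e-9
```

A real system would, as the text notes, additionally weigh the object's type, position, and movement when selecting the item.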
  • the driver can always drive in an appropriate danger avoidance warning environment even when driving a vehicle without a driving support system.
  • in the present embodiment, not only the type of the monitoring object but also related information such as its position, moving speed, and moving direction is acquired, and a danger avoidance warning item that takes these into consideration is displayed, enabling the driver to respond promptly.
  • the convenience of the driving support system can be further improved.
  • the viewpoint movement of the driver is detected, and the positional deviation between the origin of the direct visual field area and that of the display visual field area caused by the viewpoint movement is corrected before the additional display is performed. Since any positional shift between the monitoring object and the danger avoidance warning item as seen from the driver is thereby eliminated or mitigated, the driver's recognizability can be improved.
  • the second embodiment is an embodiment in which the present invention is applied to a driving route guidance system, that is, a so-called navigation system, among driving assistance systems.
  • the second embodiment will be described below with reference to FIGS.
  • FIG. 12 is a schematic block diagram illustrating an electronic circuit configuration of the driving support apparatus according to the second embodiment.
  • FIG. 13 is a flowchart showing a flow of operations of the driving support system according to the second embodiment.
  • FIG. 14 is a screen display example of the display screen of the HMD device according to the second embodiment. The function of each part of this block diagram will be described below. Blocks having the same functions as those in the schematic block diagram of the first embodiment are denoted by the same reference numerals, and their description is omitted.
  • the difference between the driving support device 5a according to the second embodiment and the danger avoidance warning device 5 according to the first embodiment is that the risk degree determination unit 105 included in the danger avoidance warning device 5 is not provided, and a route determination unit 111 is provided instead. Further, even where the block names are the same, there are differences in the objects recognized as specific objects, which will be described in detail below.
  • the specific object extraction unit 101 is connected to the electronic camera 3 for photographing outside scenes.
  • this specific object extraction unit 101 extracts video information that serves as a landmark when searching for a travel route, such as road signs, traffic lights, intersections, railroad crossings, and signboards of stores often found along roads such as convenience stores, family restaurants, and gas stations, and has a function of identifying the type of the extracted specific object by collating it with the video information of each specific object stored in advance in the memory 109.
  • the specific object related information detection unit 102, which detects related information such as the position of the specific object, and the photographing visual field region calculation unit 103, which detects the field of view photographed by the outside scene photographing electronic camera 3, are also connected.
  • the monitoring target determination unit 104 has a function of determining, from the various information acquired by the specific object extraction unit 101 and the specific object related information detection unit 102, whether the specific object is a monitoring object whose video is needed for future route search of the own vehicle and whose relative positional relationship with the vehicle needs to be monitored.
  • the route determination unit 111 compares the information obtained from the above-described units, the own-vehicle-related information (own vehicle position, traveling speed, traveling direction, and the like) acquired by the own-vehicle-related information acquisition unit 112, and the map data in the vicinity of the own vehicle position stored in the map data memory 113. It has a function of searching for the travel route of the own vehicle and, further, of determining whether to display a travel route guidance item for the monitored object in order to guide the driver along the travel route.
  • the own-vehicle-related information acquisition unit 112 acquires, for example, GPS data, but the present invention is not limited to this example; the HMD system may instead be provided with a structure that detects the own vehicle position, traveling speed, direction, and so on.
  • the own vehicle related information acquisition unit 112 corresponds to a position information acquisition unit that acquires position information indicating the current position of the user.
  • the display image control unit 108 selects and extracts a predetermined travel route guidance item, such as an arrow indicating a left or right turn, from the graphic memory 110, and controls the HMD device 1 so as to superimpose it on the display screen 2.
  • so that the displayed travel route guidance item is correctly recognized as a display item associated with the corresponding monitoring object that the driver 20 directly views through the display screen 2, it must be accurately superimposed at a predetermined position in the display screen 2.
  • for this purpose, the user's line-of-sight direction calculation unit 106, which detects via the eyeball camera the line-of-sight direction of the user of the HMD device 1, that is, the driver, and the user visual field calculation unit 107, which detects from the line-of-sight information the visual field the driver directly views through the display screen 2, are provided. The relative positional relationship between the driver's direct visual field obtained by the user visual field calculation unit 107 and the photographing visual field of the outside scene photographing electronic camera 3 obtained by the photographing visual field region calculation unit 103 is constantly monitored, and from these data the correct display position of the item to be displayed is calculated in the display image control unit 108. A specific example of this display position calculation processing flow has already been described in the first embodiment (FIGS. 8 and 9), so its description is omitted here.
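The constantly monitored relative position between the shooting field 11 and the display field 12 amounts, in the simplest linear model, to a coordinate offset plus a pixel-scale factor. The sketch below is an assumption-laden illustration, not the disclosed implementation:

```python
def to_display_coords(obj_cam_xy, display_origin_in_cam_xy, scale=1.0):
    """Convert a monitored object's pixel position in the outside-scene
    camera image (field 11) into coordinates of the HMD display visual
    field region (field 12), assuming the display field is a sub-region
    of the wider camera field. All names are illustrative.

    obj_cam_xy               -- (x, y) of the object in camera pixels
    display_origin_in_cam_xy -- (x, y) of the display field's origin,
                                expressed in the same camera pixels
    scale                    -- camera-pixel to display-pixel ratio
    """
    return ((obj_cam_xy[0] - display_origin_in_cam_xy[0]) * scale,
            (obj_cam_xy[1] - display_origin_in_cam_xy[1]) * scale)

# A landmark detected at camera pixel (420, 250), with the display
# field's origin located at camera pixel (320, 200), appears at
# display coordinates (100, 50) when the pixel scales match:
assert to_display_coords((420, 250), (320, 200)) == (100, 50)
```

Tracking `display_origin_in_cam_xy` as the driver's gaze moves is the part the patent describes as constant monitoring of the two fields' relative position.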
  • the function and operation of each unit in the block are controlled by the main control unit 100.
  • the visual line direction detection device 4 captures an image of the eye of the user, that is, the driver 20 (user visual line image) (step S01), and outputs it to the user visual line direction calculation unit 106. Then, the user gaze direction calculation unit 106 detects the gaze direction (step S02), and the user visual field region calculation unit 107 determines the direct visual field region of the driver 20 based on the gaze direction information (step S03).
  • the outside scene photographing electronic camera 3 sequentially acquires predetermined video information (step S11), and the photographing visual field region calculation unit 103 detects the photographing visual field based on the video information (step S12).
  • the specific object extraction unit 101 extracts, from the video information acquired in step S11, video information of the route search specific objects described above, such as road signs, traffic lights, intersections, railroad crossings, and signboards of stores often found along roads such as convenience stores, family restaurants, and gas stations, and identifies the type of the extracted specific object by comparing it with the video information of each specific object stored in advance in the memory (step S13).
  • the route determination unit 111 acquires the position on the map data of the specific object extracted and identified in step S13, and also acquires the position of the host vehicle on the map data and the route data to be traveled in the future (step S21).
  • the route determination unit 111 collates the position of the specific object acquired in step S13 with the own vehicle position and the route data to be traveled acquired in step S21 (step S22), and determines whether any travel route guidance item, that is, a navigation item, should be displayed for the object (step S23). If the determination result is "Yes", the process proceeds to the next step S24; if "No", the process returns to steps S01 and S11.
  • the route determination unit 111 determines the type of navigation item to be displayed (step S24).
  • the display video control unit 108 determines the display position (origin) on the HMD display screen based on the direct visual field information of the driver 20 detected in step S03 and the photographing visual field information of the outside scene photographing electronic camera 3 detected in step S12 (step S25). Then, the display video control unit 108 displays the navigation item at the predetermined position on the HMD display screen determined in step S25 (step S26).
  • if the main power supply of the driving support system 10 is ON, the processing of the driving support system 10 is continued (step S27 / Yes), and the process returns to steps S01 and S11.
  • if the main power is turned off, the processing of the driving support system 10 is terminated (step S27 / No).
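The flow of steps S01 through S27 can be summarized as a single processing cycle in Python; every method name here is hypothetical, standing in for the corresponding unit of FIG. 12:

```python
def navigation_cycle(system):
    """One pass of the second-embodiment flow (steps S01-S27).
    Returns True while the main power is on, i.e. while the loop
    should continue."""
    gaze = system.user_gaze_direction()                 # S01-S02
    direct_field = system.direct_visual_field(gaze)     # S03
    frame = system.capture_outside_scene()              # S11
    shoot_field = system.shooting_visual_field(frame)   # S12
    objects = system.extract_specific_objects(frame)    # S13: signs, signals, signboards...
    route = system.acquire_route_data()                 # S21: own-vehicle position + route
    for obj in objects:
        if system.needs_navigation_item(obj, route):    # S22-S23
            item = system.select_navigation_item(obj, route)              # S24
            pos = system.display_position(obj, direct_field, shoot_field) # S25
            system.display(item, pos)                                     # S26
    return system.main_power_on()                       # S27
```

A driver loop would then simply be `while navigation_cycle(system): pass`; the per-object branch mirrors the "Yes"/"No" decision at step S23.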
  • FIG. 14 shows a display example of navigation items by the navigation system.
  • reference numeral 11 denotes the photographing field of view of the outside scene photographing electronic camera 3, and 13 denotes the direct visual field region that the driver directly views through the HMD display screen.
  • the system first extracts and identifies, for example, an image of the signboard of a specific restaurant (the portion surrounded by a broken line in the drawing) from the image of the shooting field of view 11. It then determines, from the stored map data, route data, and own vehicle position data, that the route to be traveled requires, for example, a left turn at the intersection in front of this restaurant. The position of that intersection is therefore extracted and identified from the photographed video, and appropriate navigation items, such as an arrow 207 prompting a left turn at the intersection position within the driver's direct visual field and a guidance comment such as "Turn left at this corner", are superimposed.
  • by superimposing navigation items in the field of view directly visible to the driver in this way, the driver is freed from frequently moving his or her gaze between the outside scene and the navigation screen while driving, as with a conventional navigation system using a built-in vehicle display, and the driver's awareness of both driving safety and navigation information is significantly increased. Furthermore, by incorporating such a system into the HMD device, the driver can drive under an appropriate navigation environment even in a vehicle without a navigation system.
  • the functions and the like of the present invention described above may be realized by hardware by designing some or all of them as, for example, an integrated circuit. Alternatively, they may be realized by software by having a microprocessor unit or the like interpret and execute an operation program that realizes each function. Hardware and software may also be used together.
  • the control lines and information lines shown in the figures are those considered necessary for the explanation; not all control lines and information lines of the product are necessarily shown. In practice, almost all components may be considered to be interconnected.
  • the present invention may also be used as a driving support system for a driver of a two-wheeled vehicle such as a motorcycle or bicycle, or as a safety assistance or destination guidance system for general pedestrians.
  • the present invention is not limited to the field of driving support, and can be applied to augmented reality (AR) display technologies in general, regardless of the use.
  • for example, it can be applied to games and pedestrian navigation systems.
  • it can also be applied to attraction guides in theme parks (displaying target age, height restrictions, waiting times, etc.) and to exhibit guides in museums (with the shooting function locked, however).
  • where necessary, the danger avoidance warnings and the travel route guidance may be given by audio or the like instead of displaying video items on the HMD display screen.
  • instead of the HMD device, any video display device for augmented display provided with a transparent display screen may be used.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

The objective of the present invention is to provide a technology that improves the visibility of an additional display even if the user changes viewpoint. To accomplish this, the present invention generates video information by imaging the external scene of a user (S11) and extracts from the video information a specific object about which information is to be provided to the user (S12). In addition, the direction of the user's line of sight is detected to generate line-of-sight direction information (S02). The display screen is a semi-transparent screen that displays the video information and through which the user can directly view the external scene; from the user's line-of-sight direction information, a direct visual field area, which is the visual field when the user views the external scene through the display screen, is calculated, and an imaging visual field area, which is the visual field when the external scene is imaged, is also calculated (S03). From an additional information storage unit that stores, in an associated manner, additional information on specific objects, the additional information associated with the extracted specific object is extracted (S17). The relative positional relationship between the imaging visual field area and the direct visual field area is calculated (S18), and by using this relative positional information, the extracted additional information is displayed on the display screen so as to follow the movement of the direct visual field area (S19).

Description

Video display apparatus and method
The present invention relates to a video display apparatus and method, and more particularly to a technique for displaying predetermined video information in front of a user's eyes with a device mounted on the user's head.
In recent years, information video display systems using so-called head mounted display (HMD) devices of the goggle or eyeglass type worn on the user's head have been rapidly spreading. In particular, a so-called see-through HMD device, which has a function of superimposing predetermined information video on the scene the user directly views through a semi-transparent display screen, can additionally display related information on the external scene the user is directly viewing, and therefore has a wide range of applications.
As an example of a technique for performing additional display on an HMD device, Patent Document 1 discloses a configuration in which "in an external-light-transmissive head mounted display, an imaging device electronically captures an object in the line-of-sight direction, video recognition means recognizes the object from this electronic data, and information relating to the object stored in advance in storage means is displayed. In addition, template data corresponding to the electronic data captured by the imaging device can be registered in advance in the imaging device for use in video recognition." (summary excerpt).
JP 2006-267887 A
The HMD device described in Patent Document 1 identifies the type of a specific object captured by the camera and performs additional display. However, when the user of the HMD device moves the viewpoint, the position of the user's visual field area changes before and after the movement, the positional relationship between the additional display and its target object within the visual field area changes, and visibility decreases. Patent Document 1 gives no consideration to this viewpoint movement of the user.
The present invention has been made to solve the above problem, and an object of the present invention is to provide a technique for improving the visibility of an additional display even when the user moves the viewpoint.
In order to achieve the above object, the present invention includes a configuration that: captures a user's external scene to generate video information; extracts from the video information a specific object about which information is to be provided to the user; detects the user's line-of-sight direction to generate line-of-sight direction information; calculates, from the user's line-of-sight direction information, a direct visual field area, which is the visual field when the user views the external scene through a semi-transparent display screen that displays the video information and through which the user can directly view the external scene; calculates an imaging visual field area, which is the visual field when the external scene is imaged; extracts, from an additional information storage unit that stores additional information on specific objects in an associated manner, the additional information associated with the extracted specific object; calculates the relative positional relationship between the imaging visual field area and the direct visual field area; and, using that relative positional relationship, displays the extracted additional information on the display screen so as to follow the movement of the direct visual field area.
According to the present invention, it is possible to provide a technique for improving the visibility of an additional display even when the user moves the viewpoint. Problems, configurations, and effects other than those described above will be clarified by the following description of the embodiments.
Schematic diagram showing an example of the HMD system in the first embodiment
Perspective view showing an outline of an example external configuration of the HMD device, which is the main part of the HMD system in the first embodiment and serves as the video display apparatus
Diagram showing the hardware configuration of the danger avoidance warning device 5
Schematic block diagram showing the electronic circuit configuration of the danger avoidance warning device
Flowchart showing the flow of processing of the driving support system according to the first embodiment
Diagrams for explaining the risk determination process, in which (a) is a schematic of the image captured by the outside scene photographing electronic camera 3 at the moment a pedestrian is about to jump out from a side road in front of the host vehicle, and (b) is a schematic bird's-eye view of the situation of FIG. 6(a)
Diagram showing the principle by which a positional deviation arises between the visual field of the displayed video (shooting area visual field) and the direct visual field
Schematic diagram showing an example of the relationship between the visual field area of the displayed video and the driver's direct visual field area
Schematic diagrams showing a case where the display visual field direction deviates from the driver's direct visual field direction, in which (a) shows the state before correction of the dynamic visual field direction deviation and (b) shows the state after correction
Diagrams showing examples of additional display when a monitoring object is present in the driver's visual field, in which (a) shows an example of additional display for a pedestrian jumping out and (b) shows an example of additional display for a red traffic light
Diagram showing another example of additional display
Schematic block diagram showing the electronic circuit configuration of the driving support device according to the second embodiment
Flowchart showing the flow of operation of the driving support system according to the second embodiment
Diagram showing a screen display example of the display screen of the HMD device according to the second embodiment
Hereinafter, embodiments of the present invention will be described with reference to the drawings. Throughout the drawings, the same components are denoted by the same reference numerals, and redundant description is omitted.
<First embodiment>
The first embodiment applies the present invention to a driving support system that assists a driver driving a vehicle using an HMD device, and additionally displays danger avoidance warning items. First, a schematic configuration of the present embodiment will be described with reference to FIGS. 1 and 2. FIG. 1 is a schematic diagram showing an example of the HMD system in the first embodiment. FIG. 2 is a perspective view showing an outline of an example external configuration of the HMD device, which is the main part of the HMD system in the first embodiment and serves as the video display apparatus.
The driving support system 10 shown in FIG. 1 includes the HMD device 1, which is mounted on the head of the driver 20 driving the automobile 50. The HMD device 1 is connected to the danger avoidance warning device 5, as shown in FIG. 2. For convenience of explanation, FIG. 2 shows the danger avoidance warning device 5 as a separate device connected to the HMD device 1 by wire, but it may be connected wirelessly or configured integrally with the HMD device 1.
The HMD device 1 is a so-called see-through HMD device including a semi-transmissive (transparent) display screen 2 having a function of displaying video within the visual field of the driver 20 while the outside world in front of the driver's eyes remains visible. The HMD device 1 further includes a mounting body 1a that holds the display screen 2 and is worn on the user's head, an outside scene photographing electronic camera 3, and an eyeball camera, that is, an electronic camera that images the eyeballs of the driver 20. When the user wears the mounting body 1a on the head, the display screen 2 is held in position in front of the eyes.
The outside-scene electronic camera 3 is installed so as to photograph the outside scene from substantially the same direction as the line of sight of the driver 20 wearing the HMD device 1, and can photograph the outside scene over a field of view sufficiently wider than the driver's directly viewed field.
The outside-scene electronic camera 3 may have an ordinary visible-light photographing function, or may have a function of photographing with non-visible light such as infrared light.
The outside-scene electronic camera 3 may also include a distance measuring unit, such as an autofocus function or an infrared ranging sensor, that measures the distance between the photographed subject and the camera.
The eyeball camera is, for example, a small electronic camera placed at a position that does not block the field of view of the driver 20. It photographs the eyeballs of the driver 20 obliquely from the front, and the driver's gaze direction is detected from the movement of the pupils or irises in the eyeball image. The eyeball image is output to the user gaze direction calculation unit 106 of the danger avoidance warning device 5 (described later), where the gaze direction is calculated. The gaze direction detection arrangement is not limited to a configuration including the eyeball camera and the user gaze direction detection unit; any configuration capable of detecting the gaze direction of the driver 20 or the visual field region the driver is viewing may be used.
The hardware configuration of the danger avoidance warning device 5 according to this embodiment will be described with reference to FIG. 3. FIG. 3 is a diagram showing the hardware configuration of the danger avoidance warning device 5.
As shown in FIG. 3, the danger avoidance warning device 5 includes a CPU (Central Processing Unit) 51, a RAM (Random Access Memory) 52, a ROM (Read Only Memory) 53, an HDD (Hard Disk Drive) 54, an I/F 55, and a bus 58. The CPU 51, RAM 52, ROM 53, HDD 54, and I/F 55 are connected to one another via the bus 58.
The danger avoidance warning device 5 is connected to the display screen 2, the outside-scene electronic camera 3, and the eyeball camera via the I/F 55.
Next, the internal configuration of the danger avoidance warning device 5 will be described with reference to FIG. 4. FIG. 4 is a schematic block diagram showing the electronic circuit configuration of the danger avoidance warning device 5. The function of each part of this block diagram is described below.
The danger avoidance warning device 5 includes a main control unit 100, a specific object extraction unit 101, a specific object related information detection unit 102, a photographing field-of-view calculation unit 103, a monitoring object determination unit 104, a monitoring object risk determination unit 105, a user gaze direction calculation unit 106, a user visual field calculation unit 107, a display video control unit 108, a memory 109, a graphic memory 110, an own-vehicle related information acquisition unit 112, a map data memory 113, and a GPS (Global Positioning System) device communication antenna 60. Of these components, the main control unit 100, the specific object extraction unit 101, the specific object related information detection unit 102, the photographing field-of-view calculation unit 103, the monitoring object determination unit 104, the monitoring object risk determination unit 105, the user gaze direction calculation unit 106, the user visual field calculation unit 107, the display video control unit 108, and the own-vehicle related information acquisition unit 112 are each realized by cooperation between software implementing the function of the corresponding block and the hardware shown in FIG. 3. The memory 109, the graphic memory 110, and the map data memory 113 are each constituted by the RAM 52 and/or the ROM 53.
The specific object extraction unit 101 is connected to the outside-scene electronic camera 3. It extracts, from the outside-scene video captured by the camera, video information on specific objects, for example moving people or vehicles, or road-related objects such as traffic lights, railroad crossings, road signs, and lanes, and identifies the type of each extracted specific object by collating it against the specific-object reference video information stored in advance in the memory 109.
In addition to the specific object extraction unit 101, the outside-scene electronic camera 3 is also connected to a specific object related information detection unit 102, which detects related information such as a specific object's position and, if it is moving, its moving direction and speed, and to a photographing field-of-view calculation unit 103, which detects the field of view photographed by the outside-scene electronic camera 3.
The monitoring object determination unit 104 has a function of determining, from the information acquired by the specific object extraction unit 101 and the specific object related information detection unit 102, whether a specific object carries a risk of endangering the vehicle's travel in the future and therefore requires continued monitoring as a monitoring object.
The monitoring object risk determination unit 105 has a function of comprehensively judging the degree of danger a monitoring object poses to the vehicle from the information on the monitoring object obtained from the above units, the own-vehicle related information acquired by the own-vehicle related information acquisition unit 112 (such as the vehicle position, traveling speed, and direction), and the map data of the vehicle's surroundings stored in the map data memory 113, and of deciding whether to display a danger avoidance warning to the driver.
In this embodiment, the GPS device communication antenna 60 is connected to the own-vehicle related information acquisition unit 112. The automobile 50 is equipped with a GPS device (not shown) that receives positioning radio waves from GPS satellites and detects the vehicle position, and GPS data is received from this GPS device via the GPS device communication antenna 60. The configuration for acquiring the own-vehicle related information is not limited to this; an inertial measurement unit, a vehicle speed detection device mounted on the automobile 50, or the like may be used in combination. The GPS device may also be included in the driving support system 10 according to this embodiment.
The monitoring object risk determination unit 105 decides whether to display a danger avoidance warning; it therefore corresponds to an additional display determination unit. The display video control unit 108 selects and extracts a predetermined danger avoidance warning item from the graphic memory 110 and controls it to be superimposed at a predetermined position on the display screen 2 of the HMD device 1.
The displayed danger avoidance warning item should preferably be shown in such a way that the driver 20 can correctly recognize whether it refers to a monitoring object within the visual field the driver directly sees through the display screen 2 of the HMD device 1 or to a monitoring object outside the driver's visual field. For this purpose, the danger avoidance warning item superimposed on the display screen 2 of the HMD device 1 must be displayed correctly in the vicinity of the monitoring object the driver 20 is directly viewing, or at a display position associated with that monitoring object.
To achieve this, the present embodiment includes, as the gaze detection arrangement, the eyeball camera, a user gaze direction calculation unit 106 that calculates the gaze direction of the user of the HMD device 1 (that is, the driver 20), and a user visual field calculation unit 107 that detects, from the gaze information, the visual field region the driver directly views through the display screen 2 of the HMD device 1. The relative positional relationship between the driver's directly viewed field obtained by the user visual field calculation unit 107 and the photographing field of the outside-scene electronic camera 3 obtained by the photographing field-of-view calculation unit 103 is constantly monitored, and from these monitoring data the display video control unit 108 calculates the correct display position of each danger avoidance warning item to be displayed. A specific example of this display-position calculation flow for danger avoidance warning items is described in detail later.
The function and operation of each unit in the block diagram are controlled by the main control unit 100.
Next, the operation of the driving support system according to the first embodiment will be described using the flowchart of FIG. 5. FIG. 5 is a flowchart showing the processing flow of the driving support system according to the first embodiment. It is assumed that the main power of the HMD device 1 and the danger avoidance warning device 5 has been turned on before the following processing starts. When the main power is turned on, the user visual field detection process of steps S01 to S03 and the danger avoidance warning process of steps S11 to S20 start in parallel. They are described below in that order.
In the user visual field detection process, the gaze direction detection device 4 captures an image of the eyes of the user, that is, the driver 20 (an eyeball image) (step S01), and outputs it to the user gaze direction calculation unit 106.
The user gaze direction calculation unit 106 performs, for example, iris detection on the eyeball image, detects the user's gaze direction from the position and orientation of the irises (step S02), and outputs the gaze direction to the user visual field calculation unit 107.
The user visual field calculation unit 107 detects the visual field region of the driver 20 based on the gaze direction information (step S03). For example, the intersection of the vector indicating the gaze direction with the display screen 2 is taken as the center of the user's visual field, and a predetermined in-plane range of the display screen 2 around that center is defined as the user visual field region. The process then proceeds to step S19.
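The step-S03 computation described above can be sketched as follows. This is a minimal illustration only, assuming a planar display screen at a fixed depth and a rectangular visual field region; the function names, coordinate frame, and dimensions are assumptions for illustration, not the embodiment's actual implementation.

```python
# Hypothetical sketch of step S03: intersect the gaze vector with the
# display-screen plane and take a fixed rectangle around that point as
# the user visual field region. The planar screen model and all names
# are illustrative assumptions.

def gaze_screen_intersection(eye_pos, gaze_dir, screen_z):
    """Intersect a gaze ray (origin eye_pos, direction gaze_dir) with the
    plane z = screen_z and return the (x, y) point on the screen."""
    ex, ey, ez = eye_pos
    dx, dy, dz = gaze_dir
    t = (screen_z - ez) / dz          # ray parameter at the screen plane
    return (ex + t * dx, ey + t * dy)

def visual_field_region(center, half_width, half_height):
    """Rectangle (x_min, y_min, x_max, y_max) around the field center."""
    cx, cy = center
    return (cx - half_width, cy - half_height,
            cx + half_width, cy + half_height)

# Eye at the origin, gazing slightly to the right; screen 3 cm ahead:
center = gaze_screen_intersection((0.0, 0.0, 0.0), (0.1, 0.0, 1.0),
                                  screen_z=0.03)
region = visual_field_region(center, half_width=0.02, half_height=0.015)
```

The same intersection construction reappears later in the embodiment, where the midpoint of the left- and right-eye intersections defines the display field center.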
Meanwhile, in the danger avoidance warning process, the outside-scene electronic camera 3 captures video of the outside scene (hereinafter "outside-scene video") from substantially the same viewpoint as the driver 20 (step S11) and outputs it to the specific object extraction unit 101, the specific object related information detection unit 102, and the photographing field-of-view calculation unit 103.
The photographing field-of-view calculation unit 103 detects, based on the outside-scene video, the region captured as the outside-scene video (hereinafter the "photographing field of view") (step S12), and the process then proceeds to step S18.
The specific object extraction unit 101 extracts, from the outside-scene video, video information on specific objects such as moving people or vehicles, or road-related objects such as traffic lights, railroad crossings, road signs, and lanes, and identifies the type of each extracted specific object by collating it against the specific-object reference video information stored in advance in the memory. The specific object extraction unit 101 outputs the extracted specific objects to the specific object related information detection unit 102.
The specific object related information detection unit 102 then detects related information on each specific object acquired from the specific object extraction unit 101, for example its moving direction and moving speed (step S13). The related information may be detected, for example, from the amount of change in the region where the specific object appears between successive frames, or from the output of the distance measuring unit that measures the distance to the specific object. The extracted specific objects and their related information are output to the monitoring object determination unit 104.
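The frame-difference idea mentioned for step S13 can be sketched as follows. This is a hedged illustration under assumed names and units (map-plane positions in meters, a fixed frame interval); the embodiment does not specify these details.

```python
# Hypothetical sketch of step-S13 related-information detection: estimate
# a specific object's per-axis velocity from its positions in two
# successive frames. Names, units, and the frame rate are assumptions.

def object_velocity(pos_prev, pos_curr, frame_interval_s):
    """Per-axis speed (units per second) from two frame positions."""
    vx = (pos_curr[0] - pos_prev[0]) / frame_interval_s
    vy = (pos_curr[1] - pos_prev[1]) / frame_interval_s
    return (vx, vy)

# A pedestrian moving 0.5 m toward the road between frames 1/30 s apart:
v = object_velocity((10.0, 2.0), (10.0, 1.5), 1 / 30)
```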
The monitoring object determination unit 104 determines whether a specific object extracted and identified in step S13 is a monitoring object posing a risk of danger to the travel of the vehicle (step S14). If the determination is "Yes", the process proceeds to the next step, S15; if "No", the process returns to steps S01 and S11. The determination may be made, for example, from the object's moving direction and speed: when the vector indicating the object's moving direction (with its length determined by the moving speed) intersects the vehicle's moving-direction vector, it is judged that the vehicle and the object may interfere with each other.
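The vector-intersection test described for step S14 can be sketched as a 2-D segment crossing check. This is an illustrative assumption about how such a test could be realized, not the embodiment's specified algorithm; the horizon length and all names are hypothetical.

```python
# Hypothetical sketch of the step-S14 test: extend the object's and the
# vehicle's motion vectors over a short horizon (segment lengths encode
# the speeds) and flag the object for monitoring if the segments cross.

def _ccw(a, b, c):
    """Signed area test: >0 if a->b->c turns counter-clockwise."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def paths_cross(p0, p_vel, q0, q_vel, horizon_s):
    """True if the object segment p0 -> p0 + p_vel*horizon crosses the
    vehicle segment q0 -> q0 + q_vel*horizon (proper crossing only)."""
    p1 = (p0[0] + p_vel[0] * horizon_s, p0[1] + p_vel[1] * horizon_s)
    q1 = (q0[0] + q_vel[0] * horizon_s, q0[1] + q_vel[1] * horizon_s)
    d1, d2 = _ccw(p0, p1, q0), _ccw(p0, p1, q1)
    d3, d4 = _ccw(q0, q1, p0), _ccw(q0, q1, p1)
    return d1 * d2 < 0 and d3 * d4 < 0

# Pedestrian walking across the road while the vehicle drives straight on:
risk = paths_cross((5.0, 3.0), (0.0, -1.5), (0.0, 0.0), (8.0, 0.0),
                   horizon_s=3.0)
```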
The monitoring object determination unit 104 continues to acquire related information on a specific object determined in step S14 to be a monitoring object, such as its position, speed, and moving direction relative to the vehicle (step S15), and outputs this information to the monitoring object risk determination unit 105.
The monitoring object risk determination unit 105 judges, from the related information on the monitoring object obtained in step S15, the degree of danger the monitoring object poses to the vehicle, and determines whether to display a predetermined danger avoidance warning item (step S16). If the determination is "Yes", the process proceeds to the next step, S17; if "No", the process returns to steps S01 and S11. Specific examples of the processing in steps S15 and S16 are described later.
The monitoring object risk determination unit 105 determines the type of danger avoidance warning item to display (step S17). For the same monitoring object, it may display a warning item that alerts the user more strongly when the object is closer to the vehicle.
The display video control unit 108 determines the display position within the display screen 2 based on the user's visual field information and the photographing field information (step S18), and additionally displays the warning item on the display screen 2 (step S19). If the main power of the driving support system 10 is on, the processing of the driving support system 10 continues (step S20/Yes) and returns to steps S01 and S11; when the main power is turned off, the processing of the driving support system 10 ends (step S20/No).
Next, the specific processing in steps S15 and S16 of the above flow will be described, taking as an example a case in which a pedestrian runs out from a side road in front of the vehicle. FIG. 6 illustrates the risk determination process: (a) is a schematic of the video captured by the outside-scene electronic camera 3 at the moment a pedestrian is about to run out from a side road in front of the vehicle, and (b) is a schematic bird's-eye view of the situation in FIG. 6(a).
As shown in FIG. 6(a), the specific object extraction unit 101 extracts and identifies, from the outside-scene video captured over the photographing field region 11, the image of a pedestrian 31a about to run out in front of the vehicle. The specific object related information detection unit 102 detects related information on the pedestrian 31a. The monitoring object determination unit 104 judges the pedestrian to be a monitoring object and enters a mode of monitoring its movement. Then, if in subsequent frames the pedestrian approaches the front of the vehicle, as at 31b, the system, as shown in FIG. 6(b), calculates the pedestrian's movement vector "u" from the points 41a and 41b corresponding to the position coordinates of the pedestrian at 31a and 31b on the map data 40 of the vehicle's surroundings stored in the map data memory 113, and calculates from the own-vehicle related information the movement vector "v" of the point 42 corresponding to the vehicle on the map data 40. By comparing the directions and magnitudes (corresponding to speeds) of the vectors "u" and "v", the system computes the collision risk between the pedestrian and the vehicle, and determines, according to that collision risk, whether to display a danger avoidance warning item on the HMD screen.
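One plausible way to compare the vectors "u" and "v" is a closest-approach computation, sketched below. The closest-approach formulation and the warning threshold are illustrative assumptions; the embodiment only states that the directions and magnitudes of the two vectors are compared.

```python
# Hedged sketch of the risk computation above: build the pedestrian's
# vector "u" from two map positions (points 41a -> 41b), take the
# vehicle's vector "v" (point 42), and rate collision risk by the
# predicted separation at closest approach. Threshold is an assumption.

def movement_vector(p_prev, p_curr):
    return (p_curr[0] - p_prev[0], p_curr[1] - p_prev[1])

def closest_approach(p, u, q, v):
    """Time (in frames) and distance of closest approach for points p, q
    moving with per-frame vectors u, v (relative-motion formulation)."""
    rx, ry = p[0] - q[0], p[1] - q[1]          # relative position
    wx, wy = u[0] - v[0], u[1] - v[1]          # relative velocity
    w2 = wx * wx + wy * wy
    t = 0.0 if w2 == 0 else max(0.0, -(rx * wx + ry * wy) / w2)
    dx, dy = rx + wx * t, ry + wy * t
    return t, (dx * dx + dy * dy) ** 0.5

u = movement_vector((10.0, 8.0), (10.0, 7.0))   # pedestrian: 41a -> 41b
v = (0.0, 0.5)                                   # vehicle (point 42)
t, dist = closest_approach((10.0, 7.0), u, (10.0, 2.0), v)
show_warning = dist < 1.0                        # assumed warning threshold
```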
Next, the display position of the danger avoidance warning item will be described. From the viewpoint of visibility, each danger avoidance warning item superimposed on the display screen 2 of the HMD device 1 is preferably displayed accurately at a predetermined position near, or associated as closely as possible with, the corresponding monitoring object, whether the driver 20 is viewing that object directly through the display screen 2 or the object lies outside the driver's visual field.
To display a predetermined danger avoidance warning item accurately at the position corresponding to the monitoring object within the scene that the driver 20 directly views through the display screen 2, the viewing direction of the video superimposed on the display screen 2 must coincide completely with the viewing direction of the outside scene the driver is directly viewing.
However, the visual field region directly seen by the driver 20 wearing the HMD device 1 changes constantly with the direction in which the driver is gazing. The outside-scene electronic camera 3, on the other hand, is fixed to the HMD device 1 and therefore points in the direction the HMD device 1 is facing, that is, the direction the head of the driver 20 is facing. Consequently, the viewing directions of the photographing field region formed from the video of the outside-scene electronic camera 3 and of the driver's own directly viewed field can easily diverge. More precisely, the photographing field region and the field of view of the displayed video captured over that field also differ depending on where on the display screen 2 the video is displayed.
Regarding the offset between the photographing field and the field of view of the displayed video: since the outside-scene electronic camera 3 is fixed to the HMD device 1, the error between the center point of the photographing field and the center point of the display region of the display screen 2 can be corrected by an amount determined from their geometric positional relationship. After this correction, the center point of the photographing field region and the center point of the display screen 2 are treated as coinciding, and in the following description the field of view of the displayed video and the photographing field are regarded as identical. The principle by which a positional deviation arises between the display video field (photographing field) and the directly viewed field will now be described with reference to FIG. 7. FIG. 7 is a diagram showing how this positional deviation arises.
The upper part of FIG. 7 shows the driver 20 facing forward. That the driver 20 is viewing a monitoring object, for example a pedestrian 31, means that visible light reflected from the surface of the pedestrian 31 enters the left eye 21L and the right eye 21R and forms an image on each retina. The paths of the incident light are denoted β. The midpoint of the intersections of these paths β with the display screen 2 is denoted Q.
The lower part of FIG. 7 shows the driver 20 facing to the right. The midpoint of the intersections with the display screen 2 of the paths β′ along which visible light reflected from the surface of the pedestrian 31 enters the left eye 21L and the right eye 21R is denoted Q′. The midpoint Q′ differs in position from the midpoint Q. That is, even when the relative position of the HMD device 1 and the pedestrian 31 is unchanged, if the line of sight of the driver 20 shifts from the vector α (upper part of FIG. 7) to the vector α′ (lower part of FIG. 7), the field of view of the displayed image of the pedestrian 31 on the display screen 2 deviates from the driver 20's own directly viewed field.
In this embodiment, therefore, the midpoint P of the intersections of the gaze vectors α of the left and right eyes of the driver 20 with the display screen 2 is defined as the center of the display field. In the upper part of FIG. 7 the point P is the center of the display field; in the lower part the point P′ is. The point P′ thus lies at a position displaced by Δx in the horizontal direction from the point P within the plane of the display screen 2. Although FIG. 7 illustrates a horizontal cross-section, the line of sight also moves vertically; in that case the point P′ is displaced by Δy in the vertical direction from the point P within the plane of the display screen 2.
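The midpoint construction and the resulting displacement (Δx, Δy) can be sketched as below. The screen-plane coordinates and the numeric values are hypothetical; only the midpoint-and-displacement structure follows the description above.

```python
# Sketch of the display-field center P: the midpoint of the left- and
# right-eye gaze/screen intersections, and the shift (dx, dy) between
# the straight-ahead P and the deflected P'. All values are assumed.

def midpoint(a, b):
    return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)

def displacement(p, p_dash):
    """(dx, dy) by which P' is displaced from P in the screen plane."""
    return (p_dash[0] - p[0], p_dash[1] - p[1])

P = midpoint((-4.0, 0.0), (4.0, 0.0))        # straight-ahead intersections
P_dash = midpoint((8.0, 1.0), (16.0, 1.0))   # intersections looking right
dx, dy = displacement(P, P_dash)
```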
Thus, even when the relative position of the HMD device 1 and the monitoring object does not change, the viewpoint movement of the driver 20 produces a deviation between the viewing direction of the displayed video and the driver's direct viewing direction (the user viewing direction). To keep the two always in agreement, some dynamic viewing-direction correction that continuously aligns the viewing direction of the displayed video with the driver's direct viewing direction in real time is required.
In this embodiment, this dynamic display-viewing-direction correction is performed as follows. First, the photographing field of the outside-scene electronic camera 3 is made sufficiently wider than the field the driver 20 directly views through the display screen 2. The dynamic correction of the display viewing direction is then carried out as described below with reference to FIGS. 8 and 9.
FIG. 8 is a schematic diagram showing an example of the relationship between the display field of the HMD device 1 and the directly viewed field of the driver 20. The region enclosed by the broken-line rectangle in FIG. 8 is the entire outside-scene video and coincides with the photographing field region 11 of the outside-scene electronic camera 3. The region enclosed by the solid rectangle is the display field region 12 of the HMD device 1, and the region enclosed by the elliptical frame within it is the directly viewed field region 13 of the driver 20. At the center of the display field region 12, a point O and virtual X-Y coordinate axes with O as the origin are set, and the display position (coordinates) of each display item is calculated with reference to these X-Y axes. The origin O and the X-Y axes are set only virtually within processing step S18 of the flowchart of FIG. 5 and are not actually displayed on the display screen 2.
 Also in processing step S18, the center point P of the direct visual field region 13 of the HMD user, i.e. the driver 20, determined via the eyeball camera, the user gaze direction calculation unit 106, and the user visual field region calculation unit 107, is obtained, and the coordinate position of the point P on the X-Y coordinates set within the display visual field region 12 is calculated.
 In the example of FIG. 8, the coordinates of the point P are (0, 0); that is, the center point P of the driver 20's direct visual field region 13 coincides with the origin O of the display visual field region 12. In this case, the display visual field direction of the HMD device 1 and the driver's direct visual field direction clearly coincide.
 FIG. 9, on the other hand, is a schematic diagram showing a case where the display visual field direction of the HMD device 1 and the driver's direct visual field direction are misaligned. In this case, as shown in FIG. 9(a), the center point P of the driver 20's direct visual field region 13 lies at coordinates (Xp, Yp), displaced from the origin O of the display visual field region 12.
 At this time, as shown in FIG. 9(b), the system automatically shifts the origin O of the display visual field region 12 to the position O' coinciding with the point P, and defines new coordinate axes X'-Y' having O' as their origin. In other words, the display image control unit 108 adjusts the placement of the origin O and the coordinate axes X-Y of the display visual field region 12 so that the center point P of the driver's visual field region 13 always coincides with the origin of the display visual field region 12, and the display position (coordinates) of each display item is calculated using the adjusted X-Y coordinate axes.
 By automatically adjusting the origin and the coordinate axes of the display visual field to the driver's direct visual field direction in this way, each display item can be displayed accurately at the position corresponding to the monitored object that the driver is directly viewing.
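As a rough illustration only (not part of the disclosed embodiment; the function name and tuple layout are assumptions), the origin-shift correction above amounts to subtracting the gaze-center offset (Xp, Yp) from each display item's coordinates:

```python
def corrected_item_position(item_xy, gaze_center_xy):
    """Shift a display item so that the display origin tracks the center
    point P of the driver's direct visual field region.

    item_xy        -- item position in the original X-Y axes (origin O)
    gaze_center_xy -- point P = (Xp, Yp) measured in the same axes
    Returns the item position in the shifted X'-Y' axes (origin O').
    """
    x, y = item_xy
    xp, yp = gaze_center_xy
    return (x - xp, y - yp)

# FIG. 8 case: P coincides with O, so no correction is applied.
assert corrected_item_position((3.0, -2.0), (0.0, 0.0)) == (3.0, -2.0)
# FIG. 9 case: P displaced to (Xp, Yp); every item shifts by the same offset.
assert corrected_item_position((3.0, -2.0), (1.5, 0.5)) == (1.5, -2.5)
```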
 Display examples of danger avoidance warning items in the driving support system according to the first embodiment will now be described with reference to FIGS. 10 and 11. FIG. 10 shows examples of additional display when a monitored object is present within the driver's visual field region: (a) shows an additional display example for a pedestrian running out into the road, and (b) shows an additional display example for a red traffic light.
 FIG. 10(a) shows the example described with reference to FIG. 6, in which a pedestrian is about to run out from a side road into the path of the host vehicle. To concentrate the driver's attention on the pedestrian, which is the monitored object, danger avoidance warning items are superimposed, for example a frame line 201 (shown as a broken line) displayed so as to surround the pedestrian the driver is viewing, and a warning message 202 such as "Caution: pedestrian running out!" displayed near the center of the visual field.
 FIG. 10(b) shows an example in which the traffic light ahead of the host vehicle has turned red. To concentrate the driver's attention on the red light, which is the monitored object, danger avoidance warning items are superimposed, for example a frame line 203 (shown as a broken line) displayed so as to surround the red light ahead, and a warning message 204 such as "Red light. Stop!" displayed near the center of the visual field.
 The examples of FIGS. 10(a) and 10(b) both show danger avoidance warning items for danger-monitored objects within the driver 20's direct visual field region 13. In the present invention, however, because the photographing field of view of the outside-scene photographing electronic camera 3 is made sufficiently wider than the driver's direct visual field region 13, danger avoidance warning items can also be displayed for monitored objects outside the driver's visual field, making it possible to draw the driver's attention to them as well.
 FIG. 11 is a schematic diagram showing such a display example. A pedestrian is about to run out into the path of the host vehicle from the left, outside the driver 20's direct visual field region 13. The driver 20 naturally cannot see this pedestrian, but the danger avoidance warning device 5 of the present invention quickly identifies the pedestrian as a monitored object from the video captured by the outside-scene photographing electronic camera 3, determines from its movement that there is a collision risk, and displays danger avoidance warning items, for example an attention-drawing arrow 205 and a warning message 206 such as "Running out from the left!", near the center of the driver's direct visual field region 13.
 Although FIG. 11 shows the example of a pedestrian running out, the invention is naturally not limited to this; similar danger avoidance warning items can be displayed even when, for example, a red light as in FIG. 10(b) lies outside the driver 20's direct visual field region 13.
 Monitored objects are likewise not limited to pedestrians running out or traffic lights; any object that may pose some risk to vehicle travel, such as a vehicle pulling out, a road sign, a railroad crossing, a step in the road surface, or a traffic lane, may serve as a monitored object. A more advanced vehicle driving support system can then be realized by selecting and displaying an appropriate danger avoidance warning item on the basis of information such as the type, position, and movement of each monitored object.
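Purely as a hedged sketch of how a warning item might be chosen per object type (the table contents and function names are illustrative assumptions; the patent leaves the concrete selection rule open):

```python
# Hypothetical mapping from monitored-object type to a warning item
# (item kind, message). A real system would also use position and motion.
WARNING_ITEMS = {
    "pedestrian": ("frame", "Caution: pedestrian running out!"),
    "red_light":  ("frame", "Red light. Stop!"),
    "crossing":   ("icon",  "Railroad crossing ahead"),
}

def select_warning_item(obj_type, collision_risk):
    """Pick a warning item for a monitored object; suppress the item
    when no collision risk has been determined from its movement."""
    if not collision_risk:
        return None
    return WARNING_ITEMS.get(obj_type)

assert select_warning_item("pedestrian", True) == ("frame", "Caution: pedestrian running out!")
assert select_warning_item("pedestrian", False) is None
```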
 By superimposing danger avoidance warning items within the visual field that the driver is directly viewing in this way, responsiveness in avoiding danger is markedly improved compared with a conventional driving support system using a vehicle-mounted display. Furthermore, by incorporating such a system into the HMD device, the driver can always drive under an appropriate danger avoidance warning environment, even when driving a vehicle without a built-in driving support system.
 According to the present embodiment, not only the type of the monitored object but also related information such as its position, moving speed, and moving direction is acquired, and danger avoidance warning items that take this information into account are displayed, which improves the driver's responsiveness and recognition and further enhances the convenience of the driving support system.
 In the present embodiment, when performing so-called additional display, in which danger avoidance warning items are superimposed on the direct visual field, the driver's viewpoint movement is detected and the positional deviation between the origin of the direct visual field region and the display visual field region accompanying that movement is corrected before the additional display is performed. As a result, there is no positional deviation, or the deviation is mitigated, between the monitored object and the danger avoidance warning item as seen by the driver, improving the driver's recognition.
<Second Embodiment>
 The second embodiment applies the present invention to a travel route guidance system, a so-called navigation system, among driving support systems. The second embodiment is described below with reference to FIGS. 12 to 14. FIG. 12 is a schematic block diagram showing the electronic circuit configuration of the driving support device according to the second embodiment. FIG. 13 is a flowchart showing the flow of operations of the driving support system according to the second embodiment. FIG. 14 is an example of the screen display on the display screen of the HMD device according to the second embodiment. The function of each part of the block diagram is described below. In this figure, blocks having the same functions as the blocks in the schematic block diagram of the first embodiment shown in FIG. 3 are given the same numbers.
 As shown in FIG. 12, the driving support device 5a according to the second embodiment differs from the danger avoidance warning device 5 according to the first embodiment in that it does not include the monitored object risk determination unit 105 of the danger avoidance warning device 5 and instead includes a route determination unit 111. In addition, even where block names are the same, there are differences in the objects recognized as specific objects, as described in detail below.
 As shown in FIG. 12, the specific object extraction unit 101 is connected to the outside-scene photographing electronic camera 3. The specific object extraction unit 101 has a function of extracting, from the outside-scene video captured by the outside-scene photographing electronic camera 3, video information of specific objects that serve as search points when searching for a travel route, for example road signs, traffic lights, intersections, railroad crossings, and signboards of roadside shops such as convenience stores, family restaurants, and gas stations, and of identifying the type of each extracted specific object by collating it with the video information of each specific object stored in advance in the memory 109.
 In addition to the specific object extraction unit 101, the outside-scene photographing electronic camera 3 is also connected to a specific object related information detection unit 102, which detects related information such as the position of a specific object, and to a photographing visual field region calculation unit 103, which detects the photographing field of view of the outside-scene photographing electronic camera 3.
 The monitored object determination unit 104 has a function of determining, from the information acquired by the specific object extraction unit 101 and the specific object related information detection unit 102, whether a given specific object is a monitored object that will be needed for future travel route searches of the host vehicle and whose relative positional relationship to the host vehicle therefore needs to be monitored.
 The route determination unit 111 has a function of searching for the travel route of the host vehicle by collating the information obtained from the above units with the host-vehicle-related information, such as the host vehicle's position, traveling speed, and traveling direction, acquired by the host-vehicle-related information acquisition unit 112, and with the map data of the vicinity of the host vehicle stored in the map data memory 113, and of determining whether a travel route guidance item should be displayed for a monitored object when guiding the driver along that travel route.
 In the present embodiment, the host-vehicle-related information acquisition unit 112 acquires, for example, GPS data, but the invention is of course not limited to this; for example, a configuration for detecting the host vehicle's position, traveling speed, direction, and so on may be provided in the vehicle the driver is riding in or within the HMD system of the present invention. The host-vehicle-related information acquisition unit 112 corresponds to a position information acquisition unit that acquires position information indicating the current position of the user.
 When the route determination unit 111 decides that a given travel route guidance item is to be displayed, the display image control unit 108 selects and extracts that travel route guidance item, for example an arrow indicating a left or right turn, from the graphic memory 110, and control is performed so that the item is superimposed on the display screen 2 of the HMD device 1.
 At this time, as in the first embodiment, the displayed travel route guidance item needs to be superimposed accurately at the predetermined position on the display screen 2 so that the driver 20 can correctly recognize it as a display item associated with the corresponding monitored object that he or she views directly through the display screen 2.
 To realize this, the present embodiment includes a user gaze direction calculation unit 106, which detects the gaze direction of the user of the HMD device 1, i.e. the driver, via the eyeball camera, and a user visual field region calculation unit 107, which detects, from the gaze information, the visual field region that the driver views directly through the display screen 2. The relative positional relationship between the driver's direct visual field obtained by the user visual field region calculation unit 107 and the photographing field of view of the outside-scene photographing electronic camera 3 obtained by the photographing visual field region calculation unit 103 is constantly monitored, and from the monitored data the display image control unit 108 calculates the correct display position of the guidance item to be displayed. A specific example of this item display position calculation processing flow has already been described in the first embodiment (FIGS. 8 and 9), so the description is omitted here.
 As in the first embodiment, the functions and operations of each unit in the block diagram are controlled by the main control unit 100.
 Next, the processing flow of the travel route guidance system, i.e. the so-called navigation system, according to the second embodiment is described with reference to the flowchart of FIG. 13. In this figure, processing steps identical to those in the flowchart of FIG. 5 shown for the first embodiment are given the same numbers.
 In the user visual field detection process, the gaze direction detection device 4 captures an image of the eyes of the user, i.e. the driver 20 (the user gaze video) (step S01), and outputs it to the user gaze direction calculation unit 106. The user gaze direction calculation unit 106 then detects the gaze direction (step S02), and the user visual field region calculation unit 107 determines the direct visual field region of the driver 20 on the basis of the gaze direction information (step S03).
 In parallel with the user visual field detection process, the outside-scene photographing electronic camera 3 sequentially acquires the prescribed video information (step S11), and the photographing visual field region calculation unit 103 detects the photographing field of view on the basis of the video information (step S12).
 At the same time, the specific object extraction unit 101 extracts from the video information acquired in step S11 the video information of the route-search specific objects described above, for example road signs, traffic lights, intersections, railroad crossings, and signboards of roadside shops such as convenience stores, family restaurants, and gas stations, which serve as points in the travel route search, and identifies the type of each extracted specific object by collating it with the video information of each specific object stored in advance in the memory (step S13).
 Next, the route determination unit 111 acquires the position on the map data of the specific object extracted and identified in step S13, together with the position of the host vehicle on the map data and the route data along which the vehicle should travel (step S21).
 The route determination unit 111 then collates the position of the specific object acquired in step S13 with the position of the host vehicle and the route data acquired in step S21 (step S22), and determines whether a travel route guidance item, i.e. a navigation item, should be displayed for the specific object (step S23). If the determination result is "Yes", the process proceeds to the next step, S24. If the determination result is "No", the process returns to steps S01 and S11.
 The route determination unit 111 determines the type of navigation item to be displayed (step S24).
 The display image control unit 108 determines the display position (origin) on the HMD display screen on the basis of the direct visual field information of the driver 20 detected in step S03 and the photographing visual field information of the outside-scene photographing electronic camera 3 detected in step S12 (step S25). The display image control unit 108 then displays the navigation item at the position on the HMD display screen determined in step S25 (step S26).
 If the main power supply of the driving support system 10 is ON, the processing of the driving support system 10 continues (step S27/Yes) and the process returns to steps S01 and S11. If the main power supply is turned OFF, the processing of the driving support system 10 ends (step S27/No).
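One cycle of the FIG. 13 flow can be sketched as follows, purely as an illustration under simplifying assumptions: the helper functions stand in for the specific object extraction unit 101 and the route determination unit 111, and none of the names come from the disclosure.

```python
# Hypothetical stand-ins; real implementations would use image
# recognition (S13) and map/route data collation (S21-S22).
def extract_specific_objects(frame):
    return frame["objects"]

def needs_guidance(obj, route_data):
    return obj["name"] in route_data["turn_points"]

def choose_item_type(obj, route_data):
    return route_data["turn_points"][obj["name"]]  # e.g. "left-turn arrow"

def run_navigation_cycle(gaze_region, camera_frame, route_data):
    """Extract specific objects (S13), decide whether a navigation item
    is needed (S21-S23), choose its type (S24), and place it relative to
    the center of the driver's direct visual field (S25)."""
    items = []
    for obj in extract_specific_objects(camera_frame):
        if needs_guidance(obj, route_data):
            kind = choose_item_type(obj, route_data)
            pos = (obj["x"] - gaze_region["cx"],
                   obj["y"] - gaze_region["cy"])
            items.append({"type": kind, "pos": pos})
    return items  # passed on for superimposed display (S26)

frame = {"objects": [{"name": "restaurant_sign", "x": 40, "y": 10},
                     {"name": "tree", "x": 5, "y": 5}]}
route = {"turn_points": {"restaurant_sign": "left-turn arrow"}}
gaze = {"cx": 30, "cy": 8}
assert run_navigation_cycle(gaze, frame, route) == [
    {"type": "left-turn arrow", "pos": (10, 2)}]
```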
 FIG. 14 shows a display example of a navigation item displayed by the navigation system. In this figure, reference numeral 11 denotes the photographing field of view of the outside-scene photographing electronic camera 3, and 13 denotes the direct visual field region that the driver views directly through the HMD display screen.
 In FIG. 14, the system first extracts and identifies, for example, the image of a signboard of a specific restaurant (the portion surrounded by a broken line in the figure) from the video of the photographing visual field region 11. From the map data, route data, and host vehicle position data it holds, the system determines, for example, that the vehicle must turn left at the intersection just before this restaurant as part of the route to be traveled. It therefore extracts and identifies the position of that intersection from the captured video and superimposes appropriate navigation items, for example an arrow 207 prompting a left turn and a guidance comment such as "Turn left at this corner", at the intersection position within the driver's direct visual field.
 By superimposing navigation items within the visual field that the driver is directly viewing in this way, the driver no longer needs to move his or her gaze frequently back and forth between the outside scene and a navigation screen while driving, as with a conventional navigation system using a vehicle-mounted display, so that driving safety and the driver's recognition of the navigation information are markedly improved. Furthermore, by incorporating such a system into the HMD device, the driver can drive under an appropriate navigation environment even when driving a vehicle without a built-in navigation system.
 Examples of embodiments of the present invention have been described above, but configurations realizing the technique of the present invention are not limited to these embodiments, and various modifications are conceivable. For example, part of the configuration of one embodiment may be replaced with the configuration of another embodiment, and the configuration of another embodiment may be added to the configuration of one embodiment. All of these belong to the scope of the present invention. The numerical values, messages, and the like appearing in the text and figures are also merely examples, and using different ones does not impair the effects of the present invention. For example, although the danger avoidance warning device of the first embodiment and the navigation system of the second embodiment have each been described as independent embodiments of the driving support system using the HMD system of the present invention, a comprehensive driving support system integrating the two systems is naturally also possible.
 Some or all of the functions of the present invention described above may be realized in hardware, for example by designing them as integrated circuits. They may also be realized in software, by a microprocessor unit or the like interpreting and executing operation programs that realize the respective functions. Hardware and software may also be used together.
 The control lines and information lines shown in the figures are those considered necessary for the explanation, and not all control lines and information lines of an actual product are necessarily shown. In practice, almost all components may be considered to be interconnected.
 Although the first and second embodiments described above have been explained as examples of a driving support system for drivers of automobiles, the present invention is not limited to this. For example, the present invention may be used as a driving support system for riders of two-wheeled vehicles such as motorcycles and bicycles, or as a safety assistance or destination guidance system for ordinary pedestrians.
 The present invention is not limited to the driving support field and can be applied broadly to technologies that perform augmented reality (AR) display, regardless of the application. For example, it can be applied to games and pedestrian navigation systems. More concretely, the present invention can also be applied to attraction information devices in theme parks (displaying target age, height restrictions, waiting times, and the like) and to exhibit information devices in museums (with the photographing function locked).
 It can furthermore be used as a safety assistance system for blind or visually impaired people when moving about. In this case, however, the necessary danger avoidance warnings and travel route guidance must be provided by audio or the like rather than by displaying video items on the HMD display screen. In place of the HMD device, an augmented-display video display device provided with a transparent display screen may also be used.
DESCRIPTION OF SYMBOLS
1 ... HMD device
2 ... semi-transmissive display screen
3 ... electronic camera for outside-scene photography
4 ... eyeball camera
5 ... danger avoidance warning device
5a ... driving support device
10 ... driving support system

Claims (9)

  1.  A video display device comprising:
     an external scene photographing unit that photographs a user's external scene and generates video information;
     an additional information storage unit that stores, in association with each other, a specific object about which information is to be provided to the user and additional information indicating information on the specific object;
     a specific object extraction unit that extracts the specific object from the video information;
     a semi-transparent display screen that displays the video information and through which the user can directly view the external scene;
     a gaze direction detection unit that detects the gaze direction of the user and generates gaze direction information;
     a user visual field region calculation unit that calculates, from the user's gaze direction information, a direct visual field region that is the visual field when the user views the external scene through the display screen;
     a photographing visual field region calculation unit that calculates a photographing visual field region that is the field of view when the external scene photographing unit photographed the external scene; and
     a display image control unit that extracts the additional information associated with the specific object from the additional information storage unit and displays the additional information on the display screen,
     wherein the display image control unit calculates a relative positional relationship between the photographing visual field region and the direct visual field region and, using the relative positional relationship, displays the additional information on the display screen so as to follow movement of the direct visual field region.
  2.  The video display device according to claim 1, wherein the display-image control unit obtains the amount of positional deviation between the center point of the photographing visual-field region and the center point of the direct visual-field region within the display screen, corrects the display position of the specific object, referenced to the center point of the photographing visual-field region, by that deviation amount, calculates the corrected position of the specific object, and calculates the display position of the additional information based on the corrected position.
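The center-point correction of claim 2 can be sketched in a few lines. The function name, the 2-D screen-coordinate model, and the sign convention are assumptions of this sketch, not taken from the specification:

```python
def corrected_display_position(obj_pos_cam, cam_center, direct_center):
    """Correct an object's on-screen position for the offset between
    the camera's field of view and the user's direct field of view.

    obj_pos_cam   -- (x, y) position of the specific object, referenced
                     to the center of the photographing visual field
    cam_center    -- (x, y) center of the photographing visual-field
                     region within the display screen
    direct_center -- (x, y) center of the direct visual-field region
    """
    # Positional deviation between the two visual-field center points
    dx = direct_center[0] - cam_center[0]
    dy = direct_center[1] - cam_center[1]
    # Shift the camera-referenced object position by that deviation,
    # giving the position at which the additional information is drawn
    return (obj_pos_cam[0] + dx, obj_pos_cam[1] + dy)
```

The additional information's display position would then be derived from this corrected object position (for example, drawn adjacent to it).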
  3.  The video display device according to claim 1, further comprising:
     a specific-object related-information detection unit that generates, based on the video information, related information consisting of decision factors for determining whether the specific object extracted by the specific-object extraction unit should be displayed on the display screen; and
     a monitoring-object determination unit that determines, based on the related information, whether the specific object should be treated as a monitoring object for deciding whether display is required,
     wherein the display-image control unit displays the additional information relating to the monitoring object.
  4.  The video display device according to claim 3, wherein:
     the external-scene photographing unit generates the video information by capturing the external scene with a photographing visual-field region wider than the direct visual-field region; and
     the display-image control unit displays additional information for a monitoring object located in an area that is included in the photographing visual-field region but not in the direct visual-field region.
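The peripheral-area test of claim 4 reduces to a region-membership check: an object qualifies when it lies inside the wider camera field of view but outside the user's direct field of view. The axis-aligned rectangular model and the function names below are illustrative assumptions:

```python
def inside(region, point):
    """Membership test for an axis-aligned rectangle (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = region
    x, y = point
    return x0 <= x <= x1 and y0 <= y <= y1

def in_peripheral_only(camera_fov, direct_fov, obj_pos):
    """True if the object is captured by the camera's wider field of
    view but falls outside the user's direct visual-field region."""
    return inside(camera_fov, obj_pos) and not inside(direct_fov, obj_pos)
```

Objects for which this returns true are the ones the user cannot see directly, so only their additional information is overlaid on the screen.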
  5.  The video display device according to claim 4, further comprising:
     a mounting body worn on the user's head, the display screen being positioned in front of the user's eyes when the mounting body is worn;
     an additional-display determination unit that determines, based on the related information of the monitoring object, whether additional information about the monitoring object needs to be displayed; and
     a position-information acquisition unit that acquires position information indicating the user's current position,
     wherein the specific object is at least one of a pedestrian, a two-wheeled vehicle, an oncoming vehicle, a vehicle traveling or stopped immediately ahead of the vehicle driven by the user, a traffic light, and a vehicle entering from another lane that merges into the user's driving lane;
     the specific-object related-information detection unit generates the related information including at least one of the position, moving direction, and moving speed of the specific object;
     the monitoring-object determination unit determines the specific object to be a monitoring object when, based on the related information about that specific object, its predicted movement position interferes with the user's traveling direction; and
     the additional-display determination unit determines, based on the user's position information and the related information, the degree of danger of interference between the user and the monitoring object, and determines that the additional information is to be displayed in accordance with that degree of danger.
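The interference judgment of claim 5 can be illustrated with a minimal sketch: extrapolate both the user and the monitoring object along their current velocities and grade the danger by their predicted separation. The linear-motion model, the time horizon, and the distance thresholds are assumptions of this sketch, not values from the specification:

```python
import math

def predict_position(pos, velocity, horizon):
    """Linearly extrapolate a position `horizon` seconds ahead."""
    return (pos[0] + velocity[0] * horizon,
            pos[1] + velocity[1] * horizon)

def interference_risk(user_pos, user_vel, obj_pos, obj_vel,
                      horizon=3.0, danger_dist=5.0, caution_dist=15.0):
    """Classify the danger of interference between the user's predicted
    path and the monitoring object's predicted position."""
    future_user = predict_position(user_pos, user_vel, horizon)
    future_obj = predict_position(obj_pos, obj_vel, horizon)
    dist = math.hypot(future_user[0] - future_obj[0],
                      future_user[1] - future_obj[1])
    if dist < danger_dist:
        return "danger"
    if dist < caution_dist:
        return "caution"
    return "none"
```

The additional-display determination unit could then vary the presentation (color, size, blinking) with the returned level rather than simply toggling it.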
  6.  The video display device according to claim 4, further comprising:
     a mounting body worn on the user's head, the display screen being positioned in front of the user's eyes when the mounting body is worn;
     a position-information acquisition unit that acquires position information indicating the user's current position; and
     a route determination unit that acquires map data of the area around the user's current position and predicts the user's travel route based on the map data,
     wherein the specific object is at least one of a building, a facility, and a road serving as a search point on the travel route;
     the specific-object related-information detection unit generates the related information including the relative position between the specific object and the user; and
     the route determination unit determines, based on the user's position information and the relative position between the specific object and the user, that the additional information indicating the predicted travel route is to be displayed when the user approaches the specific object.
  7.  The video display device according to claim 1, wherein the gaze-direction detection unit includes an electronic camera that images the user's eyeball and outputs an eyeball image, and a gaze-direction calculation unit that detects the position of the user's pupil or iris in the eyeball image and thereby detects the user's gaze direction.
  8.  The video display device according to claim 1, wherein the external-scene photographing unit includes an electronic camera that photographs the external scene with visible or infrared light and outputs video information, and a distance-measuring unit that measures the distance between a subject in the external scene and the electronic camera.
  9.  A video display method comprising the steps of:
     photographing a user's external scene and generating video information;
     extracting, from the video information, a specific object about which information is to be provided to the user;
     detecting the user's gaze direction and generating gaze-direction information;
     calculating, from the gaze-direction information, a direct visual-field region, that is, the field of view in which the user views the external scene through a translucent display screen that displays the video information and through which the user can directly view the external scene;
     calculating a photographing visual-field region, that is, the field of view with which the external scene was photographed; and
     extracting, from an additional-information storage unit that stores additional information indicating information about the specific object in association with that object, the additional information associated with the extracted specific object, calculating the relative positional relationship between the photographing visual-field region and the direct visual-field region, and using that relative positional relationship to display the extracted additional information on the display screen so that it follows the movement of the direct visual-field region.
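The method steps of claim 9 can be composed into one per-frame pass. Everything here is a sketch: the helper callables stand in for the claimed steps, and their names and signatures are assumptions. `extract(frame)` yields `(object_id, (x, y))` pairs referenced to the camera field's center, and each field of view is represented with its center point first:

```python
def render_frame(frame, gaze, info_store, extract, calc_direct_fov,
                 calc_camera_fov, draw):
    """One pass of the claimed method: extract objects, compute both
    visual-field regions, derive their relative offset, and draw each
    object's additional information at the offset-corrected position."""
    direct_fov = calc_direct_fov(gaze)      # direct visual-field region
    camera_fov = calc_camera_fov()          # photographing visual field
    # Relative positional relationship between the two fields of view
    dx = direct_fov[0][0] - camera_fov[0][0]
    dy = direct_fov[0][1] - camera_fov[0][1]
    for obj_id, (x, y) in extract(frame):   # specific-object extraction
        info = info_store.get(obj_id)       # associated additional info
        if info is not None:
            # Shift by the offset so the annotation follows the
            # movement of the direct visual-field region
            draw(info, (x + dx, y + dy))
```

Calling this once per camera frame (with `gaze` refreshed from the gaze-direction detection step) keeps the overlaid annotations registered to what the user actually sees through the translucent screen.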
PCT/JP2014/082025 2014-12-03 2014-12-03 Video display device and method WO2016088227A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2014/082025 WO2016088227A1 (en) 2014-12-03 2014-12-03 Video display device and method

Publications (1)

Publication Number Publication Date
WO2016088227A1 true WO2016088227A1 (en) 2016-06-09

Family

ID=56091200

Country Status (1)

Country Link
WO (1) WO2016088227A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018112809A * 2017-01-10 2018-07-19 Seiko Epson Corporation Head mounted display, control method therefor and computer program
JP2020161988A * 2019-03-27 2020-10-01 Nissan Motor Co., Ltd. Information processing device and information processing method
JP7278829B2 2019-03-27 2023-05-22 Nissan Motor Co., Ltd. Information processing device and information processing method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004219664A (en) * 2003-01-14 2004-08-05 Sumitomo Electric Ind Ltd Information display system and information display method
JP2008134616A (en) * 2006-10-10 2008-06-12 Itt Manufacturing Enterprises Inc System and method for dynamically correcting parallax in head borne video system
JP2010210822A (en) * 2009-03-09 2010-09-24 Brother Ind Ltd Head mounted display
JP2010256878A (en) * 2009-03-30 2010-11-11 Equos Research Co Ltd Information display device
JP2013203103A (en) * 2012-03-27 2013-10-07 Denso It Laboratory Inc Display device for vehicle, control method therefor, and program
WO2014034065A1 * 2012-08-31 2014-03-06 Denso Corporation Moving body warning device and moving body warning method

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 14907339; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
NENP Non-entry into the national phase (Ref country code: JP)
122 Ep: PCT application non-entry in European phase (Ref document number: 14907339; Country of ref document: EP; Kind code of ref document: A1)