WO2016088227A1 - Video display device and method - Google Patents
Video display device and method
- Publication number
- WO2016088227A1 PCT/JP2014/082025 (JP2014082025W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- information
- visual field
- specific object
- display
- Prior art date
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G3/00—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
- G09G3/001—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/36—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/36—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
- G09G5/37—Details of the operation on graphic patterns
- G09G5/377—Details of the operation on graphic patterns for mixing or overlaying two or more graphic patterns
Definitions
- The present invention relates to a video display apparatus and method, and more particularly to a technique for a device that is mounted on a user's head and displays predetermined video information in front of the user's eyes.
- In recent years, information video display systems using so-called head mounted display devices (hereinafter, a head mounted display is referred to as an HMD) of a goggle type or glasses type worn on the user's head have been spreading rapidly.
- Among these, a so-called see-through type HMD device, which lets the user view the external scene directly through a translucent display screen while superimposing a predetermined information image on that directly viewed scene, has a wide range of applications, since predetermined related information can be additionally displayed on the external scene.
- Patent Document 1 describes "an external-light-transmissive head-mounted display comprising an imaging device that electronically captures an object in the line-of-sight direction, and image recognition means that recognizes the object from this electronic data and displays information relating to the object stored in advance in storage means, wherein template data corresponding to the electronic data captured by the imaging device can be registered in advance for use in the image recognition."
- the HMD device described in Patent Document 1 performs additional display by identifying the type of a specific object captured by a camera.
- However, when the user moves the viewpoint, the position of the user's visual field region changes before and after the viewpoint movement, so the positional relationship between the additional display and the target object changes within the user's visual field region, and there is a problem that visibility decreases.
- This viewpoint movement of the user is not considered at all in Patent Document 1.
- the present invention has been made to solve the above-described problems, and an object of the present invention is to provide a technique for improving the visibility of the additional display even when the user moves the viewpoint.
- To achieve the above object, the present invention captures the user's external scene to generate video information, extracts from the video information a specific object that is a target for providing information to the user, detects the user's line-of-sight direction to generate line-of-sight information, calculates from the line-of-sight information a direct visual field region, which is the visual field when the user views the external scene through a transparent display screen through which the external scene can be directly viewed, calculates a photographing visual field region, which is the visual field when the external scene is photographed, and extracts from an additional information storage unit the additional information associated with the extracted specific object.
- The invention is characterized by a configuration in which the relative positional relationship between the photographing visual field region and the direct visual field region is calculated, and the extracted additional information is displayed on the display screen so as to follow the movement of the direct visual field region using this relative positional relationship.
- FIG. 1 Schematic diagram showing an example of an HMD system in the first embodiment
- A diagram showing the hardware configuration of the danger avoidance warning device 5
- A schematic block diagram showing the electronic circuit configuration of the danger avoidance warning device
- A flowchart showing the flow of processing of the driving support system according to the first embodiment
- A diagram for explaining the risk determination process, in which (a) is a schematic of the image captured by the outside scene photographing electronic camera 3 at the moment a pedestrian is about to jump out from a side road ahead of the host vehicle
- A schematic diagram showing a case where the display visual field direction and the driver's direct visual field direction deviate, where (a) shows the state before correction of the deviation of the display visual field direction and (b) shows the state after correction
- A diagram showing an example of additional display when a monitoring object is present within the driver's direct visual field region
- A diagram showing another example of additional display
- A schematic block diagram showing the electronic circuit configuration of the driving support device according to the second embodiment
- the flowchart which shows the flow of operation
- The first embodiment is an embodiment in which the present invention is applied to a driving support system that uses an HMD device to support a driver driving a vehicle, and in which danger avoidance warning items are additionally displayed.
- a schematic configuration of the present embodiment will be described with reference to FIGS. 1 and 2.
- FIG. 1 is a schematic diagram showing an example of the HMD system in the first embodiment.
- FIG. 2 is a perspective view showing an outline of an external configuration example of an HMD device that is a main part of the HMD system according to the first embodiment and has a function of a video display device.
- the driving support system 10 shown in FIG. 1 includes the HMD device 1 that is mounted on the head of the driver 20 driving the automobile 50.
- the HMD device 1 is connected to a danger avoidance warning device 5 as shown in FIG.
- The danger avoidance warning device 5, which is a separate device from the HMD device 1, is illustrated here with a wired connection, but it may be connected wirelessly or configured integrally with the HMD device 1.
- The HMD device 1 is a so-called see-through type HMD device that includes a transflective display screen 2 capable of displaying video in the visual field of the driver 20 while the outside world in front of the driver's eyes remains visible.
- The HMD device 1 includes a mounting body 1a that holds the display screen 2 and is mounted on the user's head, an electronic camera 3 for photographing the outside scene, and an eyeball camera comprising an electronic camera that images the eyeball of the driver 20.
- By wearing the mounting body 1a on the head, the state in which the display screen 2 is positioned in front of the eyes can be maintained.
- The outside scene photographing electronic camera 3 is installed so that it can photograph the outside world from almost the same direction as the line of sight of the driver 20 wearing the HMD device 1.
- the outside scene photographing electronic camera 3 may be provided with a normal visible light photographing function, or may be provided with a non-visible light photographing function such as infrared rays.
- outside scene photographing electronic camera 3 may be provided with a distance measuring unit for measuring the distance between the subject to be photographed and the outside scene photographing electronic camera 3, such as an autofocus function or an infrared distance measuring sensor.
- the eyeball camera is, for example, a small electronic camera arranged at a position that does not obstruct the field of view of the driver 20.
- The eyeball camera captures the eyeball of the driver 20 obliquely from the front, and the line-of-sight direction of the driver 20 is detected from the eye movement of the driver 20 in the eyeball image.
- the eyeball image is output to a user gaze direction calculation unit 106 of the danger avoidance warning device 5 described later, and a gaze direction calculation process is executed.
- the gaze direction detection device is not limited to the configuration including the eyeball camera and the user gaze direction detection unit, and any configuration can be used as long as it can detect the gaze direction of the driver 20 or the visual field area visually recognized by the driver 20. It may be a configuration.
- FIG. 3 is a diagram illustrating a hardware configuration of the danger avoidance warning device 5.
- The danger avoidance warning device 5 includes a CPU (Central Processing Unit) 51, a RAM (Random Access Memory) 52, a ROM (Read Only Memory) 53, an HDD (Hard Disk Drive) 54, an I/F 55, and a bus 58.
- the CPU 51, RAM 52, ROM 53, HDD 54, and I / F 55 are connected to each other via a bus 58.
- the danger avoidance warning device 5 is connected to each of the display screen 2, the outside scene photographing electronic camera 3, and the eyeball camera via the I / F 55.
- FIG. 4 is a schematic block diagram showing the electronic circuit configuration of the danger avoidance warning device 5. The function of each part of this block diagram is described below.
- The danger avoidance warning device 5 includes a main control unit 100, a specific object extraction unit 101, a specific object related information detection unit 102, an imaging visual field region calculation unit 103, a monitoring object determination unit 104, a monitoring object risk degree determination unit 105, a user gaze direction calculation unit 106, a user visual field region calculation unit 107, a display video control unit 108, a memory 109, a graphic memory 110, an own vehicle related information acquisition unit 112, a map data memory 113, and a GPS (Global Positioning System) device communication antenna 60.
- The user gaze direction calculation unit 106, the user visual field region calculation unit 107, the display video control unit 108, and the own vehicle related information acquisition unit 112 are configured by the cooperation of software that realizes the functions of these blocks and the hardware illustrated in FIG. 3.
- Each of the memory 109, the graphic memory 110, and the map data memory 113 is configured by the RAM 52 and / or the ROM 53.
- the specific object extraction unit 101 is connected to the electronic camera 3 for shooting an outside scene.
- The specific object extraction unit 101 extracts, from the outside scene image captured by the outside scene photographing electronic camera 3, video information of specific objects on the road such as a moving person, a car, a traffic light, a railroad crossing, a road sign, or a lane, and identifies the type of each extracted specific object by collating it with the video information for specific object collation stored in advance in the memory 109.
- In addition to the specific object extraction unit 101, the outside scene photographing electronic camera 3 is connected to a specific object related information detection unit 102, which detects related information such as the moving direction and moving speed of a specific object if it is moving, and to an imaging visual field region calculation unit 103, which detects the imaging visual field of the outside scene photographing electronic camera 3.
- The monitoring object determination unit 104 has a function of determining, based on the information acquired by the specific object extraction unit 101 and the specific object related information detection unit 102, whether the specific object is a monitoring object that carries a risk of endangering the traveling of the host vehicle in the future and therefore needs to be monitored continuously.
- The monitoring object risk degree determination unit 105 has a function of comprehensively determining the risk level of the monitoring object with respect to the host vehicle from the information related to the monitoring object obtained from each of the above units, the vehicle-related information such as the vehicle position, traveling speed, and traveling direction acquired by the own vehicle related information acquisition unit 112, and data such as the map data around the vehicle position stored in the map data memory 113, and of deciding whether to display a danger avoidance warning to the driver.
- the GPS device communication antenna 60 is connected to the own vehicle related information acquisition unit 112.
- the vehicle 50 is equipped with a GPS device (not shown) that receives a positioning radio wave from a GPS satellite and detects the position of the vehicle, and receives GPS data from the GPS device via the GPS device communication antenna 60.
- the configuration for acquiring the vehicle-related information is not limited to this, and an inertial measurement device (Inertial Measurement Unit), a vehicle traveling speed detection device mounted on the automobile 50, and the like may be used in combination.
- the monitoring object risk determination unit 105 determines whether to display a danger avoidance warning. Therefore, the monitoring object risk degree determination unit 105 corresponds to an additional display determination unit.
- the display video control unit 108 performs control to select and extract a predetermined danger avoidance warning item from the graphic memory 110 and to superimpose it on a predetermined position on the display screen 2 of the HMD device 1.
- Whether the displayed danger avoidance warning item concerns a monitoring object within the visual field that the driver 20 directly views through the display screen 2 of the HMD device 1, or a monitoring object outside the driver's visual field, it is preferable that the driver 20 can recognize it correctly.
- For this purpose, the danger avoidance warning item superimposed on the display screen 2 of the HMD device 1 needs to be correctly displayed in the vicinity of the monitoring object that the driver 20 is directly viewing, or at a display position associated with that monitoring object.
- To this end, the HMD device 1 is provided with the user gaze direction calculation unit 106, which calculates the line-of-sight direction of the user, that is, the driver 20, from the output of the eyeball camera serving as the gaze detection device, and the user visual field region calculation unit 107, which uses the line-of-sight information to detect the visual field region that the driver directly views through the display screen 2 of the HMD device 1.
- The relative positional relationship between the driver's direct visual field region obtained by the user visual field region calculation unit 107 and the photographing visual field region of the outside scene photographing electronic camera 3 obtained by the imaging visual field region calculation unit 103 is constantly monitored, and the correct display position of the danger avoidance warning item is calculated by the display video control unit 108 from the monitored data. A specific example of the warning item display position calculation flow is described in detail later.
- FIG. 5 is a flowchart showing a process flow of the driving support system according to the first embodiment.
- the main power supply of the HMD device 1 and the danger avoidance warning device 5 is turned on.
- the user visual field detection process from steps S01 to S03 and the danger avoidance warning process from steps S11 to S20 are started in parallel.
- a user visual field detection process and a danger avoidance warning process will be described in this order.
- First, the gaze direction detection device 4 captures an image of the eye (eyeball image) of the user, that is, the driver 20 (step S01), and outputs it to the user gaze direction calculation unit 106.
- The user gaze direction calculation unit 106 performs, for example, pupil detection processing on the eyeball image, detects the user's gaze direction from the position and orientation of the pupil (step S02), and outputs the gaze direction to the user visual field region calculation unit 107.
- the user visual field area calculation unit 107 detects the visual field area of the driver 20 based on the line-of-sight direction information (step S03). For example, the intersection of the vector indicating the line-of-sight direction and the display screen 2 is determined as the user's visual field center, and a predetermined in-plane range of the display screen 2 from the visual field center is defined as the user visual field region. Thereafter, the process proceeds to step S19.
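- The visual field center computation in step S03 can be sketched as a ray-plane intersection. The following is a minimal illustration, assuming the gaze ray is anchored at a representative eye position and the display screen is modeled as a flat plane; all names and values are illustrative, not from the patent:

```python
def visual_field_center(eye, gaze, screen_pt, normal):
    """Intersect the gaze ray (eye + t*gaze) with the display-screen plane,
    given by a point on the plane and its normal. The intersection point is
    taken as the center of the user's direct visual field (step S03).
    All arguments are (x, y, z) tuples in metres."""
    dot = lambda a, b: a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
    denom = dot(gaze, normal)
    if abs(denom) < 1e-9:        # gaze parallel to the screen plane
        return None
    diff = (screen_pt[0]-eye[0], screen_pt[1]-eye[1], screen_pt[2]-eye[2])
    t = dot(diff, normal) / denom
    if t < 0:                    # screen would be behind the eye
        return None
    return (eye[0]+t*gaze[0], eye[1]+t*gaze[1], eye[2]+t*gaze[2])

# Eye at the origin gazing 20 degrees to the right; screen plane 3 cm ahead.
import math
g = (math.sin(math.radians(20.0)), 0.0, math.cos(math.radians(20.0)))
center = visual_field_center((0.0, 0.0, 0.0), g, (0.0, 0.0, 0.03), (0.0, 0.0, 1.0))
```

The in-plane offset of `center` from the screen center then delimits the user visual field region around it.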
- the outside scene photographing electronic camera 3 captures an image of the outside scene (hereinafter referred to as “outside scene image”) from substantially the same viewpoint as the driver 20 (step S11), and the specific object extraction unit 101, the specific object The information is output to the object related information detection unit 102 and the photographing visual field region calculation unit 103.
- the imaging field area calculation unit 103 detects an area (hereinafter referred to as “imaging field of view”) captured as an outside scene image based on the outside scene image (step S12), and then proceeds to step S18.
- In parallel, the specific object extraction unit 101 extracts, from the outside scene image, video information of specific objects on the road such as a moving person, a car, a traffic light, a railroad crossing, a road sign, or a lane, and identifies the type of each extracted specific object by collating it with the specific object collation video information stored in advance in the memory. The specific object extraction unit 101 outputs the extracted specific object to the specific object related information detection unit 102.
- the specific object related information detection unit 102 detects the related information of the specific object acquired from the specific object extraction unit 101, for example, the moving direction and the moving speed of the specific object (step S13).
- The related information may be detected based on, for example, the amount of change in the region where the specific object is captured between successive frames, or based on the output value of a distance measuring unit that measures the distance to the specific object.
- the extracted specific object and the related information about it are output to the monitoring object determination unit 104.
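- The frame-to-frame detection of moving direction and speed in step S13 can be sketched as follows, assuming the object's ground-plane position has already been recovered for two successive frames (e.g. from the change of its image region or from the rangefinder output); function and variable names are illustrative, not from the patent:

```python
import math

def object_motion(prev_xy, curr_xy, dt):
    """Estimate a specific object's moving direction and speed from the
    change of its (x, y) ground-plane position, in metres, between two
    consecutive frames captured dt seconds apart (step S13 sketch).
    Returns (heading_rad, speed_mps)."""
    dx = curr_xy[0] - prev_xy[0]
    dy = curr_xy[1] - prev_xy[1]
    speed = math.hypot(dx, dy) / dt    # metres per second
    heading = math.atan2(dy, dx)       # radians, CCW from the +x axis
    return heading, speed

# A pedestrian moving 0.05 m along +x between frames 1/30 s apart:
heading, speed = object_motion((2.0, 5.0), (2.05, 5.0), 1.0 / 30.0)
```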
- The monitoring object determination unit 104 determines whether the specific object extracted and identified in step S13 is a monitoring object that carries a risk of danger with respect to the traveling of the host vehicle (step S14). If the determination result is "Yes", the process proceeds to the next step S15. If the determination result is "No", the process returns to steps S01 and S11. The determination is based, for example, on the moving direction and speed of the object: when the vector indicating the object's moving direction (with the vector length indicating the moving speed) intersects the moving direction vector of the host vehicle, it is determined that there is a possibility of interference between the host vehicle and the object.
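- The vector intersection test described above can be sketched in two dimensions as a segment-crossing check, assuming the movement vectors are treated as displacements over a common look-ahead interval; all names are illustrative, not from the patent:

```python
def may_interfere(p_obj, u, p_car, v):
    """Step S14 sketch: flag the object as a monitoring object when its
    movement vector u (anchored at p_obj) crosses the host vehicle's
    movement vector v (anchored at p_car). Vector length encodes the
    distance covered in the look-ahead interval, so a proper crossing of
    the two segments means the paths may intersect."""
    def cross(o, a, b):
        # z-component of (a - o) x (b - o): sign tells which side b is on
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    a, b = p_obj, (p_obj[0]+u[0], p_obj[1]+u[1])
    c, d = p_car, (p_car[0]+v[0], p_car[1]+v[1])
    d1, d2 = cross(a, b, c), cross(a, b, d)
    d3, d4 = cross(c, d, a), cross(c, d, b)
    return (d1*d2 < 0) and (d3*d4 < 0)   # proper crossing only
```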
- The monitoring object determination unit 104 continues to acquire related information such as the relative position, relative speed, and relative movement direction of the specific object determined to be a monitoring object in step S14 (step S15), and outputs it to the monitoring object risk degree determination unit 105.
- The monitoring object risk degree determination unit 105 determines the risk level of the monitoring object with respect to the vehicle from the related information of the monitoring object, and decides whether to display a predetermined danger avoidance warning item (step S16). If the determination result is "Yes", the process proceeds to the next step S17. If the determination result is "No", the process returns to steps S01 and S11. Specific processing contents of steps S15 and S18 are described later.
- the monitored object risk level determination unit 105 determines the type of danger avoidance warning item to be displayed (S17).
- For example, even for the same monitoring object, the monitoring object risk degree determination unit 105 may display a danger avoidance warning item that alerts the user more strongly as the distance to the host vehicle becomes shorter.
- The display video control unit 108 determines the display position within the display screen 2 based on the user's visual field information and the photographing visual field information (step S18), and additionally displays the warning item within the display screen 2 (step S19). If the main power supply of the driving support system 10 is ON, processing continues (step S20 / Yes) and the process returns to steps S01 and S11; if the main power is turned off, processing of the driving support system 10 ends (step S20 / No).
- FIG. 6(a) is a schematic diagram of an image captured by the outside scene photographing electronic camera 3 at the moment a pedestrian is about to jump out from the side of the road, and FIG. 6(b) is a schematic diagram of the situation of FIG. 6(a) viewed from above.
- the specific object extraction unit 101 extracts and identifies an image of a pedestrian 31a that is about to jump out from the outside scene image captured in the imaging field of view area 11 in the forward direction of the host vehicle.
- the specific object related information detection unit 102 detects related information related to the pedestrian 31a.
- The monitoring object determination unit 104 then determines that the pedestrian is a monitoring object and enters a mode for monitoring its movement. In the subsequent shooting frames, when the pedestrian 31a approaches the front direction of the host vehicle as the pedestrian 31b, a pedestrian movement vector "u" is calculated from the points 41a and 41b corresponding to the position coordinates of the pedestrians 31a and 31b on the map data 40 around the host vehicle stored in the map data memory 113, and a movement vector "v" of the point 42 corresponding to the host vehicle is calculated on the map data 40 from the vehicle related information.
- Based on these two movement vectors, the collision risk between the target pedestrian and the host vehicle is calculated, and whether to display a danger avoidance warning item on the HMD screen is determined according to the collision risk.
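- One hedged way to turn the movement vectors "u" and "v" into a collision risk is the time and distance of closest approach of the two map points; the interference radius and all names below are assumptions for illustration, not taken from the patent:

```python
import math

def collision_risk(p_ped, u, p_car, v, radius=2.0):
    """Collision-risk sketch for step S16: with pedestrian motion vector u
    and own-vehicle motion vector v (per-second displacements on the map
    data), compute the time and distance of closest approach of the two
    points. `radius` (m) is an assumed interference radius."""
    rp = (p_ped[0]-p_car[0], p_ped[1]-p_car[1])   # relative position
    rv = (u[0]-v[0], u[1]-v[1])                   # relative velocity
    rv2 = rv[0]**2 + rv[1]**2
    if rv2 == 0:                                  # no relative motion
        t = 0.0
    else:
        t = max(0.0, -(rp[0]*rv[0] + rp[1]*rv[1]) / rv2)
    dmin = math.hypot(rp[0] + rv[0]*t, rp[1] + rv[1]*t)
    return t, dmin, dmin < radius   # time [s], distance [m], warn?

# Pedestrian 10 m ahead and 5 m to the side, stepping toward the lane,
# vehicle moving forward at 2 m/s:
t, dmin, warn = collision_risk((10.0, 5.0), (0.0, -1.0), (0.0, 0.0), (2.0, 0.0))
```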
- From the viewpoint of improving visibility, each danger avoidance warning item superimposed on the display screen 2 of the HMD device 1 is preferably displayed as accurately as possible in the vicinity of the corresponding monitoring object, whether that object is directly visible to the driver 20 through the display screen 2 or outside the visual field, or at a predetermined position associated with that object.
- For this purpose, the visual field direction of the video superimposed on the display screen 2 and the visual field direction of the outside scene directly viewed by the driver must match completely.
- the visual field area directly seen by the driver 20 wearing the HMD device 1 is always changing according to the line-of-sight direction in which the driver 20 is gazing.
- the outside scene photographing electronic camera 3 fixed to the HMD device 1 is fixed in the direction in which the HMD device 1 is facing, that is, the direction in which the head of the driver 20 is facing.
- Therefore, the visual field direction of the photographing visual field region formed from the captured video of the outside scene photographing electronic camera 3 and the driver 20's own direct visual field direction easily shift from each other. More specifically, the photographing visual field and the visual field of the display image captured with that photographing visual field differ depending on the position on the display screen 2 where the image is displayed.
- As for the difference between the photographing visual field and the visual field of the display image, because the outside scene photographing electronic camera 3 is fixed to the HMD device 1, the center point of the photographing visual field and the center point of the display region of the display screen 2 are fixed relative to each other, so the correction amount for this error can be obtained from their geometric positional relationship. Using this correction amount, the center point of the photographing visual field region and the center point of the display screen 2 are treated as coincident, and in the following description the visual field of the display image and the photographing region visual field are assumed to be the same. On this assumption, the principle of the positional deviation between the visual field of the display image (photographing region visual field) and the direct visual field is described with reference to FIG. 7.
- FIG. 7 is a diagram showing the principle of occurrence of positional deviation between the visual field of the display image (imaging area visual field) and the direct visual field of view.
- FIG. 7 shows a state where the driver 20 is facing the front.
- The fact that the driver 20 visually recognizes a monitoring object, for example the pedestrian 31, means that visible light reflected by the surface of the pedestrian 31 enters each of the left eye 21L and the right eye 21R and reaches the retina of each eye to form an image.
- In FIG. 7, the path of this incident light is indicated by the symbol α.
- The midpoint of the intersections of the paths α of the visible light incident on the left and right eyes from the pedestrian 31 with the display screen 2 is indicated by Q.
- the lower part of FIG. 7 shows a state where the driver 20 is facing right.
- Q′ represents the midpoint of the intersections of the paths α′ of the incident light with the display screen 2 when, in this state, the visible light reflected by the surface of the pedestrian 31 enters each of the left eye 21L and the right eye 21R.
- The midpoint Q′ differs in position from the midpoint Q. That is, even if the relative positions of the HMD device 1 and the pedestrian 31 are the same, when the line of sight of the driver 20 shifts from the vector β (upper part of FIG. 7) to the vector β′ (lower part of FIG. 7), the visual field of the display image of the pedestrian 31 on the display screen 2 and the direct visual field of the driver 20 shift from each other.
- Here, the midpoint P of the intersections of the respective line-of-sight vectors β of the driver 20's left and right eyes with the display screen 2 is defined as the center of the direct visual field.
- In the upper part of FIG. 7, where the driver faces the front, the point P is the center of the direct visual field.
- In the lower part, where the driver faces right, the corresponding point P′ is the center of the direct visual field, and P′ lies at a position displaced by Δx in the horizontal direction from the point P on the surface of the display screen 2.
- Although a horizontal cross section is described here as an example, the line of sight also changes in the vertical direction; in that case the center point is displaced by Δy in the vertical direction from the point P on the surface of the display screen 2.
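- The displacements Δx and Δy above can be sketched with simple flat-screen geometry, assuming the gaze rotation is parametrised by yaw and pitch angles from straight ahead (an assumption for illustration; the patent does not specify this parametrisation):

```python
import math

def gaze_shift_on_screen(d, yaw_deg, pitch_deg):
    """Displacement (dx, dy) on the display plane of the point where the
    line of sight crosses the screen, when the gaze rotates by the given
    yaw/pitch angles from straight ahead; d is the eye-to-screen distance
    in metres. A small-geometry sketch of the deviation around FIG. 7."""
    dx = d * math.tan(math.radians(yaw_deg))
    dy = d * math.tan(math.radians(pitch_deg))
    return dx, dy

# Screen 3 cm from the eyes, gaze turned 20 degrees to the right:
dx, dy = gaze_shift_on_screen(0.03, 20.0, 0.0)
```

Even a modest gaze rotation thus shifts the intersection point by roughly a centimetre on a near-eye screen, which is why the deviation must be corrected dynamically.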
- Based on the above, the display visual field direction dynamic correction process is performed as follows. First, the photographing visual field of the outside scene photographing electronic camera 3 is set sufficiently wider than the visual field that the driver 20 directly views through the display screen 2. Then, the dynamic correction of the display visual field direction described with reference to FIGS. 8 and 9 is performed.
- FIG. 8 is a schematic diagram showing an example of the relationship between the display field of view of the HMD device 1 and the direct visual field of view of the driver 20.
- a region surrounded by a broken-line square frame in FIG. 8 is the entire region of the outside scene video, and coincides with the photographing field region 11 of the outside scene photographing electronic camera 3.
- An area surrounded by a solid square frame is a display visual field area 12 of the HMD device 1, and an area surrounded by an elliptical frame is a direct visual field area 13 of the driver 20.
- a point O and a virtual XY coordinate axis with this point O as the origin are set.
- the display position (coordinates) of each display item is calculated with reference to this XY coordinate axis.
- The origin O and the X-Y coordinate axes are set virtually in processing step S18 of the flowchart in FIG. 5 and are not actually displayed on the display screen 2.
- First, the center point P of the direct visual field region 13 of the HMD user, that is, the driver 20, determined through the eyeball camera, the user gaze direction calculation unit 106, and the user visual field region calculation unit 107, is obtained, and its coordinate position on the X-Y coordinates set in the display visual field region 12 is calculated.
- In FIG. 8, the coordinates of the point P are (0, 0); that is, the center point P of the direct visual field region 13 of the driver 20 coincides with the origin O of the display visual field region 12.
- the display visual field direction of the HMD device 1 and the direct visual field direction of the driver coincide with each other.
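The mapping from a detected gaze direction to the center point P on the display-screen coordinate axes can be sketched as follows. This is a minimal Python illustration; the flat-screen geometry, the angle convention, and the function name are assumptions for illustration, not taken from the embodiment:

```python
import math

def gaze_to_display_point(theta_x_deg, theta_y_deg, screen_distance):
    """Project a gaze direction onto the display plane.

    Returns the center point P of the direct visual field region,
    expressed on the X-Y axes of the display visual field region,
    assuming the screen lies flat at `screen_distance` in front of the
    eye and the origin O sits on the straight-ahead axis.
    """
    xp = screen_distance * math.tan(math.radians(theta_x_deg))
    yp = screen_distance * math.tan(math.radians(theta_y_deg))
    return (xp, yp)

# Looking straight ahead, P coincides with the origin O (the FIG. 8 case).
print(gaze_to_display_point(0.0, 0.0, 0.05))  # (0.0, 0.0)
```

Any non-zero gaze angle yields a point P displaced from O, which is the situation handled by the correction of FIG. 9.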
- FIG. 9 is a schematic diagram showing a case where the display visual field direction of the HMD device 1 is shifted from the direct visual field direction of the driver.
- In this case, the center point P of the direct visual field region 13 has coordinates (Xp, Yp), deviating from the origin O of the display visual field region 12.
- The system then automatically shifts the origin O of the display visual field region 12 to the position O′ coinciding with the point P, and sets new coordinate axes X′-Y′ with O′ as the origin. In other words, the display image control unit 108 adjusts the positions of the origin O of the display visual field region 12 and the coordinate axes X-Y so that the center point P of the driver's direct visual field region 13 always coincides with the origin of the display visual field region 12.
- The display position (coordinates) of each display item is then calculated using the new X′-Y′ coordinate axes after this adjustment.
- As a result, each display item can be accurately displayed at the position corresponding to the target object that the driver views directly.
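The origin-shift adjustment performed by the display image control unit 108 reduces to a simple change of coordinates. A minimal Python sketch (the function name and sample coordinates are illustrative assumptions):

```python
def to_corrected_axes(item_xy, p_xy):
    """Re-express a display item's position after the origin O of the
    display visual field region has been shifted to O' = P (FIG. 9).

    Both inputs are measured on the original X-Y axes from O; the
    result is on the new X'-Y' axes whose origin is O'.
    """
    return (item_xy[0] - p_xy[0], item_xy[1] - p_xy[1])

# FIG. 8 case: P coincides with O, so nothing changes.
print(to_corrected_axes((3.0, 1.0), (0.0, 0.0)))   # (3.0, 1.0)
# FIG. 9 case: P deviates to (Xp, Yp) = (2.0, -1.0); the same item is
# redrawn at (1.0, 2.0) on the X'-Y' axes so that it stays aligned
# with the object the driver views directly.
print(to_corrected_axes((3.0, 1.0), (2.0, -1.0)))  # (1.0, 2.0)
```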
- FIG. 10 shows examples of additional display when a monitoring object is present in the driver's direct visual field, in which (a) shows an example of additional display for a pedestrian jumping out and (b) shows an example of additional display for a red traffic light.
- FIG. 10A shows an example in which the pedestrian described in FIG. 6 is about to jump out from a side road into the path of the host vehicle.
- Danger avoidance warning items are superimposed: a frame line 201 (drawn with a broken line) displayed so as to surround the pedestrian, and a warning sentence 202, "Jump-out caution!", displayed near the center of the visual field.
- FIG. 10B is an example in which the traffic light in front of the host vehicle is a red signal.
- Danger avoidance warning items are superimposed: a frame line 203 (drawn with a broken line) displayed so as to surround the red signal ahead, and a warning sentence 204, "Red signal, stop!", displayed near the center of the visual field.
- FIGS. 10(a) and 10(b) show examples of danger avoidance warning items for danger monitoring objects within the direct visual field region 13 of the driver 20. In the present invention, however, by making the field of view of the outside scene photographing electronic camera 3 sufficiently wider than the driver's direct visual field region 13, it is also possible to display a danger avoidance warning item for a monitoring object outside the driver's direct visual field and thereby alert the driver.
- FIG. 11 is a schematic view showing such a display example.
- In FIG. 11, a pedestrian is about to jump out into the path of the vehicle from the left of, and outside, the direct visual field region 13 of the driver 20.
- In this case, the danger avoidance warning device 5 of the present invention quickly identifies the pedestrian as a monitoring object from the captured video of the outside scene photographing electronic camera 3, determines from the pedestrian's movement that there is a collision risk, and displays danger avoidance warning items near the center of the driver's direct visual field region 13, such as an alerting arrow 205 and a warning sentence 206, "Jump-out from the left!".
- The example of FIG. 11 concerns a pedestrian jumping out, but the invention is not limited to this; similar danger avoidance warning items can be displayed even when, for example, a red signal as in FIG. 10B lies outside the direct visual field region 13 of the driver 20.
- Monitoring objects are not limited to jumping-out pedestrians and traffic lights; any object that may pose a danger risk in vehicle driving can be used, such as vehicles pulling out, road signs, railroad crossings, road steps, and lanes.
- A more advanced vehicle driving support system can be realized by selecting and displaying appropriate danger avoidance warning items based on information such as the type, position, and movement of each monitoring object.
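The selection logic suggested above, framing the object when it lies inside the driver's direct visual field and pointing toward it when it lies outside as in FIGS. 10 and 11, can be sketched as follows. The function name, thresholds, and message strings are illustrative assumptions, not part of the disclosed embodiment:

```python
def select_warning_item(kind, x, y, half_width, half_height):
    """Choose a danger avoidance warning item for a monitoring object at
    (x, y) in display-field coordinates, with the origin at the center
    of the driver's direct visual field region 13.
    """
    inside = abs(x) <= half_width and abs(y) <= half_height
    if inside:
        # Inside the direct visual field: frame the object itself,
        # like the broken-line frames 201 and 203 in FIG. 10.
        return ("frame", f"{kind} caution!")
    # Outside the direct visual field: point toward the object with an
    # arrow, like the arrow 205 and warning sentence 206 in FIG. 11.
    side = "left" if x < 0 else "right"
    return ("arrow_" + side, f"{kind} from the {side}!")

print(select_warning_item("pedestrian", 0.2, 0.1, 1.0, 1.0))
print(select_warning_item("pedestrian", -1.5, 0.0, 1.0, 1.0))
```

A real implementation would also weigh the object's type and movement (as the text notes) before choosing an item; this sketch covers only the inside/outside distinction.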
- Furthermore, the driver can always drive in an appropriate danger avoidance warning environment, even when driving a vehicle that is not equipped with a driving support system.
- In the present embodiment, not only the type of the monitoring object but also related information such as its position, moving speed, and moving direction is acquired, and danger avoidance warning items that take this information into account are displayed, prompting the driver to respond quickly.
- The convenience of the driving support system can thereby be further improved.
- As described above, the driver's viewpoint movement is detected, and the misalignment between the origin of the direct visual field region and that of the display visual field region caused by the viewpoint movement is corrected before the additional display is performed. Since the positional deviation, as seen from the driver, between the monitoring object and the danger avoidance warning item is thereby eliminated or mitigated, the driver's recognizability can be improved.
- the second embodiment is an embodiment in which the present invention is applied to a driving route guidance system, that is, a so-called navigation system, among driving assistance systems.
- the second embodiment will be described below with reference to FIGS.
- FIG. 12 is a schematic block diagram illustrating an electronic circuit configuration of the driving support apparatus according to the second embodiment.
- FIG. 13 is a flowchart showing a flow of operations of the driving support system according to the second embodiment.
- FIG. 14 is a screen display example of the display screen of the HMD device according to the second embodiment. The function of each part of this block diagram will be described below. In this figure, blocks having the same functions as those in the schematic block diagram of the first embodiment are denoted by the same reference numerals.
- The driving support device 5a according to the second embodiment differs from the danger avoidance warning device 5 of the first embodiment in that the monitoring object risk degree determination unit 105 included in the danger avoidance warning device 5 is not provided, and a route determination unit 111 is provided instead. Further, even where block names are the same, the objects recognized as specific objects differ, as described in detail below.
- the specific object extraction unit 101 is connected to the electronic camera 3 for photographing outside scenes.
- This specific object extraction unit 101 has a function of extracting, from the video information, objects that serve as reference points when searching for a travel route, such as road signs, traffic lights, intersections, railroad crossings, and the signboards of roadside shops such as convenience stores, family restaurants, and gas stations, and of identifying the type of each extracted specific object by collating it with the video information of specific objects stored in advance in the memory 109.
- Also connected are the specific object related information detection unit 102, which detects related information such as the position of the specific object, and the photographing visual field region calculation unit 103, which detects the visual field photographed by the outside scene photographing electronic camera 3.
- The monitoring object determination unit 104 has a function of determining, from the various information acquired by the specific object extraction unit 101 and the specific object related information detection unit 102, whether a specific object is needed for a future route search of the host vehicle and is therefore a monitoring object whose relative positional relationship with the host vehicle must be monitored.
- The route determination unit 111 has a function of collating the information obtained from the above-described units and the host-vehicle-related information (host vehicle position, traveling speed, traveling direction, and the like) acquired by the host vehicle related information acquisition unit 112 with the map data in the vicinity of the host vehicle position stored in the map data memory 113, searching for the travel route of the host vehicle, and further determining whether to display travel route guidance items on the monitoring object in order to guide the driver along the travel route.
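One plausible reading of the route determination unit 111's display decision, in code form, is the following Python sketch. It assumes that maneuver points on the searched route and recognized landmark positions are available as plain map coordinates; all names are hypothetical:

```python
import math

def guidance_item_for(landmark_xy, maneuvers, radius):
    """Return the maneuver (e.g. 'left') whose map location lies within
    `radius` of a recognized landmark such as a restaurant signboard,
    or None when no guidance item needs to be displayed for it.
    """
    for turn, (mx, my) in maneuvers:
        if math.hypot(landmark_xy[0] - mx, landmark_xy[1] - my) <= radius:
            return turn
    return None

# Hypothetical route: turn left at (120, 45), then right at (310, 200).
route = [("left", (120.0, 45.0)), ("right", (310.0, 200.0))]
print(guidance_item_for((118.0, 44.0), route, 5.0))  # left
print(guidance_item_for((500.0, 500.0), route, 5.0))  # None
```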
- The host vehicle related information acquisition unit 112 acquires, for example, GPS data, but the present invention is not limited to this example; a configuration in which the HMD system itself detects the host vehicle position, traveling speed, direction, and the like may also be provided.
- The host vehicle related information acquisition unit 112 corresponds to a position information acquisition unit that acquires position information indicating the current position of the user.
- When display is required, the display image control unit 108 selects and extracts a predetermined travel route guidance item, such as an arrow indicating a left or right turn, from the graphic memory 110, and controls the display so that the item is superimposed on the display screen 2 of the HMD device 1.
- In order for the displayed travel route guidance item to be correctly recognized as a display item associated with the corresponding monitoring object that the driver 20 views directly through the display screen 2, it must be accurately superimposed at a predetermined position on the display screen 2.
- For this purpose, the device includes the user line-of-sight direction calculation unit 106, which detects through the eyeball camera the line-of-sight direction of the user of the HMD device 1, that is, the driver, and the user visual field region calculation unit 107, which detects from the line-of-sight information the visual field that the driver recognizes directly through the display screen 2. The relative positional relationship between the driver's direct visual field obtained by the user visual field region calculation unit 107 and the photographing visual field of the outside scene photographing electronic camera 3 obtained by the photographing visual field region calculation unit 103 is constantly monitored, and from these data the display image control unit 108 calculates the correct display position of the guidance item to be displayed. A specific example of this display position calculation processing flow has already been described in the first embodiment (FIGS. 8 and 9), so its description is omitted here.
- the function and operation of each unit in the block are controlled by the main control unit 100.
- First, the line-of-sight direction detection device 4 captures an image of the eye of the user, that is, the driver 20 (a user eye image) (step S01) and outputs it to the user line-of-sight direction calculation unit 106. The user line-of-sight direction calculation unit 106 then detects the line-of-sight direction (step S02), and the user visual field region calculation unit 107 determines the direct visual field region of the driver 20 based on the line-of-sight direction information (step S03).
- In parallel, the outside scene photographing electronic camera 3 sequentially acquires predetermined video information (step S11), and the photographing visual field region calculation unit 103 detects the photographing visual field based on the video information (step S12).
- Next, the specific object extraction unit 101 extracts from the video information acquired in step S11 the route search specific objects described above, such as road signs, traffic lights, intersections, railroad crossings, and the signboards of roadside shops such as convenience stores, family restaurants, and gas stations, and identifies the type of each extracted specific object by collating it with the video information of specific objects stored in advance in the memory (step S13).
- The route determination unit 111 acquires the position on the map data of the specific object extracted and identified in step S13, as well as the position of the host vehicle on the map data and the route data to be traveled in the future (step S21).
- The route determination unit 111 then collates the position of the specific object with the host vehicle position and the route data to be traveled acquired in step S21 (step S22), and determines whether any travel route guidance item, that is, a navigation item, should be displayed on the object (step S23). If the determination result is "Yes", the process proceeds to the following step. If the determination result is "No", the process returns to steps S01 and S11.
- the route determination unit 111 determines the type of navigation item to be displayed (step S24).
- The display video control unit 108 determines the display position (origin) on the HMD display screen based on the direct visual field information of the driver 20 detected in step S03 and the photographing visual field information of the outside scene photographing electronic camera 3 detected in step S12 (step S25). The display video control unit 108 then displays the navigation item at the predetermined position on the HMD display screen determined in step S25 (step S26).
- If the main power supply of the driving support system 10 is ON, the processing of the driving support system 10 continues (step S27 / Yes) and the process returns to steps S01 and S11.
- If the main power is turned off, the processing of the driving support system 10 ends (step S27 / No).
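The overall loop of FIG. 13 can be sketched as the following Python skeleton. The callback names are hypothetical; the sketch mirrors only the control flow of steps S01 through S27, not the actual processing of each block:

```python
def run_driving_support(steps, power_on):
    """Skeleton of the FIG. 13 flow.

    `steps` maps hypothetical callback names to callables standing in
    for the processing blocks.  Returns the number of completed cycles.
    """
    cycles = 0
    while power_on():                                         # step S27
        gaze = steps["detect_gaze"]()                         # steps S01-S02
        field = steps["calc_direct_field"](gaze)              # step S03
        video = steps["capture_outside_scene"]()              # step S11
        for obj in steps["extract_specific_objects"](video):  # steps S12-S13
            item = steps["decide_navigation_item"](obj)       # steps S21-S24
            if item is not None:                              # step S23
                steps["display_item"](item, field)            # steps S25-S26
        cycles += 1
    return cycles
```

Running both branches (gaze detection and outside scene capture) once per cycle reflects the parallel steps S01 and S11 of the flowchart; a real system would run them concurrently.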
- FIG. 14 shows a display example of navigation items by the navigation system.
- In FIG. 14, reference numeral 11 denotes the photographing visual field of the outside scene photographing electronic camera 3, and reference numeral 13 denotes the direct visual field region that the driver views directly through the HMD display screen.
- In this display example, the system first extracts and identifies, for example, the image of the signboard of a specific restaurant (the portion surrounded by the broken line in the drawing) from the video of the photographing visual field 11. It then determines from the stored map data, route data, and host vehicle position data that, for example, the route to be traveled requires a left turn at the intersection in front of this restaurant. Accordingly, the position of the intersection in front of the restaurant is extracted and identified from the captured video, and appropriate navigation items, such as an arrow 207 prompting a left turn at the intersection position within the driver's direct visual field, or a guidance comment such as "Turn left at this corner", are superimposed.
- By superimposing navigation items in the field of view directly visible to the driver in this way, the driver is freed from frequently moving his or her line of sight between the outside scene and the navigation screen while driving, as with a conventional navigation system using a built-in vehicle display, and the driver's recognizability of driving safety information and navigation information is significantly increased. Furthermore, by incorporating such a system into the HMD device, the driver can drive under an appropriate navigation environment even when driving a vehicle without a navigation system.
- The functions and the like of the present invention described above may be realized by hardware, for example by designing some or all of them as integrated circuits. They may also be realized by software, by having a microprocessor unit or the like interpret and execute operation programs that realize the respective functions. Hardware and software may also be used together.
- The control lines and information lines shown in the figures are those considered necessary for the explanation; not all control lines and information lines of an actual product are necessarily shown. In practice, almost all components may be considered to be connected to each other.
- The present invention may also be used as a driving support system for a driver of a two-wheeled vehicle such as a motorcycle or a bicycle, or as a general pedestrian safety assistance or destination guidance system.
- The present invention is not limited to the field of driving support, and can be applied to technologies that perform augmented display (Augmented Reality, AR) regardless of their use.
- For example, it can be applied to games and pedestrian navigation systems.
- The invention can also be applied to attraction explanation devices in theme parks (displaying target age, height, waiting time, etc.) and to explanation devices for exhibits in museums (with the shooting function locked, however).
- Where necessary, the danger avoidance warning and the travel route guidance may use audio or the like instead of displaying a video item on the HMD display screen.
- Moreover, instead of the HMD device, a video display apparatus for augmented display provided with a transparent display screen may be used.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Traffic Control Systems (AREA)
Abstract
An object of the present invention is to provide a technology that improves the visibility of an additional display even when the user changes viewpoint. To achieve this object, the present invention generates video information by capturing images of the user's external scene (S11), and extracts from the video information specific object information (12) to be provided to the user (S12). In addition, the user's line-of-sight direction is detected to generate line-of-sight direction information (S02). A direct visual field region, which is the field of view when the user observes the external scene through a display screen, is then calculated from the user's line-of-sight direction information via the semi-transparent display screen, which is a display screen for displaying the video information and which allows the user to view the external scene directly through it, and an image-capture visual field region, which is the field of view when the images of the external scene are captured, is calculated (S03). Additional information associated with the extracted specific object is retrieved from an additional information storage unit in which additional information indicating information on specific objects is stored in an associated manner (S17). A relative positional relationship between the image-capture visual field region and the direct visual field region is calculated (S18), and, using the relative position information, the retrieved additional information is displayed on the display screen so as to follow the movement of the direct visual field region (S19).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2014/082025 WO2016088227A1 (fr) | 2014-12-03 | 2014-12-03 | Dispositif et procédé d'affichage vidéo |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2014/082025 WO2016088227A1 (fr) | 2014-12-03 | 2014-12-03 | Dispositif et procédé d'affichage vidéo |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016088227A1 true WO2016088227A1 (fr) | 2016-06-09 |
Family
ID=56091200
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/082025 WO2016088227A1 (fr) | 2014-12-03 | 2014-12-03 | Dispositif et procédé d'affichage vidéo |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2016088227A1 (fr) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018112809A (ja) * | 2017-01-10 | 2018-07-19 | セイコーエプソン株式会社 | 頭部装着型表示装置およびその制御方法、並びにコンピュータープログラム |
JP2020161988A (ja) * | 2019-03-27 | 2020-10-01 | 日産自動車株式会社 | 情報処理装置及び情報処理方法 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004219664A (ja) * | 2003-01-14 | 2004-08-05 | Sumitomo Electric Ind Ltd | 情報表示システム及び情報表示方法 |
JP2008134616A (ja) * | 2006-10-10 | 2008-06-12 | Itt Manufacturing Enterprises Inc | 頭部着用ビデオシステムにおける動的視差修正のためのシステムと方法 |
JP2010210822A (ja) * | 2009-03-09 | 2010-09-24 | Brother Ind Ltd | ヘッドマウントディスプレイ |
JP2010256878A (ja) * | 2009-03-30 | 2010-11-11 | Equos Research Co Ltd | 情報表示装置 |
JP2013203103A (ja) * | 2012-03-27 | 2013-10-07 | Denso It Laboratory Inc | 車両用表示装置、その制御方法及びプログラム |
WO2014034065A1 (fr) * | 2012-08-31 | 2014-03-06 | 株式会社デンソー | Dispositif d'avertissement de corps en mouvement et procédé d'avertissement de corps en mouvement |
-
2014
- 2014-12-03 WO PCT/JP2014/082025 patent/WO2016088227A1/fr active Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004219664A (ja) * | 2003-01-14 | 2004-08-05 | Sumitomo Electric Ind Ltd | 情報表示システム及び情報表示方法 |
JP2008134616A (ja) * | 2006-10-10 | 2008-06-12 | Itt Manufacturing Enterprises Inc | 頭部着用ビデオシステムにおける動的視差修正のためのシステムと方法 |
JP2010210822A (ja) * | 2009-03-09 | 2010-09-24 | Brother Ind Ltd | ヘッドマウントディスプレイ |
JP2010256878A (ja) * | 2009-03-30 | 2010-11-11 | Equos Research Co Ltd | 情報表示装置 |
JP2013203103A (ja) * | 2012-03-27 | 2013-10-07 | Denso It Laboratory Inc | 車両用表示装置、その制御方法及びプログラム |
WO2014034065A1 (fr) * | 2012-08-31 | 2014-03-06 | 株式会社デンソー | Dispositif d'avertissement de corps en mouvement et procédé d'avertissement de corps en mouvement |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018112809A (ja) * | 2017-01-10 | 2018-07-19 | セイコーエプソン株式会社 | 頭部装着型表示装置およびその制御方法、並びにコンピュータープログラム |
JP2020161988A (ja) * | 2019-03-27 | 2020-10-01 | 日産自動車株式会社 | 情報処理装置及び情報処理方法 |
JP7278829B2 (ja) | 2019-03-27 | 2023-05-22 | 日産自動車株式会社 | 情報処理装置及び情報処理方法 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2857886B1 (fr) | Appareil de commande d'affichage, procédé mis en oeuvre par ordinateur, support de stockage et appareil de projection | |
EP3213948B1 (fr) | Dispositif de commande d'affichage et programme de commande d'affichage | |
US11181737B2 (en) | Head-up display device for displaying display items having movement attribute or fixed attribute, display control method, and control program | |
JP2014181927A (ja) | 情報提供装置、及び情報提供プログラム | |
US20160185219A1 (en) | Vehicle-mounted display control device | |
WO2019097755A1 (fr) | Dispositif d'affichage et programme informatique | |
CN105009032A (zh) | 信息处理装置、姿势检测方法和姿势检测程序 | |
KR20180022374A (ko) | 운전석과 보조석의 차선표시 hud와 그 방법 | |
JP2015210644A (ja) | 車両用表示システム | |
RU2720591C1 (ru) | Способ отображения информации и устройство управления отображением | |
JP2014120111A (ja) | 走行支援システム、走行支援方法及びコンピュータプログラム | |
JP2018173399A (ja) | 表示装置及びコンピュータプログラム | |
KR20150051671A (ko) | 차량 및 사용자 동작 인식에 따른 화면 제어 장치 및 그 운영방법 | |
JP2005127996A (ja) | 経路誘導装置、経路誘導方法及び経路誘導プログラム | |
JP2014120114A (ja) | 走行支援システム、走行支援方法及びコンピュータプログラム | |
JP2016074410A (ja) | ヘッドアップディスプレイ装置、ヘッドアップディスプレイ表示方法 | |
WO2020105685A1 (fr) | Dispositif, procédé et programme informatique de commande d'affichage | |
JP6136238B2 (ja) | 走行支援システム、走行支援方法及びコンピュータプログラム | |
JP4277678B2 (ja) | 車両運転支援装置 | |
JP4270010B2 (ja) | 物体危険判定装置 | |
WO2021132555A1 (fr) | Dispositif de commande d'affichage, dispositif d'affichage tête haute et procédé | |
WO2016088227A1 (fr) | Dispositif et procédé d'affichage vidéo | |
WO2016056199A1 (fr) | Dispositif d'affichage tête haute, et procédé d'affichage pour affichage tête haute | |
JP2014120113A (ja) | 走行支援システム、走行支援方法及びコンピュータプログラム | |
JP2014174879A (ja) | 情報提供装置、及び情報提供プログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14907339 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
NENP | Non-entry into the national phase |
Ref country code: JP |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 14907339 Country of ref document: EP Kind code of ref document: A1 |