WO2004048895A1 - Mobile object navigation information display method and mobile object navigation information display device - Google Patents
Mobile object navigation information display method and mobile object navigation information display device
- Publication number
- WO2004048895A1 (PCT/JP2003/014815)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- road
- image
- navigation information
- road shape
- Prior art date
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3626—Details of the output of route guidance instructions
- G01C21/3647—Guidance involving output of stored or live camera images or video streams
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
- G08G1/0962—Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
- G08G1/0968—Systems involving transmission of navigation instructions to the vehicle
- G08G1/0969—Systems involving transmission of navigation instructions to the vehicle having a display in the form of a map
Definitions
- The present invention relates to a method and an apparatus for displaying navigation information for a mobile object, as typified by a so-called car navigation device.
- Mobile object navigation devices such as so-called car navigation devices have been developed. In a typical device, the current position of the moving object is confirmed based on GPS signals transmitted from GPS (Global Positioning System) satellites, and the travel position and travel direction, indicated by an icon such as an arrow, are displayed on a road map stored in advance in storage means such as a CD-ROM, either as a two-dimensional image or as a road map in which the surrounding view is rendered as a three-dimensional CG image. Images of information such as the recommended route (optimal route) from the current position to the destination are overlaid on this map. There is also a device that displays road congestion information and the like, obtained from a road traffic information receiver (VICS), on the screen as text labels.
- A navigation device that uses a real image in addition to map information has been proposed, for example, in Japanese Patent Application Laid-Open No. H10-132598 and Japanese Patent Application Laid-Open No. H11-304499.
- The technique of Japanese Patent Application Laid-Open No. H10-132598 detects the current position and traveling direction of a moving object and selects the navigation information to be displayed according to them. The actual image, captured by the imaging device at a camera angle almost the same as the front view in the traveling direction seen through the windshield of the vehicle, is superimposed with the navigation information image read out corresponding to the current position of the vehicle at that time, and the result is displayed, for example, on the screen of a liquid crystal display panel, so that the correspondence between the actual road and surrounding landscape and the navigation information can be grasped visually.
- In this technique, however, the navigation information is presented mainly as video information on the video display unit, so the driver's attention may easily be distracted during driving.
- Japanese Patent Application Laid-Open No. H11-304499 discloses an imaging device such as a CCD camera for photographing the area ahead of the vehicle body, installed, for example, near the ceiling of the windshield or near the dashboard, and proposes a technology in which the landscape image including the road ahead captured by the imaging device is displayed as a sub-screen at a predetermined position within a screen displaying map information.
- While the driver is looking at navigation information such as the map displayed on the screen of the image display device during driving, the landscape image ahead is shown in the sub-screen at the predetermined position in the screen. When a high-risk situation arises ahead, the external dimensions (screen size) of the sub-screen are enlarged, so that the occurrence of the dangerous state ahead is immediately and visually communicated to the driver and the high-risk situation is shown with high visibility, which in turn enables safer driving.
- Japanese Patent Laid-Open No. 2001-331787 discloses projecting, into a three-dimensional or two-dimensional logical space, both the image road shape extracted from image data of the foreground of the host vehicle and the road shape obtained from map data around the host vehicle, and estimating the road shape, the attitude of the host vehicle relative to the road surface, the absolute position of the host vehicle, and so on from the overlapping state of the projected road shapes. In other words, based on images captured by a single-lens CCD camera, sufficiently accurate road shape estimation is performed to achieve accurate preceding-vehicle determination and automatic steering.
- In the technique of Japanese Patent Application Laid-Open No. H10-132598, however, the image information is simply used as a background image: the destination display arrow is merely displayed starting from the center of the screen and moved, for example, in a straight line toward the rear of the landscape (generally from the top toward the bottom of the display screen). Consequently, when navigation information is displayed on a curved road, or on a road with, say, four lanes on both sides, the display may not make clear exactly where in the landscape the indicated location is, or it may point to a position that is completely out of alignment.
- In addition, since the camera's attitude parameter is fixed in advance as the angle of the camera's optical axis with respect to the ground (or the horizontal direction), changes in camera posture caused by the vibration of the running vehicle, rolling and pitching due to steering, the inclination of the vehicle on uphill or downhill grades, and the like can make the navigation information image, such as an arrow indicating the traveling direction or a right-turn position, deviate significantly from the actual scenery image; it may point in a substantially wrong direction, or it may be displayed so that it is unclear where the right turn is indicated.
- In the display claimed for such a conventional system (Fig. 8), the arrow 901 as the navigation information image clearly appears to indicate the left-turn position on the road in the scenery ahead. However, in a typical passenger car, or even in a vehicle with a high driver's seat such as a bus or truck, the road scene actually seen through the windshield from the driver's seat is as shown in Fig. 9; it cannot be a bird's-eye view looking down from a high position such as that of Fig. 8.
- Fig. 10A schematically shows the degree of change ΔLA of the image projected onto the road surface with respect to a unit change Δθ in attitude (with respect to the road surface) for a line of sight from the driver's seat of a general vehicle at a height of, for example, about 1 m. Fig. 10B shows the corresponding degree of change ΔLB for a line of sight at a height of, for example, about 10 m, higher than in Fig. 10A. Comparing the two, the degree of change of the projected image from the low position (ΔLA, corresponding to the magnitude of the positional shift with respect to the attitude change Δθ when the image is taken from the low position) is far greater than the degree of change from the high position (ΔLB).
- For this reason, the display position of the navigation information often deviates significantly from the position where it should properly be displayed in the live-action video, making it difficult or impossible for the user (driver) to intuitively and accurately grasp which position in the landscape the navigation information indicates.
- Moreover, the attitude of an actual vehicle changes frequently due to rolling, pitching, and the like, and for the reasons described above, even at the eye height of about 1 to 3 m of a typical vehicle, even a slight change in attitude carries the danger that the display position of navigation information, such as an arrow indicating a route, will deviate greatly from the appropriate display position in the actual image. In addition, such attitude changes often differ depending on the type of vehicle, owing to differences in the center of gravity, which is determined predominantly by the structure of the vehicle drive system and the location of the engine.
- The technique of Japanese Patent Application Laid-Open No. H11-304499 has the following problems in addition to the same problems as Japanese Patent Application Laid-Open No. H10-132598. Since only the actual image of the scenery ahead is displayed on the sub-screen, the comparison between the road map information and the actual image must be carried out in the driver's head. For this reason, it is difficult, for example, for a driver traveling on unfamiliar (or first-visited) roads with many intersections and branches to grasp the navigation information intuitively.
- The technique of Japanese Patent Laid-Open No. 2001-331787 has the following problems. Since there is no specific road shape model, on a multi-lane road a large deviation may occur between the center line of the driving lane extracted from the image data and the road center line estimated from the road map data. In addition, since information on the vehicle's driving lane cannot be estimated, correct navigation information cannot be provided when changing lanes or turning left or right. Furthermore, actual roads are built so that the one-sided cross slope of the traveling surface changes in accordance with the horizontal curvature, but the attitude with respect to the road surface of the lane adjacent to the currently traveled lane on a curved road is not considered; as a result, the estimation result changes significantly depending on the driving lane, and it may not be possible to estimate the road shape accurately. Moreover, the portion where the luminance change is large is extracted as a white line by a differential filter for each frame, without feedback of the road shape estimation result, so the estimation results are extremely susceptible to various environmental factors such as weather changes and shadows or dirt on the road. For this reason, there is also the problem that the road shape model represented by the road feature extraction data may be inaccurate, deviating significantly from the actual road shape.
- The present invention has been made in view of these problems, and its object is to provide a mobile object navigation information display method and a mobile object navigation information display device capable of displaying navigation information such as route guidance, own-vehicle position, and map information accurately projected at the appropriate position in the live-action image of the road ahead of a moving object, or in the actual scene, so that the driver can intuitively and accurately recognize the correspondence between the navigation information and the actual image or the actual scene.
Disclosure of the invention
- To this end, the method for displaying navigation information for a moving object according to the present invention includes: detecting the current position of the moving object; photographing a live-action image of a scene including the road in the traveling direction with a vehicle-mounted camera installed on the moving object; generating, from the current position and road map data, a road shape model for the road assumed to be photographed from the current position; extracting from the live-action image the road shape data, which is the image data of the road included in the scene; comparing the road shape model data with the road shape data to estimate the posture data of the vehicle-mounted camera, or of the moving object, with respect to the road being photographed; determining, based on the posture data, the display position in the photographed live-action image of the navigation information read out corresponding to the current position of the moving object; and displaying an image obtained by synthesizing the read navigation information at the determined position of the photographed live-action image.
- Similarly, the mobile object navigation information display device according to the present invention includes: a current position detection unit that detects the current position of the mobile object; an imaging unit (vehicle-mounted camera) that photographs a live-action image of a scene including the road in the traveling direction of the mobile object as the subject; data processing means that generates, from the current position of the moving object and road map data, a road shape model for the road assumed to be photographed from the current position, extracts from the live-action image the road shape data, which is the image data of the road included in the scene, compares the road shape model data with the road shape data to estimate the posture data of the vehicle-mounted camera or the moving object with respect to the subject road, determines, based on the posture data, the display position in the photographed live-action image of the navigation information read out corresponding to the current position of the moving object, and outputs data for displaying an image formed by synthesizing the read navigation information at the determined position; and image display means for displaying, based on the data output from the data processing means, the image obtained by synthesizing the navigation information at the determined position of the photographed live-action image.
- In other words, in the mobile object navigation information display method and device according to the present invention, based on the current position of a moving object such as an automobile and road map data including that position, a road shape model is generated for the road expected to be photographed from the current position; road shape data, which is the image data of the road included in the scenery ahead or in the direction of travel, is extracted from the live-action video; and by comparing the road shape model data with the road shape data, the posture data of the on-board camera or the moving object with respect to the road in the landscape, which is the subject, is estimated. Based on that posture data, the appropriate display position in the photographed live-action image of the navigation information read out corresponding to the current position of the moving object is determined, and an image combining the read navigation information at that position is displayed.
- As a result, navigation information such as route guidance, the position of the vehicle, and map information can be accurately projected and displayed at the appropriate position in the actual image of the road ahead of the moving object, or in the actual scenery, and the driver can intuitively and accurately recognize the correspondence between the navigation information and the actual image or the actual scene.
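- To make the data flow described above concrete, the following Python sketch strings the stages together. It is an illustrative outline only: every function name is a hypothetical stand-in for the corresponding means of this disclosure, and each stub returns dummy data so that the sketch runs end to end.

```python
import numpy as np

# Hypothetical stand-ins for the sensor, map, matching, and display
# interfaces described above; each returns dummy data.
def detect_current_position():          return (35.0, 135.0)            # GPS/INS fix
def capture_frame():                    return np.zeros((480, 640, 3))  # CCD camera image
def road_map_around(pos):               return {"nodes": [], "lanes": 2}
def build_road_shape_model(map_, pos):  return np.zeros((50, 3))        # 3D model points
def extract_road_shape(frame):          return np.zeros((480, 640))     # road shape data
def estimate_posture(model, shape):     return np.zeros(6)              # camera posture
def navigation_info_for(pos):           return {"icon": "arrow"}
def display_position(info, posture):    return (320, 240)               # pixel position
def composite_and_display(frame, info, xy):  pass                       # image display unit

pos = detect_current_position()
frame = capture_frame()
model = build_road_shape_model(road_map_around(pos), pos)    # from road map data
shape = extract_road_shape(frame)                            # from the live-action video
posture = estimate_posture(model, shape)                     # 2D-2D matching step
info = navigation_info_for(pos)
composite_and_display(frame, info, display_position(info, posture))
```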
- The navigation information includes, for example, at least one of: route guidance regarding the route to the destination of the moving object, the own-vehicle position, the lane in which the own vehicle is traveling, and buildings that serve as landmarks for the driver to confirm the route guidance or the own-vehicle position.
- When the navigation information is character, symbol, or numeric information, the information can be converted into an icon, and the icon image can be combined with the photographed video and displayed.
- The data processing means or the data processing process described above can express the navigation information as a virtual entity (virtual object) in a three-dimensional augmented reality (Augmented Reality) space and, using the already obtained posture data, synthesize the image of the navigation information as a virtual entity into the actual photographed video. In this way, for navigation information consisting of, for example, the letters, symbols, or numbers of a building that serves as a landmark for route guidance, the presence of the landmark building can be indicated visually and intuitively even when it cannot be seen in the live-action video because it is hidden behind a nearer building or on the inside of a curved road.
- The data processing means or the data processing process converts the road shape data into perspective two-dimensional feature-space image data, converts the road shape model data likewise into perspective two-dimensional feature-space image data, and may compare the two with each other to estimate the posture of the vehicle-mounted camera or the moving object with respect to the road surface of the subject. In this way, the comparison of the road shape data and the road shape model data used for posture estimation is not performed in a three-dimensional logical space, where the amount of information is extremely large and high-speed processing may be difficult, but between two-dimensional data in a pseudo-three-dimensional two-dimensional feature space, so the matching process can be simplified and sped up.
- The posture data may also be integrated with angular velocity and acceleration data obtained from a three-dimensional inertial sensor attached to the vehicle body. The results obtained from image matching are direct and accurate but susceptible to noise and erroneous extraction, whereas the results obtained by accumulating the angular velocity and acceleration data from the three-dimensional inertial sensor are stable and fast but accumulate error over time. By integrating these two sensors, more stable and accurate vehicle attitude data can be obtained.
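- As one minimal sketch of such integration (the embodiment uses a Kalman filter; this scalar version on the pitch angle alone is an illustrative assumption, not the patent's exact formulation), the fast but drifting gyro integration can be corrected by the slower, absolute vision-based estimate:

```python
class PitchFuser:
    """Fuse gyro integration (fast, drifts) with vision-based pitch
    estimates from road shape matching (absolute, but noisier/slower)."""
    def __init__(self, q=1e-4, r=1e-2):
        self.pitch = 0.0   # fused pitch estimate [rad]
        self.p = 1.0       # estimate variance
        self.q = q         # process noise: gyro drift per step
        self.r = r         # measurement noise: vision matching

    def predict(self, gyro_rate, dt):
        # Accumulate angular velocity from the 3D inertial sensor.
        self.pitch += gyro_rate * dt
        self.p += self.q

    def update(self, vision_pitch):
        # Correct the accumulated drift with the image-matching result.
        k = self.p / (self.p + self.r)          # Kalman gain
        self.pitch += k * (vision_pitch - self.pitch)
        self.p *= 1.0 - k
```

Between video frames, predict() would be called at the inertial sensor rate; whenever a matching result arrives, update() pulls the estimate back toward the absolute value, which is the stable-plus-accurate behavior described above.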
- Furthermore, actual roads are built so that the one-sided gradient (cross slope) of the traveling surface on a curved road changes in accordance with the horizontal curvature of the curve. By performing modeling that takes this road structure into account, a road shape model of a multi-lane road can be generated. In this way, even when a moving object such as an automobile is traveling on a multi-lane road, the road shape of the multi-lane road can be grasped accurately, and the navigation information can be synthesized and displayed at the position that accurately corresponds to that road shape.
- The data processing means or the data processing process may use a road shape look-up table (RSL) when comparing the road shape model data with the road shape data: RSL values representing the existence probability of the road white lines included in the landscape are calculated from the actual video, and the posture data may be obtained such that an evaluation value based on the RSL values is maximized. In this way, accurate road shape data can always be extracted without adverse effects from various environmental factors such as weather changes and shadows or dirt on the road, and accurate posture data can be estimated from it.
- The image display means or the image display process can display the image obtained by synthesizing the read navigation information at the position determined to be appropriate in the photographed live-action video on a predetermined display screen of a display device such as a liquid crystal display panel for car navigation installed, for example, at the approximate center of the dashboard. Alternatively, the synthesized image may be projected and displayed on the inner surface of the transparent window in front of the driver's seat by a display device such as a so-called HUD (Head-Up Display) projection device.
- That is, the data processing means or the data processing process reads the navigation information corresponding to the detected current position of the moving object from the navigation information stored in advance in association with the road map data; generates, based on the current position and the road map data, a road shape model for the road assumed to be photographed from the current position; extracts the road shape data, which is the image data of the road, from the photographed video; compares the road shape model data with the road shape data to estimate the posture data of the on-board camera or the moving object with respect to the subject road; integrates this with the azimuth data obtained from a three-dimensional inertial sensor (INS) attached to the vehicle; determines, based on the integrated attitude data, the display position in the photographed live-action video of the navigation information read out in accordance with the current position of the moving object; and outputs data for displaying the image of the navigation information at the determined position.
- The navigation information image may also be projected onto the inner surface of the transparent window in front of the driver's seat of the moving object, so that it is displayed combined with the landscape seen through that window.
- FIG. 1 is a diagram showing a schematic configuration of a mobile navigation information display device according to an embodiment of the present invention.
- FIGS. 2A to 2D are diagrams showing the relative positional relationship between the three-dimensional vehicle coordinate system VCS, the three-dimensional camera coordinate system CCS, and the two-dimensional projected image coordinate system ICS.
- FIG. 3 is a diagram showing an example of a map represented by the points and lines of road map data.
- FIG. 4 is a diagram showing a road segment horizontal shape model approximated by a clothoid curve.
- FIG. 5 is a diagram showing an example of a road horizontal shape model used in 2D-2D matching.
- FIG. 6 is a flowchart showing a flow of a series of main processes including extraction of road shape data, generation of a road shape model, estimation of camera posture parameters, and the like in the central processing unit.
- FIG. 7 is a diagram summarizing various mathematical expressions used for various calculations performed in the central processing unit.
- FIG. 8 is a diagram showing an example of a video, claimed to be displayable by a conventional navigation system, in which an image of navigation information is superimposed.
- FIG. 9 shows an example of the front view actually seen through the windshield from the driver's seat of a car.
- FIG. 10A is a diagram schematically showing the degree of change of the image projected onto the road surface with respect to a unit angle change in the line of sight from the driver's seat of a general vehicle.
- FIG. 10B is a diagram showing the degree of change of the projected image onto the road surface with respect to a unit angle change in a line of sight from a position higher than in FIG. 10A.
- FIG. 1 shows the schematic configuration of a mobile object navigation information display device according to an embodiment of the present invention. Since the mobile object navigation information display method according to the embodiment is embodied by the operation of this device, the two are described together below.
- This mobile object navigation information display device has, as its main parts, a sensor input unit 1, an operation unit 2, a map management unit 3, a central processing unit 4, an image display unit 5, and a control unit 6.
- The sensor input unit 1 includes a CCD (solid-state imaging device) camera 101, a GPS sensor 102, an INS sensor 103, and a road traffic information receiver (VICS) 104.
- The CCD camera 101 is installed, for example, on the dashboard of the driver's seat or near the ceiling (not shown) of a moving object such as an automobile (hereinafter also referred to as the own vehicle), so that it can capture (image) the scene in front of the vehicle at a camera angle almost the same as the driver's line of sight through the windshield.
- The CCD camera 101 is, for example, a monocular camera with a fixed focal length; it captures the image of the landscape ahead, including the road, and the captured image signal is transferred as data to an image memory (not shown) of the central processing unit 4. The traveling azimuth data and vehicle speed data of the moving object acquired by the GPS sensor 102 and the INS sensor 103 are transferred to the central processing unit 4 in synchronization with the image data acquired by the CCD camera 101. The data received by the road traffic information receiver (VICS) 104 is also transferred to the central processing unit 4.
- The operation unit 2 transfers instructions such as system settings and mode changes to the central processing unit 4 in response to button operations by the user or operation commands input from a remote-control input device (not shown).
- The map management unit 3 stores road map data of a predetermined geographical area in advance, and reads out various kinds of information on the road position designated by commands input from the operation unit 2.
- The central processing unit 4 mainly comprises four modules, namely an image processing module 401, a positioning processing module 402, a video output processing module 403, and a control output processing module 404, together with an image data generation unit 405.
- The image processing module 401 performs posture estimation of the in-vehicle CCD camera, driving-lane tracking, obstacle detection, and inter-vehicle distance calculation.
- The positioning processing module 402 performs map matching of the azimuth and vehicle speed from the sensor input unit 1 against the road map data of the map management unit 3, calculates correct road position information, and outputs the data.
- The video output processing module 403 expresses the route guidance, vehicle position, and map information to be shown on the video display unit 5 as virtual entities in a three-dimensional augmented reality (Augmented Reality) space, projects them as a two-dimensional road image using the posture parameters of the CCD camera 101 obtained by the estimation method described later, and fuses (synthesizes) them with the real image of the landscape in front of the moving object. It also generates data to highlight road lane markings in bad weather and to display dangers such as obstacles. Furthermore, as information to be added to the road map, information on objects that can serve as landmarks for route guidance, such as landmarks, railway stations, hospitals, and gas stations, is converted into icons, and the icons are projected onto the actual road image using the camera posture parameters.
- The control output processing module 404 comprehensively judges the analysis results and gives the control unit 6 an alarm output instruction for outputting an alarm or the like corresponding to the degree of danger to the own vehicle.
- The image data generation unit 405 performs driving-lane recognition, road shape recognition, obstacle recognition, absolute vehicle position recognition, camera posture estimation, and the like, based mainly on the data output from the image processing module and the positioning processing module and on the map data read from the map management unit 3, and generates data for displaying the navigation information at the appropriate position in the actual video.
- The video display unit 5 synthesizes the navigation information at the appropriate position in the actual video based on the data generated by the image data generation unit 405, and displays the resulting video (image), for example, on the screen of a liquid crystal display panel.
- The control unit 6 performs output control of alarms and the like corresponding to the above-described alarm output command, sound output control corresponding to the analysis results of the control output module, and brake control, steering control, and the like, for example by controlling the operation of the motor systems provided for adjusting their respective control amounts.
- In operation, road map data of the nearby geography including the current position of the own vehicle is read out based on the current position detected by the GPS sensor 102, the INS sensor 103, and the positioning processing module, and from these data the image data generation unit 405 generates a road shape model for the road assumed to be photographed from the current position.
- The road shape data, which is the image data of the road included in the scenery in the traveling direction, is extracted from the photographed video based on, for example, the white-line image data of the road's lane markings.
- The image data generation unit 405 then compares the road shape data with the road shape model data and estimates the posture data of the CCD camera 101 (or of the vehicle) with respect to the road being viewed, which is the subject photographed by the CCD camera 101. At this time, it is desirable to integrate into this posture data the angular velocity and acceleration data obtained from a three-dimensional inertial sensor attached to the vehicle body.
- Based on the estimated posture data, the appropriate display position in the photographed live-action video of the navigation information read out corresponding to the current position of the moving object is determined, and image data is generated so that an image combining the read navigation information at that position can be displayed. In other words, the image data generation unit 405 compares the road shape model data with the road shape data to determine where in the real video image it is appropriate to synthesize the navigation information.
- Based on the image data generated in this way, the image display unit 5 can display navigation information such as route guidance, own-vehicle position, and map information accurately synthesized at the appropriate position in the actual video of the road ahead of the moving object, or in the actual scenery seen through the windshield; as a result, an image is displayed from which the driver can intuitively and accurately recognize the correspondence between the navigation information and the actual image or the actual scene.
- Here, the navigation information includes, for example, at least one of: route guidance regarding the route to the destination of the moving object, the position of the own vehicle, the lane in which the own vehicle is traveling, and buildings serving as landmarks for the driver to confirm the route guidance or the own-vehicle position. When the navigation information is character, symbol, or numeric information, it is desirable to convert the information into an icon and combine the icon image with the photographed video for display.
- The image data generation unit 405 expresses the navigation information as a virtual entity in the 3D augmented reality space and, based on the already obtained posture data and the like, projects it into the 2D feature space, assigning the navigation information image to the corresponding position in the road shape data. In this way, even when, for example, a building serving as a landmark for route guidance is hidden in the live-action video behind a building on the near side or on the inside of a curved road, the presence of the landmark building can be shown visually and intuitively.
- Further, the image data generation unit 405 converts the road shape data into perspective two-dimensional feature-space image data, converts the road shape model data likewise, compares the two with each other in the two-dimensional feature space, and estimates the posture data of the on-board camera or the moving object with respect to the road surface of the subject road. In this way, the comparison of the road shape data with the road shape model data used for posture estimation is carried out between two-dimensional data in a pseudo-three-dimensional two-dimensional feature space, simplifying and speeding up the collation process.
- Moreover, modeling is performed taking into account the road structure whereby the one-sided slope of the running surface on a curved road is set to change according to the horizontal curvature of the curve, and thereby a road shape model of a multi-lane road is generated.
- In comparing the data, a road shape look-up table (RSL) is referred to: RSL values representing the existence probability of road white lines are computed from the actual video, and the posture data of the moving object may be obtained such that an evaluation value based on the RSL values is maximized. In this way, accurate road shape data can always be extracted without adverse effects from various environmental factors such as changes in the weather and shadows or dirt on the road surface, which in turn makes it possible to estimate accurate posture data.
- The image obtained by synthesizing the navigation information at the appropriate position in the captured live-action video may be displayed on a display device such as a liquid crystal display panel for car navigation installed, for example, at the approximate center of the dashboard. Alternatively, the image obtained by synthesizing the read navigation information at the position determined by the above-described collation may be projected and displayed on the inner surface of the transparent window in front of the driver's seat by a so-called HUD (Head-Up Display) type projection device. It is also possible, instead of combining the navigation information image into the photographed image, to use the position determined by the collation to project only the navigation information image onto the inner surface of the windshield at the corresponding location, so that the image of the navigation information is displayed combined with the view seen through the window in front of the driver's seat.
- FIGS. 2A to 2D show the relative positional relationship between the three-dimensional vehicle coordinate system VCS (Xv, Yv, Zv), the three-dimensional camera coordinate system CCS (Xc, Yc, Zc), and the two-dimensional projected image coordinate system ICS (xi, yi). The Xv and Yv axes of the vehicle coordinate system are set to point left and up, respectively. The origin of the camera coordinate system CCS is located at the center of the lens of the CCD camera, and the Zc axis is set to coincide with the optical axis of the camera.
- The transformation from the camera coordinate system CCS to the image coordinate system ICS is a perspective projection, and can therefore be described by the matrix relational expression shown as Equation 1 in FIG. 7, where P is the homogeneous coordinate [Xc, Yc, Zc, 1] in the CCS, p is the homogeneous coordinate [xi, yi, 1] in the ICS, and A is a 3×4 projection matrix, which can in general be decomposed as shown in Equation 2 in FIG. 7.
- K is called the camera internal parameter matrix; it is determined by the horizontal and vertical scale factors (Sx, Sy), the image center point (u0, v0), and the skew (rotational deformation rate), and is expressed as Equation 3 in FIG. 7. The camera posture matrix M is called the camera external parameter matrix and indicates the transformation from the viewpoint to the target model coordinate system; in general it can be expressed by the three-dimensional translation and rotation of a rigid body, as in Equation 4 in FIG. 7, where R11 to R33 (the elements of R) are rotation parameters and Tx, Ty, Tz (the elements of T) are translation parameters.
- From Equations 1 to 4, a constraint equation as shown in Equation 5 in FIG. 7 is established, and the projection relationship between the image coordinate system and the vehicle coordinate system is expressed by Equation 6 in FIG. 7. That is, according to Equation 6, each corresponding point pair (p, P) in the 2D-3D space determines one constraint equation on the camera posture data; theoretically, six such corresponding point pairs are sufficient to estimate the camera pose.
- However, this embodiment avoids matching in 2D-3D (matching of 2D space against 3D space). Instead, a multi-lane road shape model is generated from the road map information and matched, in the 2D-2D feature space, against the multi-lane road shape data extracted from the actual image data; the camera posture data is thus estimated by matching two-dimensional space against two-dimensional space. The matching is not necessarily limited to the 2D-2D feature space: if depth data in 3D space can be estimated accurately and reliably from information sources other than the actual image of the front view, that data may of course be used to perform 2D-3D matching. In that case, however, the amount of data to be processed generally tends to be larger than with matching in the 2D-2D feature space.
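- The projection machinery of Equations 1 to 4 corresponds to the standard pinhole camera model, and under that assumption can be sketched as follows (the function names and interface are illustrative, not taken from the specification):

```python
import numpy as np

def intrinsic_matrix(sx, sy, u0, v0, skew=0.0):
    """Internal parameter matrix K (Equation 3): scale factors, image center, skew."""
    return np.array([[sx, skew, u0],
                     [0.0, sy,  v0],
                     [0.0, 0.0, 1.0]])

def external_matrix(R, t):
    """External parameter (posture) matrix M = [R | T] (Equation 4)."""
    return np.hstack([R, np.asarray(t, float).reshape(3, 1)])

def project(points_3d, K, M):
    """Perspective projection p ~ K M P (Equations 1-6).
    points_3d: (N, 3) points in the vehicle coordinate system."""
    P = np.hstack([points_3d, np.ones((len(points_3d), 1))])  # homogeneous
    q = (K @ M @ P.T).T
    return q[:, :2] / q[:, 2:3]                               # divide by depth
```

Each projected point supplies the constraint of Equation 6; estimating the pose directly from at least six 2D-3D correspondences is the approach that the embodiment replaces with 2D-2D feature-space matching.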
- Fig. 3 shows an example of a map composed of the points and lines represented by road map data.
- The road map data records three-dimensional position information (latitude, longitude, and elevation) of road segment nodes, together with road names, road classes, numbers of lanes, intersection conditions, and the like. The road width can be estimated from the road class. The node positions described in the map data lie on the road center line.
- Road structures are generally composed of complex surfaces using horizontal and vertical curvatures.
- Figure 4 shows a road segment horizontal shape model approximated by a clothoid curve. Such a model can be expressed mathematically as shown in Equation 7 in FIG. 7, where c0i and c1i are the initial curvature of the horizontal curve and the rate of change of the curvature, respectively, nli is the number of up lanes, nri is the number of down lanes, wi is the average road width of the segment, and Li is the segment length.
- The traveling position of a vehicle is not on the road center line; in a country with customary left-hand traffic such as Japan, it is usually offset to the left. The actual traveling position is therefore offset to the appropriate lane by using information on the position of the vehicle on the road (the amount of deviation from the origin of the vehicle coordinate system to the road center) and on its direction (the deviation angle between the Zv-axis direction of the vehicle coordinate system and the horizontal tangent of the road).
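- Equation 7 itself is not reproduced in this text, so the following sketch assumes the standard clothoid parameterization, in which the curvature varies linearly with arc length (c(s) = c0i + c1i·s); the helper names and the lane-offset construction are illustrative:

```python
import numpy as np

def clothoid_centerline(c0, c1, length, theta0=0.0, n=100):
    """Sample a segment whose curvature is c0 + c1*s (clothoid)."""
    ds = length / n
    s = np.arange(n) * ds
    theta = theta0 + c0 * s + 0.5 * c1 * s**2       # heading angle
    x = np.cumsum(np.cos(theta)) * ds               # numerical integration
    y = np.cumsum(np.sin(theta)) * ds
    return np.stack([x, y], axis=1)

def lane_division_lines(center, lane_width, n_left, n_right):
    """Offset the center line sideways to the multi-lane division lines."""
    d = np.gradient(center, axis=0)                 # tangent direction
    normal = np.stack([-d[:, 1], d[:, 0]], axis=1)  # left-pointing normal
    normal /= np.linalg.norm(normal, axis=1, keepdims=True)
    return [center + k * lane_width * normal
            for k in range(-n_right, n_left + 1)]
```

A further lateral shift of the model by the vehicle's in-road position and deviation angle, as described above, places the model in the driving lane's frame before projection.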
- The optimal posture is obtained by performing 2D-2D matching, in the two-dimensional feature space, between the road shape model estimated from the road map data and the road shape data extracted from the photographed image of the road. Fig. 5 shows an example of the road horizontal shape model used in this 2D-2D matching.
- The brightness and hue of the road white lines separating the lanes often change significantly depending on the state of illumination by sunlight or artificial lighting, the weather, and so on. The concept of the road shape look-up table (RSL) is therefore used to express the existence probability of road white lines instead of brightness values, thereby realizing fast and robust road shape matching. The RSL value increases as a pixel approaches a road white line.
- First, candidates for road white lines, dividing lines, and road-region boundary lines are extracted as feature regions, and the image is binarized (pixels belonging to a feature region are set to 1, the other pixels to 0). Next, the RSL value of each pixel is calculated using Equation 9 in FIG. 7 from the binarized pixel values and the kernel coefficients of the RSL. The kernel size is usually set to 5 or 7 to reduce noise, and each coefficient is determined by the Gaussian distribution equation.
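- A minimal version of this RSL computation might look as follows. The specification extracts feature regions with a differential (edge) filter; the brightness threshold used here is a simplification, and a Gaussian blur stands in for the 5- or 7-tap Gaussian kernel of Equation 9:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def rsl_image(gray, thresh=200, sigma=1.5):
    """Road shape look-up table: per-pixel white-line existence probability.
    1) binarize white-line/boundary candidates (feature region = 1, else 0),
    2) spread them with a small Gaussian kernel so the value rises
       smoothly as a pixel approaches a road white line."""
    binary = (gray >= thresh).astype(float)
    rsl = gaussian_filter(binary, sigma=sigma)
    return rsl / (rsl.max() + 1e-9)   # normalize to [0, 1]
```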
- The final evaluation equation for camera pose estimation is shown as Equation 10 in FIG. 7, where the summation runs over the set of two-dimensional projection points of the road horizontal shape model. By Equation 10, the highest RSL evaluation value is obtained for the posture at which the road shape model generated from the road map data best matches the road shape data extracted from the actual video image.
- The camera posture data obtained in this way is used in the collation that determines the position where the navigation information is to be displayed, and is also used as the feedback amount for the next estimation.
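- Under the same assumptions as the earlier projection sketch, the Equation 10 evaluation reduces to summing RSL values at the projected model points; a candidate posture scores highest when the projected model lies on the white-line ridges:

```python
import numpy as np

def pose_score(K, M, model_points_3d, rsl):
    """Equation 10-style evaluation: sum the RSL values at the 2D
    projections of the road shape model under candidate posture M."""
    P = np.hstack([model_points_3d, np.ones((len(model_points_3d), 1))])
    q = (K @ M @ P.T).T                  # homogeneous image coordinates
    q = q[q[:, 2] > 0]                   # keep points in front of the camera
    uv = q[:, :2] / q[:, 2:3]
    h, w = rsl.shape
    score = 0.0
    for u, v in np.round(uv).astype(int):
        if 0 <= u < w and 0 <= v < h:
            score += rsl[v, u]           # large where a white line is likely
    return score
```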
- FIG. 6 is a flowchart showing a flow of a series of main processes including extraction of road shape data, generation of a road shape model, estimation of camera posture parameters, and the like in the central processing unit.
- First, the data acquired by the GPS sensor and the INS sensor is acquired in synchronization with the acquisition of the actual video data (S2). From the video, separation-line regions such as the white lines of the road's lane markings and the boundary lines of the pavement surface are extracted, and the RSL values are calculated (S3). The current position (absolute position) of the vehicle is then determined (detected) by so-called map matching, and the related map data corresponding to the current position is read out from the information stored on the map data CD (S4).
- The road horizontal shape model generated from the map data is projected into the perspective two-dimensional space (S6), and an evaluation value is obtained by matching the RSL expression of the road image with the projected road horizontal shape model (S7). It is then determined whether the obtained evaluation value is the maximum (S8). If it is (Y in S8), the posture vector at that time is output (S9), and the output value is fed back as the starting point of the next search (S10). If it is not, the posture vector is updated by the Hooke & Jeeves method (S11) and re-evaluation is performed (S11 to S6 to S8); this loop is repeated until the maximum value is obtained (until Y in S8).
- In parallel, the angle and azimuth change amounts accumulated at the same point by the inertial sensor are input, together with the posture vector having the maximum evaluation value, to a Kalman filter, which outputs the integrated posture data.
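- The S6-S8-S11 loop amounts to maximizing the evaluation value over the posture vector. A generic Hooke & Jeeves pattern search, sketched below under the assumption of a plain maximization interface (the specification does not detail its parameter settings), captures the exploratory-move, pattern-move, and step-shrink cycle:

```python
import numpy as np

def hooke_jeeves(f, x0, step=0.01, shrink=0.5, tol=1e-5, max_iter=200):
    """Maximize f by Hooke & Jeeves pattern search: exploratory moves
    along each posture axis, then a pattern move, shrinking the step
    whenever no move improves the evaluation value."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        base, fbase = x.copy(), fx
        for i in range(len(x)):                  # exploratory moves
            for d in (+step, -step):
                trial = x.copy()
                trial[i] += d
                ft = f(trial)
                if ft > fx:
                    x, fx = trial, ft
                    break
        if fx > fbase:                           # pattern move from old base
            pattern = x + (x - base)
            fp = f(pattern)
            if fp > fx:
                x, fx = pattern, fp
        else:
            step *= shrink                       # no improvement: shrink step
            if step < tol:
                break
    return x, fx
```

Combined with the earlier pose_score sketch, posture, value = hooke_jeeves(lambda v: pose_score(K, M_from(v), model_pts, rsl), v0) would realize the S8 maximum test, where M_from() is a hypothetical conversion from the posture vector to the external matrix.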
- As described above, according to the mobile object navigation information display method or the mobile object navigation information display device of the present invention, a road shape model is generated for the road assumed to be photographed from the current position, based on the current position of a moving object such as a car and road map data including that position; the road shape data, which is the image data of the road included in the landscape ahead or in the traveling direction, is extracted from the actual image; the road shape model data is compared with the road shape data to estimate the posture data of the vehicle-mounted camera or the moving object with respect to the road in the scenery, which is the subject; the appropriate display position in the photographed actual image of the navigation information read out corresponding to the current position of the moving object is determined; and an image combining the read navigation information at that position is displayed. As a result, navigation information such as route guidance, own-vehicle position, and map information can be accurately projected and displayed at the appropriate position in the actual video of the road ahead of the moving object, or in the actual scenery, and in turn the driver can intuitively and accurately recognize the correspondence between the navigation information and the actual image or the actual scene.
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Navigation (AREA)
- Studio Devices (AREA)
- Instructional Devices (AREA)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2003302447A AU2003302447A1 (en) | 2002-11-22 | 2003-11-20 | Moving body navigate information display method and moving body navigate information display device |
JP2004554993A JPWO2004048895A1 (ja) | 2002-11-22 | 2003-11-20 | 移動体ナビゲート情報表示方法および移動体ナビゲート情報表示装置 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2002340012 | 2002-11-22 | ||
JP2002-340012 | 2002-11-22 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2004048895A1 true WO2004048895A1 (ja) | 2004-06-10 |
Family
ID=32375798
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2003/014815 WO2004048895A1 (ja) | 2002-11-22 | 2003-11-20 | 移動体ナビゲート情報表示方法および移動体ナビゲート情報表示装置 |
Country Status (3)
Country | Link |
---|---|
JP (1) | JPWO2004048895A1 (ja) |
AU (1) | AU2003302447A1 (ja) |
WO (1) | WO2004048895A1 (ja) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2006035755A1 (ja) * | 2004-09-28 | 2006-04-06 | National University Corporation Kumamoto University | 移動体ナビゲート情報表示方法および移動体ナビゲート情報表示装置 |
JP2006119090A (ja) * | 2004-10-25 | 2006-05-11 | Mitsubishi Electric Corp | ナビゲーション装置 |
JP2008139148A (ja) * | 2006-12-01 | 2008-06-19 | Denso Corp | 通信型ナビゲーションシステム、車両ナビゲーション装置及びセンター装置 |
JP2010096874A (ja) * | 2008-10-15 | 2010-04-30 | Nippon Seiki Co Ltd | 車両用表示装置 |
US7863287B2 (en) | 2002-12-18 | 2011-01-04 | Wyeth Llc | Compositions of non-steroidal anti-inflammatory drugs, decongestants and anti-histamines |
US8108142B2 (en) * | 2005-01-26 | 2012-01-31 | Volkswagen Ag | 3D navigation system for motor vehicles |
US8121350B2 (en) | 2006-12-29 | 2012-02-21 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Apparatus, method and computer program for determining a position on the basis of a camera image from a camera |
US8180567B2 (en) | 2005-06-06 | 2012-05-15 | Tomtom International B.V. | Navigation device with camera-info |
US8423292B2 (en) | 2008-08-19 | 2013-04-16 | Tomtom International B.V. | Navigation device with camera-info |
JP2013242648A (ja) * | 2012-05-18 | 2013-12-05 | Yokogawa Electric Corp | 情報表示装置及び情報表示システム |
US9046378B2 (en) | 2010-07-27 | 2015-06-02 | Toyota Jidosha Kabushiki Kaisha | Driving assistance device |
KR20150088636A (ko) * | 2014-01-24 | 2015-08-03 | 한화테크윈 주식회사 | 위치 추정 장치 및 방법 |
JPWO2016075809A1 (ja) * | 2014-11-14 | 2017-08-17 | 日産自動車株式会社 | 表示装置及び表示方法 |
KR102009031B1 (ko) * | 2018-09-07 | 2019-08-08 | 네이버랩스 주식회사 | 증강현실을 이용한 실내 내비게이션을 위한 방법 및 시스템 |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2015015775A (ja) * | 2014-10-22 | 2015-01-22 | レノボ・イノベーションズ・リミテッド(香港) | 地形表示システム、携帯端末、地形表示方法およびプログラム |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07257228A (ja) * | 1994-03-18 | 1995-10-09 | Nissan Motor Co Ltd | 車両用表示装置 |
JPH0883397A (ja) * | 1994-09-12 | 1996-03-26 | Nissan Motor Co Ltd | 車両用経路誘導装置 |
JPH10132598A (ja) * | 1996-10-31 | 1998-05-22 | Sony Corp | ナビゲート方法、ナビゲーション装置及び自動車 |
JPH11304499A (ja) * | 1998-04-22 | 1999-11-05 | Matsushita Electric Ind Co Ltd | カーナビゲーション装置 |
JP2001331787A (ja) * | 2000-05-19 | 2001-11-30 | Toyota Central Res & Dev Lab Inc | 道路形状推定装置 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3079841B2 (ja) * | 1993-07-14 | 2000-08-21 | 日産自動車株式会社 | 道路形状及び自車両位置の計測装置 |
JP3473321B2 (ja) * | 1997-05-09 | 2003-12-02 | トヨタ自動車株式会社 | 車両用表示装置 |
JPH1165431A (ja) * | 1997-08-25 | 1999-03-05 | Nippon Telegr & Teleph Corp <Ntt> | 景観ラベル付カーナビゲーション装置およびシステム |
JP2000097714A (ja) * | 1998-09-21 | 2000-04-07 | Sumitomo Electric Ind Ltd | カーナビゲーション装置 |
- 2003
- 2003-11-20 WO PCT/JP2003/014815 patent/WO2004048895A1/ja active Application Filing
- 2003-11-20 JP JP2004554993A patent/JPWO2004048895A1/ja active Pending
- 2003-11-20 AU AU2003302447A patent/AU2003302447A1/en not_active Abandoned
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07257228A (ja) * | 1994-03-18 | 1995-10-09 | Nissan Motor Co Ltd | 車両用表示装置 |
JPH0883397A (ja) * | 1994-09-12 | 1996-03-26 | Nissan Motor Co Ltd | 車両用経路誘導装置 |
JPH10132598A (ja) * | 1996-10-31 | 1998-05-22 | Sony Corp | ナビゲート方法、ナビゲーション装置及び自動車 |
JPH11304499A (ja) * | 1998-04-22 | 1999-11-05 | Matsushita Electric Ind Co Ltd | カーナビゲーション装置 |
JP2001331787A (ja) * | 2000-05-19 | 2001-11-30 | Toyota Central Res & Dev Lab Inc | 道路形状推定装置 |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7863287B2 (en) | 2002-12-18 | 2011-01-04 | Wyeth Llc | Compositions of non-steroidal anti-inflammatory drugs, decongestants and anti-histamines |
US8195386B2 (en) | 2004-09-28 | 2012-06-05 | National University Corporation Kumamoto University | Movable-body navigation information display method and movable-body navigation information display unit |
JPWO2006035755A1 (ja) * | 2004-09-28 | 2008-05-15 | 国立大学法人 熊本大学 | 移動体ナビゲート情報表示方法および移動体ナビゲート情報表示装置 |
WO2006035755A1 (ja) * | 2004-09-28 | 2006-04-06 | National University Corporation Kumamoto University | 移動体ナビゲート情報表示方法および移動体ナビゲート情報表示装置 |
JP4696248B2 (ja) * | 2004-09-28 | 2011-06-08 | 国立大学法人 熊本大学 | 移動体ナビゲート情報表示方法および移動体ナビゲート情報表示装置 |
JP2006119090A (ja) * | 2004-10-25 | 2006-05-11 | Mitsubishi Electric Corp | ナビゲーション装置 |
US8108142B2 (en) * | 2005-01-26 | 2012-01-31 | Volkswagen Ag | 3D navigation system for motor vehicles |
US8352180B2 (en) | 2005-06-06 | 2013-01-08 | Tomtom International B.V. | Device with camera-info |
US8180567B2 (en) | 2005-06-06 | 2012-05-15 | Tomtom International B.V. | Navigation device with camera-info |
US8352181B2 (en) | 2006-12-01 | 2013-01-08 | Denso Corporation | Navigation system, in-vehicle navigation apparatus and center apparatus |
JP2008139148A (ja) * | 2006-12-01 | 2008-06-19 | Denso Corp | 通信型ナビゲーションシステム、車両ナビゲーション装置及びセンター装置 |
US8121350B2 (en) | 2006-12-29 | 2012-02-21 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Apparatus, method and computer program for determining a position on the basis of a camera image from a camera |
US8423292B2 (en) | 2008-08-19 | 2013-04-16 | Tomtom International B.V. | Navigation device with camera-info |
JP2010096874A (ja) * | 2008-10-15 | 2010-04-30 | Nippon Seiki Co Ltd | 車両用表示装置 |
US9046378B2 (en) | 2010-07-27 | 2015-06-02 | Toyota Jidosha Kabushiki Kaisha | Driving assistance device |
JP2013242648A (ja) * | 2012-05-18 | 2013-12-05 | Yokogawa Electric Corp | 情報表示装置及び情報表示システム |
US9241110B2 (en) | 2012-05-18 | 2016-01-19 | Yokogawa Electric Corporation | Information display device and information device system |
KR20150088636A (ko) * | 2014-01-24 | 2015-08-03 | 한화테크윈 주식회사 | 위치 추정 장치 및 방법 |
KR102016551B1 (ko) * | 2014-01-24 | 2019-09-02 | 한화디펜스 주식회사 | 위치 추정 장치 및 방법 |
JPWO2016075809A1 (ja) * | 2014-11-14 | 2017-08-17 | 日産自動車株式会社 | 表示装置及び表示方法 |
KR102009031B1 (ko) * | 2018-09-07 | 2019-08-08 | 네이버랩스 주식회사 | 증강현실을 이용한 실내 내비게이션을 위한 방법 및 시스템 |
Also Published As
Publication number | Publication date |
---|---|
JPWO2004048895A1 (ja) | 2006-03-23 |
AU2003302447A1 (en) | 2004-06-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4696248B2 (ja) | 移動体ナビゲート情報表示方法および移動体ナビゲート情報表示装置 | |
US6956503B2 (en) | Image display apparatus, image display method, measurement apparatus, measurement method, information processing method, information processing apparatus, and identification method | |
US8558758B2 (en) | Information display apparatus | |
EP2372309B1 (en) | Vehicle position detection system | |
US8352180B2 (en) | Device with camera-info | |
US8423292B2 (en) | Navigation device with camera-info | |
US7039521B2 (en) | Method and device for displaying driving instructions, especially in car navigation systems | |
JP3375258B2 (ja) | 地図表示方法及び装置並びにその装置を備えたナビゲーション装置 | |
US9459113B2 (en) | Visual guidance for vehicle navigation system | |
JP5057184B2 (ja) | 画像処理システム及び車両制御システム | |
US20050209776A1 (en) | Navigation apparatus and intersection guidance method | |
US20100256900A1 (en) | Navigation device | |
US11525694B2 (en) | Superimposed-image display device and computer program | |
CN210139859U (zh) | 汽车碰撞预警系统、导航系统及汽车 | |
WO2004048895A1 (ja) | 移動体ナビゲート情報表示方法および移動体ナビゲート情報表示装置 | |
JP4596566B2 (ja) | 自車情報認識装置及び自車情報認識方法 | |
CN111094898A (zh) | 用于控制用于机动车辆的增强现实抬头显示设备的显示的方法、设备和具有指令的计算机可读存储介质 | |
CN110304057A (zh) | 汽车碰撞预警、导航方法、电子设备、系统及汽车 | |
CN115917255A (zh) | 基于视觉的位置和转弯标记预测 | |
CN112528719A (zh) | 推定装置、推定方法以及存储介质 | |
JP4250391B2 (ja) | 指標検出装置、指標検出方法 | |
JP3298515B2 (ja) | ナビゲーション用地図表示装置 | |
KR20040025150A (ko) | 차량용 항법 장치에서의 경로 안내 방법 | |
Hu et al. | Towards A New Generation of Car Navigation System-Data Fusion Technology in Solving On-board Camera Registration Problem |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2004554993 Country of ref document: JP |
122 | Ep: pct application non-entry in european phase |