WO2009087716A1 - Image deformation method, image display method, image deformation device, and image display device - Google Patents
Image deformation method, image display method, image deformation device, and image display device
- Publication number
- WO2009087716A1 (PCT/JP2008/003658)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image data
- image
- camera
- processing unit
- map
- Prior art date
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/28—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
- G01C21/30—Map- or contour-matching
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
- G08G1/0962—Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
- G08G1/0968—Systems involving transmission of navigation instructions to the vehicle
- G08G1/0969—Systems involving transmission of navigation instructions to the vehicle having a display in the form of a map
Definitions
- The present invention relates to a method and an apparatus for performing guidance route guidance in a car navigation system.
- A car navigation system sets an optimum guidance route to a preset destination based on the road map image data held in the navigation device and then, as the vehicle travels, displays right/left turn guidance on the display at important points in the route such as intersections.
- Conventionally, the route information for specifying and guiding an intersection position is synthesized based on the installation position and optical conditions of the camera. This requires identifying the installation position, viewing angle, and focal length of the camera; aligning the center of the intersection with the center of the viewing angle of the camera; and matching the position of the map information input from the navigation device with the position of the vehicle. If these conditions are not met, the right/left turn arrows at the intersection cannot be accurately combined with the map information, and as a result the driver of the host vehicle may receive incorrect route guidance at the intersection.
- An object of the present invention is to enable accurate right/left turn guidance at an intersection without depending on the installation position and optical conditions of the camera installed in the host vehicle.
- The image deformation method of the present invention includes: a first step of recognizing a first road shape in camera image data generated by a camera that captures an external image from the host vehicle; a second step of reading map image data in the vicinity of the host vehicle from the navigation device, detecting second attention point coordinates present in a second road shape in the read map image data and first attention point coordinates present in the first road shape, and associating the first attention point coordinates with the second attention point coordinates; and, as described below, a third step of deforming an image according to the distortion amount between the associated coordinates.
- In a preferred embodiment of the present invention, in the first step, after contour components in the camera image data are detected based on the luminance signal of the camera image data, the first road shape is recognized on the basis of the contour components located at the edge of a second image area whose pixels have color difference information equivalent to that of a first image area estimated to be a road in the camera image data.
- In a preferred embodiment of the present invention, in the first step a road contour is recognized as the first road shape; in the second step, second intersection contour coordinates in a road area of the map image data are detected as the second attention point coordinates, and, also in the second step, inflection point coordinates of the road contour in the camera image data are recognized as first intersection contour coordinates, which are then detected as the first attention point coordinates.
- In a preferred embodiment of the present invention, in the first step a road contour is recognized as the first road shape; in the second step, first intersection contour coordinates in a road area of the camera image data are recognized as the first attention point coordinates, and when the recognized first attention point coordinates are insufficient as first intersection contour coordinates, the missing first attention point coordinates are estimated based on the recognized ones.
- In a preferred embodiment of the present invention, in the first step a road contour is recognized as the first road shape; in the second step, second intersection contour coordinates in the road area of the map image data are detected as the second attention point coordinates, first direction vectors of the contour components in the camera image data are detected, first intersection contour coordinates are recognized based on the detected first direction vectors, and the recognized first intersection contour coordinates are detected as the first attention point coordinates.
- In a preferred embodiment of the present invention, the distortion amount arising between the associated first attention point coordinates and second attention point coordinates is calculated, and the map image data or the camera image data is coordinate-converted so that its image is deformed according to the calculated distortion amount.
- In a preferred embodiment of the present invention, in the third step the distortion amount is calculated such that the first attention point coordinates coincide with the second attention point coordinates.
- In a preferred embodiment of the present invention, in the second step a second direction vector of the road area in the map image data and a first direction vector of a contour component in the camera image data are detected; in the third step, the first and second direction vectors are associated with each other so that they move relative to each other by the minimum movement amount, and the distortion amount is calculated based on the difference between the first and second direction vectors.
- The image display method according to the present invention includes the first and second steps of the image deformation method of the present invention and a fourth step; in the fourth step, the camera image data and the map image data are combined in a state where the first and second attention point coordinates are associated with each other, and an image of the combined image data is displayed.
- Another image display method of the present invention includes the first to third steps of the image deformation method of the present invention and a fifth step. In the first step, guidance route guidance image data corresponding in position to the map image data is further read out from the navigation device. In the third step, instead of the map image data or the camera image data, the guidance route guidance image data is coordinate-converted so that its image is deformed according to the distortion amount. In the fifth step, the deformed guidance route guidance image data is combined with the undeformed camera image data so that the deformed image corresponds in position to the image of the undeformed camera image data, and then an image of the combined image data is displayed.
- A further image display method of the present invention includes the first to third steps of the image deformation method of the present invention and a sixth step. In the first step, map image data including guidance route guidance image data is read out from the navigation device as the map image data. In the third step, the map image data including the guidance route guidance image data is coordinate-converted so that its image is deformed according to the distortion amount. In the sixth step, the deformed map image data including the guidance route guidance image data is combined with the undeformed camera image data so that the deformed image corresponds in position to the image of the undeformed camera image data, and then an image of the combined image data is displayed.
- The image deformation apparatus of the present invention comprises: an image recognition unit that recognizes a first road shape in camera image data generated by a camera that captures an external image from the host vehicle; an attention point coordinate detection unit that reads map image data in the vicinity of the host vehicle from the navigation device, detects second attention point coordinates present in a second road shape in the read map image data and first attention point coordinates present in the first road shape, and then associates the first attention point coordinates with the second attention point coordinates; and a coordinate conversion processing unit that calculates the distortion amount arising between the first and second attention point coordinates associated by the attention point coordinate detection unit, and coordinate-converts the map image data or the camera image data so that its image is deformed according to the calculated distortion amount.
- The image display device of the present invention comprises: the image deformation apparatus of the present invention; an image composition processing unit that generates composite image data by combining the coordinate-converted map image data with the camera image data, or the coordinate-converted camera image data with the map image data, in a state where the first and second attention point coordinates are associated with each other; and an image display processing unit that generates a display signal based on the composite image data.
- In a preferred embodiment of the present invention, the coordinate conversion processing unit further reads guidance route guidance image data corresponding in position to the map image data from the navigation device and coordinate-converts the guidance route guidance image data so that its image is deformed according to the distortion amount, and the image composition processing unit combines the deformed guidance route guidance image data with the camera image data so that the deformed image corresponds in position to the image of the undeformed camera image data.
- In a preferred embodiment of the present invention, the coordinate conversion processing unit reads, as the map image data, map image data including guidance route guidance image data from the navigation device and coordinate-converts it so that its image is deformed according to the distortion amount, and the image composition processing unit combines the deformed map image data including the guidance route guidance image data with the camera image data so that the deformed image corresponds in position to the image of the undeformed camera image data.
- In a preferred embodiment of the present invention, the guidance route guidance image data is image data indicating the position of a destination to be guided to, or image data indicating the direction toward that destination.
- In a preferred embodiment of the present invention, the image composition processing unit combines the coordinate-converted guidance route guidance image data, which is image data indicating the position of the destination to be guided to, by adjusting the luminance signal or the color difference signal of the corresponding area of the camera image data.
- FIG. 1 is a block diagram of a car navigation system according to the present embodiment.
- FIG. 2 is a block diagram of an image deformation apparatus of the present invention and peripheral devices associated therewith.
- FIG. 3 is a pixel configuration diagram for determining contour pixels according to the present invention.
- FIG. 4 is a camera image diagram according to the present invention.
- FIG. 5 is an image diagram of camera image data in which an outline component is detected according to the first embodiment of the present invention.
- FIG. 6 is a camera image view showing a specific area according to the first embodiment of the present invention.
- FIG. 7 is an image diagram of road color difference data according to the first embodiment of the present invention.
- FIG. 8 is an image diagram of recognized road image data according to the first embodiment of the present invention.
- FIG. 9 is a map image view according to the embodiments 1, 4, 5, 6, 7, 8, 9, 10 of the present invention.
- FIG. 10 is a diagram showing a determination of a refracted portion of a road contour in camera image data according to the first embodiment of the present invention.
- FIG. 11 is a road contour vector diagram according to the first and third embodiments of the present invention.
- FIG. 12 is a diagram showing a determination of a refracted portion of a road contour in map image data according to the first embodiment of the present invention.
- FIG. 13 is a diagram showing a determination of a refracted portion of a road contour in camera image data according to a second embodiment of the present invention.
- FIG. 14 is a road contour vector diagram in camera image data according to the second embodiment of the present invention.
- FIG. 15 is a road contour vector diagram in camera image data according to the third embodiment of the present invention.
- FIG. 16 is a coordinate conversion conceptual diagram according to the fourth, fifth, and sixth embodiments of the present invention.
- FIG. 17 is an image deformation image diagram of map image data according to the fourth and fifth embodiments of the present invention.
- FIG. 18 is an image deformation image diagram of camera image data according to the fourth and fifth embodiments of the present invention.
- FIG. 19 is a road contour vector diagram according to Embodiment 5 of the present invention.
- FIG. 20 is an image diagram of guidance route guidance arrow image data according to the sixth embodiment of the present invention.
- FIG. 21 is an image diagram after image modification of guidance route guidance arrow image data according to the sixth embodiment of the present invention.
- FIG. 22 is a composite image diagram of guidance route guidance arrow image data and camera image data according to the sixth embodiment of the present invention.
- FIG. 23 is an image diagram of map image data including guidance route guidance arrow image data according to the seventh embodiment of the present invention.
- FIG. 24 is an image diagram after image modification of map image data including guidance route guidance arrow image data according to the seventh embodiment of the present invention.
- FIG. 25 is a composite image diagram of map image data and camera image data including guidance route guidance arrow image data according to the seventh embodiment of the present invention.
- FIG. 26 is an image diagram of destination mark image data according to the eighth, ninth, tenth embodiments of the present invention.
- FIG. 27 is an image diagram after image modification of destination mark image data according to the eighth and tenth embodiments of the present invention.
- FIG. 28 is a composite image diagram of destination mark image data and camera image data according to the eighth and ninth embodiments of the present invention.
- FIG. 29 is an image diagram of map image data including destination mark image data according to the ninth embodiment of the present invention.
- FIG. 30 is an image diagram after image deformation of map image data including destination mark image data according to the ninth embodiment of the present invention.
- FIG. 31 is a composite image diagram of map image data and camera image data including destination mark image data according to the ninth embodiment of the present invention.
- FIG. 32 is an image view in which the outline of the destination building according to the tenth embodiment of the present invention is changed.
- FIG. 33 is an image diagram in which the color difference information of the destination building according to the tenth embodiment of the present invention is changed.
- FIG. 34 is a flowchart of the image modification method according to the first, second, third, fourth, and fifth embodiments of the present invention.
- FIG. 35 is a flowchart of the image display method according to the sixth and seventh embodiments of the present invention.
- FIG. 36 is a flowchart of an image display method according to Embodiments 8, 9, 10 of the present invention.
- FIG. 1 shows the configuration of a car navigation apparatus according to each embodiment of the present invention.
- The car navigation device is a route guidance device that searches for and sets a route to a destination set by the user, based on road map image data prepared in advance, and then performs guidance along the route.
- The device comprises the elements shown in the functional block diagram of FIG. 1.
- The self-contained navigation control unit 102 includes a vehicle speed sensor that detects the traveling speed of the host vehicle and a sensor that detects the rotation angle of the host vehicle.
- Self-contained navigation is navigation that moves the current-position cursor using only signals that can be detected from the host vehicle itself.
- A Global Positioning System control unit (hereinafter, GPS control unit) 103 receives, with a GPS receiver, GPS signals transmitted from artificial satellites (GPS satellites) placed in a plurality of predetermined orbits at an altitude of about 20,000 km, and measures the current position and heading of the vehicle using information contained in the GPS signals.
- A Vehicle Information and Communication System information receiver (hereinafter, VICS information receiver) 104 sequentially receives, via an external antenna, the current road traffic information outside the host vehicle transmitted by the VICS center.
- VICS is a system that receives traffic information sent by FM multiplex broadcasts and roadside transmitters and displays it as graphics and text.
- The VICS center sends edited and processed road traffic information (congestion, traffic regulations, etc.) in real time.
- the car navigation system receives road traffic information by the VICS information receiver 104 and superimposes and displays the received road traffic information on a prepared map.
- the communication control unit 101 enables data communication wirelessly or by wire.
- a communication device (not shown) controlled by the communication control unit 101 may be incorporated in the navigation device, or may externally connect a mobile communication terminal such as a mobile phone, for example.
- a user can access an external server via the communication control unit 101.
- the navigation control unit 106 is a part that controls the entire apparatus.
- The map information database 107 comprises the various memories necessary for the operation of the apparatus and holds various data such as recorded map image data and facility data.
- the navigation control unit 106 reads out necessary map image data from the map information database 107.
- the memory in the map information database 107 may be a CD / DVD-ROM or a hard disk drive (HDD).
- the update information database 108 is a memory for storing difference data of map information updated in the map information database 107.
- Storage into the update information database 108 is controlled by the navigation control unit 106.
- the voice output unit 105 includes a speaker, and outputs, for example, voice such as intersection guidance at the time of route guidance.
- the imaging unit 109 is a camera provided with an imaging element such as a CCD sensor or a CMOS sensor installed in front of the host vehicle.
- the image processing unit 110 converts the electrical signal from the imaging unit 109 into image data, and performs image processing on the map image data from the navigation control unit 106.
- the image combining processing unit 111 combines the map image data based on the current position of the host vehicle input from the navigation control unit 106 and the camera image data input from the image processing unit 110.
- the image display processing unit 112 displays an image of the image data synthesized by the image synthesis processing unit 111 on a display or the like of the car navigation apparatus.
- FIG. 2 is a block diagram of the image deformation device and the peripheral device associated therewith. The parts corresponding to those in FIG. 1 are given the same reference numerals.
- The image processing unit 110 includes an image recognition unit 205 that recognizes the road shape in camera image data (an image outside the host vehicle) captured by the imaging unit 109 capturing an external image from the host vehicle, an attention point coordinate detection unit 206 that reads map image data indicating the position of the vehicle from the navigation apparatus and detects attention point coordinates from the camera image data and the map image data, and a coordinate conversion processing unit 208.
- the image recognition unit 205, the focus point coordinate detection unit 206, and the coordinate conversion processing unit 208 constitute an image deformation apparatus.
- the image deformation apparatus corresponds to one function of basic image processing of the image processing unit 110 in FIG.
- The image processing unit 110 also includes a luminance signal / color difference signal separation processing unit 202 that separates the imaging signal from the imaging unit 109 into a luminance signal and a color difference signal, a luminance signal processing unit 203 that processes the luminance signal output from the luminance signal / color difference signal separation processing unit 202, and a color difference signal processing unit 204 that processes the color difference signal output from the luminance signal / color difference signal separation processing unit 202.
- the image recognition unit 205 performs an image recognition process based on signals separately processed by the luminance signal processing unit 203 and the color difference signal processing unit 204.
- Camera image data is input to the luminance signal / color difference signal separation processing unit 202 from the imaging unit 109.
- When the luminance signal / color difference signal separation processing unit 202 receives data of the three colors red (R), green (G), and blue (B) (the three primary colors of light) from the imaging unit 109, the RGB three-color data is converted into a Y signal, a U signal, and a V signal according to the general color space conversion formulas:
- Y = 0.299R + 0.587G + 0.114B
- U = -0.147R - 0.289G + 0.436B
- V = 0.615R - 0.515G - 0.100B
- Alternatively, the luminance signal / color difference signal separation processing unit 202 may convert the RGB three-color data input from the imaging unit 109 into Y, Cb, and Cr signals according to the YCbCr color space conversion equations of the ITU-R BT.601 standard:
- Y = 0.257R + 0.504G + 0.098B + 16
- Cb = -0.148R - 0.291G + 0.439B + 128
- Cr = 0.439R - 0.368G - 0.071B + 128
- The Y signal indicates luminance (brightness), the Cb and U signals indicate the blue color difference signal, and the Cr and V signals indicate the red color difference signal.
- When the luminance signal / color difference signal separation processing unit 202 receives data of the three colors cyan (C), magenta (M), and yellow (Y) (the three primary colors of colorants) from the imaging unit 109, the CMY three-color data is first converted into RGB three-color data by taking the complement of each component (for 8-bit data, R = 255 - C, G = 255 - M, B = 255 - Y), and the result is then converted into Y, Cb, and Cr signals (or Y, U, and V signals) according to one of the color space conversion equations above and output.
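- For illustration, the BT.601 conversion above translates directly into code. The following sketch is not part of the patent; the function name and the 8-bit value range are assumptions:

```python
def rgb_to_ycbcr_bt601(r: float, g: float, b: float) -> tuple:
    """ITU-R BT.601 color space conversion for 8-bit RGB input (0..255)."""
    y  =  0.257 * r + 0.504 * g + 0.098 * b + 16
    cb = -0.148 * r - 0.291 * g + 0.439 * b + 128
    cr =  0.439 * r - 0.368 * g - 0.071 * b + 128
    return y, cb, cr
```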
- The luminance signal processing unit 203 performs signal processing according to the luminance level on the luminance signal input from the luminance signal / color difference signal separation processing unit 202 and outputs the processed signal. The luminance signal processing unit 203 also performs contour pixel determination. For example, consider contour pixel determination using the simple 3×3 neighborhood shown in FIG. 3: the luminance signal of each of the peripheral pixels D31 to D34 and D36 to D39 is compared with the luminance signal of the target pixel D35, and if any luminance difference is larger than a preset value, a contour is judged to exist between that peripheral pixel and the target pixel D35, and the target pixel D35 is determined to be a contour pixel.
- In this way, contour image data, whose image is shown in FIG. 5, is generated as image data in which contour components have been detected based on luminance information.
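- A minimal sketch of the 3×3 contour pixel determination described above, assuming the luminance plane is available as a NumPy array; the default threshold is an arbitrary placeholder, since the patent only speaks of "a preset value":

```python
import numpy as np

def detect_contour_pixels(luma: np.ndarray, threshold: float = 24.0) -> np.ndarray:
    """Mark a pixel as a contour pixel when any of its eight neighbors
    (D31..D34, D36..D39 around the target D35 in FIG. 3) differs from it
    in luminance by more than the preset threshold."""
    luma = luma.astype(float)
    contour = np.zeros(luma.shape, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(luma, dy, axis=0), dx, axis=1)
            contour |= np.abs(luma - shifted) > threshold
    # np.roll wraps around the image borders, so discard the one-pixel frame.
    contour[0, :] = contour[-1, :] = contour[:, 0] = contour[:, -1] = False
    return contour
```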
- The color difference signal processing unit 204 performs signal processing according to the color difference on the color difference signal input from the luminance signal / color difference signal separation processing unit 202 and outputs the result. The color difference signal processing unit 204 also compares the color difference information of each pixel with that of the pixels in a preset specific image area (first image area; hereinafter, specific area pixels), and determines the image area (second image area) consisting of pixels whose color difference information is equivalent to that of the specific area pixels.
- The camera is usually installed facing the front center of the vehicle; in that case the lower center of the camera image is road surface, since the host vehicle is always on a road.
- The image recognition unit 205 is supplied with contour image data (illustrated in FIG. 5) from the luminance signal processing unit 203 and with color difference image data (illustrated in FIG. 7) of the image area A701 considered to be a road from the color difference signal processing unit 204.
- The image recognition unit 205 extracts only the contour pixel data of the road area from the supplied image data, combines the extracted road contour pixel data, and outputs the image data of the resulting image area (second image area), illustrated in FIG. 8.
- That is, the image recognition unit 205 extracts only the road contour pixel data by recognizing the contour component image signals located in or adjacent to the image area considered to be a road (color difference image data A701), recognizes the image area formed by combining the extracted road contour pixel data, and outputs the image data of the recognized image area (illustrated in FIG. 8).
- the road shape can be recognized based on camera image data from the host vehicle.
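- One way to realize the road area determination and contour extraction just described, sketched under assumptions the patent leaves open (the seed patch size, the color tolerance, and the use of a one-pixel dilation to test adjacency are all illustrative choices):

```python
import numpy as np

def road_contour_pixels(cb: np.ndarray, cr: np.ndarray,
                        contour: np.ndarray, tol: float = 10.0) -> np.ndarray:
    """Keep only contour pixels in or adjacent to the road-colored area.

    The first image area (the patch assumed to be road) is taken at the
    bottom center of the frame, where the road ahead of the host vehicle
    appears when the camera faces the front center of the vehicle."""
    h, w = cb.shape
    seed = (slice(h - h // 8, h), slice(w // 2 - w // 8, w // 2 + w // 8))
    cb0, cr0 = cb[seed].mean(), cr[seed].mean()
    # Second image area: pixels whose color difference matches the seed patch.
    road = (np.abs(cb - cb0) < tol) & (np.abs(cr - cr0) < tol)
    # Grow the road mask by one pixel so contours adjacent to it are kept.
    grown = road.copy()
    grown[1:, :] |= road[:-1, :]
    grown[:-1, :] |= road[1:, :]
    grown[:, 1:] |= road[:, :-1]
    grown[:, :-1] |= road[:, 1:]
    return contour & grown
```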
- The attention point coordinate detection unit 206 is supplied with the road image data (image data of the second image area) from the image recognition unit 205 and the map image data (illustrated in FIG. 9) from the navigation control unit 106. Within the image area considered to be a road, the attention point coordinate detection unit 206 calculates the refracted portions (road contour refraction points) of the road contour and detects the corresponding coordinates P1001 to P1004 as attention point coordinates (specifically, intersection contour coordinates). The attention points (coordinates P1001 to P1004) are illustrated in FIG. 10.
- The method by which the attention point coordinate detection unit 206 calculates the road contour refraction points will now be described concretely.
- The camera image is divided into left and right halves, and a road contour vector V1006 in the left half of the screen and a road contour vector V1007 in the right half are calculated. The road contour vector V1006 in the left half is limited to a direction vector of the first quadrant (exemplified by V1102 in FIG. 11), and the road contour vector V1007 in the right half is limited to a direction vector of the second quadrant (exemplified by V1101 in FIG. 11); road contour vectors V1006 and V1007 are detected under these constraints.
- the direction vector can be detected by calculating a linear approximate straight line for the pixels of the road contour.
- The refraction point coordinates in the road contour along the detected left road contour vector V1006 and right road contour vector V1007 are then calculated as the attention point coordinates.
- Here, perspective refers to line perspective: the technique of setting a vanishing point so that all lines converge to a single point.
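- The left/right road contour vectors can be obtained by an ordinary least-squares line fit over the contour pixel coordinates of each half of the screen. This sketch assumes that representation; the patent only states that a "linear approximate straight line" is calculated:

```python
import numpy as np

def contour_direction_vector(xs: np.ndarray, ys: np.ndarray) -> np.ndarray:
    """Fit a least-squares straight line to road contour pixels (xs, ys) and
    return its unit direction vector, normalized to point toward positive y.

    For contour pixels in the left half of the screen the result then lies
    in the first quadrant (cf. V1102 in FIG. 11); for the right half, in
    the second quadrant (cf. V1101)."""
    slope, _intercept = np.polyfit(xs, ys, 1)
    v = np.array([1.0, slope], dtype=float)
    v /= np.linalg.norm(v)
    return v if v[1] >= 0 else -v
```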
- The attention point coordinate detection unit 206 similarly calculates the road contour refraction points in the map image data shown in FIG. 9 and, as shown in FIG. 12, detects the corresponding coordinates P1201 to P1204 as attention point coordinates (specifically, intersection contour coordinates).
- The procedure is as follows: 1) the map image data (FIG. 9) is divided into left and right halves of the screen by a vertical baseline L1205, as shown in FIG. 12; 2) road contour vectors V1206 and V1207 are calculated for the left and right sides, respectively, with direction vector V1206 limited to the first quadrant (as shown by V1102 in FIG. 11) and direction vector V1207 limited to the second quadrant (as shown by V1101 in FIG. 11); 3) the refraction point coordinates along these vectors are calculated as the attention point coordinates.
- the point of interest coordinates in each of the camera image (FIG. 6) and the map image (FIG. 9) are output.
- Two-dimensional map image data has been taken as an example here, but attention points can be calculated from three-dimensional map image data by the same processing.
- In step S3401, the image processing unit 110 acquires camera image data (FIG. 4) from the imaging unit 109.
- In step S3402, the road shape (road contour) is recognized by the luminance signal / color difference signal separation processing unit 202, the luminance signal processing unit 203, the color difference signal processing unit 204, and the image recognition unit 205, based on the camera image data (FIG. 4) acquired by the image processing unit 110.
- In step S3403, the attention point coordinate detection unit 206 further acquires map image data (FIG. 9) from the navigation control unit 106.
- In step S3404, the attention point coordinate detection unit 206 determines whether to calculate direction vectors. In the present embodiment the direction vectors are not needed, so it is determined in step S3404 that no direction vectors are calculated; step S3405 is therefore skipped and the process moves to step S3406.
- In step S3406, the attention point coordinate detection unit 206 detects intersection contour coordinates as the attention point coordinates.
- In this way, the refraction point coordinates of the road contour in the camera image data (FIG. 4) generated by the imaging unit 109 are detected as attention point coordinates P1001 to P1004 (intersection contour coordinates), and the refraction point coordinates of the road contour in the map image data are detected as attention point coordinates P1201 to P1204 (intersection contour coordinates), the two sets of coordinates being associated with each other.
- the present embodiment basically has the same configuration as that of the first embodiment, but differs from the first embodiment in the following points.
- In the camera image data, the attention point coordinate detection unit 206 sometimes cannot detect the attention point coordinates (intersection contour coordinates in the camera image data) because another vehicle or an obstacle occupies an attention point to be calculated.
- In such a case, some attention point coordinates (hereinafter, detected attention point coordinates) P1401 and P1402 are detected, while the other attention point coordinates (hereinafter, remaining attention point coordinates) P1403 and P1404 are not.
- The remaining attention point coordinates P1403 are calculated (estimated) based on the road contour vectors V1405 to V1408, the detected attention point coordinates P1401 and P1402, and the direction vectors V1409 and V1410.
- The remaining attention point coordinates P1404 are likewise calculated based on the road contour vectors V1405 to V1408, the detected attention point coordinates P1401 and P1402, and the direction vectors V1411 and V1412.
- The remaining attention point coordinates P1403 and P1404 in the camera image data calculated in this way are added to the previously detected attention point coordinates P1401 and P1402.
- such calculation (estimation) and addition of attention point coordinates are referred to as change of attention point coordinates.
- Focus point coordinates in the camera image data generated by the process of changing the focus point coordinates are output from the focus point coordinate detection unit 206.
- Note that the direction vector V1410 is opposite to the road contour vector V1407 and the direction vector V1411 is opposite to the road contour vector V1406; the reversed vectors are used in order to calculate the missing attention point coordinates P1403 and P1404.
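- Geometrically, each missing corner is the intersection of two straight lines: one through a detected corner along a road contour vector, the other along a reversed vector. A minimal sketch of that intersection (the pairing of specific points and vectors is inferred from the description above, not stated exactly by the patent):

```python
import numpy as np

def line_intersection(p, u, q, w):
    """Intersection of the 2-D lines p + s*u and q + t*w.

    Solves s*u - t*w = q - p for s, then returns p + s*u.  Raises
    numpy.linalg.LinAlgError if the lines are parallel."""
    p, u, q, w = (np.asarray(a, dtype=float) for a in (p, u, q, w))
    a = np.column_stack((u, -w))
    s, _t = np.linalg.solve(a, q - p)
    return p + s * u

# e.g. a remaining corner such as P1403 would be estimated as
# line_intersection(P1401, V1405, P1402, -V1407)   # pairing is illustrative
```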
- In step S3401, the image processing unit 110 acquires camera image data (FIG. 4) from the imaging unit 109.
- In step S3402, the road shape (road contour) is recognized by the luminance signal / color difference signal separation processing unit 202, the luminance signal processing unit 203, the color difference signal processing unit 204, and the image recognition unit 205, based on the acquired camera image data (FIG. 4).
- In step S3403, the attention point coordinate detection unit 206 further acquires map image data (FIG. 9) from the navigation control unit 106.
- In step S3404, the attention point coordinate detection unit 206 determines whether to calculate direction vectors. In the present embodiment they are not needed, so step S3405 is skipped and the process moves to step S3406.
- In step S3406, the attention point coordinate detection unit 206 detects intersection contour coordinates as the attention point coordinates.
- In step S3407, when all the attention point coordinates required to specify the intersection cannot be detected, the attention point coordinate detection unit 206 changes the attention point coordinates in the following step S3408, that is, calculates (estimates) the undetected attention point coordinates.
- In this way, the attention point coordinates can be changed (the undetected attention point coordinates can be calculated (estimated)).
- the present embodiment basically has the same configuration as that of the first embodiment, but differs from the first embodiment in the following points.
- The attention point coordinate detection unit 206 calculates road contour vectors V1501 to V1504 in the camera image data and then calculates the mutual intersection coordinates P1505 to P1508 of the calculated road contour vectors V1501 to V1504.
- The attention point coordinate detection unit 206 detects the calculated intersection coordinates P1505 to P1508 as the attention point coordinates (intersection contour coordinates).
- Specifically, road contour vectors V1501 to V1504 are calculated from the camera image data. From among these, a road contour vector that is located to the left of the baseline L1509 and is a direction vector of the first quadrant is detected as the left-side contour vector V1501 of the host vehicle's traveling road.
- The left-side contour vector of the host vehicle's traveling road should be limited to a direction vector of the first quadrant (see V1102 in FIG. 11); it is therefore detected under that constraint.
- Similarly, a road contour vector that is located to the right of the baseline L1509 and is a direction vector of the second quadrant is detected as the right-side contour vector V1502 of the host vehicle's traveling road.
- The right-side contour vector of the host vehicle's traveling road should be limited to a direction vector of the second quadrant (see V1101 in FIG. 11); it is therefore detected under that constraint.
- Road contour vectors V1503 and V1504 of the road crossing the host vehicle's traveling road are detected separately from the road contour vectors V1501 and V1502.
- The road contour vectors V1503 and V1504 are direction vectors intersecting the left-side contour vector V1501 and the right-side contour vector V1502.
- The coordinates at which the road contour vectors V1501 to V1504 selected as above intersect one another are regarded as coordinates indicating the contour of the intersection (intersection contour coordinates), and these coordinates are detected as the attention point coordinates.
- road contour vectors V1501 'to V1504' and attention point coordinates are calculated from the map image data by the same method.
- the point of interest coordinates and the road contour vector calculated from each of the camera image data and the map image data as described above are output from the point of interest coordinate detection unit 206 in a state where they are associated with each other.
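- In this embodiment the four intersection contour coordinates are simply the pairwise crossings of the host-road contour lines with the crossing-road contour lines. A sketch reusing the line_intersection() helper from the previous embodiment (the dictionary of fitted lines is hypothetical):

```python
# Each entry holds (point_on_line, unit_direction_vector) for a fitted
# road contour line; the keys mirror the vector labels in FIG. 15.
own_road   = [lines["V1501"], lines["V1502"]]   # host-road left/right edges
cross_road = [lines["V1503"], lines["V1504"]]   # crossing-road edges
corners = [line_intersection(p, u, q, w)        # P1505 .. P1508
           for (p, u) in own_road
           for (q, w) in cross_road]
```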
- In step S3401, the image processing unit 110 acquires camera image data (FIG. 4) from the imaging unit 109.
- In step S3402, the road shape (road contour) is recognized by the luminance signal / color difference signal separation processing unit 202, the luminance signal processing unit 203, the color difference signal processing unit 204, and the image recognition unit 205, based on the acquired camera image data (FIG. 4).
- In step S3403, the attention point coordinate detection unit 206 further acquires map image data (FIG. 9) from the navigation control unit 106.
- In step S3404, the attention point coordinate detection unit 206 determines whether to calculate direction vectors. In the present embodiment the direction vectors are necessary, so it is determined in step S3404 that they are to be calculated, and the process proceeds to steps S3405 and S3406.
- In step S3405, the attention point coordinate detection unit 206 calculates the direction vectors, and in step S3406 it likewise detects intersection contour coordinates as the attention point coordinates.
- Embodiment 4: An image deformation method and an image deformation apparatus according to the fourth embodiment of the present invention will be described with reference to FIGS. 1, 2, 16 to 18, and 34.
- the present embodiment basically has the same configuration as that of the first embodiment, but differs from the first embodiment in the following points.
- the image transformation apparatus includes an image recognition unit 205, a focus point coordinate detection unit 206, a coordinate conversion processing unit 208, and a selector 207.
- the selector 207 switches the input image to the coordinate conversion processing unit 208.
- the coordinate conversion processing unit 208 directly receives the target point coordinates in the camera image data and the target point coordinates in the map image data from the target point coordinate detection unit 206.
- The coordinate conversion processing unit 208 receives camera image data (generated by the luminance signal processing unit 203 and the color difference signal processing unit 204) and map image data (read out by the navigation control unit 106 from the map information database 107 and the update information database 108).
- the camera image data and the map image data are supplied to the coordinate conversion processing unit 208 while being changed as the vehicle travels.
- the change (switching) of map image data is performed by the selector 207.
- The coordinate conversion processing unit 208 is supplied, from the attention point coordinate detection unit 206, with the attention point coordinates P1601 to P1604 in the map image data and the attention point coordinates P1605 to P1608 in the camera image data (see the open circles in FIG. 16).
- After recognizing that attention point coordinates P1601 and P1605, P1602 and P1606, P1603 and P1607, and P1604 and P1608 correspond to each other, the coordinate conversion processing unit 208 calculates the distortion amount of the coordinates such that the corresponding attention point coordinates coincide with each other.
- The coordinate conversion processing unit 208 then performs image deformation of the map image data input from the navigation control unit 106 via the selector 207, or of the camera image data, by coordinate conversion according to the previously calculated distortion amount.
- For this image deformation, the bilinear method (linear density interpolation from the density values of the four surrounding pixels according to the coordinates), the bicubic method (interpolation from the 16 surrounding pixels), and methods for converting to an arbitrary quadrilateral can be used.
- In FIG. 16, quadrilaterals Q1609 and Q1610 are drawn by connecting the attention point coordinates P1601 to P1604 in the map image data and the attention point coordinates P1605 to P1608 in the camera image data with dotted lines, respectively.
- These quadrilaterals are drawn to aid understanding of the image deformation; they are not essential for calculating the distortion amount.
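- A common way to realize the "conversion to an arbitrary quadrilateral" mentioned above is a projective (perspective) transform fitted to the four associated attention point pairs; the distortion amount is then the transform itself. This is a sketch of that standard construction, not necessarily the exact method of the patent:

```python
import numpy as np

def quad_to_quad_transform(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """3x3 projective transform H mapping four source corners onto four
    destination corners (e.g. camera attention points P1605..P1608 onto
    map attention points P1601..P1604), solved from 8 linear equations."""
    a, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        a.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        a.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(a, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

# To warp an image, map every output pixel back through the inverse of H and
# sample the source image with bilinear interpolation (the bilinear method).
```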
- In step S3401, the image processing unit 110 acquires camera image data from the imaging unit 109.
- In step S3402, the road contour is recognized by the luminance signal / color difference signal separation processing unit 202, the luminance signal processing unit 203, the color difference signal processing unit 204, and the image recognition unit 205, based on the acquired camera image data (FIG. 4).
- In step S3403, the attention point coordinate detection unit 206 further acquires map image data (FIG. 9) from the navigation control unit 106.
- In step S3404, the attention point coordinate detection unit 206 determines whether to calculate direction vectors. In the present embodiment they are not needed, so step S3405 is skipped and the process moves to step S3406.
- In step S3406, the attention point coordinate detection unit 206 detects the attention point coordinates of the intersection.
- In step S3407, if all the attention point coordinates required to specify the intersection cannot be detected, the attention point coordinate detection unit 206 changes the attention point coordinates in the following step S3408, calculating (estimating) the undetected ones. In step S3409, the coordinate conversion processing unit 208 calculates the distortion amount of the coordinates, and in step S3410 the image data to be deformed is determined. In step S3411 or S3412, the coordinate conversion processing unit 208 performs deformation processing of the deformation target image data (camera image data or map image data).
- The coordinate conversion processing unit 208 calculates the distortion amount so that the attention point coordinates of the map image data coincide with those of the camera image data, and can then deform the map image data by performing coordinate conversion processing according to the calculated distortion amount.
- For example, the deformed map image data obtained by applying this image deformation processing to the map image data (see FIG. 9) corresponding to camera image data with the distortion amount shown in FIG. 16 is as shown in FIG. 17.
- Conversely, when performing image deformation processing (coordinate conversion processing) on the camera image data according to the distortion amount, the coordinate conversion processing unit 208 applies the deformation in the reverse vector direction to the camera image data input through the selector 207; the deformed camera image data shown in FIG. 18 is thus generated from the original camera image data.
- An image deformation method and an image deformation apparatus according to the fifth embodiment of the present invention will be described with reference to FIGS. 1, 2, 16 to 19, and 34.
- the present embodiment basically has the same configuration as that of the fourth embodiment, but differs from the fourth embodiment in the following points.
- the coordinate conversion processing unit 208 is supplied with the road contour vector in the camera image data and the road contour vector in the map image data from the focus point coordinate detection unit 206. Further, camera image data is supplied to the coordinate conversion processing unit 208 from the luminance signal processing unit 203 and the color difference signal processing unit 204. Furthermore, map image data is supplied from the navigation control unit 106 to the coordinate conversion processing unit 208. The camera image data and the map image data are mutually switched by the selector 207 and then supplied to the coordinate conversion processing unit 208.
- Suppose the direction vectors V1901 to V1904 (dotted lines) shown in FIG. 19 are supplied as the road contour vectors of the map image data, and the direction vectors V1905 to V1908 (black lines) as the road contour vectors of the camera image data.
- The coordinate conversion processing unit 208 detects that direction vector V1901 corresponds to direction vector V1905, V1902 to V1906, V1903 to V1907, and V1904 to V1908. In selecting the corresponding combinations of direction vectors, the combination that minimizes the mutual movement is selected from among the plural possible combinations of direction vectors.
- the coordinate conversion processing unit 208 calculates the amount of distortion based on the difference in the position of the corresponding direction vector pair selected as described above. Specifically, the distortion amount is calculated by the same method as that of the fourth embodiment described above.
- the coordinate conversion processing unit 208 performs image deformation processing on the road contour vectors V1901 to V1904 in the map image data supplied via the selector 207 according to the calculated distortion amount.
- As described in the fourth embodiment, this image deformation processing can use the bilinear method (linear interpolation) often used for image scaling, the bicubic method, and techniques for converting to an arbitrary quadrilateral.
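- The "minimum mutual movement" selection can be read as a small assignment problem over the candidate pairings. A brute-force sketch (the vector representation and the distance measure are assumptions; with only four vectors per image, exhaustive search is cheap):

```python
from itertools import permutations
import numpy as np

def match_direction_vectors(map_vecs, cam_vecs):
    """Pair each map road contour vector with a camera one so that the total
    movement (here: Euclidean distance between vector representations, e.g.
    stacked midpoint and direction) is minimized."""
    map_vecs = [np.asarray(v, dtype=float) for v in map_vecs]
    cam_vecs = [np.asarray(v, dtype=float) for v in cam_vecs]
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(cam_vecs))):
        cost = sum(np.linalg.norm(map_vecs[i] - cam_vecs[j])
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best  # best[i] = index of the camera vector matched to map vector i
```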
- In step S3401, the image processing unit 110 acquires camera image data from the imaging unit 109.
- In step S3402, the road contour is recognized by the luminance signal / color difference signal separation processing unit 202, the luminance signal processing unit 203, the color difference signal processing unit 204, and the image recognition unit 205, based on the acquired camera image data (FIG. 4).
- In step S3403, the attention point coordinate detection unit 206 further acquires map image data (FIG. 9) from the navigation control unit 106.
- In step S3404, the attention point coordinate detection unit 206 determines whether to calculate direction vectors. In the present embodiment they are necessary, so it is determined here that they are to be calculated, and the process proceeds to steps S3405 and S3406.
- In step S3405, the attention point coordinate detection unit 206 calculates the direction vectors, and in step S3406 it likewise detects intersection contour coordinates as the attention point coordinates.
- In step S3407, when all the attention point coordinates required to specify the contour of the intersection cannot be detected, the attention point coordinate detection unit 206 changes the attention point coordinates in the following step S3408, calculating (estimating) the undetected ones. In step S3409, the coordinate conversion processing unit 208 calculates the distortion amount of the coordinates, and in step S3410 the image data to be deformed is determined. In step S3411 or S3412, the coordinate conversion processing unit 208 performs deformation processing of the deformation target image data (camera image data or map image data).
- The coordinate conversion processing unit 208 calculates the distortion amount so that the attention point coordinates of the map image data coincide with those of the camera image data, and can then deform the map image data by performing the coordinate conversion processing. For example, the deformed map image data obtained by applying image deformation processing to the map image data (see FIG. 9) corresponding to camera image data with the distortion amount shown in FIG. 16 is as shown in FIG. 17.
- Conversely, when performing image deformation processing (coordinate conversion processing) on the camera image data according to the distortion amount, the coordinate conversion processing unit 208 applies the deformation in the reverse vector direction to the camera image data supplied via the selector 207; the deformed camera image data shown in FIG. 18 is thus generated from the original camera image data.
- The image display device includes an image deformation device having the same configuration as those described in the first to fifth embodiments, the image synthesis processing unit 111, and the image display processing unit 112.
- The coordinate conversion processing unit 208 reads out guidance route guidance arrow image data, one type of guidance route guidance image data, from the navigation control unit 106 and combines it with the map image data. For example, route guidance at an intersection is realized by combining the guidance route guidance arrow data A2001, whose image is shown in FIG. 20, with the map image data whose image is shown in FIG. 9.
- the coordinate conversion processing unit 208 generates the guidance route guidance arrow data (deformation) A2101 whose image is shown in FIG. 21 by performing the image modification processing described in the first to fifth embodiments on the guidance route guidance arrow data A2001. Then, the guidance route guidance arrow image data (deformation) A2101 is supplied to the image synthesis processing unit 111.
- camera image data is supplied to the image synthesis processing unit 111 via the selector 113.
- The deformed guidance route guidance arrow image data A2101 is combined with the camera image data in a state where their attention point coordinates are associated with each other; the result is the composite image data shown in FIG. 22.
- the image combining processing unit 111 supplies the combined image data combined as described above to the image display processing unit 112.
- the image display processing unit 112 displays an image of the supplied composite image data on a display screen or the like.
- In step S3501, the coordinate conversion processing unit 208 selects guidance route guidance image data as the deformation target image data.
- In step S3502, the coordinate conversion processing unit 208 acquires the guidance route guidance arrow image data from the navigation control unit 106.
- In step S3504, the coordinate conversion processing unit 208 deforms the acquired guidance route guidance arrow image data and supplies the deformed image data to the image combining processing unit 111.
- In step S3505, the image combining processing unit 111 acquires the camera image data.
- In step S3506, the image combining processing unit 111 combines the deformed guidance route guidance arrow image data supplied from the coordinate conversion processing unit 208 with the camera image data in a state where their positional coordinates are associated with each other, and supplies the composite image data to the image display processing unit 112.
- In step S3507, the image display processing unit 112 displays an image of the composite image data supplied from the image combining processing unit 111.
- In this way, the guidance route guidance arrow image data is read from the navigation device, image deformation according to the distortion amount is applied to it, the deformed guidance route guidance arrow image data is combined with the camera image data in a state where the attention point coordinates are associated with each other, and an image of the composite image data can be displayed (see FIG. 22).
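- The final composition step amounts to overlaying the deformed arrow layer on the camera frame wherever the arrow has opacity. A sketch, assuming the deformation step also produced a per-pixel opacity mask in camera coordinates (the mask is not described explicitly in the patent):

```python
import numpy as np

def overlay_guidance_arrow(camera: np.ndarray, arrow: np.ndarray,
                           alpha: np.ndarray) -> np.ndarray:
    """Composite the deformed guidance arrow layer over the camera frame.

    camera, arrow: HxWx3 images; alpha: HxW opacity mask, 1.0 where the
    arrow was drawn and 0.0 elsewhere, already in camera coordinates."""
    a = alpha[..., None].astype(float)          # broadcast over color channels
    out = a * arrow.astype(float) + (1.0 - a) * camera.astype(float)
    return out.astype(camera.dtype)
```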
- An image display method and an image display apparatus according to the seventh embodiment of the present invention will be described with reference to FIGS. 1, 2, 23 to 25, and 35.
- the present embodiment basically has the same configuration as that of the sixth embodiment, but differs from the sixth embodiment in the following points.
- In the present embodiment, map image data including guidance route guidance arrow image data, whose image is shown in FIG. 23, is read from the navigation control unit 106 as the guidance route guidance image data.
- Map image data including guidance route guidance arrow image data refers to image data that enables route guidance at an intersection by combining, in a positionally associated state, the map image data whose image is shown in FIG. 9 with, for example, the guidance route guidance arrow image data A2101 whose image is shown in FIG. 21.
- The coordinate conversion processing unit 208 performs the coordinate conversion processing described in the first to fifth embodiments on the map image data including the guidance route guidance arrow image data, generates the deformed map image data including the guidance route guidance arrow image data illustrated in FIG. 24, and outputs the generated data to the image synthesis processing unit 111.
- the image synthesis processing unit 111 performs a process of synthesizing map image data (deformation) including guidance route guidance arrow image data with camera image data.
- the selector 113 selects camera image data. For example, when the camera image data whose image is shown in FIG. 4 is selected, the map image data (deformation) including the guidance route guidance arrow image data whose image is shown in FIG. 24 is combined with it. The composition coefficient (the transparency of each layer) in this composite image processing can be changed arbitrarily. The image composition processing unit 111 outputs the composite image data, which is the composition result, to the image display processing unit 112, and the image display processing unit 112 displays an image of the supplied composite image data on a display screen or the like.
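- the adjustable composition coefficient itself reduces to a per-pixel weighted sum of the two layers; a minimal sketch (the function and parameter names are assumptions):

```python
import cv2

def blend_layers(camera_bgr, map_deformed_bgr, coeff=0.5):
    """Blend the camera frame with the deformed map layer.
    coeff is the composition coefficient: 0.0 shows only the camera
    image, 1.0 only the map layer (i.e. the map layer's opacity)."""
    return cv2.addWeighted(camera_bgr, 1.0 - coeff,
                           map_deformed_bgr, coeff, 0.0)
```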
- in step S3501, the navigation control unit 106 selects image data to serve as the guidance route guidance image data and outputs the selected image data to the selector 207. In the present embodiment, the navigation control unit 106 selects and outputs map image data including guidance route guidance arrow image data. The guidance route guidance image data and the camera image data are supplied to the selector 207; in the present embodiment, the guidance route guidance image data is selected and output.
- the coordinate conversion processing unit 208 thereby acquires the map image data including guidance route guidance arrow image data, which is the guidance route guidance image data (steps S3502 and S3503). In step S3504, the coordinate conversion processing unit 208 performs coordinate conversion processing on the map image data including guidance route guidance arrow image data supplied from the selector 207 to generate map image data (deformation) including guidance route guidance arrow image data, and outputs it to the image synthesis processing unit 111.
- in step S3505, the selector 113 selects the image data to serve as the synthesis target image data from the camera image data and the map image data, and outputs the selected image data to the image combining processing unit 111. In the present embodiment, the selector 113 selects camera image data as the synthesis target image data. The image synthesis processing unit 111 thus acquires the camera image data, which is the synthesis target image data, and the map image data (deformation) including guidance route guidance arrow image data.
- in step S3506, the image combining processing unit 111 combines the map image data (deformation) including guidance route guidance arrow image data with the camera image data in a state where their attention point coordinates are associated with each other, and outputs the result to the image display processing unit 112. In step S3507, the image display processing unit 112 displays an image of the composite image data.
- as described above, according to the present embodiment, map image data including guidance route guidance arrow image data is read out from the navigation control unit 106, image deformation is applied to it according to the distortion amount (the amount by which the map image data is distorted relative to the camera image data, calculated by the attention point coordinate detection unit 206), and the map image data (deformation) including the guidance route guidance arrow image data after the deformation is combined with the camera image data at a predetermined composition rate in a state where the attention point coordinates are associated with each other, so that the image shown in FIG. 25 can be displayed.
- an image display method and an image display apparatus according to an eighth embodiment of the present invention will be described with reference to FIGS. 1, 2, 26 to 28, and 36. The present embodiment basically has the same configuration as the sixth embodiment, but differs from it in the following points.
- the coordinate conversion processing unit 208 reads destination mark image data M2601 from the navigation control unit 106. The destination mark image data M2601 is one form of guidance route guidance image data; as shown in FIG. 26, it marks the destination on the image so that the driver can be guided to the destination.
- the coordinate conversion processing unit 208 performs the coordinate conversion processing described in the first to fifth embodiments on the destination mark image data M2601, deforming it into the image data illustrated in FIG. 27. The destination mark image data M2601 after the deformation is hereinafter referred to as destination mark image data (deformation) A2701. The coordinate conversion processing unit 208 outputs the generated destination mark image data (deformation) A2701 to the image combining processing unit 111.
- the selector 113 selects camera image data and outputs it to the image composition processing unit 111. The image combining processing unit 111 combines the camera image data with the destination mark image data (deformation) A2701 in a state where their attention point coordinates are associated with each other, and outputs the composite image data to the image display processing unit 112. The image display processing unit 112 displays an image of the supplied composite image data on a display screen or the like. For example, when the camera image data whose image is shown in FIG. 4 is combined with the destination mark image data (deformation) A2701, the image of the composite image data becomes as shown in FIG. 28.
- in step S3601, the navigation control unit 106 selects image data to serve as the guidance route guidance image data and outputs the selected image data to the selector 207. In the present embodiment, the destination mark image data M2601 is selected and output from the navigation control unit 106. The guidance route guidance image data (the destination mark image data M2601) from the navigation control unit 106 and the camera image data from the luminance signal processing unit 203 and the color difference signal processing unit 204 are input to the selector 207. The selector 207 selects the destination mark image data M2601 supplied from the navigation control unit 106 and supplies it to the coordinate conversion processing unit 208, which thereby acquires the destination mark image data M2601 (steps S3602 and S3603). The coordinate conversion processing unit 208 then deforms the supplied destination mark image data M2601 (step S3604).
- the destination mark image data M2601 from the navigation control unit 106 and the camera image data from the luminance signal processing unit 203 and the color difference signal processing unit 204 are input to the selector 113. The selector 113 selects the camera image data supplied from the luminance signal processing unit 203 and the color difference signal processing unit 204 and supplies it to the image combining processing unit 111, which thereby obtains the camera image data (step S3605).
- the image combining processing unit 111 determines whether the change mode of the target image is set (step S3606). In the present embodiment, since the change mode is not set, the process proceeds to step S3607, in which the image combining processing unit 111 combines the destination mark image data (deformation) with the camera image data in a state where their attention point coordinates are associated with each other, and outputs the result to the image display processing unit 112. The image display processing unit 112 displays the composite image data supplied from the image combining processing unit 111 (step S3608); the displayed image is shown in FIG. 28.
- as described above, according to the present embodiment, the destination mark image data is read out from the navigation control unit 106, image deformation according to the distortion amount is applied to it, and the destination mark image data (deformation) and the camera image data can be combined and displayed in a state where their attention point coordinates are associated with each other.
- an image display method and an image display apparatus according to a ninth embodiment of the present invention will be described with reference to FIGS. 1, 2, 29 to 31, and 36. The present embodiment basically has the same configuration as the sixth embodiment, but differs from it in the following points.
- the coordinate conversion processing unit 208 reads map image data including destination mark image data from the navigation control unit 106. The coordinate conversion processing unit 208 transforms the map image data M2901 including destination mark image data, which is one form of guidance route guidance image data, in the same manner as described in the first to fifth embodiments, into the map image data including a destination mark whose image is shown in FIG. 30. The map image data after the deformation is hereinafter referred to as map image data (deformation) A3001 including destination mark image data. The coordinate conversion processing unit 208 outputs the generated map image data (deformation) A3001 including destination mark image data to the image combining processing unit 111.
- the selector 113 selects camera image data and outputs it to the image composition processing unit 111. The image synthesis processing unit 111 synthesizes the camera image data with the map image data (deformation) A3001 including destination mark image data in a state where their attention point coordinates are associated with each other, and outputs the composite image data to the image display processing unit 112, which displays an image of the composite image data on a display screen or the like. The image of the composite image data is as shown in FIG. 31. Note that the composition coefficient (the transparency of each layer) between the camera image data and the guide map image data can be changed arbitrarily.
- in step S3601, the navigation control unit 106 selects image data to serve as the guidance route guidance image data and outputs the selected image data to the selector 207. In the present embodiment, the map image data M2901 including destination mark image data is selected and output from the navigation control unit 106. The map image data M2901 including destination mark image data from the navigation control unit 106 and the camera image data from the luminance signal processing unit 203 and the color difference signal processing unit 204 are input to the selector 207. The selector 207 selects the map image data M2901 including destination mark image data supplied from the navigation control unit 106 and supplies it to the coordinate conversion processing unit 208, which thereby acquires the map image data M2901 including destination mark image data (steps S3602 and S3603).
- the coordinate conversion processing unit 208 performs image deformation on the supplied map image data M2901 including destination mark image data (step S3604); the map image data after the deformation is referred to as map image data (deformation) A3001 including destination mark image data.
- the map image data M2901 including destination mark image data from the navigation control unit 106 and the camera image data from the luminance signal processing unit 203 and the color difference signal processing unit 204 are input to the selector 113. The selector 113 selects the camera image data supplied from the luminance signal processing unit 203 and the color difference signal processing unit 204 and supplies it to the image combining processing unit 111, which thereby obtains the camera image data (step S3605).
- the image combining processing unit 111 determines whether the change mode of the target image is set (step S3606). In the present embodiment, since the change mode is not set, the process proceeds to step S3607, in which the image synthesis processing unit 111 generates composite image data by synthesizing the map image data (deformation) A3001 including destination mark image data with the camera image data in a state where their attention point coordinates are associated with each other, and outputs the composite image data to the image display processing unit 112. The image display processing unit 112 displays the composite image data supplied from the image combining processing unit 111 (step S3608); the displayed image is shown in FIG. 31.
- as described above, according to the present embodiment, map image data including destination mark image data is read out from the navigation control unit 106, image deformation according to the distortion amount is applied to it, and the map image data (deformation) including destination mark image data and the camera image data can be combined and displayed in a state where their attention point coordinates are associated with each other.
- an image display method and an image display apparatus according to a tenth embodiment of the present invention will be described with reference to FIGS. 1, 2, 26, 27, 32, 33, and 36. The present embodiment basically has the same configuration as the sixth embodiment, but differs from it in the following points.
- the coordinate conversion processing unit 208 reads the destination mark image data M2601, or the map image data M2901 including destination mark image data, from the navigation control unit 106. For example, guidance to the destination is provided by combining the map image data whose image is shown in FIG. 9 with the destination mark image data M2601 whose image is shown in FIG. 26. A more detailed description is given below. The coordinate conversion processing unit 208 converts the destination mark image data M2601 into the destination mark image data (deformation) A2701 shown in FIG. 27 in the same manner as described in the first to fifth embodiments, and outputs the coordinate-transformed image data to the image composition processing unit 111.
- the selector 113 selects camera image data and outputs it to the image composition processing unit 111. The image composition processing unit 111 adjusts the image of the camera image data based on the destination mark image data (deformation) A2701 to generate adjusted image data. That is, when, for example, the camera image data whose image is shown in FIG. 4 is used, the contour information of the camera image data is changed in the region surrounding, or located at, the coordinates of the destination mark in the destination mark image data (deformation) A2701. The image synthesis processing unit 111 can obtain the contour information of the camera image data by using the data from the luminance signal processing unit 203.
- an example image of camera image data E3201 whose contour information has been changed in this manner is shown in FIG. 32. The image synthesis processing unit 111 outputs the camera image data E3201 whose contour information has been changed to the image display processing unit 112, and the image display processing unit 112 displays an image of the supplied camera image data E3201 on a display screen or the like. The image composition processing unit 111 may change not only the contour information of the camera image data but also the color difference information in the region surrounding, or located at, the coordinates of the destination mark. The image synthesis processing unit 111 can acquire the color difference information of the camera image data by using the data from the color difference signal processing unit 204. An example image of camera image data E3301 in which the color difference information has been changed is shown in FIG. 33.
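- as a rough illustration of this contour and color-difference emphasis, the sketch below sharpens the luminance contours and saturates the chroma inside a window around the destination mark coordinates; the window size and the gain values are arbitrary assumptions, not values taken from the patent:

```python
import cv2
import numpy as np

def emphasize_region(camera_bgr, dest_xy, radius=40,
                     edge_gain=1.5, chroma_gain=1.4):
    """Emphasize the object at integer pixel coordinates dest_xy by
    sharpening contours (luminance) and saturating color differences
    (chroma) inside a square window around the destination mark."""
    x, y = dest_xy
    h, w = camera_bgr.shape[:2]
    x0, x1 = max(0, x - radius), min(w, x + radius)
    y0, y1 = max(0, y - radius), min(h, y + radius)

    ycrcb = cv2.cvtColor(camera_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    roi = ycrcb[y0:y1, x0:x1]

    # Contour emphasis: add a scaled Laplacian to the luminance channel.
    lum = np.ascontiguousarray(roi[:, :, 0])
    lap = cv2.Laplacian(lum, cv2.CV_32F)
    roi[:, :, 0] = np.clip(lum + edge_gain * lap, 0, 255)

    # Color-difference emphasis: push Cr/Cb away from the neutral value 128.
    roi[:, :, 1:] = np.clip(128 + chroma_gain * (roi[:, :, 1:] - 128), 0, 255)

    ycrcb[y0:y1, x0:x1] = roi
    return cv2.cvtColor(ycrcb.astype(np.uint8), cv2.COLOR_YCrCb2BGR)
```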
- in step S3601, the navigation control unit 106 outputs the destination mark image data M2601 to the selector 207. The destination mark image data M2601 from the navigation control unit 106 and the camera image data from the luminance signal processing unit 203 and the color difference signal processing unit 204 are input to the selector 207. In the processing of the present embodiment, the selector 207 selects the destination mark image data M2601 supplied from the navigation control unit 106 and supplies it to the coordinate conversion processing unit 208, which thereby acquires the destination mark image data M2601 (steps S3602 and S3603). The coordinate conversion processing unit 208 performs image deformation on the supplied destination mark image data M2601 (step S3604). The destination mark image data M2601 after the deformation is referred to as destination mark image data (deformation) A2701.
- the destination mark image data M2601 from the navigation control unit 106 and the camera image data from the luminance signal processing unit 203 and the color difference signal processing unit 204 are input to the selector 113. The selector 113 selects the camera image data supplied from the luminance signal processing unit 203 and the color difference signal processing unit 204 and supplies it to the image combining processing unit 111, which thereby obtains the camera image data (step S3605).
- the image combining processing unit 111 determines whether the change mode of the target image is set (step S3606). In the present embodiment, since the change mode is set, the process proceeds to step S3609, in which the image combining processing unit 111 calculates the coordinates of the destination mark in the destination mark image data (deformation) A2701. The image synthesis processing unit 111 then generates adjusted image data by adjusting the camera image data in the region surrounding, or located at, the calculated coordinates, and outputs the adjusted image data to the image display processing unit 112. The adjustment is performed by changing the contour information or the color difference information. The image display processing unit 112 displays the adjusted image data supplied from the image combining processing unit 111 (step S3611); the displayed image is shown in FIG. 32 or FIG. 33.
- as described above, according to the present embodiment, the information on the destination to be guided to is read out from the navigation device, image deformation according to the distortion amount is applied to it, and the image (contour or color difference) of the object located at the corresponding coordinates in the camera image data can be adjusted so as to be emphasized.
- in addition, the map image data is referred to in order to determine whether there is an intersection to be entered next. Since the road direction to which the driver should pay attention is calculated in advance, the intersection image can be displayed on the display almost simultaneously with the vehicle entering the intersection. As a result, the driver or passengers can be alerted, providing safe driving support. The intersection image may also be displayed when the system is not in route guidance mode; in this case as well, the intersection that the road on which the vehicle is traveling will cross next can be determined from the vehicle position and the map image data, and a predetermined road direction at that intersection can be calculated.
- the present invention can also be practiced at T-junctions and three-way forks, as well as at junctions with multiple branches. The road type is not limited to an intersection between a priority road and a non-priority road; it may be an intersection with a traffic signal or an intersection of roads having multiple lanes.
- the present invention has been described on the premise that the navigation apparatus synthesizes the guidance route guidance image data, the destination mark image data, and the camera image data as a navigation assistance application for a vehicle driver. However, the present invention can also be implemented in configurations in which various kinds of guidance image data are combined with particular image data. The image deformation method and apparatus and the image display method and apparatus according to the present invention can be used in computer devices and the like provided with a navigation function; in addition to the navigation function, an audio function, a video function, and the like may also be included.
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Navigation (AREA)
- Traffic Control Systems (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
- Instructional Devices (AREA)
Abstract
Description
- identifying the installation position, the viewing angle, and the focal length of the camera,
- aligning the center of the intersection with the center of the camera's viewing angle,
and furthermore
- matching the map information position input from the navigation device with the vehicle position
are required; if these are not matched, the right/left-turn arrows at the intersection cannot be accurately combined with the map information, and as a result the driver of the host vehicle may be given erroneous guidance at the intersection.
The image deformation method includes: a first step of recognizing a first road shape in camera image data based on the camera image data generated by a camera that captures an external image from the host vehicle; and a second step of reading map image data of the vicinity of the host vehicle from a navigation device, detecting second attention point coordinates present in a second road shape in the read map image data and first attention point coordinates present in the first road shape, and then associating the first attention point coordinates with the second attention point coordinates.
In the first step, a contour component in the camera image data is detected based on the luminance signal of the camera image data, and the first road shape is recognized based on the contour component located at the edge of a second image region having pixel information equivalent to that of a first image region estimated to be a road in the camera image data.
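The patent leaves this recognition step abstract; the following is a minimal Python/OpenCV sketch under the assumption that a small seed patch directly in front of the vehicle is road surface. The patch location, the similarity tolerance, and the edge-detection thresholds are all illustrative assumptions, not values from the source:

```python
import cv2
import numpy as np

def recognize_road_shape(camera_bgr, seed_rect=(280, 400, 80, 60), tol=18.0):
    """Estimate the road region from pixel similarity to a seed patch,
    then keep only the contour components on the border of that region.
    seed_rect = (x, y, w, h) of a patch assumed to be road surface
    (the default assumes roughly a 640x480 frame)."""
    x, y, w, h = seed_rect
    ycrcb = cv2.cvtColor(camera_bgr, cv2.COLOR_BGR2YCrCb)
    seed = ycrcb[y:y + h, x:x + w].reshape(-1, 3).mean(axis=0)

    # Second image region: pixels whose information is close to the seed.
    dist = np.linalg.norm(ycrcb.astype(np.float32) - seed, axis=2)
    road_mask = (dist < tol).astype(np.uint8) * 255
    road_mask = cv2.morphologyEx(road_mask, cv2.MORPH_CLOSE,
                                 np.ones((9, 9), np.uint8))

    # Contour components detected from the luminance signal.
    edges = cv2.Canny(cv2.cvtColor(camera_bgr, cv2.COLOR_BGR2GRAY), 80, 160)

    # Keep only the edges lying on the border of the road-like region.
    border = cv2.morphologyEx(road_mask, cv2.MORPH_GRADIENT,
                              np.ones((3, 3), np.uint8))
    return cv2.bitwise_and(edges, border)
```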
In the first step, a road contour is recognized as the first road shape. In the second step, second intersection contour coordinates in the road region of the map image data are detected as the second attention point coordinates; also in the second step, inflection point coordinates of the road contour in the camera image data are recognized as first intersection contour coordinates, and the recognized first intersection contour coordinates are detected as the first attention point coordinates.
In the first step, a road contour is recognized as the first road shape. In the second step, first intersection contour coordinates in the road region of the camera image data are recognized as the first attention point coordinates, and when the recognized first attention point coordinates are insufficient as the first intersection contour coordinates, the missing first attention point coordinates are estimated based on the recognized first attention point coordinates.
In the first step, a road contour is recognized as the first road shape. In the second step, second intersection contour coordinates in the road region of the map image data are detected as the second attention point coordinates; also in the second step, a first direction vector of the contour component in the camera image data is detected, first intersection contour coordinates are recognized based on the detected first direction vector, and the recognized first intersection contour coordinates are detected as the first attention point coordinates.
The method may further include a third step of calculating the amount of distortion arising between the associated first attention point coordinates and second attention point coordinates, and coordinate-transforming the map image data or the camera image data so that the image of the map image data or the camera image data is deformed according to the calculated distortion amount. In the third step, the distortion amount is calculated so that the first attention point coordinates and the second attention point coordinates coincide.
In the second step, a second direction vector of the road region in the map image data and a first direction vector of the contour component in the camera image data are detected. In the third step, the first and second direction vectors are associated with each other so that they can be moved onto each other with the minimum amount of movement, and the distortion amount is calculated based on the difference between the associated first and second direction vectors. A sketch of one possible pairing procedure is given below.
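The patent does not prescribe how the minimum-movement correspondence is computed; one plausible reading is a greedy nearest-pair assignment over a displacement-plus-direction cost, sketched here (the function name, the array layout, and the cost definition are assumptions):

```python
import numpy as np

def match_vectors_min_motion(cam_vecs, map_vecs):
    """Pair first (camera) and second (map) direction vectors so the total
    displacement between paired vectors stays small, and return the paired
    offsets as a per-pair distortion estimate.

    cam_vecs, map_vecs : (N, 4) arrays of (x, y, dx, dy) anchored vectors.
    """
    cam = np.asarray(cam_vecs, dtype=np.float32)
    mp = np.asarray(map_vecs, dtype=np.float32)
    # Cost = displacement between anchor points plus direction mismatch.
    cost = (np.linalg.norm(cam[:, None, :2] - mp[None, :, :2], axis=2)
            + np.linalg.norm(cam[:, None, 2:] - mp[None, :, 2:], axis=2))
    pairs, used_a, used_b = [], set(), set()
    # Greedy: repeatedly take the globally cheapest remaining pair.
    for i in np.argsort(cost, axis=None):
        a, b = np.unravel_index(i, cost.shape)
        if a not in used_a and b not in used_b:
            pairs.append((a, b))
            used_a.add(a)
            used_b.add(b)
    # Distortion: offset each camera vector needs to meet its map match.
    return [(a, b, mp[b, :2] - cam[a, :2]) for a, b in pairs]
```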
The image display method of the present invention includes the first and second steps of the image deformation method of the present invention and a fourth step. In the fourth step, the camera image data and the map image data are combined in a state where the first and second attention point coordinates are associated with each other, and an image of the resulting composite image data is displayed.
Another image display method of the present invention includes the first to third steps of the image deformation method of the present invention and a fifth step. In the first step, guidance route guidance image data positionally corresponding to the map image data is further read from the navigation device. In the third step, the guidance route guidance image data, instead of the map image data or the camera image data, is coordinate-transformed so that its image is deformed according to the distortion amount. In the fifth step, the deformed guidance route guidance image data and the undeformed camera image data are combined so that the image of the deformed guidance route guidance image data positionally corresponds to the image of the undeformed camera image data, and an image of the resulting composite image data is displayed.
Yet another image display method of the present invention includes the first to third steps of the image deformation method of the present invention and a sixth step. In the first step, map image data including guidance route guidance image data is read from the navigation device as the map image data. In the third step, the map image data including the guidance route guidance image data is coordinate-transformed so that its image is deformed according to the distortion amount. In the sixth step, the deformed map image data including the guidance route guidance image data and the image of the undeformed camera image data are combined so that the image of the deformed map image data positionally corresponds to the image of the undeformed camera image data, and an image of the resulting composite image data is displayed.
The image deformation apparatus includes: an image recognition unit that recognizes a first road shape in camera image data based on the camera image data generated by a camera that captures an external image from the host vehicle; an attention point coordinate detection unit that reads map image data of the vicinity of the host vehicle from a navigation device, detects second attention point coordinates present in a second road shape in the read map image data and first attention point coordinates present in the first road shape, and associates the first attention point coordinates with the second attention point coordinates; and a coordinate conversion processing unit that calculates the amount of distortion arising between the first and second attention point coordinates associated by the attention point coordinate detection unit, and coordinate-transforms the map image data or the camera image data so that the image of the map image data or the camera image data is deformed according to the calculated distortion amount.
The image display apparatus includes: the image deformation apparatus of the present invention; an image composition processing unit that combines the camera image data with the coordinate-transformed map image data, or the map image data with the coordinate-transformed camera image data, in a state where the two sets of attention point coordinates are associated with each other, to generate composite image data; and an image display processing unit that generates a display signal based on the composite image data.
The coordinate conversion processing unit further reads, from the navigation device, guidance route guidance image data positionally corresponding to the map image data, and coordinate-transforms the guidance route guidance image data so that its image is deformed according to the distortion amount; the image composition processing unit combines the camera image data with the coordinate-transformed guidance route guidance image data so that the image of the deformed guidance route guidance image data positionally corresponds to the image of the undeformed camera image data.
The coordinate conversion processing unit reads, from the navigation device, map image data including guidance route guidance image data positionally corresponding to the map image data, as the map image data, and coordinate-transforms the map image data including the guidance route guidance image data so that its image is deformed according to the distortion amount; the image composition processing unit combines the camera image data with the coordinate-transformed map image data including the guidance route guidance image data so that the image of the deformed map image data positionally corresponds to the image of the undeformed camera image data.
The guidance route guidance image data is image data indicating the position of the destination to be guided to, or image data indicating the direction toward the destination to be guided to. The image composition processing unit adjusts the luminance signal or the color-difference signal of the region of the camera image data positionally corresponding to the image data indicating the position of the destination to be guided to, which is the coordinate-transformed guidance route guidance image data, and then combines it with the guidance route guidance image data.
102 Autonomous navigation control unit
103 GPS control unit
104 VICS information receiver
105 Audio output unit
106 Navigation control unit
107 Map information database
108 Update information database
109 Imaging unit
110 Image processing unit
111 Image composition processing unit
112 Image display processing unit
113 Selector
202 Luminance signal/color-difference signal separation processing unit
203 Luminance signal processing unit
204 Color-difference signal processing unit
205 Image recognition unit
206 Attention point coordinate detection unit
207 Selector
208 Coordinate conversion processing unit
An image deformation method and an image deformation apparatus according to a first embodiment of the present invention will be described with reference to FIGS. 1 to 12 and 34. FIG. 2 is a block diagram of the image deformation apparatus and its peripheral devices; parts corresponding to those in FIG. 1 are given the same reference numerals.
Y=0.29891×R+0.58661×G+0.11448×B
U=-0.16874×R-0.33126×G+0.50000×B
V=0.50000×R-0.41869×G-0.08131×B
The luminance signal/color-difference signal separation processing unit 202 may also convert the RGB three-color data input from the imaging unit 109 into a Y signal, a Cb signal, and a Cr signal using the following YCbCr color space conversion equations defined in ITU-R BT.601.
Y=0.257R+0.504G+0.098B+16
Cb=-0.148R-0.291G+0.439B+128
Cr=0.439R-0.368G-0.071B+128
Here, the Y signal represents the luminance signal (brightness), the Cb and U signals represent the blue difference signals (color-difference signals), and the Cr and V signals represent the red difference signals.
R=1.0-C
G=1.0-M
B=1.0-Y
Note that in a configuration in which Y, U, and V signals are input from the imaging unit 109, the luminance signal/color-difference signal separation processing unit 202 performs no particular signal conversion and only separates the signals.
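For reference, the conversion equations above translate directly into code; the following sketch simply restates them (scalar or NumPy-array arguments both work):

```python
import numpy as np  # optional; plain floats work as well

def rgb_to_yuv(r, g, b):
    """Full-range RGB -> YUV, matching the first set of equations."""
    y = 0.29891 * r + 0.58661 * g + 0.11448 * b
    u = -0.16874 * r - 0.33126 * g + 0.50000 * b
    v = 0.50000 * r - 0.41869 * g - 0.08131 * b
    return y, u, v

def rgb_to_ycbcr_bt601(r, g, b):
    """8-bit RGB -> studio-range YCbCr per ITU-R BT.601."""
    y  = 0.257 * r + 0.504 * g + 0.098 * b + 16
    cb = -0.148 * r - 0.291 * g + 0.439 * b + 128
    cr = 0.439 * r - 0.368 * g - 0.071 * b + 128
    return y, cb, cr

def cmy_to_rgb(c, m, y):
    """Complementary CMY (0..1) -> RGB (0..1)."""
    return 1.0 - c, 1.0 - m, 1.0 - y
```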
1) The map image data (FIG. 9) is divided into left and right halves on the screen by a vertical baseline L1205, as shown in FIG. 12.
2) Road contour vectors V1206 and V1207 are calculated for the left and right sides, respectively. The direction vector V1206 is limited to first-quadrant direction vectors, as indicated by V1102 in FIG. 11, and the direction vector V1207 is limited to second-quadrant direction vectors, as indicated by V1101 in FIG. 11.
3) Inflection point coordinates on the road contours along the road contour vectors V1206 and V1207 are calculated as attention points (attention point coordinates); a sketch of this step is given after this list.
4) The attention point coordinates in the camera image (FIG. 6) and in the map image (FIG. 9) are output.
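A rough sketch of the inflection-point step 3), assuming each road contour is available as an ordered polyline per side of the baseline; the turning-angle test and its threshold are assumptions, since the patent does not fix a particular criterion:

```python
import numpy as np

def detect_attention_points(contour_left, contour_right,
                            angle_thresh_deg=20.0):
    """Find inflection (bend) points along the left/right road contours.
    contour_left / contour_right : (N, 2) ordered polylines, already split
    by the vertical baseline (step 1) and restricted to first/second
    quadrant direction vectors (step 2)."""
    def inflections(poly):
        poly = np.asarray(poly, dtype=np.float32)
        d = np.diff(poly, axis=0)                     # direction vectors
        ang = np.degrees(np.arctan2(d[:, 1], d[:, 0]))
        turn = np.abs(np.diff(ang))
        turn = np.minimum(turn, 360.0 - turn)         # angle wrap-around
        # A vertex is an attention point when the contour bends sharply.
        return poly[1:-1][turn > angle_thresh_deg]
    return inflections(contour_left), inflections(contour_right)
```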
An image deformation method and an image deformation apparatus according to a second embodiment of the present invention will be described with reference to FIGS. 1, 2, 13, 14, and 34. The present embodiment basically has the same configuration as the first embodiment, but differs from it in the following points.
An image deformation method and an image deformation apparatus according to a third embodiment of the present invention will be described with reference to FIGS. 1, 2, 11, 15, and 34. The present embodiment basically has the same configuration as the first embodiment, but differs from it in the following points.
A road contour vector that
- is located to the left of the baseline L1509, and
- is a first-quadrant direction vector,
is detected as the left contour vector V1501 of the road on which the host vehicle is traveling. A road contour vector that
- is located to the right of the baseline L1509, and
- is a second-quadrant direction vector,
is further detected as the right contour vector V1502 of the road on which the host vehicle is traveling.
An image deformation method and an image deformation apparatus according to a fourth embodiment of the present invention will be described with reference to FIGS. 1, 2, 16 to 18, and 34. The present embodiment basically has the same configuration as the first embodiment, but differs from it in the following points. In the present embodiment, the image deformation apparatus comprises the image recognition unit 205, the attention point coordinate detection unit 206, the coordinate conversion processing unit 208, and the selector 207. The selector 207 switches the image input to the coordinate conversion processing unit 208.
An image deformation method and an image deformation apparatus according to a fifth embodiment of the present invention will be described with reference to FIGS. 1, 2, 16 to 19, and 34. The present embodiment basically has the same configuration as the fourth embodiment, but differs from it in the following points.
An image display method and an image display apparatus according to a sixth embodiment of the present invention will be described with reference to FIGS. 1, 2, 20 to 22, and 35. The image display apparatus of the present embodiment comprises an image deformation apparatus having the same configuration as the image deformation apparatuses shown in the first to fifth embodiments, the image composition processing unit 111, and the image display processing unit 112.
An image display method and an image display apparatus according to a seventh embodiment of the present invention will be described with reference to FIGS. 1, 2, 23 to 25, and 35. The present embodiment basically has the same configuration as the sixth embodiment, but differs from it in the following points.
An image display method and an image display apparatus according to an eighth embodiment of the present invention will be described with reference to FIGS. 1, 2, 26 to 28, and 36. The present embodiment basically has the same configuration as the sixth embodiment, but differs from it in the following points.
An image display method and an image display apparatus according to a ninth embodiment of the present invention will be described with reference to FIGS. 1, 2, 29 to 31, and 36. The present embodiment basically has the same configuration as the sixth embodiment, but differs from it in the following points.
An image display method and an image display apparatus according to a tenth embodiment of the present invention will be described with reference to FIGS. 1, 2, 26, 27, 32, 33, and 36. The present embodiment basically has the same configuration as the sixth embodiment, but differs from it in the following points.
Claims (23)
- An image deformation method comprising: a first step of recognizing a first road shape in camera image data based on the camera image data generated by a camera that captures an external image from a host vehicle; and a second step of reading map image data of the vicinity of the host vehicle from a navigation device, detecting second attention point coordinates present in a second road shape in the read map image data and first attention point coordinates present in the first road shape, and associating the first attention point coordinates with the second attention point coordinates.
- The image deformation method according to claim 1, wherein in the first step, a contour component in the camera image data is detected based on a luminance signal of the camera image data, and the first road shape is recognized based on the contour component located at an edge of a second image region having pixel information equivalent to that of a first image region estimated to be a road in the camera image data.
- The image deformation method according to claim 1, wherein in the first step, a road contour is recognized as the first road shape, and in the second step, second intersection contour coordinates in a road region of the map image data are detected as the second attention point coordinates, inflection point coordinates of the road contour in the camera image data are recognized as first intersection contour coordinates, and the recognized first intersection contour coordinates are detected as the first attention point coordinates.
- The image deformation method according to claim 1, wherein in the first step, a road contour is recognized as the first road shape, and in the second step, first intersection contour coordinates in a road region of the camera image data are recognized as the first attention point coordinates, and when the recognized first attention point coordinates are insufficient as the first intersection contour coordinates, the missing first attention point coordinates are estimated based on the recognized first attention point coordinates.
- The image deformation method according to claim 1, wherein in the first step, a road contour is recognized as the first road shape, and in the second step, second intersection contour coordinates in a road region of the map image data are detected as the second attention point coordinates, a first direction vector of a contour component in the camera image data is detected, first intersection contour coordinates are recognized based on the detected first direction vector, and the recognized first intersection contour coordinates are detected as the first attention point coordinates.
- The image deformation method according to claim 1, further comprising a third step of calculating an amount of distortion arising between the associated first attention point coordinates and second attention point coordinates, and coordinate-transforming the map image data or the camera image data so that an image of the map image data or the camera image data is deformed according to the calculated distortion amount.
- The image deformation method according to claim 6, wherein in the third step, the distortion amount is calculated so that the first attention point coordinates and the second attention point coordinates coincide.
- The image deformation method according to claim 6, wherein in the second step, a second direction vector of a road region in the map image data and a first direction vector of a contour component in the camera image data are detected, and in the third step, the first and second direction vectors are associated with each other so that they can be moved onto each other with a minimum amount of movement, and the distortion amount is calculated based on the difference between the associated first and second direction vectors.
- An image display method comprising the first and second steps of the image deformation method according to claim 1 and a fourth step, wherein in the fourth step, the camera image data and the map image data are combined in a state where the first and second attention point coordinates are associated with each other, and an image of the resulting composite image data is displayed.
- An image display method comprising the first to third steps of the image deformation method according to claim 6 and a fifth step, wherein in the first step, guidance route guidance image data positionally corresponding to the map image data is further read from the navigation device; in the third step, the guidance route guidance image data, instead of the map image data or the camera image data, is coordinate-transformed so that its image is deformed according to the distortion amount; and in the fifth step, the deformed guidance route guidance image data and the undeformed camera image data are combined so that the image of the deformed guidance route guidance image data positionally corresponds to the image of the undeformed camera image data, and an image of the resulting composite image data is displayed.
- An image display method comprising the first to third steps of the image deformation method according to claim 6 and a sixth step, wherein in the first step, map image data including guidance route guidance image data is read from the navigation device as the map image data; in the third step, the map image data including the guidance route guidance image data is coordinate-transformed so that its image is deformed according to the distortion amount; and in the sixth step, the deformed map image data including the guidance route guidance image data and the image of the undeformed camera image data are combined so that the image of the deformed map image data positionally corresponds to the image of the undeformed camera image data, and an image of the resulting composite image data is displayed.
- The image display method according to claim 10, wherein the guidance route guidance image data is image data indicating the position of a destination to be guided to.
- The image display method according to claim 10, wherein the guidance route guidance image data is image data indicating a direction toward a destination to be guided to.
- The image display method according to claim 11, wherein the guidance route guidance image data is image data indicating the position of a destination to be guided to.
- The image display method according to claim 11, wherein the guidance route guidance image data is image data indicating a direction toward a destination to be guided to.
- An image deformation apparatus comprising: an image recognition unit that recognizes a first road shape in camera image data based on the camera image data generated by a camera that captures an external image from a host vehicle; an attention point coordinate detection unit that reads map image data of the vicinity of the host vehicle from a navigation device, detects second attention point coordinates present in a second road shape in the read map image data and first attention point coordinates present in the first road shape, and associates the first attention point coordinates with the second attention point coordinates; and a coordinate conversion processing unit that calculates an amount of distortion arising between the first and second attention point coordinates associated by the attention point coordinate detection unit, and coordinate-transforms the map image data or the camera image data so that an image of the map image data or the camera image data is deformed according to the calculated distortion amount.
- The image deformation apparatus according to claim 16, wherein the image recognition unit comprises: a luminance signal/color-difference signal separation processing unit that extracts a luminance signal and a color-difference signal from the camera image data; a luminance signal processing unit that generates a contour signal based on the luminance signal; a color-difference signal processing unit that extracts, from the camera image data, a color-difference signal in an image region estimated to be a road in the camera image data; and an image recognition unit that recognizes the first road shape based on the contour signal and the color-difference signal in the image region.
- An image display apparatus comprising: the image deformation apparatus according to claim 16; an image composition processing unit that combines the camera image data with the coordinate-transformed map image data, or the map image data with the coordinate-transformed camera image data, in a state where the two sets of attention point coordinates are associated with each other, to generate composite image data; and an image display processing unit that generates a display signal based on the composite image data.
- The image display apparatus according to claim 18, wherein the coordinate conversion processing unit further reads, from the navigation device, guidance route guidance image data positionally corresponding to the map image data, and coordinate-transforms the guidance route guidance image data so that its image is deformed according to the distortion amount, and the image composition processing unit combines the camera image data with the coordinate-transformed guidance route guidance image data so that the image of the deformed guidance route guidance image data positionally corresponds to the image of the undeformed camera image data.
- The image display apparatus according to claim 19, wherein the coordinate conversion processing unit reads, from the navigation device, map image data including guidance route guidance image data positionally corresponding to the map image data, as the map image data, and coordinate-transforms the map image data including the guidance route guidance image data so that its image is deformed according to the distortion amount, and the image composition processing unit combines the camera image data with the coordinate-transformed map image data including the guidance route guidance image data so that the image of the deformed map image data positionally corresponds to the image of the undeformed camera image data.
- The image display apparatus according to claim 19, wherein the guidance route guidance image data is image data indicating the position of a destination to be guided to.
- The image display apparatus according to claim 19, wherein the guidance route guidance image data is image data indicating a direction toward a destination to be guided to.
- The image display apparatus according to claim 21, wherein the image composition processing unit adjusts a luminance signal or a color-difference signal of a region of the camera image data positionally corresponding to the image data indicating the position of the destination to be guided to, which is the coordinate-transformed guidance route guidance image data, and then combines it with the guidance route guidance image data.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2008801219860A CN101903906A (zh) | 2008-01-07 | 2008-12-09 | 图像变形方法、图像显示方法、图像变形装置以及图像显示装置 |
US12/810,482 US20100274478A1 (en) | 2008-01-07 | 2008-12-09 | Image transformation method, image display method, image transformation apparatus and image display apparatus |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2008-000561 | 2008-01-07 | ||
JP2008000561A JP2009163504A (ja) | 2008-01-07 | 2008-01-07 | 画像変形方法等 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2009087716A1 true WO2009087716A1 (ja) | 2009-07-16 |
Family
ID=40852841
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2008/003658 WO2009087716A1 (ja) | 2008-01-07 | 2008-12-09 | 画像変形方法、画像表示方法、画像変形装置、および画像表示装置 |
Country Status (4)
Country | Link |
---|---|
US (1) | US20100274478A1 (ja) |
JP (1) | JP2009163504A (ja) |
CN (1) | CN101903906A (ja) |
WO (1) | WO2009087716A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102750827A (zh) * | 2012-06-26 | 2012-10-24 | 浙江大学 | 群体诱导信息下驾驶员响应行为的数据采样和辨识系统 |
CN104050829A (zh) * | 2013-03-14 | 2014-09-17 | 联想(北京)有限公司 | 一种信息处理的方法及装置 |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8471732B2 (en) * | 2009-12-14 | 2013-06-25 | Robert Bosch Gmbh | Method for re-using photorealistic 3D landmarks for nonphotorealistic 3D maps |
JP5517176B2 (ja) * | 2010-12-24 | 2014-06-11 | パイオニア株式会社 | 画像調整装置、制御方法、プログラム、及び記憶媒体 |
JP5817927B2 (ja) * | 2012-05-18 | 2015-11-18 | 日産自動車株式会社 | 車両用表示装置、車両用表示方法及び車両用表示プログラム |
US9091628B2 (en) | 2012-12-21 | 2015-07-28 | L-3 Communications Security And Detection Systems, Inc. | 3D mapping with two orthogonal imaging views |
JP6169366B2 (ja) * | 2013-02-08 | 2017-07-26 | 株式会社メガチップス | 物体検出装置、プログラムおよび集積回路 |
US9514650B2 (en) * | 2013-03-13 | 2016-12-06 | Honda Motor Co., Ltd. | System and method for warning a driver of pedestrians and other obstacles when turning |
JP6194604B2 (ja) * | 2013-03-15 | 2017-09-13 | 株式会社リコー | 認識装置、車両及びコンピュータが実行可能なプログラム |
KR101474521B1 (ko) | 2014-02-14 | 2014-12-22 | 주식회사 다음카카오 | 영상 데이터베이스 구축 방법 및 장치 |
KR20160001178A (ko) * | 2014-06-26 | 2016-01-06 | 엘지전자 주식회사 | 글래스 타입 단말기 및 이의 제어방법 |
KR102299487B1 (ko) * | 2014-07-17 | 2021-09-08 | 현대자동차주식회사 | 증강 현실 알림 제공 시스템 및 방법 |
DE102014113957A1 (de) * | 2014-09-26 | 2016-03-31 | Connaught Electronics Ltd. | Verfahren zum Konvertieren eines Bilds, Fahrerassistenzsystem und Kraftfahrzeug |
CN104567890A (zh) * | 2014-11-24 | 2015-04-29 | 朱今兰 | 一种智能车辆导航辅助系统 |
CN105991590B (zh) | 2015-02-15 | 2019-10-18 | 阿里巴巴集团控股有限公司 | 一种验证用户身份的方法、系统、客户端及服务器 |
US10606242B2 (en) * | 2015-03-12 | 2020-03-31 | Canon Kabushiki Kaisha | Print data division apparatus and program |
CN106034029A (zh) * | 2015-03-20 | 2016-10-19 | 阿里巴巴集团控股有限公司 | 基于图片验证码的验证方法和装置 |
JP6150950B1 (ja) * | 2015-11-20 | 2017-06-21 | 三菱電機株式会社 | 運転支援装置、運転支援システム、運転支援方法及び運転支援プログラム |
DE102015223175A1 (de) * | 2015-11-24 | 2017-05-24 | Conti Temic Microelectronic Gmbh | Fahrerassistenzsystem mit adaptiver Umgebungsbilddatenverarbeitung |
US10430968B2 (en) * | 2017-03-14 | 2019-10-01 | Ford Global Technologies, Llc | Vehicle localization using cameras |
JP6820561B2 (ja) * | 2017-12-28 | 2021-01-27 | パナソニックIpマネジメント株式会社 | 画像処理装置、表示装置、ナビゲーションシステム、画像処理方法及びプログラム |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001331787A (ja) * | 2000-05-19 | 2001-11-30 | Toyota Central Res & Dev Lab Inc | 道路形状推定装置 |
JP2006250917A (ja) * | 2005-02-14 | 2006-09-21 | Kazuo Iwane | 高精度cv演算装置と、この高精度cv演算装置を備えたcv方式三次元地図生成装置及びcv方式航法装置 |
JP2007271568A (ja) * | 2006-03-31 | 2007-10-18 | Aisin Aw Co Ltd | 自車位置認識装置及び自車位置認識方法 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6285317B1 (en) * | 1998-05-01 | 2001-09-04 | Lucent Technologies Inc. | Navigation system with three-dimensional display |
GB0212748D0 (en) * | 2002-05-31 | 2002-07-10 | Qinetiq Ltd | Feature mapping between data sets |
JP4696248B2 (ja) * | 2004-09-28 | 2011-06-08 | 国立大学法人 熊本大学 | 移動体ナビゲート情報表示方法および移動体ナビゲート情報表示装置 |
KR100689376B1 (ko) * | 2004-12-14 | 2007-03-02 | 삼성전자주식회사 | 네비게이션 시스템에서 맵 디스플레이 장치 및 방법 |
DE602005016311D1 (de) * | 2005-06-06 | 2009-10-08 | Tomtom Int Bv | Navigationseinrichtung mit kamera-info |
- 2008
- 2008-01-07 JP JP2008000561A patent/JP2009163504A/ja not_active Withdrawn
- 2008-12-09 WO PCT/JP2008/003658 patent/WO2009087716A1/ja active Application Filing
- 2008-12-09 US US12/810,482 patent/US20100274478A1/en not_active Abandoned
- 2008-12-09 CN CN2008801219860A patent/CN101903906A/zh active Pending
Also Published As
Publication number | Publication date |
---|---|
JP2009163504A (ja) | 2009-07-23 |
CN101903906A (zh) | 2010-12-01 |
US20100274478A1 (en) | 2010-10-28 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 200880121986.0 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 08869898 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 12810482 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 08869898 Country of ref document: EP Kind code of ref document: A1 |