US20070239357A1 - Driving support method and driving support device - Google Patents
Driving support method and driving support device
- Publication number
- US20070239357A1 (application US 11/723,454)
- Authority
- US
- United States
- Prior art keywords
- image data
- reference position
- vehicle
- driving support
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B62—LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
- B62D—MOTOR VEHICLES; TRAILERS
- B62D15/00—Steering not otherwise provided for
- B62D15/02—Steering position indicators; Steering position determination; Steering aids
- B62D15/027—Parking aids, e.g. instruction means
- B62D15/028—Guided parking by providing commands to the driver, e.g. acoustically or optically
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B62—LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
- B62D—MOTOR VEHICLES; TRAILERS
- B62D15/00—Steering not otherwise provided for
- B62D15/02—Steering position indicators; Steering position determination; Steering aids
- B62D15/027—Parking aids, e.g. instruction means
Definitions
- the present invention relates to a driving support method and a driving support device.
- Conventional devices for supporting safe driving include a navigation apparatus that shows a background image of the vehicle's surroundings on a display.
- Such an apparatus inputs a picture signal from an onboard camera mounted at the rear end of the vehicle, and outputs a background image based on that picture signal to a display provided near the driver's seat.
- Japanese Patent Application Publication No. JP-A-2002-373327 proposes an image processing device that collects image data input from an onboard camera when a parking operation is performed, and uses the collected image data to perform image processing so as to show an overhead or top-down view that shows the area surrounding the vehicle.
- the image processing device performs processing so as to rotate the image data collected and stored in a memory to correspond to a current steering angle of the vehicle. By visually reviewing the overhead image, a driver can thus grasp the relative position/relative orientation of his or her own vehicle with respect to the target area of a parking space.
- as FIG. 19A illustrates, for example, to collect the image data, obtained image data Ga is first converted into an overhead view. The overhead view is then drawn on a pixel area 61 i , which corresponds to the image data Ga converted into the overhead view and is located within a synthesis memory area 60 .
- when new image data Gb is obtained thereafter, as FIG. 19B illustrates, such image data is similarly converted into an overhead view, and is used to overwrite the pixel area 61 i that corresponds to the imaging range of the onboard camera.
- the previously obtained image data Ga is processed in order to rotate or shift the image data in accordance with the current position and steering angle of the vehicle, and new pixel coordinates 61 j are overlaid.
- recorded data to serve as a reference are preset, and image processing is performed on the recorded data to generate new recorded data.
- Such new recorded data is then used to generate driving support image data, which is output to a display means.
- the image processing is not repeated each time image data is input, and the number of times that image processing is performed can be kept at a minimum. Consequently, it is possible to suppress errors and reductions in image quality.
- a geometric conversion is performed on the image data based on a relative position of the vehicle from a reference position.
- the recorded image data is then stored in a corresponding area within a synthesis memory area.
- Such recorded image data is used to output driving support image data to a display means.
- the recorded data is generated by rotation processing using a first image processing means.
- the number of times that the rotation processing is performed can be kept at a minimum. Consequently, it is possible to suppress errors and reductions in image quality such as white lines on the road surface becoming jagged.
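The benefit of limiting the number of rotation passes can be sketched numerically. The following Python fragment is an illustration, not part of the patent: it rotates a pixel coordinate once by 30 degrees versus thirty times by 1 degree, rounding to the pixel grid after every pass as a stand-in for resampling error.

```python
import numpy as np

# Illustrative sketch: rounding to the pixel grid after each of many
# small rotations lets error accumulate, while one combined rotation
# rounds only once. All values here are assumptions for demonstration.
def rotate_round(p, theta):
    """Rotate point p by theta radians, then round to integer pixels."""
    c, s = np.cos(theta), np.sin(theta)
    return np.rint(np.array([c * p[0] - s * p[1], s * p[0] + c * p[1]]))

p_once = rotate_round(np.array([100.0, 0.0]), np.deg2rad(30))  # one pass
p_many = np.array([100.0, 0.0])
for _ in range(30):                      # thirty 1-degree passes
    p_many = rotate_round(p_many, np.deg2rad(1))
print(p_once)           # [87. 50.]: rounded only once
print(p_once - p_many)  # the repeated path typically drifts by pixels
```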
- a new reference position is set if the vehicle moves a predetermined distance from the original reference position. Therefore, it is possible to avoid increasing the memory area.
- the first image processing means performs a geometric conversion on the recorded data thus far accumulated in order to align the recorded data with a newly set reference position, if the new reference position has changed.
- the recorded data already accumulated can be utilized instead of being discarded.
- according to a seventh aspect of the driving support method and the driving support device, when the driving support image data is generated, rotation processing is performed in order to align the driving support image with the position of the vehicle at such time. Therefore, it is possible to show an image that the driver can directly understand with ease.
- FIG. 1 is a block diagram of a parking assist system according to an embodiment
- FIG. 1A is a block diagram of the processor shown in FIG. 1 ;
- FIG. 2 is an overhead view of a vehicle equipped with the parking assist system
- FIG. 3A is an explanatory drawing in frame format of image data
- FIG. 3B is an explanatory drawing in frame format of overhead image data
- FIG. 4 is an explanatory drawing of recorded image data
- FIG. 5 is an explanatory drawing of a processing procedure according to the present embodiment
- FIG. 6 is an explanatory drawing of a processing procedure according to the present embodiment.
- FIG. 7 is an explanatory drawing of the synthesis of recorded image data
- FIG. 8 is an explanatory drawing of the synthesis of recorded image data
- FIG. 9 is an explanatory drawing of the synthesis of recorded image data
- FIG. 10A is an explanatory drawing of trimming data before rotation processing
- FIG. 10B is an explanatory drawing of trimming data after rotation processing
- FIG. 11 is an explanatory drawing of a parking assist image
- FIG. 12 is an explanatory drawing of the synthesis of recorded image data
- FIG. 13 is an explanatory drawing of a parking assist image
- FIG. 14 is an explanatory drawing of the synthesis of recorded image data
- FIG. 15 is an explanatory drawing of the synthesis of recorded image data
- FIG. 16 is an explanatory drawing of the synthesis of recorded image data
- FIG. 17 is an explanatory drawing of a parking assist image
- FIG. 18 is an explanatory drawing of a general overhead image
- FIGS. 19A and 19B are explanatory drawings of the synthesis of conventional recorded image data.
- FIG. 1 is a block diagram that explains the structure of a parking assist system 1 mounted in an automobile.
- the parking assist system 1 includes: a parking assist unit 2 , which acts as a driving support device; a display 22 , which acts as a display means for showing various screens; a speaker 25 that outputs warning sounds and speech guidance; and a back monitor camera (hereinafter simply referred to as a camera 35 ) which acts as an imaging device.
- the camera 35 is installed on or near the rear end CB of a vehicle, such as a rear door of a vehicle C, such that an optical axis thereof faces diagonally downward.
- the camera 35 may be a digital camera that produces a color image. Of course, any other suitable type of imaging device may be used.
- the camera 35 may also include: an optical mechanism structured from a wide-angle lens, a mirror, and the like; and a CCD imaging element (none of which is shown).
- the area located to the rear of the vehicle is designated as an imaging range Z.
- a control unit 10 acts as a position detecting means of the parking assist unit 2 , and is provided with a computer (not shown).
- the control unit 10 serves as the main control for executing processing in accordance with a driving support program stored in a ROM 12 .
- a main memory 11 temporarily stores the calculation results of the control unit 10 , and also stores various variables, flags, and the like in order to assist the driver during parking of the vehicle.
- Driving support programs are stored in the ROM 12 , as well as vehicle image data 12 a for depicting the vehicle C on the display 22 .
- the vehicle image data 12 a are data for showing an image that represents the vehicle C in a view from above in the top-down direction.
- the parking assist unit 2 includes a GPS receiving unit 13 .
- the control unit 10 calculates the absolute coordinates of the vehicle C, using electronic navigation, based upon a signal from a GPS satellite that is received by the GPS receiving unit 13 .
- the parking assist unit 2 includes a vehicle-side interface unit (a vehicle-side I/F unit 15 ) that defines a detecting means.
- the control unit 10 respectively receives a vehicle speed pulse VP and an orientation detection signal GRP from a vehicle speed sensor 30 and a gyro 31 provided in the vehicle C, via the vehicle-side I/F unit 15 . Based on the number of input pulses for the vehicle speed pulse VP, the control unit 10 calculates the relative amount of movement of the vehicle C from a reference position. The control unit 10 also updates a variable stored in the main memory 11 , such as a current orientation GR, based upon the input orientation detection signal GRP.
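The dead-reckoning step described above can be sketched as follows. The metres-per-pulse constant and the function name are assumptions for illustration, not values from the patent.

```python
import math

# Hypothetical constant: metres travelled per vehicle speed pulse.
METRES_PER_PULSE = 0.02  # assumption, not from the patent

def dead_reckon(x, y, heading, pulses, reverse=True):
    """Advance the estimated vehicle position by `pulses` speed pulses
    along the current heading (radians); reversing moves backwards."""
    dist = pulses * METRES_PER_PULSE * (-1.0 if reverse else 1.0)
    return x + dist * math.sin(heading), y + dist * math.cos(heading)

x, y = dead_reckon(0.0, 0.0, 0.0, 50)   # 50 pulses, straight back
# With heading 0 the vehicle moves 1 m in the -y direction.
```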
- the control unit 10 receives a steering sensor signal STP from a steering angle sensor 32 , via the vehicle-side I/F unit 15 . Based upon the steering sensor signal STP, the control unit 10 updates a current steering angle STR of the vehicle C which is stored in the main memory 11 . Also, the control unit 10 receives a shift position signal SPP from a neutral start switch 33 of the vehicle C, via the vehicle-side I/F unit 15 , and updates a variable, such as a shift position SP, stored in the main memory 11 .
- the parking assist unit 2 also includes: an image data input unit 16 that acts as an image data input means; and an image processor 20 that acts as a reference position setting means, a first image processing means 27 , a second image processing means 28 , an output control means 40 , and a third image processing means 29 .
- the image data input unit 16 drivingly controls the camera 35 through the main control of the control unit 10 , and inputs image data G.
- the image processor 20 performs image processing and the like on the image data G that is input from the camera 35 .
- the control unit 10 captures, at predetermined intervals, the image data G, shown in a frame format in FIG. 3A , from the camera 35 via the image data input unit 16 .
- the image processor 20 then converts the image data G into an overhead view (hereinafter referred to as an overhead view conversion), using a known method, as illustrated in FIG. 3B .
- This in turn generates recorded image data G 1 that describes the rear surroundings of the vehicle C from a viewpoint as set from above in the vertical direction.
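The overhead view conversion can be sketched as a planar homography applied to ground-plane pixel coordinates. The matrix H below is purely illustrative; in practice it would come from the camera's mounting geometry and calibration, which the patent does not specify.

```python
import numpy as np

# Illustrative homography: maps camera pixels of the ground plane to
# top-down pixels. A real H comes from camera calibration.
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 2.0,  5.0],
              [0.0, 0.0,  1.0]])

def to_overhead(points, H):
    """Apply homography H to an (N, 2) array of pixel coordinates."""
    pts = np.hstack([points, np.ones((len(points), 1))])  # homogeneous
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                 # dehomogenise

src = np.array([[0.0, 0.0], [100.0, 50.0]])
print(to_overhead(src, H))   # [[ 10.   5.], [110. 105.]]
```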
- upon generating the recorded image data G 1 , the image processor 20 obtains the current position and orientation of the vehicle C via the control unit 10 . As FIG. 4 illustrates, such data is associated with the recorded image data G 1 as positional data 45 , which is formed from coordinate data 43 and steering angle data 44 . These are then stored in an image memory area 18 within an image memory 17 (see FIG. 1A ) that acts as an image data storing means provided in the parking assist unit 2 . Note that the coordinate data 43 and the steering angle data 44 may be attached to the recorded image data G 1 as illustrated in FIG. 4 , or a map formed from the coordinate data 43 and the steering angle data 44 may be linked to the recorded image data G 1 by a header or the like.
- the image processor 20 compares the positional data 45 with a preset reference position. Based on a relative position thereof (relative data), the image processor 20 performs a geometric conversion (image processing) on the recorded image data G 1 . This is then written into and synthesized with an area corresponding to a synthesis memory area 19 (see FIG. 1A ) of the image memory 17 . More specifically, the image processor 20 performs rotation processing (a rotational conversion) on the recorded image data G 1 in accordance with the steering angle of the vehicle C, and performs shift processing (parallel translation) on the recorded image data G 1 based upon a relative distance of the vehicle C with respect to the reference position.
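The rotation and shift processing that places each recorded image into the common reference frame can be sketched as a 2-D rigid transform of overhead pixel coordinates. The function name and the numeric values are illustrative assumptions.

```python
import numpy as np

def place_in_synthesis(coords, dtheta, shift):
    """Rotate overhead pixel coordinates by the heading change dtheta
    (radians) and translate by `shift`, so data recorded at different
    vehicle poses share one reference frame in the synthesis memory."""
    c, s = np.cos(dtheta), np.sin(dtheta)
    R = np.array([[c, -s], [s, c]])
    return coords @ R.T + np.asarray(shift)

corners = np.array([[0.0, 0.0], [4.0, 0.0]])
placed = place_in_synthesis(corners, np.pi / 2, (10.0, 20.0))
# (0,0) -> (10,20); (4,0) -> (10,24) after a quarter turn plus shift
```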
- the image processor 20 performs processing to generate a parking assist screen in parallel with such processing.
- This processing uses the previously recorded image data G 1 from the synthesis memory area 19 and the latest image data G (hereinafter referred to as current image data G 4 ) obtained from the camera 35 .
- the processing generates synthesized data to show an overhead image of the vehicle and its surroundings, which includes current blind spots of the camera 35 .
- the synthesized data is temporarily stored in a VRAM 21 (see FIG. 1A ), and output to the display 22 at predetermined times.
- the display 22 may be a touch panel, or any other suitable display.
- an external input interface unit (hereinafter referred to as an external input I/F unit 23 ) of the parking assist unit 2 outputs an input signal to the control unit 10 in accordance with the specific input operation.
- the parking assist unit 2 includes a sound processor 24 .
- the sound processor 24 includes a memory (not shown) that stores sound files and a digital/analog converter. Using the sound files, the sound processor 24 outputs speech guidance and warning sounds from a speaker 25 provided in the parking assist system 1 .
- the control unit 10 first initializes the system by clearing the image memory 17 and the like, in accordance with a driving support program stored in the ROM 12 (step S 1 - 1 ).
- the control unit 10 then waits for the input of a start trigger in order to begin driving support processing (step S 1 - 2 ).
- the start trigger is the shift position signal SPP indicating reverse, which is output from the neutral start switch 33 .
- if the start trigger is input due to the vehicle C backing up (YES at step S 1 - 2 ), then the control unit 10 sets the position of the vehicle C when the start trigger was input as a reference position S 1 (step S 1 - 3 ).
- the control unit 10 accesses the camera 35 via the image data input unit 16 , and inputs the image data G taken at the reference position S 1 (step S 1 - 4 ).
- the image processor 20 next performs an overhead view conversion of such image data G to generate the recorded image data G 1 as shown in a frame format in FIG. 3B , using a known image processing method (step S 1 - 5 ).
- the image processor 20 associates the coordinate data 43 and the steering angle data 44 , which indicate the reference position S 1 , with the recorded image data G 1 via the control unit 10 .
- the associated data are then stored in the image memory area 18 within the image memory 17 (step S 1 - 6 ).
- the image processor 20 also stores the recorded image data G 1 for the reference position S 1 in the synthesis memory area 19 within the image memory 17 (step S 1 - 7 ). More specifically, as FIG. 7 illustrates, the recorded image data G 1 is stored in a pixel area R 1 , which corresponds to an imaging range Z 1 in the reference position S 1 of the vehicle C and is within the synthesis memory area 19 . Thus, the recorded image data G 1 taken at the reference position S 1 is written into the synthesis memory area 19 and synthesized. Thereafter, the procedure proceeds to processing at A, as shown in FIG. 6 .
- the processing procedure shown in FIG. 6 is a procedure for the second and subsequent storage and synthesis processing of the recorded image data G 1 .
- the control unit 10 obtains the positional data 45 based upon the vehicle speed pulse VP and the steering sensor signal STP (step S 2 - 1 ).
- the control unit 10 determines whether the appropriate time interval at which to capture an image is reached (step S 2 - 2 ).
- the control unit 10 accumulates the number of input pulses of the vehicle speed pulses VP, from the time point when the reference position S 1 was set, in a pulse counter stored in the main memory 11 , and determines the image capture time as the point when the accumulated number of pulses reaches a predetermined quantity.
- an image is captured every time the vehicle C backs up a predetermined distance D 1 , which may be several hundred millimeters, for example.
- a first capture position P 1 (not shown) is located the predetermined distance D 1 behind the reference position S 1 in the travel direction (i.e., the backing-up direction).
- the control unit 10 determines that the image capture time has been reached when the vehicle C arrives at the first capture position P 1 (YES at step S 2 - 2 ), and the procedure proceeds to step S 2 - 3 . If the vehicle C has not yet reached the first capture position P 1 behind the reference position S 1 at the predetermined distance D 1 (NO at step S 2 - 2 ), then the control unit 10 repeats this judgment until the appropriate time for the image capture is reached.
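The pulse-counting judgment can be sketched as a simple counter that signals a capture each time the count corresponding to the predetermined distance D 1 is reached, then resets. The pulses-per-distance constant is an assumed value.

```python
# Assumed number of speed pulses corresponding to distance D1.
PULSES_PER_D1 = 25

class CaptureTimer:
    """Accumulates vehicle speed pulses; reports when to capture."""
    def __init__(self):
        self.count = 0

    def on_pulse(self):
        """Return True when an image should be captured, then reset."""
        self.count += 1
        if self.count >= PULSES_PER_D1:
            self.count = 0
            return True
        return False

timer = CaptureTimer()
captures = sum(timer.on_pulse() for _ in range(100))
print(captures)   # 4 captures over 100 pulses
```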
- the control unit 10 receives new image data G from the camera 35 via the image data input unit 16 (step S 2 - 3 ). Following input of the image data G, the control unit 10 resets a counter value of the pulse counter stored in the main memory 11 , and returns it to an initial value (step S 2 - 4 ). Also, the image processor 20 performs an overhead view conversion of the image data G to generate the recorded image data G 1 (step S 2 - 5 ). The positional data 45 formed from the coordinate data 43 and the steering angle data 44 are then associated with the recorded image data G 1 , and these are stored in the image memory 17 (step S 2 - 6 ).
- the image processor 20 performs rotation processing for the generated recorded image data G 1 based on the relative position of the vehicle C with respect to the reference position S 1 , using a known image processing method (step S 2 - 7 ), and stores the recorded image data G 1 , subjected to the rotation processing, in a corresponding pixel area within the synthesis memory area 19 (step S 2 - 8 ). More specifically, the image processor 20 obtains the steering angle data 44 of the reference position S 1 , and compares that steering angle data 44 with the current steering angle data 44 . Rotation processing is then performed for the recorded image data G 1 to correspond with the steering angle data 44 of the reference position S 1 .
- the image processor 20 compares the coordinate data 43 of the reference position S 1 with the current coordinate data 43 .
- the pixel values of the recorded image data G 1 that are subjected to rotation processing are then stored in a pixel area R 2 corresponding to the synthesis memory area 19 , as illustrated in FIG. 8 . If the pixel area R 2 and the pixel area R 1 of the recorded image data G 1 taken at the reference position S 1 overlap, then the overlapping portions are overwritten by the later taken recorded image data G 1 .
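The overwrite behaviour for overlapping pixel areas can be sketched with array slicing; tile sizes and values are illustrative.

```python
import numpy as np

# Sketch of writing recorded-image tiles into the synthesis memory
# area: later data simply overwrites any overlapping pixels.
synthesis = np.zeros((8, 8), dtype=np.uint8)

def write_tile(mem, tile, top, left):
    """Write a tile into the memory area, overwriting without blending."""
    h, w = tile.shape
    mem[top:top + h, left:left + w] = tile

write_tile(synthesis, np.full((4, 4), 1, np.uint8), 0, 0)  # area R1
write_tile(synthesis, np.full((4, 4), 2, np.uint8), 2, 0)  # area R2
print(synthesis[3, 0])   # 2: the overlap now holds the newer data
```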
- the control unit 10 determines whether the appropriate time at which to output the parking assist image is reached (step S 2 - 9 ). In the present embodiment, this judgment is made based upon whether a predetermined quantity of recorded image data G 1 are stored within the synthesis memory area 19 . However, the judgment may also be made based on whether the vehicle C has backed up a predetermined distance D 2 from the reference position S 1 . In the present embodiment, the time at which to show the parking assist image is designated as a time when four or more recorded image data G 1 are stored. Thus, at this time point a judgment is made not to show the parking assist image as only two recorded image data G 1 have been accumulated (NO at step S 2 - 9 ), and the procedure then returns to step S 2 - 1 .
- the above processing is further repeated twice (steps S 2 - 2 to S 2 - 9 ).
- once four recorded image data G 1 , taken at respective capture positions P 1 -P 4 (P 1 and P 2 not shown), are stored in the synthesis memory area 19 in pixel areas R 1 -R 4 respectively, as illustrated in FIG. 9 , then the image processor 20 determines that the time at which to output the parking assist image has been reached (YES at step S 2 - 9 ). At that time point, the control unit 10 obtains the most recently taken current image data G 4 (step S 2 - 10 ).
- the current image data G 4 and the recorded image data G 1 are used to generate synthesized data G 6 as driving support image data (step S 2 - 11 ). More specifically, as FIG. 9 illustrates, the image processor 20 trims a trimming range 50 , corresponding to the current surroundings of the rear portion of the vehicle C, from each recorded image data G 1 written into the synthesis memory area 19 . Trimmed data G 2 as illustrated in FIG. 10A are thus generated. Moreover, the image orientation of the trimmed data G 2 matches the direction of the steering angle for the reference position S 1 . Accordingly, known rotation processing is performed in order to align the trimmed data G 2 with the current steering angle, based upon the steering sensor signal STP.
- Synthesis recorded data G 3 as illustrated in FIG. 10B , are thus generated.
- the synthesis recorded data G 3 on an upper side within the display area of the display 22 and the current image data G 4 on the lower side within the display area of the display 22 are joined in a continuous manner so as to generate synthesized data G 6 .
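The joining of the synthesis recorded data G 3 and the current image data G 4 can be sketched as a vertical concatenation; the image sizes and pixel values here are illustrative.

```python
import numpy as np

# Sketch of composing the driving support image: the rotated recorded
# data G3 fills the upper half and the current camera image G4 the
# lower half, joined continuously along the shared boundary row.
g3 = np.full((120, 320), 7, np.uint8)   # synthesis recorded data (top)
g4 = np.full((120, 320), 9, np.uint8)   # current image data (bottom)
g6 = np.vstack([g3, g4])                # synthesized data
print(g6.shape)   # (240, 320)
```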
- the image processor 20 temporarily stores the synthesized data G 6 in the VRAM 21 , and outputs it to the display 22 at a predetermined time (step S 2 - 12 ).
- the display 22 shows a parking assist image 51 as illustrated in FIG. 11 .
- Shown in the parking assist image 51 are: a recorded image 52 based on the recorded image data G 1 , and a current image 53 based on the current image data G 4 .
- the recorded image 52 is shown on the upper portion of the screen, and shows a road surface or the like in the surroundings of and behind the rear portion of the vehicle C, which are currently in blind spots of the camera 35 .
- the current image 53 is shown on the lower portion of the screen, and is an image that reflects the current state of the vehicle's surroundings.
- the parking assist image 51 is shown such that the travel direction (backing up direction) of the vehicle C becomes a direction extending from the upper portion of the display 22 towards the lower portion of the display 22 (a direction x 1 in FIG. 11 ).
- the image processor 20 overlaps a portion of a vehicle image 54 , based upon the vehicle image data 12 a , onto a position within the recorded image 52 that corresponds to the current position of the vehicle C.
- the rear portion of the vehicle C is depicted by the vehicle image data 12 a on the parking assist image 51 shown in FIG. 11 .
- the driver can confirm the relative position and the relative direction between the vehicle C and a parking target area indicated by white lines 51 a that are shown based on the recorded image 52 and the current image 53 .
- the image processor 20 superimposes a guide line 55 onto the current image 53 .
- the guide line 55 is formed from the following: a target line 56 that indicates a predetermined distance (e.g. 0.8 meters) rearward from the rear end CB of the vehicle C, a vehicle width line 57 that indicates the width of the vehicle C in a space behind the vehicle C, and a predicted trajectory line 58 that is depicted in accordance with the current steering angle.
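The predicted trajectory line 58 can be sketched with a simple bicycle model. The wheelbase, point spacing, and function name are assumed values for illustration; the patent does not specify how the trajectory is computed.

```python
import math

WHEELBASE = 2.7   # metres, assumed

def predicted_points(steer_rad, n=5, step=0.5):
    """Rear-path points while reversing at the given steering angle,
    using a bicycle model: turning radius = wheelbase / tan(angle)."""
    if abs(steer_rad) < 1e-6:
        # Straight reverse: points directly behind the vehicle.
        return [(0.0, -i * step) for i in range(1, n + 1)]
    radius = WHEELBASE / math.tan(steer_rad)
    pts = []
    for i in range(1, n + 1):
        phi = i * step / radius     # arc angle travelled so far
        pts.append((radius * (1 - math.cos(phi)), -radius * math.sin(phi)))
    return pts

straight = predicted_points(0.0)
# straight-ahead reverse: points behind the vehicle at 0.5 m spacing
```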
- the control unit 10 determines whether a condition for updating (hereinafter referred to as an update condition) of the reference position S 1 is applicable (step S 2 - 13 ).
- the update condition is triggered when the pixel coordinates corresponding to the current position of the vehicle C within the synthesis memory area 19 fall outside of a predetermined range 19 a that is set in advance.
- a fourth capture position P 4 falls within the predetermined range 19 a . Therefore, the control unit 10 determines that the update condition is not applicable (NO at step S 2 - 13 ) and the procedure proceeds to step S 2 - 16 .
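The update-condition judgment can be sketched as a bounds check on the pixel coordinates of the current vehicle position; the range limits are assumed values, not from the patent.

```python
# Assumed pixel bounds of the predetermined range 19a inside the
# synthesis memory area: (xmin, xmax, ymin, ymax).
RANGE_19A = (50, 450, 50, 450)

def needs_new_reference(px, py):
    """True when the current position leaves range 19a, triggering
    the setting of a new reference position."""
    xmin, xmax, ymin, ymax = RANGE_19A
    return not (xmin <= px <= xmax and ymin <= py <= ymax)

print(needs_new_reference(200, 200))  # False: still inside range 19a
print(needs_new_reference(480, 200))  # True: outside, update reference
```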
- the control unit 10 then determines whether an end trigger has been input (step S 2 - 16 ).
- the end trigger in the present embodiment is the shift position signal SPP indicating any shift position other than reverse. If it is judged that the end trigger has not been input (NO at step S 2 - 16 ), then the procedure returns to step S 2 - 1 .
- returning to step S 2 - 1 , the control unit 10 and the image processor 20 repeat steps S 2 - 1 to S 2 - 13 .
- as FIG. 12 illustrates, regardless of whether the synthesis memory area 19 is overwritten with a plurality of recorded image data G 1 as the vehicle C approaches the parking target area, the initially set reference position S 1 is maintained as long as the Nth capture position Pn is included within the predetermined range 19 a .
- at step S 2 - 11 , the trimming range 50 around the current position (the Nth capture position Pn) of the vehicle C is extracted, and the trimmed data G 2 is rotationally converted to align with the current steering angle data 44 , in order to generate the synthesized data G 6 .
- at step S 2 - 12 , the parking assist image 51 as illustrated in FIG. 13 is shown.
- the travel direction (backing up direction) of the vehicle C becomes aligned with the vertical direction (the direction x 1 in FIG. 13 ) of the display 22 in the parking assist image 51 .
- the image processor 20 updates the set reference position S 1 (step S 2 - 14 ).
- the image processor 20 performs rotation processing or shift processing for the out-of-range capture position Pe, which is within the overhead coordinate system (x, y) but outside of the predetermined range 19 a , so that it becomes aligned with the pixel coordinates thus far set as the reference position S 1 . Accordingly, a new reference position S 2 is set.
- similar rotation processing or shift processing is performed for the pixel values of the recorded image data G 1 thus far written into the synthesis memory area 19 , as illustrated in FIG. 15 (step S 2 - 15 ).
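The re-alignment of accumulated data to a new reference can be sketched, for the special case of a quarter turn, with a single array rotation; a general implementation would resample the memory at an arbitrary angle.

```python
import numpy as np

# When a new reference position S2 is set, the data already written
# into the synthesis memory area is re-aligned to it rather than
# discarded. For a turn of exactly 90 degrees the re-alignment
# reduces to a quarter rotation, shown here as a minimal stand-in.
memory = np.arange(9).reshape(3, 3)
realigned = np.rot90(memory)    # rotate the accumulated data
print(realigned[0, 0])          # 2: top-right pixel moved to top-left
```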
- a known rotation processing is performed based upon a turning angle of the vehicle C or any other appropriate variable.
- the control unit 10 determines whether the end trigger has been input (step S 2 - 16 ). Once the end trigger has been input (YES at step S 2 - 16 ), the control unit 10 ends the processing. If the end trigger has not been input (NO at step S 2 - 16 ), then the procedure returns to step S 2 - 1 .
- the above processing is thus repeated again (steps S 2 - 1 to S 2 - 11 ).
- after a new reference position S 2 has been set, that position is written into the recorded image data G 1 as a reference.
- the reference position S 2 is used as a reference to write the pixel values of the recorded image data G 1 into pixel areas R n+1 , R n+2 which correspond to the imaging range Z of the camera 35 at each time point.
- the image processor 20 reads out the trimming range 50 from the recorded image data G 1 , and displays the parking assist image 51 as illustrated in FIG. 17 .
- the white lines 51 a indicated in the parking assist image 51 are smooth white lines instead of jagged white lines.
- furthermore, separate from the processing to depict the parking assist image 51 as the vehicle C backs up the predetermined distance D 1 , the image processor 20 also measures an elapsed time using a timer (not shown), or the like, if the vehicle C stops while backing up. If a predetermined time set in the range of several seconds to several tens of seconds elapses, then a general overhead view 59 illustrated in FIG. 18 is shown on the display 22 . At this time, the image processor 20 extracts the recorded image data G 1 in a predetermined range from the synthesis memory area 19 , and shows it on the display 22 .
- the image processor 20 also outputs the vehicle image data 12 a and depicts the vehicle image 54 .
- the general overhead view 59 uses more recorded image data G 1 than the parking assist image 51 .
- the parking assist image 51 can be shown with good image quality, due to suppression of the number of times rotation processing is repeated when accumulating the recorded image data G 1 in the image memory 17 .
- the image processor 20 of the parking assist unit 2 stores the respective recorded image data G 1 taken during backing up of the vehicle C in the synthesis memory area 19 .
- the reference position S 1 is set in advance, prior to capturing and storing the recorded image data G 1 .
- rotation processing is performed to bring the recorded image data G 1 in line with the reference position S 1 , and the processed data is stored in the corresponding pixel area.
- the recorded image data G 1 stored in the synthesis memory area 19 and the most current image data G 4 are used to generate the synthesized data G 6 .
- the parking assist image 51 is then output to the display 22 based upon the synthesized data G 6 . Therefore, the number of times rotation processing is performed prior to the output of the parking assist image 51 can be kept to a minimum of two or three times. Consequently, a decrease in image quality due to rotation processing can be suppressed.
- a new reference position is set if the pixel coordinates corresponding to the current position of the vehicle C fall outside of the synthesis memory area 19 . Therefore, the limited memory area can be more effectively utilized.
- the image data G that is input is converted into an overhead view so as to generate the recorded image data G 1 . Therefore, an overhead image that displays and describes the vehicle surroundings can be achieved for the parking assist image 51 .
Abstract
A parking assist unit includes an image processor for setting a reference position of a vehicle, and an image data input unit for inputting image data from a camera provided on the vehicle. The image processor performs rotation processing on the image data to generate recorded data, based on a relative position with respect to the reference position of the vehicle during imaging. In addition, the recorded image data generated is accumulated in a corresponding pixel area within a synthesis memory area. Furthermore, the recorded image data accumulated is used to generate synthesized data, and such synthesized data is shown on a display.
Description
- The disclosure of Japanese Patent Application No. 2006-094065 filed on Mar. 30, 2006 including the specification, drawings and abstract is incorporated herein by reference in its entirety.
- 1. Field of the Invention
- The present invention relates to a driving support method and a driving support device.
- 2. Description of the Related Art
- Conventional devices for supporting safe driving include a navigation apparatus that shows a background image of the vehicle's surroundings on a display. Such an apparatus inputs a picture signal from an onboard camera mounted at the rear end of the vehicle, and outputs a background image based on that picture signal to a display provided near the driver's seat.
- Japanese Patent Application Publication No. JP-A-2002-373327 proposes an image processing device that collects image data input from an onboard camera when a parking operation is performed, and uses the collected image data to perform image processing so as to show an overhead or top-down view that shows the area surrounding the vehicle. In addition, the image processing device performs processing so as to rotate the image data collected and stored in a memory to correspond to a current steering angle of the vehicle. By visually reviewing the overhead image, a driver can thus grasp the relative position/relative orientation of his or her own vehicle with respect to the target area of a parking space.
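The overhead (top-down) conversion mentioned above is typically a planar homography that maps the trapezoid occupied by the ground plane in the camera image onto a rectangle in the overhead view. The following is a minimal stand-alone sketch of that idea; the four corner correspondences and the function names are invented for illustration and are not taken from the publication.

```python
import numpy as np

def homography(src, dst):
    """Solve for the 3x3 matrix H mapping each src point to its dst point
    (direct linear transform with four correspondences, H[2, 2] fixed at 1)."""
    a, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        a.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        a.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(a, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, x, y):
    """Apply H to an image point, including the perspective divide."""
    px, py, pw = H @ (x, y, 1.0)
    return px / pw, py / pw

# Ground trapezoid in the camera image -> a 100 x 100 overhead rectangle.
# These corner coordinates are made-up example values.
H = homography(src=[(30, 60), (70, 60), (100, 100), (0, 100)],
               dst=[(0, 0), (100, 0), (100, 100), (0, 100)])
```

Warping every pixel of a frame with `H` (or, in practice, with the inverse mapping plus interpolation) yields the kind of overhead image the devices above accumulate.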
- However, repeated use of the collected image data poses the risk of lowered image quality, due to the repeated data processing required to rotate the collected image data. In other words, as FIG. 19A illustrates, to collect the image data, obtained image data Ga is converted into an overhead view. The overhead view is then drawn in a pixel area 61i, which corresponds to the image data Ga converted into the overhead view and is located within a synthesis memory area 60. When new image data Gb is obtained thereafter, as FIG. 19B illustrates, that image data is similarly converted into an overhead view, and is used to overwrite the pixel area 61i that corresponds to the imaging range of the onboard camera. At this time, the previously obtained image data Ga is processed in order to rotate or shift it in accordance with the current position and steering angle of the vehicle, and it is overlaid at new pixel coordinates 61j.
- Repeatedly performing such processing to collect the image data means that older image data, taken further back in time, is repeatedly subjected to the rotation processing. When image data is repeatedly rotated to correspond with the turning of the vehicle, although not necessarily according to the above synthesis procedure, filter and rotation processing errors and the like are introduced and accumulate. This leads to problems such as the following: a white line depicting a target parking space may be shown as jagged; there may be noise in the image; there may be a drop in image quality as mentioned above; and it may become impossible to accurately show the position of the driver's vehicle on the overhead image.
- The present invention was devised in view of the foregoing problems, and provides a driving support method and a driving support device that are capable of showing an image with good image quality.
- According to a first aspect of the driving support method and the driving support device, recorded data to serve as a reference are preset, and image processing is performed on the recorded data to generate new recorded data. Such new recorded data is then used to generate driving support image data, which is output to a display means. Thus, the image processing is not repeated each time image data is input, and the number of times that image processing is performed can be kept at a minimum. Consequently, it is possible to suppress errors and reductions in image quality.
- According to a second aspect of the driving support method and the driving support device, when image data is input, a geometric conversion is performed on the image data based on a relative position of the vehicle from a reference position. The recorded image data is then stored in a corresponding area within a synthesis memory area. Such recorded image data is used to output driving support image data to a display means. Thus, the geometric conversion is not repeated each time image data is input, and the number of times the geometric conversion is performed can be kept at a minimum. Consequently, it is possible to suppress errors and reductions in image quality.
- According to a third aspect of the driving support method and the driving support device, the recorded data is generated by rotation processing using a first image processing means. Thus, the number of times that the rotation processing is performed can be kept at a minimum. Consequently, it is possible to suppress errors and reductions in image quality such as white lines on the road surface becoming jagged.
- According to a fourth aspect of the driving support method and the driving support device, a new reference position is set if a relative position with respect to the original reference position falls outside of a predetermined range. Therefore, the limited memory area can be effectively utilized without increasing the memory area.
- According to a fifth aspect of the driving support method and the driving support device, a new reference position is set if the vehicle moves a predetermined distance from the original reference position. Therefore, it is possible to avoid increasing the memory area.
- According to a sixth aspect of the driving support method and the driving support device, when the reference position is changed, the first image processing means performs a geometric conversion on the recorded data thus far accumulated in order to align that data with the newly set reference position. Thus, the recorded data already accumulated can be utilized instead of being discarded.
- According to a seventh aspect of the driving support method and the driving support device, when the driving support image data is generated, rotation processing is performed in order to align the driving support image with the position of the vehicle at such time. Therefore, it is possible to show an image that the driver can directly understand with ease.
- According to an eighth aspect of the driving support method and the driving support device, following conversion of the image data into an overhead view, such image data is stored as recorded data. Thus, an overhead image that accurately describes the vehicle surroundings can be shown.
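The first through sixth aspects can be pictured with the following minimal sketch. The function and variable names (`paste_frame`, `rebase`, and so on) are mine, not the patent's, and nearest-neighbour writes are used only to keep the example short: each incoming overhead frame is transformed once, by the vehicle pose relative to the fixed reference, and written into the synthesis buffer; when the pose leaves the buffer, the accumulated data is shifted once to a new reference instead of being discarded.

```python
import numpy as np

def paste_frame(canvas, frame, dx, dy, heading_deg):
    """Write `frame` into `canvas` rotated by the heading difference and
    shifted by (dx, dy) pixels relative to the reference pose. Pixels
    already in the canvas are never resampled again; newer data simply
    overwrites older data where the areas overlap."""
    h, w = frame.shape
    yy, xx = np.mgrid[0:h, 0:w]
    t = np.deg2rad(heading_deg)
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    u = np.cos(t) * (xx - cx) - np.sin(t) * (yy - cy) + cx + dx
    v = np.sin(t) * (xx - cx) + np.cos(t) * (yy - cy) + cy + dy
    ui, vi = np.rint(u).astype(int), np.rint(v).astype(int)
    ok = (ui >= 0) & (ui < canvas.shape[1]) & (vi >= 0) & (vi < canvas.shape[0])
    canvas[vi[ok], ui[ok]] = frame[yy[ok], xx[ok]]
    return canvas

def rebase(canvas, shift_x, shift_y):
    """Set a new reference: move the accumulated content once by
    (shift_x, shift_y) so the vehicle's pixel position is back in range."""
    out = np.zeros_like(canvas)
    h, w = canvas.shape
    src = canvas[max(0, -shift_y):h - max(0, shift_y),
                 max(0, -shift_x):w - max(0, shift_x)]
    out[max(0, shift_y):max(0, shift_y) + src.shape[0],
        max(0, shift_x):max(0, shift_x) + src.shape[1]] = src
    return out

mosaic = np.zeros((128, 128))
paste_frame(mosaic, np.ones((32, 32)), 0, 0, 0.0)        # frame taken at the reference
paste_frame(mosaic, np.full((32, 32), 2.0), 0, 10, 5.0)  # 10 px further back, 5 deg turned
snapshot = mosaic.copy()
mosaic = rebase(mosaic, 0, -20)                          # reference update: slide content up
```

Forward nearest-neighbour writes can leave small holes under rotation; a production implementation would use inverse mapping with interpolation, which is precisely the resampling step the method above keeps to a minimum.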
- FIG. 1 is a block diagram of a parking assist system according to an embodiment;
- FIG. 1A is a block diagram of the processor shown in FIG. 1;
- FIG. 2 is an overhead view of a vehicle equipped with the parking assist system;
- FIG. 3A is an explanatory drawing in frame format of image data, and FIG. 3B is an explanatory drawing in frame format of overhead image data;
- FIG. 4 is an explanatory drawing of recorded image data;
- FIG. 5 is an explanatory drawing of a processing procedure according to the present embodiment;
- FIG. 6 is an explanatory drawing of a processing procedure according to the present embodiment;
- FIG. 7 is an explanatory drawing of the synthesis of recorded image data;
- FIG. 8 is an explanatory drawing of the synthesis of recorded image data;
- FIG. 9 is an explanatory drawing of the synthesis of recorded image data;
- FIG. 10A is an explanatory drawing of trimming data before rotation processing, and FIG. 10B is an explanatory drawing of trimming data after rotation processing;
- FIG. 11 is an explanatory drawing of a parking assist image;
- FIG. 12 is an explanatory drawing of the synthesis of recorded image data;
- FIG. 13 is an explanatory drawing of a parking assist image;
- FIG. 14 is an explanatory drawing of the synthesis of recorded image data;
- FIG. 15 is an explanatory drawing of the synthesis of recorded image data;
- FIG. 16 is an explanatory drawing of the synthesis of recorded image data;
- FIG. 17 is an explanatory drawing of a parking assist image;
- FIG. 18 is an explanatory drawing of a general overhead image; and
- FIGS. 19A and 19B are explanatory drawings of the synthesis of conventional recorded image data.
- It should be noted that the drawing figures are not necessarily drawn to scale, but instead are drawn to provide a better understanding of the components thereof, and are not intended to be limiting in scope, but rather to provide exemplary illustrations.
- As discussed below, an exemplary embodiment of a driving support method and a driving support device will be described with reference to FIGS. 1 to 18. FIG. 1A is a block diagram that explains the structure of a parking assist system 1 mounted in an automobile.
- Referring to
FIG. 1A, the parking assist system 1 includes: a parking assist unit 2, which acts as a driving support device; a display 22, which acts as a display means for showing various screens; a speaker 25 that outputs warning sounds and speech guidance; and a back monitor camera (hereinafter simply referred to as a camera 35), which acts as an imaging device. As FIG. 2 illustrates, the camera 35 is installed on or near the rear end CB of a vehicle, such as on a rear door of a vehicle C, such that its optical axis faces diagonally downward. The camera 35 may be a digital camera that produces a color image; of course, any other suitable type of imaging device may be used. The camera 35 may also include an optical mechanism structured from a wide-angle lens, a mirror, and the like, and a CCD imaging element (none of which is shown). The area located to the rear of the vehicle is designated as an imaging range Z.
- Referring again to FIG. 1A, a control unit 10 acts as a position detecting means of the parking assist unit 2, and is provided with a computer (not shown). The control unit 10 performs the main control for executing processing in accordance with a driving support program stored in a ROM 12. A main memory 11 temporarily stores the calculation results of the control unit 10, and also stores various variables, flags, and the like in order to assist the driver during parking of the vehicle.
- Driving support programs are stored in the ROM 12, as well as vehicle image data 12a for depicting the vehicle C on the display 22. The vehicle image data 12a are data for showing an image that represents the vehicle C in a view from above in the top-down direction.
- In addition, the parking assist unit 2 includes a GPS receiving unit 13. The control unit 10 calculates the absolute coordinates of the vehicle C, using electronic navigation, based upon a signal from a GPS satellite that is received by the GPS receiving unit 13.
- Furthermore, the parking assist unit 2 includes a vehicle-side interface unit (a vehicle-side I/F unit 15) that defines a detecting means. The control unit 10 receives a vehicle speed pulse VP and an orientation detection signal GRP from a vehicle speed sensor 30 and a gyro 31 provided in the vehicle C, respectively, via the vehicle-side I/F unit 15. Based on the number of input pulses of the vehicle speed pulse VP, the control unit 10 calculates the relative amount of movement of the vehicle C from a reference position. The control unit 10 also updates a variable stored in the main memory 11, such as a current orientation GR, based upon the input orientation detection signal GRP.
- Additionally, the control unit 10 receives a steering sensor signal STP from a steering angle sensor 32, via the vehicle-side I/F unit 15. Based upon the steering sensor signal STP, the control unit 10 updates a current steering angle STR of the vehicle C, which is stored in the main memory 11. The control unit 10 also receives a shift position signal SPP from a neutral start switch 33 of the vehicle C, via the vehicle-side I/F unit 15, and updates a variable, such as a shift position SP, stored in the main memory 11.
- The parking assist unit 2 also includes: an image data input unit 16 that acts as an image data input means; and an image processor 20 that acts as a reference position setting means, a first image processing means 27, a second image processing means 28, an output control means 40, and a third image processing means 29. The image data input unit 16 drivingly controls the camera 35 under the main control of the control unit 10, and inputs the image data G. The image processor 20 performs image processing and the like on the image data G that is input from the camera 35.
- Once the vehicle C starts to back up during a parking operation, the
control unit 10 captures, at predetermined intervals, the image data G, shown in frame format in FIG. 3A, from the camera 35 via the image data input unit 16. The image processor 20 then converts the image data G into an overhead view (hereinafter referred to as an overhead view conversion), using a known method, as illustrated in FIG. 3B. This generates recorded image data G1 that describes the rear surroundings of the vehicle C from a viewpoint set above the vehicle in the vertical direction.
- Upon generating the recorded image data G1, the image processor 20 obtains the current position and orientation of the vehicle C via the control unit 10. As FIG. 4 illustrates, such data is associated with the recorded image data G1 as positional data 45, which is formed from coordinate data 43 and steering angle data 44. These are then stored in an image memory area 18 within an image memory 17 (see FIG. 1A) that acts as an image data storing means provided in the parking assist unit 2. Note that the coordinate data 43 and the steering angle data 44 may be attached to the recorded image data G1 as illustrated in FIG. 4, or a map formed from the coordinate data 43 and the steering angle data 44 may be linked to the recorded image data G1 by a header or the like.
- In addition, the image processor 20 compares the positional data 45 with a preset reference position. Based on the relative position thereof (relative data), the image processor 20 performs a geometric conversion (image processing) on the recorded image data G1. The result is then written into and synthesized with a corresponding area of a synthesis memory area 19 (see FIG. 1A) of the image memory 17. More specifically, the image processor 20 performs rotation processing (a rotational conversion) on the recorded image data G1 in accordance with the steering angle of the vehicle C, and performs shift processing (parallel translation) on the recorded image data G1 based upon the relative distance of the vehicle C with respect to the reference position.
- Moreover, the image processor 20 performs processing to generate a parking assist screen in parallel with such processing. This processing uses the previously recorded image data G1 from the synthesis memory area 19 and the latest image data G (hereinafter referred to as current image data G4) obtained from the camera 35. The processing generates synthesized data that shows an overhead image of the vehicle and its surroundings, including areas that are currently in blind spots of the camera 35. The synthesized data is temporarily stored in a VRAM 21 (see FIG. 1A), and output to the display 22 at predetermined times.
- The display 22, on which various images are shown, may be a touch panel or any other suitable display. Following an input operation by the user on the touch panel or on operating switches 26 (see FIG. 1A) provided adjacent to the display 22, an external input interface unit (hereinafter referred to as an external input I/F unit 23) of the parking assist unit 2 outputs an input signal to the control unit 10 in accordance with the specific input operation.
- Referring to FIG. 1A again, the parking assist unit 2 includes a sound processor 24. The sound processor 24 includes a memory (not shown) that stores sound files, and a digital/analog converter. Using the sound files, the sound processor 24 outputs speech guidance and warning sounds from the speaker 25 provided in the parking assist system 1.
- A processing procedure according to the present embodiment will be explained next with reference to
FIGS. 5 and 6. Following the input of an ON signal from an ignition module (not shown) of the vehicle C, the control unit 10 first initializes the system by clearing the image memory 17 and the like, in accordance with a driving support program stored in the ROM 12 (step S1-1). The control unit 10 then waits for the input of a start trigger in order to begin driving support processing (step S1-2). In the present embodiment, the start trigger is the shift position signal SPP indicating reverse, which is output from the neutral start switch 33.
- If the start trigger is input due to the vehicle C backing up (YES at step S1-2), then the control unit 10 sets the position of the vehicle C at the time the start trigger was input as a reference position S1 (step S1-3).
- The control unit 10 accesses the camera 35 via the image data input unit 16, and inputs the image data G taken at the reference position S1 (step S1-4). Following the input of the image data G taken at the reference position S1, the image processor 20 performs an overhead view conversion of that image data G to generate the recorded image data G1, shown in frame format in FIG. 3B, using a known image processing method (step S1-5). After performing the overhead view conversion, the image processor 20 associates the coordinate data 43 and the steering angle data 44, which indicate the reference position S1, with the recorded image data G1 via the control unit 10. The associated data are then stored in the image memory area 18 within the image memory 17 (step S1-6).
- The image processor 20 also stores the recorded image data G1 for the reference position S1 in the synthesis memory area 19 within the image memory 17 (step S1-7). More specifically, as FIG. 7 illustrates, the recorded image data G1 is stored in a pixel area R1, which corresponds to an imaging range Z1 at the reference position S1 of the vehicle C and lies within the synthesis memory area 19. Thus, the recorded image data G1 taken at the reference position S1 is written into the synthesis memory area 19 and synthesized. Thereafter, the procedure proceeds to the processing at A, as shown in FIG. 6.
- The processing procedure shown in
FIG. 6 is a procedure for the second and subsequent rounds of storage and synthesis processing of the recorded image data G1. First, the control unit 10 obtains the positional data 45 based upon the vehicle speed pulse VP and the steering sensor signal STP (step S2-1). The control unit 10 then determines whether the appropriate time interval at which to capture an image has been reached (step S2-2). In the present embodiment, the control unit 10 accumulates the number of input pulses of the vehicle speed pulse VP, from the time point when the reference position S1 was set, in a pulse counter stored in the main memory 11, and determines the image capture time as the point when the accumulated number of pulses reaches a predetermined quantity. In other words, an image is captured every time the vehicle C backs up a predetermined distance D1, which may be several hundred millimeters, for example. In this case, a first capture position P1 (not shown) lies behind the reference position S1, in the travel direction (i.e., the backing up direction), at the predetermined distance D1. The control unit 10 determines that the time at which an image is to be captured has arrived when the vehicle C reaches the first capture position P1 (YES at step S2-2), and the procedure proceeds to step S2-3. If the vehicle C has not yet reached the first capture position P1 behind the reference position S1 at the predetermined distance D1 (NO at step S2-2), then the control unit 10 continues to perform this judgment until the appropriate time for the image capture is reached.
- At step S2-3, the control unit 10 receives new image data G from the camera 35 via the image data input unit 16. Following input of the image data G, the control unit 10 resets the counter value of the pulse counter stored in the main memory 11, returning it to an initial value (step S2-4). Also, the image processor 20 performs an overhead view conversion of the image data G to generate the recorded image data G1 (step S2-5). The positional data 45 formed from the coordinate data 43 and the steering angle data 44 are then associated with the recorded image data G1, and these are stored in the image memory 17 (step S2-6).
- Subsequently, the image processor 20 performs rotation processing on the generated recorded image data G1 based on the relative position of the vehicle C with respect to the reference position S1, using a known image processing method (step S2-7), and stores the rotated recorded image data G1 in a corresponding pixel area within the synthesis memory area 19 (step S2-8). More specifically, the image processor 20 obtains the steering angle data 44 of the reference position S1, and compares that steering angle data 44 with the current steering angle data 44. Rotation processing is then performed on the recorded image data G1 so that it corresponds with the steering angle data 44 of the reference position S1. In addition, the image processor 20 compares the coordinate data 43 of the reference position S1 with the current coordinate data 43. The pixel values of the recorded image data G1 subjected to rotation processing are then stored in a corresponding pixel area R2 within the synthesis memory area 19, as illustrated in FIG. 8. If the pixel area R2 and the pixel area R1 of the recorded image data G1 taken at the reference position S1 overlap, then the overlapping portions are overwritten by the more recently taken recorded image data G1.
- Following storage of the recorded image data in the synthesis memory area 19, the control unit 10 determines whether the appropriate time at which to output the parking assist image has been reached (step S2-9). In the present embodiment, this judgment is made based upon whether a predetermined quantity of recorded image data G1 is stored within the synthesis memory area 19. However, the judgment may also be made based on whether the vehicle C has backed up a predetermined distance D2 from the reference position S1. In the present embodiment, the time at which to show the parking assist image is designated as the time when four or more recorded image data G1 are stored. Thus, at this time point a judgment is made not to show the parking assist image, as only two recorded image data G1 have been accumulated (NO at step S2-9), and the procedure returns to step S2-1.
- The above processing is repeated twice more (steps S2-2 to S2-9). Once four recorded image data G1, taken at respective capture positions P1-P4 (P1 and P2 not shown), are stored in the synthesis memory area 19 in pixel areas R1-R4 respectively, as illustrated in FIG. 9, the image processor 20 determines that the time at which to output the parking assist image has been reached (YES at step S2-9). At that time point, the control unit 10 obtains the most recently taken current image data G4 (step S2-10).
- After the current image data G4 is obtained, the current image data G4 and the recorded image data G1 are used to generate synthesized data G6 as driving support image data (step S2-11). More specifically, as
FIG. 9 illustrates, the image processor 20 trims a trimming range 50, corresponding to the current surroundings of the rear portion of the vehicle C, from the recorded image data G1 written into the synthesis memory area 19. Trimmed data G2 as illustrated in FIG. 10A are thus generated. The image orientation of the trimmed data G2 matches the direction of the steering angle at the reference position S1. Accordingly, known rotation processing is performed in order to align the trimmed data G2 with the current steering angle, based upon the steering sensor signal STP. Synthesis recorded data G3, as illustrated in FIG. 10B, are thus generated. In addition, the synthesis recorded data G3 on the upper side of the display area of the display 22 and the current image data G4 on the lower side of the display area of the display 22 are joined in a continuous manner so as to generate the synthesized data G6.
- Following generation of the synthesized data G6, the image processor 20 temporarily stores the synthesized data G6 in the VRAM 21, and outputs it to the display 22 at a predetermined time (step S2-12).
- As a consequence, the display 22 shows a parking assist image 51 as illustrated in FIG. 11. Shown in the parking assist image 51 are a recorded image 52 based on the recorded image data G1, and a current image 53 based on the current image data G4. The recorded image 52 is shown on the upper portion of the screen, and shows the road surface and the like in the surroundings of and behind the rear portion of the vehicle C, which are currently in blind spots of the camera 35. The current image 53 is shown on the lower portion of the screen, and is an image that reflects the current state of the vehicle's surroundings. In other words, the parking assist image 51 is shown such that the travel direction (backing up direction) of the vehicle C extends from the upper portion of the display 22 towards the lower portion of the display 22 (a direction x1 in FIG. 11).
- Also, the image processor 20 overlays a portion of a vehicle image 54, based upon the vehicle image data 12a, onto the position within the recorded image 52 that corresponds to the current position of the vehicle C. The rear portion of the vehicle C is depicted by the vehicle image data 12a on the parking assist image 51 shown in FIG. 11. Thus, the driver can confirm the relative position and the relative direction between the vehicle C and a parking target area indicated by white lines 51a that are shown based on the recorded image 52 and the current image 53.
- Furthermore, the image processor 20 superimposes a guide line 55 onto the current image 53. The guide line 55 is formed from the following: a target line 56 that indicates a predetermined distance (e.g. 0.8 meters) rearward from the rear end CB of the vehicle C; a vehicle width line 57 that indicates the width of the vehicle C in the space behind the vehicle C; and a predicted trajectory line 58 that is depicted in accordance with the current steering angle.
- Once the parking assist
image 51 is shown, the control unit 10 determines whether a condition for updating (hereinafter referred to as an update condition) the reference position S1 is applicable (step S2-13). In the present embodiment, the update condition is triggered when the pixel coordinates corresponding to the current position of the vehicle C within the synthesis memory area 19 fall outside of a predetermined range 19a that is set in advance. Here, the fourth capture position P4, as illustrated in FIG. 9, falls within the predetermined range 19a. Therefore, the control unit 10 determines that the update condition is not applicable (NO at step S2-13), and the procedure proceeds to step S2-16. At step S2-16, the control unit 10 determines whether an end trigger has been input. The end trigger in the present embodiment is the shift position signal SPP indicating any shift position other than reverse. If it is judged that the end trigger has not been input (NO at step S2-16), then the procedure returns to step S2-1.
- Upon returning to step S2-1, the control unit 10 and the image processor 20 repeat steps S2-1 to S2-13. As FIG. 12 illustrates, even as the synthesis memory area 19 is overwritten with a plurality of recorded image data G1 while the vehicle C approaches the parking target area, the initially set reference position S1 is maintained as long as the Nth capture position Pn is included within the predetermined range 19a. At such times, at step S2-11, the trimming range 50 around the current position (the Nth capture position Pn) of the vehicle C is extracted, and the trimmed data G2 are rotationally converted to align with the current steering angle data 44, in order to generate the synthesized data G6. As a consequence, at step S2-12, the parking assist image 51 as illustrated in FIG. 13 is shown. By performing rotation processing to align the trimmed data G2 with the current steering angle, the travel direction (backing up direction) of the vehicle C remains aligned with the vertical direction (the direction x1 in FIG. 13) of the display 22 in the parking assist image 51.
- Referring to FIG. 14, if the pixel coordinates corresponding to the position of the vehicle C reach an out-of-range capture position Pe that is outside the predetermined range 19a (YES at step S2-13), then the image processor 20 updates the set reference position S1 (step S2-14). At this time, as FIG. 15 illustrates, the image processor 20 performs rotation processing or shift processing for the out-of-range capture position Pe, which is within the overhead coordinate system (x, y) but outside of the predetermined range 19a, so that it becomes aligned with the pixel coordinates thus far set as the reference position S1. Accordingly, a new reference position S2 is set. In conjunction with such processing, similar rotation processing or shift processing is performed on the pixel values of the recorded image data G1 thus far written into the synthesis memory area 19, as illustrated in FIG. 15 (step S2-15). For example, when the vehicle C is turning, known rotation processing is performed based upon the turning angle of the vehicle C or any other appropriate variable.
- Subsequently, the control unit 10 determines whether the end trigger has been input (step S2-16). Once the end trigger has been input (YES at step S2-16), the control unit 10 ends the processing. If the end trigger has not been input (NO at step S2-16), then the procedure returns to step S2-1.
- The above processing is thus repeated (steps S2-1 to S2-11). After the new reference position S2 has been set, that position is used as the reference when writing the recorded image data G1. Thus, as FIG. 16 illustrates, the reference position S2 is used as a reference to write the pixel values of the recorded image data G1 into pixel areas Rn+1 and Rn+2, which correspond to the imaging range Z of the camera 35 at each time point. At step S2-12, the image processor 20 reads out the trimming range 50 from the recorded image data G1, and displays the parking assist image 51 as illustrated in FIG. 17. Thus, good image quality is achieved for the parking assist image 51 by reducing the number of times rotation processing is performed when accumulating the recorded image data G1. As an example, the white lines 51a indicated in the parking assist image 51 are smooth rather than jagged.
- Following the input of the end trigger (YES at step S2-16), the processing is ended. Furthermore, separate from the processing to depict the parking assist
image 51 as the vehicle C backs up the predetermined distance D1, the image processor 20 also measures an elapsed time, using a timer (not shown) or the like, if the vehicle C stops while backing up. If a predetermined time, set in the range of several seconds to several tens of seconds, elapses, then a general overhead view 59 as illustrated in FIG. 18 is shown on the display 22. At this time the image processor 20 extracts the recorded image data G1 in a predetermined range from the synthesis memory area 19, and shows it on the display 22. The image processor 20 also outputs the vehicle image data 12a and depicts the vehicle image 54. The general overhead view 59 uses more recorded image data G1 than the parking assist image 51. Nevertheless, it can be shown with good image quality, due to suppression of the number of times rotation processing is repeated when accumulating the recorded image data G1 in the image memory 17.
- According to the above embodiment, exemplary effects such as the following can be obtained.
- (1) In the above embodiment, the image processor 20 of the parking assist unit 2 stores the respective recorded image data G1 taken during backing up of the vehicle C in the synthesis memory area 19. The reference position S1 is set in advance, prior to capturing and storing the recorded image data G1. In addition, rotation processing is performed to bring the recorded image data G1 in line with the reference position S1, and the processed data is stored in the corresponding pixel area. Also, the recorded image data G1 stored in the synthesis memory area 19 and the most current image data G4 are used to generate the synthesized data G6. The parking assist image 51 is then output to the display 22 based upon the synthesized data G6. Therefore, the number of times rotation processing is performed prior to the output of the parking assist image 51 can be kept to a minimum of two or three times. Consequently, a decrease in image quality due to rotation processing can be suppressed.
- (2) In the above embodiment, a new reference position is set if the pixel coordinates corresponding to the current position of the vehicle C fall outside of the synthesis memory area 19. Therefore, the limited memory area can be more effectively utilized.
- (3) In the above embodiment, when a new reference position S2 is set, the image processor 20 performs rotation processing or shift processing on the recorded image data G1 thus far accumulated. Therefore, the recorded image data G1 already accumulated can be utilized instead of being discarded.
- (4) In the above embodiment, when generating the synthesized data G6, the image processor 20 performs rotation processing to align the trimmed data G2, which is the result of trimming the recorded image data G1, with the current steering angle data 44. Therefore, the travel direction (backing up direction) of the vehicle C can be shown as constantly in line with the vertical direction (direction x1) of the screen. Consequently, it is possible to show an image that the driver can directly understand with ease.
- (5) In the above embodiment, the image data G that is input is converted into an overhead view so as to generate the recorded image data G1. Therefore, an overhead image that accurately describes the vehicle surroundings can be achieved for the parking assist image 51.
- Note that the present embodiment may be modified as follows.
- The image memory area 18 and the synthesis memory area 19, which act as the image data storing means, are provided in the same image memory 17. However, these may be provided in separate memories.
- In the above embodiment, the steering angle data 44 is associated with the recorded image data G1. However, an absolute orientation or a relative orientation may also be used.
- In the above embodiment, the general overhead image 59 is shown subsequent to showing the parking assist image 51. However, showing the general overhead image 59 is not required. Alternatively, a general overhead image 59 may be shown prior to the parking assist image 51, and/or intermittently between showings of the parking assist image 51.
- In the above embodiment, the guide line 55 is formed from the target line 56, the vehicle width extended line 57, and the estimated trajectory line 58. However, the guide line 55 may be formed from any single one of these, and other guide lines may also be shown. Alternatively, the guide line 55 need not be depicted on the parking assist image 51.
- In the above embodiment, the condition for updating the reference position is that the pixel coordinates corresponding to the vehicle's position fall outside the synthesis memory area 19. However, the reference position may instead be changed when the cumulative backing-up distance of the vehicle C from the reference position S1 reaches a predetermined distance.
- In the above embodiment, when a new reference position S2 is set, rotation processing or shift processing is performed on the recorded image data G1 accumulated thus far. However, when the reference position S2 is newly set, the recorded image data G1 accumulated thus far may be discarded. Alternatively, a portion of the accumulated recorded image data G1 may be stored in another memory area.
- In the above embodiment, a plurality of recorded image data G1 and the current image data G4, each subjected to overhead view conversion, are joined in a continuous manner to generate the synthesized data G6. However, an image other than an overhead image may be shown, and other synthesis methods may be used, such as generating the synthesized data G6 with the viewpoint set near the driver's seat or the rear wheel axle.
- In the above embodiment, the recorded image data G1 is generated based upon a predetermined distance traveled. However, the recorded image data G1 may be generated based upon a predetermined time and then accumulated in the image memory 17. In this case as well, the recorded image data G1 is stored in a corresponding area within the synthesis memory area 19.
image processor 20 performs a geometric conversion (rotation processing and shift processing) as image processing. However, other image processing may be performed. For example, contrast adjusting or the like may be performed using the recorded image data G1 taken at the reference position S1 as a reference to correct the color of subsequent recorded image data G1. In such case, the number of times image processing is performed for the newly inputted respective image data can be reduced. Therefore, it is also possible to lessen the image processing load, in addition to suppressing lowered image quality. - In the above embodiment, rotation processing and shift processing are specified as geometric conversions performed by the
image processor 20. However, other conversion processing may be performed. - In the above embodiment, the
camera 35 is installed on or near the rear end CB of the vehicle C. However, the camera 35 may be installed on another portion, such as a front end or a side end of the vehicle. In such cases, the parking assist image may be shown at times other than when the vehicle C is backing up. Furthermore, the camera 35 may be utilized to show an image on the display in order to assist with driving operations other than parking. A plurality of cameras may also be installed on the vehicle C and used to generate the parking assist image.
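Effects (1) and (2) above describe placing each captured frame into the synthesis memory with a single rotation and shift computed against the fixed reference position, rather than against the previous frame. A minimal sketch of that placement computation, assuming illustrative pose tuples (x, y, heading in radians) and a pixels-per-meter scale that the embodiment does not specify:

```python
import math

def placement_in_buffer(ref_pose, cur_pose, px_per_meter=20.0):
    """Compute the single rotation and shift that align a newly captured
    overhead frame with the reference position before it is stored in
    the synthesis memory area.  Poses are (x, y, heading_rad); the pose
    format and the scale are illustrative assumptions."""
    dx = cur_pose[0] - ref_pose[0]
    dy = cur_pose[1] - ref_pose[1]
    # Express the displacement in the reference frame's axes.
    c, s = math.cos(-ref_pose[2]), math.sin(-ref_pose[2])
    shift_px = ((dx * c - dy * s) * px_per_meter,
                (dx * s + dy * c) * px_per_meter)
    # One rotation per frame, applied once before storage.
    rot_rad = cur_pose[2] - ref_pose[2]
    return rot_rad, shift_px
```

Because each frame is rotated once against the reference instead of being re-rotated cumulatively, the repetition count stays low, which is what effect (1) attributes the preserved image quality to.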
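Effect (4) rotates the trimmed data so that the backing-up direction stays aligned with the vertical axis of the screen. A nearest-neighbour rotation is one possible stand-in for that second rotation step; the embodiment does not prescribe an interpolation method, so this is only a sketch:

```python
import numpy as np

def rotate_nn(img, angle_rad):
    """Rotate a 2-D array about its centre using nearest-neighbour
    sampling; output pixels whose source falls outside the image
    remain zero."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    # Inverse mapping: for each output pixel, find its source pixel.
    sx = c * (xs - cx) + s * (ys - cy) + cx
    sy = -s * (xs - cx) + c * (ys - cy) + cy
    out = np.zeros_like(img)
    ix = np.rint(sx).astype(int)
    iy = np.rint(sy).astype(int)
    ok = (ix >= 0) & (ix < w) & (iy >= 0) & (iy < h)
    out[ok] = img[iy[ok], ix[ok]]
    return out
```

In practice the angle would come from the current steering angle data 44; the sign convention here is an assumption.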
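The two reference-update triggers described above (the vehicle's pixel coordinates leaving the synthesis memory area, or the cumulative backing-up distance reaching a predetermined value) combine into a simple predicate. The threshold value below is illustrative, not taken from the embodiment:

```python
def needs_new_reference(pixel_xy, buffer_shape, travelled_m,
                        max_travel_m=10.0):
    """Return True when a new reference position should be set.

    pixel_xy      -- the vehicle's coordinates in the synthesis buffer
    buffer_shape  -- (height, width) of the synthesis memory area
    travelled_m   -- cumulative backing-up distance since the reference
    max_travel_m  -- illustrative distance threshold (an assumption)
    """
    x, y = pixel_xy
    h, w = buffer_shape
    outside = not (0 <= x < w and 0 <= y < h)
    return outside or travelled_m >= max_travel_m
```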
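The contrast-adjustment modification, which corrects the colour of later frames using the frame taken at the reference position S1, names no specific correction method; one plausible sketch is simple mean and standard-deviation matching against the reference frame:

```python
import numpy as np

def match_reference_tone(frame, ref):
    """Shift a later frame's brightness and contrast toward those of
    the reference frame (simple moment matching; the method itself is
    an assumption, not taken from the embodiment)."""
    f = frame.astype(float)
    scale = ref.std() / (f.std() + 1e-9)  # avoid division by zero
    out = (f - f.mean()) * scale + ref.mean()
    return np.clip(out, 0, 255).astype(np.uint8)
```

Correcting every frame against the one fixed reference, rather than against its predecessor, mirrors the embodiment's point that each new frame then needs only one correction pass.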
Claims (17)
1. A driving support method for supporting a driving operation of a vehicle, comprising:
inputting image data from an imaging device provided in the vehicle, and storing recorded data to serve as a reference in an image data storing means;
performing image processing for newly input image data based upon relative data related to the recorded data serving as a reference and the newly input image data, and generating new recorded data therefrom; and
generating driving support image data from the recorded data subjected to image processing, and outputting such driving support image data to a display means.
2. A driving support device mounted in a vehicle, comprising:
a position detecting means for detecting a position of the vehicle;
a reference position setting means for setting a reference position of the vehicle;
an image data input means for inputting image data from an imaging device provided in the vehicle;
a first image processing means for performing a geometric conversion with respect to the respective image data, based on a relative position with respect to the reference position of the vehicle during imaging, and for generating recorded data;
an image data storing means for storing the recorded data in a corresponding area within a synthesis memory area;
a second image processing means for generating driving support image data using the respective recorded data; and
an output control means for outputting the driving support image data to a display means.
3. The driving support device according to claim 2, wherein
the first image processing means performs rotation processing on the respective image data based on a relative position with respect to the reference position of the vehicle.
4. The driving support device according to claim 2, wherein
the reference position setting means sets a new reference position if a relative position with respect to the reference position falls outside of a predetermined range.
5. The driving support device according to claim 2, wherein
the reference position setting means sets a new reference position if the vehicle moves a predetermined distance from the reference position.
6. The driving support device according to claim 4, wherein
the first image processing means performs a geometric conversion for the accumulated recorded data, if the reference position setting means sets a new reference position, in order to align the recorded data with the newly set reference position.
7. The driving support device according to claim 5, wherein
the first image processing means performs a geometric conversion for the accumulated recorded data, if the reference position setting means sets a new reference position, in order to align the recorded data with the newly set reference position.
8. The driving support device according to claim 2, wherein
the second image processing means performs rotation processing for the recorded data when the driving support image data is generated, in order to align the recorded data with the position of the vehicle at such time.
9. The driving support device according to claim 2, further comprising:
a third image processing means for generating the recorded data, wherein the image data inputted is converted into an overhead view.
10. A driving support device mounted in a vehicle, comprising:
a position detector for detecting a position of the vehicle;
a reference position setting device for setting a reference position of the vehicle;
an image data input device for inputting image data from an imaging device provided in the vehicle;
a first image processor for performing a geometric conversion with respect to the respective image data, based on a relative position with respect to the reference position of the vehicle during imaging, and for generating recorded data;
an image data storing device for storing the recorded data in a corresponding area within a synthesis memory area;
a second image processor for generating driving support image data using the respective recorded data; and
an output controller for outputting the driving support image data to a display device.
11. The driving support device according to claim 10, wherein
the first image processor performs rotation processing on the respective image data based on a relative position with respect to the reference position of the vehicle.
12. The driving support device according to claim 10, wherein
the reference position setting device sets a new reference position if a relative position with respect to the reference position falls outside of a predetermined range.
13. The driving support device according to claim 10, wherein
the reference position setting device sets a new reference position if the vehicle moves a predetermined distance from the reference position.
14. The driving support device according to claim 12, wherein
the first image processor performs a geometric conversion for the accumulated recorded data, if the reference position setting device sets a new reference position, in order to align the recorded data with the newly set reference position.
15. The driving support device according to claim 13, wherein
the first image processor performs a geometric conversion for the accumulated recorded data, if the reference position setting device sets a new reference position, in order to align the recorded data with the newly set reference position.
16. The driving support device according to claim 10, wherein
the second image processor performs rotation processing for the recorded data when the driving support image data is generated, in order to align the recorded data with the position of the vehicle at such time.
17. The driving support device according to claim 10, further comprising:
a third image processor for generating the recorded data, wherein the image data inputted is converted into an overhead view.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006094065A JP4661658B2 (en) | 2006-03-30 | 2006-03-30 | Driving support method, driving support device, and driving support program |
JP2006-094065 | 2006-03-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
US20070239357A1 true US20070239357A1 (en) | 2007-10-11 |
Family
ID=38192679
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/723,454 Abandoned US20070239357A1 (en) | 2006-03-30 | 2007-03-20 | Driving support method and driving support device |
Country Status (3)
Country | Link |
---|---|
US (1) | US20070239357A1 (en) |
EP (1) | EP1839999B1 (en) |
JP (1) | JP4661658B2 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7797615B2 (en) | 2005-07-07 | 2010-09-14 | Acer Incorporated | Utilizing variable-length inputs in an inter-sequence permutation turbo code system |
DE102009029436A1 (en) * | 2009-09-14 | 2011-03-24 | Robert Bosch Gmbh | Procedure for parking a vehicle |
DE102014223941A1 (en) * | 2014-11-25 | 2016-05-25 | Robert Bosch Gmbh | Method for marking camera images of a parking maneuver assistant |
CN107615757B (en) * | 2015-05-29 | 2018-10-09 | 日产自动车株式会社 | Information presentation system |
JP6740991B2 (en) * | 2017-11-10 | 2020-08-19 | 株式会社デンソー | Display processor |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5790403A (en) * | 1994-07-12 | 1998-08-04 | Honda Giken Kogyo Kabushiki Kaisha | Lane image processing system for vehicle |
US20010026317A1 (en) * | 2000-02-29 | 2001-10-04 | Toshiaki Kakinami | Assistant apparatus and method for a vehicle in reverse motion |
US20010030688A1 (en) * | 1999-12-28 | 2001-10-18 | Goro Asahi | Steering assist device |
US20030108222A1 (en) * | 2001-12-12 | 2003-06-12 | Kabushikikaisha Equos Research | Image processing system for vehicle |
US20030165255A1 (en) * | 2001-06-13 | 2003-09-04 | Hirohiko Yanagawa | Peripheral image processor of vehicle and recording medium |
US20050031169A1 (en) * | 2003-08-09 | 2005-02-10 | Alan Shulman | Birds eye view virtual imaging for real time composited wide field of view |
US7088262B2 (en) * | 2002-10-25 | 2006-08-08 | Donnelly Hohe Gmbh & Co. Kg | Method of operating a display system in a vehicle for finding a parking place |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3183284B2 (en) * | 1999-01-19 | 2001-07-09 | 株式会社豊田自動織機製作所 | Steering support device for reversing a vehicle |
EP1465135A1 (en) * | 2000-04-05 | 2004-10-06 | Matsushita Electric Industrial Co., Ltd. | Driving operation assisting method and system |
JP3778849B2 (en) * | 2001-12-18 | 2006-05-24 | 株式会社デンソー | Vehicle periphery image processing apparatus and recording medium |
JP2005001570A (en) * | 2003-06-12 | 2005-01-06 | Equos Research Co Ltd | Parking support device |
2006
- 2006-03-30 JP JP2006094065A patent/JP4661658B2/en not_active Expired - Fee Related
2007
- 2007-03-14 EP EP07104080A patent/EP1839999B1/en not_active Expired - Fee Related
- 2007-03-20 US US11/723,454 patent/US20070239357A1/en not_active Abandoned
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080158011A1 (en) * | 2006-12-28 | 2008-07-03 | Aisin Seiki Kabushiki Kaisha | Parking assist apparatus |
US7940193B2 (en) * | 2006-12-28 | 2011-05-10 | Aisin Seiki Kabushiki Kaisha | Parking assist apparatus |
US20100066825A1 (en) * | 2007-05-30 | 2010-03-18 | Aisin Seiki Kabushiki Kaisha | Parking assistance device |
US8049778B2 (en) * | 2007-05-30 | 2011-11-01 | Aisin Seiki Kabushiki Kaisha | Parking assistance device |
US20090102922A1 (en) * | 2007-10-23 | 2009-04-23 | Haruo Ito | On-vehicle image pickup apparatus |
US20090102921A1 (en) * | 2007-10-23 | 2009-04-23 | Haruo Ito | Vehicle-mounted image capturing apparatus |
US8130270B2 (en) * | 2007-10-23 | 2012-03-06 | Alpine Electronics, Inc. | Vehicle-mounted image capturing apparatus |
US8477191B2 (en) | 2007-10-23 | 2013-07-02 | Alpine Electronics, Inc. | On-vehicle image pickup apparatus |
US20100215218A1 (en) * | 2007-10-30 | 2010-08-26 | Katsuhiko Takahashi | Road marking image processing device, road marking image processing method, and program |
US8503728B2 (en) * | 2007-10-30 | 2013-08-06 | Nec Corporation | Road marking image processing device, road marking image processing method, and program |
DE102009008113B4 (en) | 2008-02-14 | 2022-01-13 | Mando Mobility Solutions Corp. | Method and device for detecting a target parking position using two reference points and a parking assistance system using the same |
US20110069169A1 (en) * | 2008-05-08 | 2011-03-24 | Aisin Seiki Kabushiki Kaisha | Vehicle peripheral display device |
US20110181724A1 (en) * | 2008-05-08 | 2011-07-28 | Aisin Seiki Kabushiki Kaisha | Vehicle peripheral display device |
US10744942B2 (en) | 2008-05-08 | 2020-08-18 | Aisin Seiki Kabushiki Kaisha | Vehicle peripheral display device |
US10315568B2 (en) * | 2008-05-08 | 2019-06-11 | Aisin Seiki Kabushiki Kaisha | Vehicle peripheral display device |
US9308863B2 (en) * | 2010-03-26 | 2016-04-12 | Aisin Seiki Kabushiki Kaisha | Vehicle peripheral observation device |
US10479275B2 (en) * | 2010-03-26 | 2019-11-19 | Aisin Seiki Kabushiki Kaisha | Vehicle peripheral observation device |
US20130010117A1 (en) * | 2010-03-26 | 2013-01-10 | Aisin Seiki Kabushiki Kaisha | Vehicle peripheral observation device |
US20180178725A1 (en) * | 2010-03-26 | 2018-06-28 | Aisin Seiki Kabushiki Kaisha | Vehicle peripheral observation device |
US9919650B2 (en) | 2010-03-26 | 2018-03-20 | Aisin Seiki Kabushiki Kaisha | Vehicle peripheral observation device |
US9881412B2 (en) * | 2010-04-12 | 2018-01-30 | Sumitomo Heavy Industries, Ltd. | Image generation device and operation support system |
US20160189420A1 (en) * | 2010-04-12 | 2016-06-30 | Sumitomo Heavy Industries, Ltd. | Image generation device and operation support system |
US20110282580A1 (en) * | 2010-05-11 | 2011-11-17 | Honeywell International Inc. | Method of image based navigation for precision guidance and landing |
US9519832B2 (en) * | 2011-05-09 | 2016-12-13 | Lg Innotek Co., Ltd. | Parking camera system and method of driving the same |
KR101803973B1 (en) * | 2011-05-09 | 2017-12-01 | 엘지이노텍 주식회사 | Parking camera system and method of driving the same |
TWI561421B (en) * | 2011-05-09 | 2016-12-11 | Lg Innotek Co Ltd | Parking camera system and method of driving the same |
US20160350602A1 (en) * | 2011-05-09 | 2016-12-01 | Lg Innotek Co., Ltd. | Parking camera system and method of driving the same |
US10074018B2 (en) * | 2011-05-09 | 2018-09-11 | Lg Innotek Co., Ltd. | Parking camera system and method of driving the same |
US20140085474A1 (en) * | 2011-05-09 | 2014-03-27 | Lg Innotek Co., Ltd. | Parking camera system and method of driving the same |
US9633566B2 (en) * | 2011-12-20 | 2017-04-25 | Continental Automotive Systems, Inc. | Trailer backing path prediction using GPS and camera images |
US20150232125A1 (en) * | 2014-02-17 | 2015-08-20 | Caterpillar Inc. | Parking assistance system |
US10796177B1 (en) * | 2019-05-15 | 2020-10-06 | Toyota Motor Engineering & Manufacturing North America, Inc. | Systems and methods for controlling the playback of video in a vehicle using timers |
Also Published As
Publication number | Publication date |
---|---|
EP1839999A2 (en) | 2007-10-03 |
EP1839999A3 (en) | 2008-10-22 |
EP1839999B1 (en) | 2012-06-13 |
JP4661658B2 (en) | 2011-03-30 |
JP2007269060A (en) | 2007-10-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20070239357A1 (en) | Driving support method and driving support device | |
JP4561479B2 (en) | Parking support method and parking support device | |
CN103140377B (en) | For showing method and the driver assistance system of image on the display apparatus | |
US7482949B2 (en) | Parking assist method and a parking assist apparatus | |
JP4412380B2 (en) | Driving support device, driving support method, and computer program | |
JP4321543B2 (en) | Vehicle periphery monitoring device | |
US7212653B2 (en) | Image processing system for vehicle | |
US20070057816A1 (en) | Parking assist method and parking assist apparatus | |
JP5035321B2 (en) | Vehicle periphery display control device and program for vehicle periphery display control device | |
JP4696691B2 (en) | Parking support method and parking support device | |
US7680570B2 (en) | Parking assist devices, methods, and programs | |
US20160207526A1 (en) | Vehicle-side method and vehicle-side device for detecting and displaying parking spaces for a vehicle | |
US20060119472A1 (en) | Driving support apparatus and driving support method | |
JP2006298115A (en) | Driving-support method and driving-support device | |
JP2007266930A (en) | Driving assist method and driving support apparatus | |
JP2008279875A (en) | Parking support device | |
JP2007226300A (en) | Driving support method and driving support device | |
JP7426174B2 (en) | Vehicle surrounding image display system and vehicle surrounding image display method | |
JP2007221199A (en) | On-vehicle camera display device and image processing unit | |
JP4561512B2 (en) | Parking support method and parking support device | |
JP2009100261A (en) | Vehicle surrounding image supplying device, and method of correcting wide-angle image | |
JP2007096496A (en) | Vehicle periphery display system | |
JP5083137B2 (en) | Driving assistance device | |
JP2007158642A (en) | Car-periphery image provider | |
JP2006027334A (en) | Drive assisting device and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: AISIN AW CO., LTD., JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MORI, TOSHIHIRO;KUBOTA, TOMOKI;REEL/FRAME:019097/0162 Effective date: 20070313 |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |