JP2006327498A - Parking support method and parking support device


Info

Publication number
JP2006327498A
Authority
JP
Japan
Prior art keywords
image data
vehicle
position
image
past
Prior art date
Legal status
Granted
Application number
JP2005156164A
Other languages
Japanese (ja)
Other versions
JP4696691B2 (en)
Inventor
Tomoki Kubota
Toshihiro Mori
Teruhiro Nakajima
Seiji Sakakibara
Original Assignee
Aisin AW Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Aisin AW Co., Ltd.
Priority to JP2005156164A
Publication of JP2006327498A
Application granted
Publication of JP4696691B2
Status: Expired - Fee Related


Abstract

PROBLEM TO BE SOLVED: To provide a parking support method and a parking support device capable of displaying, using past images, a screen on which the relative position between a vehicle and a parking target area can easily be recognized.
SOLUTION: A navigation device 1 includes an image data acquisition unit 12 that acquires, from a camera 21 provided on the vehicle, image data obtained by imaging the periphery of the vehicle, and an image memory 6 that stores the image data as past image data in association with the position of the vehicle at which the image data was captured. Under the control of the control unit 3, the read past image data is converted to a virtual viewpoint set above the rear wheel axle of the vehicle at its current position, and an image based on the viewpoint-converted past image data and an image based on image data captured at the current position are displayed on the display 7.
[Selection] Figure 1

Description

  The present invention relates to a parking assistance method and a parking assistance device.

Conventionally, as a device for assisting the driving operation when parking an automobile, a device is known that acquires image data from a camera attached to the rear end of the vehicle and outputs the image data to a display arranged in the vehicle interior (see, for example, Patent Document 1). Such a camera is fixed substantially at the center of the rear end of the vehicle with its optical axis directed downward, and the display shows the road surface for several meters behind the vehicle.

As such an apparatus, an apparatus has been proposed that stores the image data acquired from the camera in a memory and displays a composite image using image data captured in the past (see Patent Document 2). With this device, an area that does not fall within the current field of view of the camera can be displayed in a pseudo manner, so that the relative position between the vehicle and the parking target area can easily be confirmed.
[Patent Document 1] JP 2000-280823 A
[Patent Document 2] JP 2001-218197 A

However, while an overhead image produced by such viewpoint conversion has the advantage that the positions of the entire vehicle body and the parking target area are easy to grasp as a whole, a process for joining the converted images is required in addition to the viewpoint conversion process. Moreover, since the composite screen must be updated sequentially as the vehicle moves, a heavy load is placed on the apparatus. In addition, when an overhead image viewed from directly above the vehicle is displayed, it is expected that, as the vehicle approaches a wheel stopper, the wheel stopper is hidden by the vehicle body and the timing of contact between the wheel stopper and the rear wheels becomes difficult to grasp.

On the other hand, if viewpoint conversion is omitted in order to reduce the load on the apparatus, the screen becomes difficult to understand intuitively, and the timing of contact between the wheel stopper and the rear wheels and the relative position between the vehicle and the parking target area may be hard to grasp. Accordingly, there is a demand for a display format for a screen using past images that allows the driver to easily grasp the relative position between the vehicle and the parking target area.

The present invention has been made in view of the above problems, and its object is to provide a parking support method and a parking support device capable of displaying, using past image data, a screen on which the relative position between the vehicle and the parking target area can easily be seen.

In order to solve the above problem, the invention according to claim 1 is a parking support method for assisting a parking operation using position specifying means for specifying the position of a vehicle, image data acquisition means for acquiring image data from an imaging device provided on the vehicle, image data storage means for storing the image data, and display means for outputting the image data. The gist of the method is that it comprises: a step of storing the acquired image data in the image data storage means as past image data in association with the position of the vehicle at which the image data was captured; a step of converting the viewpoint of the past image data read from the image data storage means to a viewpoint set at the wheels at the current position or above the wheels; and a step of displaying on the display means an image based on the viewpoint-converted past image data and an image based on image data captured at the current position.

The invention according to claim 2 is a parking support device comprising: position specifying means for specifying the position of a vehicle; image data acquisition means for acquiring, from an imaging device provided on the vehicle, image data obtained by imaging the periphery of the vehicle; image data storage means for storing the image data as past image data in association with the position of the vehicle at which the image data was captured; display means for displaying an image based on the image data; reading means for reading the past image data from the image data storage means; viewpoint conversion means for converting the read past image data to a viewpoint set at the wheels at the current position or above the wheels; and output control means for displaying on the display means an image based on the viewpoint-converted past image data and an image based on image data captured at the current position.

The invention according to claim 3 is the parking support device according to claim 2, wherein the gist is that the viewpoint conversion means converts the past image data into image data looking down at the rear wheels and their periphery from a viewpoint set above the current rear wheel axle.

The invention according to claim 4 is the parking support device according to claim 2 or 3, wherein the gist is that it further comprises rotation conversion means for rotationally converting the past image data based on steering angle information acquired from a sensor provided in the steering device of the vehicle.

The invention according to claim 5 is the parking support device according to any one of claims 2 to 4, wherein the gist is that it further comprises vehicle position display means for displaying an index indicating the current position of the vehicle on the screen that displays the image based on the past image data and the image based on the image data captured at the current position.

The invention according to claim 6 is the parking support device according to any one of claims 2 to 5, wherein the gist is that, each time the vehicle moving backward travels a predetermined image update distance, the reading means reads from the image data storage means the past image data associated with the position that is the image update distance behind the current position, the viewpoint conversion means converts the read past image data to the viewpoint set at the wheels at the current position or above the wheels, and the output control means displays on the display means the image based on the viewpoint-converted past image data and the image based on the image data captured at the current position of the vehicle.

The invention according to claim 7 is the parking support device according to any one of claims 2 to 6, wherein the gist is that the output control means displays continuously on the display means a first area of the viewpoint-converted past image data, which includes the rear wheels of the vehicle at the current position and the periphery of the rear wheels, and a second area of the image data acquired at the current position, which is continuous with the first area.

According to the invention of claim 1, the image data acquired from the imaging device is stored in the image data storage means. The past image data read from the image data storage means is converted to a viewpoint set above the wheels at the current position, and the viewpoint-converted past image data and the image data at the current position are displayed. As a result, the past image based on the past image data is an image concentrated on the rear wheels and their surroundings, so that when, for example, the driver reaches the second half of the parking operation and wants to check the relative position between the rear wheels and a wheel stopper or a white line, the situation around the rear wheels can be grasped in detail. It is therefore possible to soften the impact caused by contact between the wheel stopper and the rear wheels, and to park the vehicle so that the vehicle body does not protrude from the parking target area.

According to the invention of claim 2, the parking support device includes the position specifying means, the image data acquisition means for acquiring image data from the imaging device, and the display means. It further includes the viewpoint conversion means, which converts the past image data read from the image data storage means to a viewpoint set at the wheels at the current position or above the wheels, and the output control means, which outputs the viewpoint-converted past image data together with the image data at the current position. The viewpoint-converted past image is therefore an image concentrated on the rear wheels and their surroundings, so that when, for example, the driver reaches the second half of the parking operation and wants to check the relative position between the rear wheels and a wheel stopper or a white line, the situation around the rear wheels can be grasped in detail. It is thus possible to soften the impact caused by contact between the wheel stopper and the rear wheels, and to park the vehicle so that the vehicle body does not protrude from the parking target area.

According to the invention of claim 3, since the viewpoint conversion means converts the past image data into image data viewed from a viewpoint set above the rear wheel axle, an easy-to-understand image centered on each rear wheel can be displayed.

According to the invention of claim 4, the parking support device rotationally converts the past image data based on the steering angle information acquired from the sensor provided in the steering device. The continuity between the current image and the past image can therefore be improved.

According to the invention of claim 5, since the current position of the vehicle is displayed on the screen, a screen on which the relative position between the vehicle and the parking target area can easily be grasped can be displayed.
According to the invention of claim 6, each time the vehicle moves by the image update distance, the past image data captured at the position that distance behind is read and viewpoint-converted, and the viewpoint-converted past image data and the image data captured at the current position are output. Since the past image data is thus converted to a viewpoint corresponding to the current position every time the image is updated, a screen concentrated on the rear wheels and their surroundings can always be displayed.

According to the invention of claim 7, among the past image data, the first area displaying the rear wheels of the current vehicle and the area around them, and, among the current image data, the second area continuous with the first area, are output. The past image can therefore be an image concentrated on the area around the rear wheels, and the current image an image continuous with that area. Furthermore, since the first area and the second area are displayed side by side, it is easy to confirm which area shows the past image.

Hereinafter, an embodiment in which the parking support method and the parking support device of the present invention are embodied in a navigation device mounted on a vehicle (automobile) will be described with reference to the drawings. FIG. 1 is a block diagram illustrating the configuration of the navigation device 1.

  As shown in FIG. 1, the navigation device 1 includes a control device 2 that constitutes a parking assistance system. The control device 2 includes a control unit 3 that performs main control, a main memory 4, a ROM 5, and an image memory 6 as image data storage means. The control unit 3 includes a CPU and performs various processes according to various programs such as a route guidance program and a parking assistance program stored in the ROM 5. The control unit 3 constitutes a position specifying unit, a reading unit, a viewpoint conversion unit, an output control unit, a rotation conversion unit, and a vehicle position display unit. The main memory 4 is a RAM and stores various variables for parking assistance.

The ROM 5 stores contour drawing data 5a. The contour drawing data 5a is data for outputting the contour of the vehicle C (see FIG. 3) on which the navigation device 1 is mounted to the display 7 serving as the display means. The contour drawing data 5a is set according to the width and length of the vehicle C, so that a small contour is drawn on the screen when the vehicle C is a small vehicle and a large contour is drawn when it is a large vehicle.

The control device 2 also includes a GPS receiving unit 8 that constitutes the position specifying means. The GPS receiving unit 8 receives radio waves from GPS satellites, and the control unit 3 periodically calculates the absolute position of the vehicle C, such as latitude, longitude, and altitude, based on the position detection signal input from the GPS receiving unit 8.

  Furthermore, the control device 2 includes a vehicle-side interface unit (vehicle-side I / F unit 9). The control unit 3 inputs various data from a vehicle ECU (electronic control unit) 20 provided in the vehicle C via the vehicle-side I / F unit 9. The vehicle ECU 20 receives a shift position signal SP, a steering sensor signal ST as steering angle information, a vehicle speed signal Vp, and a direction detection signal GYR from various sensors and control circuits provided in the vehicle C. The shift position signal SP is a signal output from a control circuit (not shown) for controlling the transmission, for example, and indicates the current shift position. The steering sensor signal ST is a signal output from a steering sensor (not shown) provided in the steering device and indicates the current steering angle of the vehicle C. The vehicle speed signal Vp is a signal output from a vehicle speed sensor (not shown) and indicates the traveling speed of the vehicle C. The direction detection signal GYR indicates the direction of the vehicle C and is output from a gyro sensor provided in the vehicle C.

The control unit 3 calculates the relative distance and relative direction from a reference position based on the vehicle speed signal Vp and the direction detection signal GYR input via the vehicle-side I/F unit 9, and generates autonomous navigation data indicating the vehicle position. It then corrects the absolute position of the vehicle C based on the GPS receiving unit 8 with the autonomous navigation data to determine the own vehicle position. In the present embodiment, as shown in FIG. 3, the center point C3 of the rear wheel axle C2 is calculated as the vehicle position.
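As a rough illustration of this position determination (a minimal sketch with hypothetical names; the patent does not specify the fusion method), the following accumulates the vehicle speed signal Vp and the direction detection signal GYR into a dead-reckoned position and re-anchors it with each GPS fix:

```python
import math

class OwnVehiclePosition:
    """Dead-reckoning sketch: GPS fixes blended with speed/gyro integration."""

    def __init__(self, x=0.0, y=0.0, heading=0.0):
        self.x, self.y = x, y      # rear-axle center point C3, in meters
        self.heading = heading     # radians, from the direction signal GYR

    def on_vehicle_signals(self, speed_mps, heading_rad, dt_s):
        # Integrate relative distance and direction (autonomous navigation data).
        self.heading = heading_rad
        d = speed_mps * dt_s
        self.x += d * math.cos(self.heading)
        self.y += d * math.sin(self.heading)

    def on_gps_fix(self, gps_x, gps_y, weight=0.3):
        # Correct the dead-reckoned position toward the absolute GPS position.
        self.x += weight * (gps_x - self.x)
        self.y += weight * (gps_y - self.y)
```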

  Further, the control unit 3 stores or updates the shift position NSW and the current steering angle STR according to the parking assistance program based on the shift position signal SP and the steering sensor signal ST input via the vehicle-side I / F unit 9. The shift position NSW is a variable indicating the current shift position of the vehicle C. The current steering angle STR is a variable indicating the current steering angle of the vehicle C.

The control device 2 includes a map data storage unit 10. The map data storage unit 10 stores route data 11a and map drawing data 11b. The route data 11a includes node data and link data; the control unit 3 uses the route data 11a when performing route guidance processing to a destination, searching for a route and providing guidance in accordance with the route guidance program stored in the ROM 5. The control unit 3 also collates the vehicle position calculated as described above and the travel locus with the route data 11a, and places the vehicle position on an appropriate road, thereby improving the accuracy of the vehicle position. The map drawing data 11b is data for displaying a map, from a wide area to a narrow area, on the display 7, and is associated with the route data 11a.

The control device 2 also includes an image data acquisition unit 12 serving as the image data acquisition means. The image data acquisition unit 12 drives and controls a back monitor camera (hereinafter simply referred to as the camera 21) serving as the imaging device provided on the vehicle C, and sequentially acquires image data G each time the vehicle C moves a predetermined distance.

As shown in FIG. 2, the camera 21 is attached to the approximate center of the rear end of the vehicle C, such as on the back door, with the optical axis AX directed downward. The camera 21 is a digital camera that captures a color image, and includes an optical mechanism comprising a wide-angle lens, a mirror, and the like, and a CCD image sensor (none of which are shown). As shown in FIG. 3, the camera 21 has a rear field of view of, for example, 140 degrees to the left and right, and its visible range covers about 3 meters or more behind the vehicle, including the rear end of the vehicle C.

The image data G generated by the camera 21 is digital data that has been analog-to-digital converted; under the control of the control unit 3, the image data acquisition unit 12 drives the camera 21 to start acquiring the image data G. Since the camera 21 uses a wide-angle lens, the acquired image data G exhibits so-called distortion aberration, in which the periphery of the image is distorted.

When acquiring the image data G from the image data acquisition unit 12, the control unit 3 attaches the position at which the image data G was captured to the image data G as an index DM (header), as schematically shown in FIG. 4, and stores it in the image memory 6. That is, not all image data G captured by the camera 21 is stored in the image memory 6; rather, image data G is stored each time the vehicle C moves a predetermined distance. The attached index DM may be absolute coordinates or relative coordinates from a reference position. Further, the control unit 3 updates the current steering angle STR based on the steering sensor signal ST input from the vehicle ECU 20, and attaches the current steering angle STR to the image data G as past steering angle data 14.
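A minimal sketch of this storage scheme (hypothetical structure; it assumes the index DM is quantized to the storage interval so that positions can serve as exact dictionary keys):

```python
from dataclasses import dataclass

@dataclass
class StoredFrame:
    image: object          # image data G (e.g., a numpy array)
    steering_angle: float  # past steering angle data 14

class ImageMemory:
    """Past image data G2 keyed by the index DM (capture position)."""

    def __init__(self, grid_mm=500):
        self.grid_mm = grid_mm   # storage interval; matches Dx in the embodiment
        self.frames = {}

    def key(self, position_mm):
        # Quantize the along-path position so lookups are exact.
        return round(position_mm / self.grid_mm)

    def store(self, position_mm, image, steering_angle):
        self.frames[self.key(position_mm)] = StoredFrame(image, steering_angle)

    def lookup(self, position_mm):
        return self.frames.get(self.key(position_mm))
```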

As shown in FIG. 1, the navigation device 1 includes an output unit 13 comprising a display 7 and a speaker 18. The display 7, which displays the image data G, is a touch panel. When the vehicle C moves forward, the map drawing data 11b is output under the drive control of the control unit 3 to display the map screen 7a. When the vehicle C moves backward, the rear imaging screen 30 (see FIG. 5), which images the area behind the vehicle C, is displayed under the drive control of the control unit 3. Furthermore, at a predetermined timing, a screen displaying the past image and the current image is output using the image data G stored in the image memory 6 and the image data G captured at the current position.

When the user operates the touch panel or the operation switch 15 provided adjacent to the display 7, a user input interface unit (hereinafter, user input I/F unit 16) provided in the control device 2 outputs an input signal corresponding to the input operation to the control unit 3.

  In addition, the control device 2 includes an audio output unit 17. The voice output unit 17 has a voice file (not shown), and outputs a guidance voice and a guidance sound from the speaker 18 included in the navigation device 1 by the drive control of the control unit 3.

  Furthermore, the control device 2 includes a drawing processing unit 19 that constitutes a reading unit, a viewpoint conversion unit, an output control unit, a rotation conversion unit, and a vehicle position display unit. The drawing processing unit 19 includes a calculation unit that performs image processing, a VRAM (not shown) that temporarily stores output data to be displayed on the display 7, and the like. The drawing processing unit 19 corrects distortion aberration of the image data G under the control of the control unit 3.

When the vehicle C moves backward and the drawing processing unit 19 receives a control signal output from the control unit 3, the drawing processing unit 19 receives the image data G captured at the current position from the image data acquisition unit 12, outputs the acquired image data G to the display 7, and displays the rear imaging screen 30 as shown in FIG. 5.

On the rear imaging screen 30, a background image 31 obtained by imaging the area behind the vehicle C is displayed. On the background image 31, a guide line L composed of a vehicle width extension line 32, indicated by a solid line in FIG. 5, and a predicted trajectory line 33, indicated by a broken line, is displayed superimposed. The vehicle width extension line 32 is an index obtained by extending the vehicle width of the vehicle C rearward. The predicted trajectory line 33 is an index that predicts the traveling trajectory of the vehicle C moving backward based on the vehicle width and the steering angle of the vehicle C at that time, and shows the travel trajectory from the vehicle C to a predetermined distance (e.g., 2.7 m). On the actual screen, the vehicle width extension line 32 and the predicted trajectory line 33 are drawn with lines of different colors so that they can easily be distinguished. A rear end image 41, showing the rear end portion of the vehicle C such as the rear bumper, is displayed at the lowermost end of the rear imaging screen 30.
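The patent does not give the geometry behind the predicted trajectory line 33; a common construction, sketched below as an assumption, uses a kinematic bicycle model in which the steering angle and wheelbase determine a turning radius and the left and right vehicle edges sweep concentric arcs:

```python
import math

def predicted_trajectory(steer_rad, wheelbase_m=2.7, track_m=1.5,
                         length_m=2.7, steps=20):
    """Points of the left/right trajectory lines in vehicle coordinates
    (origin at the rear-axle center, x pointing backward)."""
    left, right = [], []
    if abs(steer_rad) < 1e-4:                 # straight back: extension lines
        for i in range(steps + 1):
            x = length_m * i / steps
            left.append((x, -track_m / 2))
            right.append((x, track_m / 2))
        return left, right
    radius = wheelbase_m / math.tan(steer_rad)   # turning radius of axle center
    for i in range(steps + 1):
        phi = (length_m * i / steps) / radius    # angle swept along the arc
        for points, offset in ((left, -track_m / 2), (right, track_m / 2)):
            r = radius - offset                  # radius of this edge's arc
            points.append((r * math.sin(phi), radius - r * math.cos(phi)))
    return left, right
```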

Further, the drawing processing unit 19 outputs the composite screen 49 shown in FIG. 18 under the control of the control unit 3. When the drawing processing unit 19 receives the current vehicle position output from the control unit 3, it searches for the image data G captured at the position that is a predetermined image update distance Dx behind the vehicle position in the traveling (reverse) direction. In the present embodiment, the image update distance Dx is set to 500 mm.

More specifically, among the image data G stored in the image memory 6 (hereinafter referred to as past image data G2), the past image data G2 to which the position that is the image update distance Dx behind the current position is attached as the index DM is read from the image memory 6. The read past image data G2 is, for example, data for outputting the image 40 shown in FIG. 16A. The image 40 includes the rear end image 41 of the vehicle C.

The drawing processing unit 19 corrects the distortion aberration of the past image data G2, producing the image 42 shown in FIG. 16B. Further, the first region 43a, in which the range close to the vehicle C is imaged, is extracted (trimmed) from the corrected past image data G2 to generate the past data for synthesis G3.

In addition, the drawing processing unit 19 performs viewpoint conversion processing on the past data for synthesis G3. For example, suppose that the vehicle C is currently at the position B shown in FIG. 13B, and that the past data for synthesis G3 (past image data G2) was captured at the position A shown in FIG. 13A. The imaging viewpoint VA of the past data for synthesis G3 shown in FIG. 13A is then offset in the traveling direction (the reverse direction, the Y-arrow direction in FIG. 13) from the current imaging viewpoint VB shown in FIG. 13B. The imaging viewpoints VA and VB indicate the position of the camera 21 (lens) at the positions A and B, respectively.

As illustrated in FIG. 14, the drawing processing unit 19 converts the past data for synthesis G3 to a virtual viewpoint Vi set above the center point C3 of the rear wheel axle C2, using a known viewpoint conversion method. When the past data for synthesis G3 is converted to the virtual viewpoint Vi, the image in the first area 43a of FIG. 16B becomes the image 42a of FIG. 16C. That is, the image 42a looks down, from above the rear wheels C1, at the white line 100 marked on the road surface around each rear wheel C1. The virtual viewpoint Vi is set to a viewpoint from which contact between the rear wheels C1 and the wheel stopper 101 can be confirmed when the rear wheels C1 approach the wheel stopper 101.
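The patent refers only to "a known viewpoint conversion method". One standard realization, sketched below as an assumption using OpenCV, treats the road as a plane and warps it with a homography computed from four ground points whose pixel positions are known both for the real camera pose and for the virtual viewpoint Vi:

```python
import numpy as np
import cv2

def warp_to_virtual_viewpoint(past_image, src_px, dst_px):
    """Planar viewpoint conversion of the past data for synthesis G3.

    src_px: pixel positions of four road-plane points in the past image.
    dst_px: where those same ground points appear when rendered from the
            virtual viewpoint Vi above the rear-axle center C3 (both lists
            are calibration data a real system would precompute).
    """
    H = cv2.getPerspectiveTransform(np.float32(src_px), np.float32(dst_px))
    h, w = past_image.shape[:2]
    return cv2.warpPerspective(past_image, H, (w, h))
```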

Furthermore, the drawing processing unit 19 acquires the image data G captured at the current vehicle position (hereinafter referred to as current image data G1). The current image data G1 is data for outputting the image 44 shown in FIG. 16D. The drawing processing unit 19 corrects the distortion aberration of the current image data G1, producing the image 45 shown in FIG. 16E. Then, the second area 43b, excluding the rear end image 41, is extracted (trimmed) from the current image data G1 to generate the current data for synthesis G4.

When the current data for synthesis G4 has been generated, the drawing processing unit 19 reduces it at a predetermined reduction rate to generate reduced data G5, as shown in FIG. 16F. Then, as shown in FIG. 16G, the reduced data G5 is combined into the synthesis region 46 in the display region 7b of the display 7. A composite screen 49 displaying the past image 47 based on the past data for synthesis G3 and the current image 48 based on the reduced data G5 (current image data G1) is thereby shown on the display 7. The past image 47 displays an area that is outside the current visible range of the camera 21, such as the road surface around the current rear wheels C1.
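A sketch of this composition step, assuming both images are numpy arrays and the synthesis region 46 is a fixed band at the top of the display area 7b (the layout parameters are placeholders, not taken from the patent):

```python
import cv2

def compose_screen(past_g3, current_g4, region_top=0, region_h=120,
                   display_size=(480, 640)):
    """Build the composite screen 49: the past image 47 fills the display,
    and the current image 48 is reduced into the synthesis region 46."""
    h, w = display_size
    screen = cv2.resize(past_g3, (w, h))                # past image 47
    reduced_g5 = cv2.resize(current_g4, (w, region_h))  # reduced data G5
    screen[region_top:region_top + region_h] = reduced_g5
    # Partition line 53: mark the boundary between past and current images.
    cv2.line(screen, (0, region_top + region_h), (w, region_top + region_h),
             (255, 255, 255), 2)
    return screen
```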

Further, the drawing processing unit 19 displays auxiliary lines 50 as indices on the screen of the display 7, as shown in FIG. 17. The auxiliary lines 50 include an outline drawing line 54 indicating the outline of the vehicle body, a projection line 51 obtained by projecting the outline drawing line 54 onto the ground (road surface), and rear wheel drawing lines 52 indicating the rear wheels. They further include a partition line 53 indicating the boundary between the past image 47 and the current image 48.

The drawing processing unit 19 reduces or enlarges the projection line 51, the outline drawing line 54, and the rear wheel drawing lines 52 according to the vehicle position, and converts their viewpoint accordingly. Then, as shown in FIG. 18, the projection line 51, the outline drawing line 54, and the rear wheel drawing lines 52 are drawn in the area corresponding to the vehicle position in the display area 7b of the display 7, and the partition line 53 is drawn at the boundary between the past image 47 and the current image 48. The projection line 51, the outline drawing line 54, and the rear wheel drawing lines 52 are thereby superimposed on the past image 47 of the composite screen 49, so that the position of the vehicle C can be seen on the composite screen 49. In addition, the partition line 53 makes it possible to determine which display range of the composite screen 49 shows the image of the current situation.

Next, the processing procedure of the parking support process of this embodiment will be described with reference to FIGS. 6 to 11. As shown in FIG. 6, the parking support process is performed mainly by the control unit 3 in accordance with the parking support program stored in the ROM 5, and includes a system activation management process S1, a vehicle signal input process S2, an image data input process S3, and a drawing process S4. When the operation switch 15 for turning the parking support function on and off is operated, or when the ignition module is turned off, this is treated as an end trigger (YES in S5) and the entire process ends. If there is no end trigger (NO in S5), the processes S1 to S4 are repeated while waiting for the end trigger.

(System activation management process)
The system activation management process S1 will be described with reference to FIG. 7. First, the control unit 3 inputs the shift position signal SP via the vehicle-side I/F unit 9 and updates the shift position NSW stored in the main memory 4 based on the shift position signal SP (step S1-1). Then, based on the shift position NSW, it determines whether the shift position is in the reverse state (step S1-2).

  When it is determined that the shift position is reverse (YES in step S1-2), it is determined whether or not the system activation flag STT stored in the main memory 4 is in an on state (step S1-3). The system activation flag STT is a flag indicating whether or not a parking assistance system that displays the rear imaging screen 30 or stores and synthesizes image data G is activated. At this point in time, since the shift lever is switched to the reverse position and the parking assistance process is started, the system activation flag STT is in the off state.

If it is determined that the system activation flag STT is off (NO in step S1-3), the control unit 3 deletes the map screen 7a and the like displayed on the display 7 and switches to the rear imaging screen 30 (step S1-4). The control unit 3 then updates the system activation flag STT to the on state (step S1-5).

Furthermore, the control unit 3 initializes the first backward distance ΔDM1 and the second backward distance ΔDM2 stored in the main memory 4 to "0" (step S1-6). The first backward distance ΔDM1 is a variable for switching to the image composition mode, obtained by accumulating the distance the vehicle C has moved backward since the parking support function was activated. The second backward distance ΔDM2 is a variable for determining the timing for displaying or updating the composite screen 49, and indicates the backward distance from the position where image data G was last captured. When the first and second backward distances ΔDM1 and ΔDM2 have been initialized, the process proceeds to the vehicle signal input process (S2).

When the system activation management process is performed for the second and subsequent times during the parking support process, the system activation flag STT is already on (YES in step S1-3), so the process proceeds directly to the vehicle signal input process (S2).
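Condensed into code, the system activation management process S1 is a small state machine keyed on the shift position. The sketch below uses hypothetical names and stands in for display switching with a print:

```python
def switch_display(mode):
    print(f"display mode -> {mode}")   # stand-in for driving the display 7

def reset_variables(state):
    # Step S1-10: search index IND, shift position NSW, steering angle STR,
    # and movement distance are returned to their initial values.
    state.update(ind=0.0, nsw=None, str_angle=0.0, delta_d=0.0)

def system_activation_management(state, shift_is_reverse):
    """One pass of process S1; 'stt' is the system activation flag and
    'ddm1'/'ddm2' are the first and second backward distances."""
    if shift_is_reverse:
        if not state['stt']:                 # steps S1-3 to S1-6
            switch_display('rear_imaging')   # replace the map screen 7a
            state['stt'] = True
            state['ddm1'] = state['ddm2'] = 0.0
    elif state['stt']:                       # steps S1-7 to S1-10
        switch_display('map')                # back to the map screen 7a
        state['stt'] = False
        reset_variables(state)
    return state
```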

(Vehicle signal input processing)
Next, the vehicle signal input process (S2) will be described with reference to FIG. 8. The control unit 3 acquires the steering sensor signal ST via the vehicle-side I/F unit 9 and updates the current steering angle STR (step S2-1). Further, the control unit 3 inputs the vehicle speed signal Vp via the vehicle-side I/F unit 9 and calculates the movement distance Δd based on the vehicle speed signal Vp (step S2-2). The movement distance Δd is the distance moved since the first and second backward distances ΔDM1 and ΔDM2 were calculated in the previous cycle. Therefore, immediately after the first and second backward distances ΔDM1 and ΔDM2 are initialized to "0", the movement distance Δd from the vehicle position at the time the parking support system was activated is calculated.

Then, the control unit 3 adds the movement distance Δd to the first and second backward distances ΔDM1 and ΔDM2 stored in the main memory 4 to obtain the updated first and second backward distances ΔDM1 and ΔDM2 (step S2-3). The process then proceeds to the image data input process S3.

(Image data input processing)
Subsequently, the control unit 3 performs the image data input process S3 shown in FIG. 9. First, the control unit 3 transmits an image acquisition command to the image data acquisition unit 12. The image data acquisition unit 12 drives and controls the camera 21 to start imaging, and acquires the image data G from the camera 21 (step S3-1).

Next, the control unit 3 determines whether the second backward distance ΔDM2 is greater than the image update distance Dx described above (step S3-2). If the second backward distance ΔDM2 is equal to or less than the image update distance Dx (NO in step S3-2), the process proceeds to the drawing process S4. If the second backward distance ΔDM2 is greater than the image update distance Dx (YES in step S3-2), the position at which the image data G was captured is attached to the image data G as the index DM (step S3-3). As described above, the index DM may be absolute coordinates or relative coordinates from a reference position.

Then, the image data G with the attached index DM is stored in the image memory 6 (step S3-4), and the control unit 3 resets the second backward distance ΔDM2 to "0". The process then proceeds to the drawing process (S4).
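Processes S2 and S3 together amount to accumulating the per-cycle movement distance and storing a frame whenever ΔDM2 exceeds Dx. A compact sketch with hypothetical names, reusing the ImageMemory sketch above:

```python
DX_MM = 500.0  # image update distance Dx of the embodiment

def vehicle_signal_and_image_input(state, delta_d_mm, frame, steering, memory):
    """Steps S2-2 to S2-3 and S3-1 to S3-4 in one pass."""
    state['ddm1'] += delta_d_mm    # first backward distance (total)
    state['ddm2'] += delta_d_mm    # second backward distance (since last store)
    if state['ddm2'] > DX_MM:
        # Attach the capture position as the index DM and store the frame.
        memory.store(state['ddm1'], frame, steering)
        state['ddm2'] = 0.0
    return state
```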

(Drawing process)
Next, the drawing process S4 will be described with reference to FIG. 10. The control unit 3 determines whether the first backward distance ΔDM1 calculated in the vehicle signal input process S2 is greater than a predetermined switching distance DR (step S4-1). The switching distance DR is a predetermined fixed value serving as the reference for switching the display mode of the display 7 from the mode displaying the rear imaging screen 30 to the image composition mode displaying the composite screen 49 using the past image data G2. The switching distance DR is set by estimating the distance for the vehicle C, after it starts moving backward, to enter the parking target area R and reach a point several meters before the rear end of the white line 100 (see FIG. 12) that delimits the parking target area R, or before the wheel stopper 101 (see FIG. 12) in the parking target area R. The switching distance DR is set to, for example, 5 m or more and less than 10 m (switching distance DR > image update distance Dx).

If the first backward distance ΔDM1 is equal to or shorter than the switching distance DR (NO in step S4-1), the rear imaging screen 30 is output (step S4-2). At this time, the drawing processing unit 19 displays the image data G captured at the current position together with the guide line L. When the rear imaging screen 30 has been output, the process returns to step S1-1 of the system activation management process (S1).

If the first backward distance ΔDM1 is greater than the switching distance DR (YES in step S4-1), the control unit 3 updates the search index IND (step S4-3). The search index IND is a variable for searching the past image data G2 stored in the image memory 6, and is updated by subtracting the image update distance Dx from the first backward distance ΔDM1 accumulated from the position where the backward movement started to the current position. The control unit 3 then controls the drawing processing unit 19 to read, from among the past image data G2 stored in the image memory 6, the past image data G2 to which the same index DM as the search index IND is attached (step S4-4). That is, the drawing processing unit 19 reads the past image data G2 captured at the position the image update distance Dx behind the current position. For example, assuming that the current position of the vehicle C is the position B shown in FIG. 12B, the past image data G2 captured at the position A, which is the image update distance Dx behind the position B as shown in FIG. 12A, is read. The past image data G2 captured when the vehicle C was at the position A covers the imaging range a behind the vehicle C, as shown in FIG. 13A, and corresponds to the image 40 (one frame) shown in FIG. 16A.
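Steps S4-3 and S4-4 then reduce to a single lookup against the store, with the search index IND computed by subtracting Dx from the first backward distance (continuing the hypothetical names used above):

```python
def read_past_frame(state, memory):
    """Steps S4-3 and S4-4: fetch the frame captured the image update
    distance Dx behind the current position, keyed by its index DM."""
    state['ind'] = state['ddm1'] - DX_MM    # search index IND
    return memory.lookup(state['ind'])      # past image data G2, or None
```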

Further, the control unit 3 controls the drawing processing unit 19 to acquire the image data G captured at the current position (current image data G1) (step S4-5). For example, assuming that the current position is the position B shown in FIGS. 12B and 13B, the camera 21 images the imaging range b shown in FIG. 13B from the imaging viewpoint VB. The current image data G1 acquired at this time is data for outputting the image 44 (one frame) shown in FIG. 16D, for example.

(Image composition processing)
Next, the control unit 3 performs the image composition process using the past image data G2 and the current image data G1 (step S4-6). This image composition process will be described with reference to FIG. 11. First, the control unit 3 outputs the current steering angle STR to the drawing processing unit 19. The drawing processing unit 19 rotationally converts the past image data G2 according to the input current steering angle STR and the past steering angle data 14 attached to the past image data G2 (step S4-7). At this time, the drawing processing unit 19 calculates the relative angle between the current steering angle STR and the past steering angle data 14, and rotationally converts the past image data G2 according to that relative angle.
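One plausible form of this rotational conversion, sketched below as an assumption (the patent does not specify how the relative steering angle maps to an image rotation), rotates the past frame about the bottom center of the image by an angle proportional to the steering difference:

```python
import cv2

def rotate_past_image(past_g2, past_steer_deg, current_steer_deg, gain=0.5):
    """Step S4-7: align the past image with the current one by rotating it
    according to the relative steering angle. The gain is a stand-in for
    the steering-to-heading relation, which the patent leaves unspecified."""
    relative = (current_steer_deg - past_steer_deg) * gain
    h, w = past_g2.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h), relative, 1.0)  # pivot: bottom center
    return cv2.warpAffine(past_g2, M, (w, h))
```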

Next, the drawing processing unit 19 corrects and trims the past image data G2 (step S4-8). For example, the distortion aberration due to the wide-angle lens is corrected so that the distorted image 40 shown in FIG. 16A becomes the distortion-free image 42 shown in FIG. 16B. Then, the first area 43a shown in FIG. 16B is extracted (trimmed) from the corrected past image data G2, generating the past data for synthesis G3. By this trimming, data in the range far from the vehicle C is deleted from the imaging range a at the position A shown in FIG. 13A, and the imaging range of the trimmed past data for synthesis G3 becomes the imaging range a1 shown in FIG. 13B.
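The correction and trimming of step S4-8 can be sketched with OpenCV's standard undistortion, assuming the camera matrix and distortion coefficients of the camera 21 are available from calibration (the values below are placeholders):

```python
import numpy as np
import cv2

def correct_and_trim(image_g, first_region):
    """Step S4-8: undo the wide-angle distortion, then extract the first
    region 43a (the band close to the vehicle) as past data for synthesis G3."""
    # Placeholder intrinsics; a real system would use calibrated values.
    K = np.array([[300.0, 0.0, 320.0],
                  [0.0, 300.0, 240.0],
                  [0.0, 0.0, 1.0]])
    dist = np.array([-0.30, 0.08, 0.0, 0.0])   # strong barrel distortion
    undistorted = cv2.undistort(image_g, K, dist)
    top, bottom, left, right = first_region
    return undistorted[top:bottom, left:right]   # trimmed G3
```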

Further, the drawing processing unit 19 converts the past data for synthesis G3 into an image viewed from the virtual viewpoint Vi described above (step S4-9). As shown in FIGS. 12B and 14, the virtual viewpoint Vi is set several tens of centimeters above the center point C3 of the rear wheel axle C2, at the same height as the imaging viewpoint VB. For example, the image of the imaging range a1 viewed from the past imaging viewpoint VA, shown in FIG. 13B, is converted into the image of the imaging range a1 viewed from the virtual viewpoint Vi, shown in FIG. 14. As a result, the image of the first area 43a shown in FIG. 16B is converted into the image 42a shown in FIG. 16C. The image 42a is an image captured at an angle of view as if looking down at the rear wheels C1 and the road surface around them from above the rear wheels C1.

Subsequently, the drawing processing unit 19 corrects and trims the current image data G1 (step S4-10). First, the drawing processing unit 19 corrects the distortion of the current image data G1, so that the image 44 (one frame) with wide-angle lens distortion shown in FIG. 16D is converted into the distortion-free image 45 shown in FIG. 16E. Then, the second area 43b, excluding the rear end image 41, is extracted (trimmed) from the corrected current image data G1, generating the current data for synthesis G4.

Further, the drawing processing unit 19 reduces the current data for synthesis G4 at a predetermined reduction rate to generate the reduced data G5 shown in FIG. 16F (step S4-11). Then, as shown in FIG. 16G, the reduced data G5 is combined with the synthesis region 46 of the past data for synthesis G3 (step S4-12). When the composition is completed, the process proceeds to step S4-13 of the drawing process (S4) shown in FIG. 10.

In step S4-13, the combined reduced data G5 and past data for synthesis G3 are transferred to the VRAM provided in the drawing processing unit 19 and output to the screen of the display 7. The drawing processing unit 19 then performs the drawing process for the guide line L (vehicle width extension line 32) (step S4-14) and the drawing process for the auxiliary lines 50 (step S4-15). At this time, the auxiliary lines 50 are also converted according to the virtual viewpoint Vi. As a result, the composite screen 49 shown in FIG. 18 is displayed. The past image 47 displayed on the composite screen 49 is an image in which the area around the rear wheels C1 of the vehicle C at the current position is looked down on almost directly from above, with the optical axis AX (angle of view) directed further down toward the road surface. Since the composite screen 49 is concentrated on the area around the rear wheels C1, the relative position and relative direction between the rear wheels C1 and the white line 100 or the wheel stopper 101 around them can be understood in detail. The driver can therefore perform the driving operation so that the vehicle C is parked straight without protruding from the parking target area R while checking the outline drawing line 54 and the rear wheel drawing lines 52. Furthermore, since the relative distance between the rear wheels C1 and the wheel stopper 101 can be grasped, the driving operation can be performed while predicting the timing at which the rear wheels C1 contact the wheel stopper 101.

When the composite screen 49 is displayed, the process returns to step S1-1 of the system activation management process (S1). After the processes S1 to S4 have been repeated several times, the vehicle C moves backward to the position C at which each rear wheel C1 contacts the wheel stopper 101, as shown in FIGS. 12C and 13C.

As shown in FIG. 13C, when the vehicle C is at the position C, the wheel stopper 101 has moved outside the imaging range c of the camera 21. The current image data G1 captured at this time outputs the image 63 shown in FIG. 19D. At the position C shown in FIG. 12C, the drawing processing unit 19 reads the past image data G2 captured at the position B, which is the image update distance Dx behind the current position C in the backward direction, as shown in FIG. 12B (step S4-4).

In the image composition process (step S4-6), the past image data G2 is rotationally converted, corrected, and trimmed (steps S4-7, S4-8). As a result, the image 60 with lens distortion shown in FIG. 19A is converted into the distortion-free image 61 shown in FIG. 19B. If the distortion of the image data G captured at the position B has already been corrected when it was used as the current image data G1, the corrected image data G may be stored in the main memory 4 or the image memory 6 and used later as the past image data G2. Further, the first region 43a is extracted (trimmed) from the past image data G2, generating the past data for synthesis G3.

Further, the viewpoint of the trimmed past data for synthesis G3 is converted to the virtual viewpoint Vi, shown in FIGS. 12C and 15, set above the center point C3 of the rear wheel axle C2 at the position C (step S4-9). The past data for synthesis G3 is thereby converted into the image 62 shown in FIG. 19C. The image 62, converted to the virtual viewpoint Vi, is an image in which the road surface around the rear wheels C1 at the current position (position C) is looked down on from above the rear wheels C1, concentrated on the area around the rear wheels C1.

The drawing processing unit 19 corrects, trims, and reduces the current image data G1 captured at the position C (steps S4-10 and S4-11). As a result, the image 63 with lens distortion shown in FIG. 19D is corrected into the distortion-free image 64 shown in FIG. 19E. Then, the second area 43b is extracted from the corrected current image data G1, generating the current data for synthesis G4, which is reduced as the image 65 shown in FIG. 19F.

Then, the drawing processing unit 19 combines the reduced data G5 and the past data for synthesis G3 as shown in FIG. 19G (step S4-12). By executing steps S4-13 to S4-15, the composite screen 49 shown in FIG. 20 is displayed. Since the composite screen 49 displays the past image 47 as if the rear wheels C1 and the wheel stopper 101 were looked down on from directly above, the driver can confirm, by viewing the past image 47, the timing of contact between the wheel stopper 101 and the rear wheel drawing lines 52 indicating the rear wheels C1. The driving operation can therefore be performed so that the impact caused by contact between the rear wheels C1 and the wheel stopper 101 is minimized. In addition, the relative position and relative direction between the rear wheels C1 and the white line 100 can be grasped in detail. Furthermore, it can be confirmed from the relative position and relative direction between the outline drawing line 54 and the white line 100 whether the vehicle C is parked straight within the parking target area R and whether the vehicle body protrudes over the white line 100.

When parking is completed, the driver changes the shift lever from the reverse position to the parking position or another position. As shown in FIG. 7, the control unit 3 then determines, based on the shift position NSW input in step S1-1, that the shift position is not reverse (NO in step S1-2). Furthermore, the control unit 3 determines whether the system activation flag STT is on (step S1-7). Since the system activation flag STT is on when parking is completed (YES in step S1-7), the process proceeds to step S1-8.

In step S1-8, the control unit 3 deletes the composite screen 49 and switches to the mode displaying the map screen 7a. The system activation flag STT is then turned off (step S1-9), and the search index IND, shift position NSW, current steering angle STR, and movement distance Δd stored in the main memory 4 are reset to their initial values (step S1-10). Steps S1-1, S1-2, and S1-7 are then repeated until the ignition module of the vehicle C is turned off or the operation switch 15 is operated to turn off the parking support function.

When the ignition module is turned off or the operation switch 15 is operated and an end trigger is generated (YES in step S5 of FIG. 6), the parking support process ends.

According to the above embodiment, the following effects can be obtained.
(1) In the above embodiment, the navigation device 1 includes the image data acquisition unit 12, which acquires the image data G from the camera 21 attached to the rear end of the vehicle C, and the image memory 6, which stores the image data G as the past image data G2 in association with the vehicle position. Under the control of the control unit 3, the drawing processing unit 19 of the navigation device 1 converts the past image data G2 read from the image memory 6 to the virtual viewpoint Vi set above the center point C3 of the current rear wheel axle C2, and outputs the viewpoint-converted past image data G2 (past data for synthesis G3) and the current image data G1 (reduced data G5) to the display 7. The composite screen 49 displayed on the display 7 therefore looks down on the rear wheels C1 and their surroundings from directly above, concentrated on the area around the rear wheels C1, and can show the timing of contact between the rear wheels C1 and the wheel stopper 101 and the state in which the rear wheels C1 approach the white line 100 at the end of the parking target area R. The driver can thus drive the vehicle so that the impact caused by contact between the rear wheels C1 and the wheel stopper 101 is minimized, and can confirm whether the vehicle C is parked straight within the parking target area R and whether the vehicle body protrudes over the white line 100.

(2) In the above embodiment, the control unit 3 displays the composite screen 49 based on the past image data G2 and the current image data G1 on the display 7 when the first backward distance ΔDM1 of the vehicle C is greater than the predetermined switching distance DR. Since the composite screen 49 is displayed when the vehicle C approaches the end of the parking target area R, it can be output at the timing when the driver wants to confirm the position of the rear wheels C1. In addition, since the image composition processing is kept to a minimum, the load on the apparatus can be reduced.

(3) In the above embodiment, the drawing processing unit 19 rotationally converts the past image data G2 to match the current image 48 based on the current image data G1, based on the past steering angle data 14 attached to the past image data G2 and the steering sensor signal ST input from the vehicle ECU 20. The continuity between the past image 47 and the current image 48 constituting the composite screen 49 can therefore be improved.

(4) In the above embodiment, the auxiliary lines 50, comprising the projection line 51, the outline drawing line 54, the rear wheel drawing lines 52, and the partition line 53, are drawn on the composite screen 49 according to the current position of the vehicle C. When the driver views the composite screen 49, the relative position between the vehicle C and the parking target area R can therefore be grasped intuitively.

(5) In the above embodiment, each time the vehicle C moves by the predetermined image update distance Dx, the control unit 3 controls the drawing processing unit 19 to read from the image memory 6 the past image data G2 captured at the position the image update distance Dx behind the current position in the backward direction. The read past image data G2 is converted to the virtual viewpoint Vi and combined with the current image data G1 captured at the current position. Since the viewpoint-converted past image 47 and the current image 48 are updated each time the vehicle moves by the image update distance Dx, a relatively new image can always be displayed.

(6) In the above embodiment, the control unit 3 controls the drawing processing unit 19 to correct the distortion aberration of the past image data G2, extract the first region 43a, and perform viewpoint conversion to generate the past data for synthesis G3. Further, the drawing processing unit 19 corrects the distortion of the current image data G1, generates the current data for synthesis G4 by extracting the second region 43b, and reduces it to generate the reduced data G5. The reduced data G5 and the past data for synthesis G3 are then combined and the composite screen 49 is output. Since the past image 47 and the current image 48 can thus be displayed side by side (in the traveling direction), the display area of the past image 47 can easily be identified.

(7) In the above embodiment, viewpoint conversion is performed only on the past image data G2. The viewpoint conversion processing is therefore kept to a minimum, and the load on the apparatus can be suppressed.

(8) In the above embodiment, the parking support device is embodied in the navigation device 1. Parking support can therefore be provided by effectively utilizing the existing vehicle position detection functions, such as the GPS receiving unit 8, the vehicle-side I/F unit 9, and the program for detecting the vehicle position, to output the composite screen 49.

The present embodiment may be modified as follows.
In the above embodiment, the navigation device 1 receives the direction detection signal GYR from the vehicle ECU 20, but the navigation device 1 may itself include a gyro sensor that detects the direction of the vehicle C.

In the above embodiment, the control unit 3 inputs the shift position signal SP and the steering sensor signal ST from the vehicle ECU 20, but these signals may instead be input via the vehicle-side I/F unit 9 from a transmission control circuit or from the steering sensor of the steering device, each serving as traveling state detection means.

In the above embodiment, the image update distance Dx is 500 mm, but another distance, such as 100 mm, may be used. The switching distance DR is a fixed value set in the range of 5 m or more and less than 10 m, but a fixed value in another range may be used.

The navigation device 1 may detect that the vehicle C has approached the end of the parking target area R or the wheel stopper 101 based on a sensor or detection device, serving as detection means, attached to the vehicle C. For example, the navigation device 1 may analyze the image data G captured by the camera 21 and detect a white line marked on the road surface. Alternatively, a radar or the like for detecting three-dimensional obstacles inside or around the parking target area R may be attached to the vehicle C, and the navigation device 1 may detect the wheel stopper 101 or other three-dimensional obstacles by inputting a detection signal from the radar. The image composition mode may then be started when these detection devices detect that the vehicle C has moved to a position several meters before the wheel stopper 101 or the white line at the rear end.

In the above embodiment, the control unit 3 accumulates the image data G when it detects that the vehicle C has started to reverse. Alternatively, the control unit 3 may start accumulating the image data G when it detects the backward movement of the vehicle C and the current steering angle STR falls below a predetermined steering angle (for example, a predetermined steering angle AGL). In this way, the accumulation of image data can be kept to a minimum, reducing the load on the apparatus in the image data input process and the drawing process.

In the above embodiment, the past image data G2 is converted to the virtual viewpoint Vi set above the center point C3 of the rear wheel axle C2, but a viewpoint set at another position may be used. Specifically, as shown in FIG. 21, any position within the rear wheel region Z, which includes each rear wheel C1 and the area between the rear wheels C1, may be used as long as the position is above the rear wheels C1. For example, the virtual viewpoint Vi may be set above one of the rear wheels C1, or at a position within the region Z between the rear wheels other than above the rear wheel axle C2. The virtual viewpoint Vi may be set at any height as long as it is above the rear wheels C1 and at least the range of the rear wheels C1 can be captured. If the height is substantially the same as the mounting height of the camera 21, as in the above embodiment, the screen can be understood intuitively by the driver when the past image 47 and the current image 48 are viewed together.

  In the above embodiment, the distortion of the image caused by the wide-angle lens is corrected, but this distortion correction process may be omitted. The rotational conversion of the past image data G2 may likewise be omitted.
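Where the rotation conversion is retained, it amounts to rotating the stored frame by the change in vehicle heading since capture; a minimal sketch using OpenCV, with the assumption that the angle is derived upstream from the steering sensor signal ST:

    import cv2
    import numpy as np

    def rotate_past_image(image: np.ndarray, heading_change_deg: float) -> np.ndarray:
        """Rotate past image data G2 about the image centre by the change
        in vehicle heading since the frame was captured, so that it lines
        up with the current image. Positive angles rotate counterclockwise."""
        h, w = image.shape[:2]
        rotation = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), heading_change_deg, 1.0)
        return cv2.warpAffine(image, rotation, (w, h))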

  The viewpoint conversion may also be applied to the current image data G1. In this case, although the displayed image differs from the actual camera image, the image continuity of the composite screen 49 can be improved.
  The camera 21 may be attached to the front or side of the vehicle C instead of the rear. A parking assistance process may also be performed when the vehicle C enters the parking target region R while moving forward; in this case, the virtual viewpoint Vi is set above the inter-wheel region Z between the rear wheels. Since the area below the vehicle body can then be displayed using the image data G generated by this camera, an obstacle in front of the vehicle C that is difficult to see can be made easier to visually recognize.

  In each of the above embodiments, only the case where the vehicle C is parked in a side-by-side arrangement has been described. However, the image data G at the current position and the past image data G may likewise be displayed on the display 7 when parallel parking is performed.

  In the above embodiment, the parking assistance device is embodied in the navigation device 1, but it may be embodied in another in-vehicle device. In this case, the GPS receiving unit 8, the map data storage unit 10, and the like may be omitted.

FIG. 1 is a block diagram explaining the structure of the navigation device of this embodiment.
FIG. 2 is an explanatory drawing of the attachment position of the camera.
FIG. 3 is an explanatory drawing of the imaging range of the camera.
FIG. 4 is an explanatory drawing of the image data stored in the image memory.
FIG. 5 is an explanatory drawing of a rear imaging screen.
FIG. 6 is an explanatory drawing of the processing sequence of this embodiment.
FIG. 7 is an explanatory drawing of the processing sequence of the system start management process.
FIG. 8 is an explanatory drawing of the processing sequence of the vehicle signal input process.
FIG. 9 is an explanatory drawing of the processing sequence of the image data input process.
FIG. 10 is an explanatory drawing of the processing sequence of the drawing process.
FIG. 11 is an explanatory drawing of the processing sequence of the image synthesis process.
FIG. 12 is an explanatory drawing of the vehicle reversing, in which (a) to (c) illustrate positions A to C, respectively.
FIG. 13 is an explanatory drawing of the vehicle reversing, in which (a) to (c) illustrate positions A to C from the side of the vehicle, respectively.
FIG. 14 is an explanatory drawing of the viewpoint conversion to the virtual viewpoint.
FIG. 15 is an explanatory drawing of the viewpoint conversion to the virtual viewpoint.
FIG. 16 shows, in (a) to (c), images based on current image data; in (d) to (f), images based on past image data; and in (g), the composite screen.
FIG. 17 is an explanatory drawing of the auxiliary line.
FIG. 18 is an explanatory drawing of the composite screen on which the auxiliary line is drawn.
FIG. 19 shows, in (a) to (c), images based on current image data; in (d) to (f), images based on past image data; and in (g), the composite screen.
FIG. 20 is an explanatory drawing of the composite screen on which the auxiliary line is drawn.
FIG. 21 is an explanatory drawing of the position of the virtual viewpoint in another example.

Explanation of symbols

1 ... Navigation device as the parking assistance device
3 ... Control unit constituting the position specifying means, reading means, viewpoint conversion means, output control means, rotation conversion means, and vehicle position display means
6 ... Image memory as the image data storage means
7 ... Display as the display means
8 ... GPS receiving unit constituting the position specifying means
12 ... Image data acquisition unit as the image data acquisition means
19 ... Drawing processing unit constituting the reading means, viewpoint conversion means, output control means, rotation conversion means, and vehicle position display means
20 ... Vehicle ECU as the driving means
21 ... Camera as the imaging device
43a ... First area
43b ... Second area
50 ... Auxiliary line as the index
C ... Vehicle
C1 ... Rear wheel
C2 ... Rear wheel axis
Dx ... Image update distance
G ... Image data
G2 ... Past image data
ST ... Steering sensor signal as the steering angle information
Vi ... Virtual viewpoint

Claims (7)

  1. A parking support method for supporting a parking operation using position specifying means for specifying the position of a vehicle, image data acquisition means for acquiring image data from an imaging device provided in the vehicle, image data storage means for storing the image data, and display means for displaying an image based on the image data, the method comprising:
    storing the acquired image data in the image data storage means as past image data in association with the position of the vehicle at which the image data was captured;
    converting the viewpoint of the past image data read from the image data storage means to a viewpoint set at a wheel of the vehicle at its current position or above the wheel; and
    displaying on the display means an image based on the viewpoint-converted past image data and an image based on the image data captured at the current position.
  2. Position specifying means for specifying the position of the vehicle;
    Image data acquisition means for acquiring image data obtained by imaging the periphery of the vehicle from an imaging device provided in the vehicle;
    Image data storage means for storing the image data as past image data in association with the position of the vehicle that captured the image data;
    Display means for displaying an image based on the image data;
    Reading means for reading the past image data from the image data storage means;
    Viewpoint conversion means for converting the viewpoint of the read past image data to a viewpoint set at a wheel of the vehicle at its current position or above the wheel;
    A parking support apparatus comprising: output control means for displaying on the display means an image based on the past image data subjected to viewpoint conversion and an image based on the image data captured at the current position.
  3. In the parking assistance device according to claim 2,
    wherein the viewpoint conversion means converts the past image data into image data looking down at a rear wheel and the periphery of the rear wheel from a viewpoint set above the rear wheel axis at the current position.
  4. In the parking assistance device according to claim 2 or 3,
    A parking assistance device further comprising rotation conversion means for rotationally converting the past image data based on steering angle information acquired from a sensor provided in the steering device of the vehicle.
  5. In the parking assistance device according to any one of claims 2 to 4,
    A parking assistance device further comprising vehicle position display means for displaying an index indicating the current position of the vehicle on a screen that displays an image based on the past image data and an image based on the image data captured at the current position.
  6. In the parking assistance device according to any one of claims 2 to 5,
    wherein the reading means reads, from the image data storage means, each time the vehicle moves by a predetermined image update distance, the past image data associated with the position located the image update distance back along the direction of movement,
    the viewpoint conversion means converts the viewpoint of the read past image data to a viewpoint set at a wheel of the vehicle at its current position or above the wheel, and
    the output control means causes the display means to display, each time the vehicle moves by the image update distance, an image based on the viewpoint-converted past image data and an image based on the image data captured at the current position of the vehicle.
  7. In the parking assistance device according to any one of claims 2 to 6,
    wherein the output control means causes the display means to display, continuously with each other, a first area of the viewpoint-converted past image data, the first area including a rear wheel of the vehicle at the current position and the periphery of the rear wheel, and a second area of the image data acquired at the current position, the second area being continuous with the first area.
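Read together, claims 1, 2, and 6 describe a store, read, convert, and display loop. The sketch below strings those steps into one hypothetical cycle; every helper (capture_frame, save, load_behind, show, show_composite) is an illustrative interface, not part of the claims:

    def viewpoint_convert(frame, heading_change_deg):
        """Placeholder for the projection to the virtual viewpoint Vi; a
        real implementation would warp the frame using the camera geometry
        and the heading change since capture. Identity stand-in here."""
        return frame

    def parking_support_cycle(pos, heading_deg, camera, store, display,
                              image_update_distance_mm=500.0):
        """One cycle of the claimed loop: store the current frame keyed by
        the vehicle position, read back the frame captured one image update
        distance behind, viewpoint-convert it, and display it together with
        the current frame."""
        current = camera.capture_frame()
        store.save(pos, current)                                # past image data, keyed by position
        past = store.load_behind(pos, image_update_distance_mm) # frame from Dx back, or None
        if past is None:
            display.show(current)
        else:
            display.show_composite(viewpoint_convert(past, heading_deg), current)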
JP2005156164A 2005-05-27 2005-05-27 Parking support method and parking support device Expired - Fee Related JP4696691B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2005156164A JP4696691B2 (en) 2005-05-27 2005-05-27 Parking support method and parking support device

Publications (2)

Publication Number Publication Date
JP2006327498A 2006-12-07
JP4696691B2 JP4696691B2 (en) 2011-06-08

Family

ID=37549633

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2005156164A Expired - Fee Related JP4696691B2 (en) 2005-05-27 2005-05-27 Parking support method and parking support device

Country Status (1)

Country Link
JP (1) JP4696691B2 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002029314A (en) * 2000-07-12 2002-01-29 Nissan Motor Co Ltd Parking support device
JP2002314990A (en) * 2001-04-12 2002-10-25 Auto Network Gijutsu Kenkyusho:Kk System for visually confirming periphery of vehicle
JP2003191810A (en) * 2001-12-26 2003-07-09 Denso Corp Vehicle surroundings monitoring system, and vehicle moving state detector
JP2004114879A (en) * 2002-09-27 2004-04-15 Clarion Co Ltd Parking assisting device, and image display device
JP2004289386A (en) * 2003-03-20 2004-10-14 Clarion Co Ltd Image display method, image display apparatus, and image processing apparatus
JP2005001570A (en) * 2003-06-12 2005-01-06 Equos Research Co Ltd Parking support device

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8441536B2 (en) 2009-02-12 2013-05-14 Denso Corporation Vehicle periphery displaying apparatus
DE102010000385A1 (en) 2009-02-12 2010-08-19 Denso Corporation, Kariya-City Vehicle periphery display device
CN101808236A (en) * 2009-02-12 2010-08-18 株式会社电装 Vehicle periphery displaying apparatus
US8514282B2 (en) 2009-03-25 2013-08-20 Denso Corporation Vehicle periphery display device and method for vehicle periphery image
JP2011126433A (en) * 2009-12-18 2011-06-30 Suzuki Motor Corp Parking support device
JP2012005243A (en) * 2010-06-17 2012-01-05 Aisin Seiki Co Ltd Vehicle parking assist system and electric vehicle with the same
JP2012027534A (en) * 2010-07-20 2012-02-09 Suzuki Motor Corp White line edge detection method and parking-support device using the same
JP2012175764A (en) * 2011-02-18 2012-09-10 Honda Motor Co Ltd Electric vehicle
US9478061B2 (en) 2012-10-25 2016-10-25 Fujitsu Limited Image processing apparatus and method that synthesizes an all-round image of a vehicle's surroundings
US9633266B2 (en) 2012-10-25 2017-04-25 Fujitsu Limited Image processing apparatus and method that synthesizes an all-round image of a vehicle's surroundings
US20150375631A1 (en) * 2013-05-31 2015-12-31 Ihi Corporation Vehicle power-supplying system
US9758048B2 (en) * 2013-05-31 2017-09-12 Ihi Corporation Vehicle power-supplying system
WO2015114775A1 (en) * 2014-01-30 2015-08-06 日産自動車株式会社 Parking assistance device and parking assistance method
JP2016021653A (en) * 2014-07-14 2016-02-04 アイシン精機株式会社 Periphery monitoring device and program
CN105282499A (en) * 2014-07-14 2016-01-27 爱信精机株式会社 Periphery surveillance apparatus and program
EP2974909A1 (en) * 2014-07-14 2016-01-20 Aisin Seiki Kabushiki Kaisha Periphery surveillance apparatus and program
US9902323B2 (en) 2014-07-14 2018-02-27 Aisin Seiki Kabushiki Kaisha Periphery surveillance apparatus and program

Legal Events

A621, effective 2008-02-13: Written request for application examination (JAPANESE INTERMEDIATE CODE: A621)
A977, effective 2010-02-15: Report on retrieval (JAPANESE INTERMEDIATE CODE: A971007)
A131, effective 2010-03-02: Notification of reasons for refusal (JAPANESE INTERMEDIATE CODE: A131)
A521, effective 2010-05-06: Written amendment (JAPANESE INTERMEDIATE CODE: A523)
A131, effective 2010-07-06: Notification of reasons for refusal (JAPANESE INTERMEDIATE CODE: A131)
A521, effective 2010-09-06: Written amendment (JAPANESE INTERMEDIATE CODE: A523)
TRDD: Decision of grant or rejection written
A01, effective 2011-02-01: Written decision to grant a patent or to grant a registration (utility model) (JAPANESE INTERMEDIATE CODE: A01)
A61, effective 2011-02-14: First payment of annual fees (during grant procedure) (JAPANESE INTERMEDIATE CODE: A61)
R150: Certificate of patent or registration of utility model (Ref document number: 4696691; Country of ref document: JP; JAPANESE INTERMEDIATE CODE: R150)
FPAY: Renewal fee payment (event date is renewal date of database); payment until 2015-03-11; year of fee payment: 4
LAPS: Cancellation because of no payment of annual fees