WO2023027039A1 - Parking assistance device and parking assistance method - Google Patents

Parking assistance device and parking assistance method Download PDF

Info

Publication number
WO2023027039A1
Authority
WO
WIPO (PCT)
Prior art keywords
parking
route
vehicle
information
image
Prior art date
Application number
PCT/JP2022/031613
Other languages
French (fr)
Japanese (ja)
Inventor
Kenji Obara
Original Assignee
DENSO CORPORATION
J-QuAD DYNAMICS Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by DENSO CORPORATION and J-QuAD DYNAMICS Inc.
Priority to JP2023543911A priority Critical patent/JPWO2023027039A1/ja
Priority to CN202280055839.8A priority patent/CN117836183A/en
Publication of WO2023027039A1 publication Critical patent/WO2023027039A1/en

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00: Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/06: Automatic manoeuvring for parking
    • B60W30/08: Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/09: Taking automatic action to avoid collision, e.g. braking and steering
    • B60W40/00: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models
    • B60W40/02: Estimation or calculation of such parameters related to ambient conditions
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/16: Anti-collision systems

Definitions

  • the present disclosure relates to a parking assistance device and a parking assistance method.
  • Conventionally, there is known a parking assistance device that automatically parks a vehicle at a predetermined parking position (see, for example, Patent Document 1).
  • The parking assistance device described in Patent Document 1 learns the driving route of the vehicle from a reference start position to the parking position when the driver parks the vehicle at the planned parking position, and uses the learning result to automatically park the vehicle at the predetermined position.
  • An object of the present disclosure is to provide a parking assistance device and a parking assistance method capable of improving usability.
  • The parking assistance device includes: a route generation unit that generates a target route that the vehicle should follow when the vehicle is parked, based on route information including information about the vehicle's driving route and the surroundings of the vehicle on that route when the vehicle is parked by the user; a follow-up control unit that performs a follow-up control process for automatically moving the vehicle to the planned parking position along the target route; and an information providing unit that provides information to the user.
  • the information providing unit provides the information regarding the planned parking position included in the route information to the user in a visual manner before the follow-up control process is started.
  • The parking assistance method includes: generating a target route that the vehicle should follow when the vehicle is parked, based on route information including information about the vehicle's travel route and the surroundings of the vehicle on that route when the vehicle is parked by the user; performing a follow-up control process for automatically moving the vehicle to the planned parking position along the target route; and providing information to the user. Providing the information to the user includes visually providing the information on the planned parking position included in the route information to the user before the follow-up control process is started.
  • With these configurations, the user can start automatic parking of the vehicle at the planned parking position after clearly grasping that position. Therefore, the parking assistance device and the parking assistance method of the present disclosure can improve the usability of automatic parking.
  • Here, "usability" refers to the degree of effectiveness, efficiency, and user satisfaction achieved when a product is used by a specific user to accomplish a specified goal in a specific usage situation.
  • FIG. 1 is a schematic configuration diagram of an automatic parking system according to an embodiment of the present disclosure;
  • FIG. 2 is an explanatory view of a parking lot containing parking spaces for the vehicle;
  • FIG. 3 is a flowchart showing an example of learning processing executed by a parking control unit of the parking assistance device;
  • FIG. 4 is a flowchart showing an example of assistance processing executed by the parking control unit;
  • FIG. 5 is an explanatory diagram of an example of the content displayed on a touch panel display unit before the assistance processing is started;
  • FIG. 6 is a flowchart showing an example of target route generation processing executed by the parking control unit;
  • FIG. 7 is an explanatory diagram of an example of the display contents on the touch panel display unit before the follow-up control process is started;
  • FIG. 8 is an explanatory view of an example of the display mode of the planned parking position on the touch panel display unit;
  • FIG. 9 is an explanatory diagram of an example of the display contents when there are a plurality of candidates for the planned parking position;
  • FIG. 10 is a flowchart showing an example of follow-up control processing executed by the parking control unit;
  • FIG. 11 is an explanatory diagram of an example of the display contents on the touch panel display unit when the follow-up control process is started;
  • FIG. 12 is an explanatory diagram of an example of the display contents on the touch panel display unit after the follow-up control process is started;
  • FIG. 13 is an explanatory diagram of an example of automatic adjustment of the angle of the virtual viewpoint of a virtual viewpoint image according to the positional change between a target and the vehicle;
  • FIG. 14 is an explanatory diagram of another example of the display contents on the touch panel display unit after the follow-up control process is started;
  • FIG. 15 is an explanatory diagram of an example of the display contents on the touch panel display unit during a search for an avoidance route;
  • FIG. 16 is an explanatory view of an example of an avoidance route; and
  • FIG. 17 is an explanatory view of an example of the display mode of the avoidance route and the like on the touch panel display unit.
  • As shown in FIG. 1, the automatic parking system 1 includes a perimeter monitoring sensor 3, various ECUs 4, and a parking assistance device 5.
  • The parking assistance device 5 is communicably connected to the perimeter monitoring sensor 3 and the various ECUs 4, either directly or via an in-vehicle LAN (Local Area Network).
  • The surroundings monitoring sensor 3 is an autonomous sensor that monitors the surroundings of the vehicle V.
  • The surroundings monitoring sensor 3 detects, as objects to be detected, obstacles OB composed of three-dimensional objects around the own vehicle, such as moving dynamic targets (for example, pedestrians and other vehicles) and stationary static targets (for example, structures on the road).
  • It also detects parking assistance marks indicating parking information, that is, information about the parking lot PL and the like.
  • The surroundings monitoring sensor 3 includes, for example, a surroundings monitoring camera 31 that captures a predetermined range around the vehicle, a sonar 32 that transmits search waves over a predetermined range around the vehicle, a millimeter wave radar 33, and a LiDAR (an abbreviation for Light Detection and Ranging) 34.
  • The surroundings monitoring camera 31 corresponds to an imaging device; it captures images of the surroundings of the own vehicle and outputs the imaging data to the parking assistance device 5 as sensing information.
  • In this embodiment, a front camera 31a, a rear camera 31b, a left side camera 31c, and a right side camera 31d, which capture images in front of, behind, and on the left and right sides of the vehicle, are exemplified as the surroundings monitoring camera 31; however, the camera is not limited to these.
  • The search wave sensor outputs a search wave and acquires the reflected wave, and sequentially outputs measurement results, such as the relative speed and relative distance to a target and the azimuth angle at which the target exists, to the parking assistance device 5 as sensing information.
  • The sonar 32 performs measurement using ultrasonic waves as search waves and is provided at a plurality of locations on the vehicle V. For example, a plurality of sonars 32 are arranged side by side in the vehicle's left-right direction on the front and rear bumpers, and perform measurement by outputting search waves around the vehicle.
  • the millimeter wave radar 33 performs measurement using millimeter waves as search waves.
  • The LiDAR 34 performs measurement using laser light as the search wave. Both the millimeter wave radar 33 and the LiDAR 34 output search waves within a predetermined range, for example in front of the vehicle V, and perform measurement within that output range.
  • In this embodiment, the perimeter monitoring sensor 3 includes the perimeter monitoring camera 31, the sonar 32, the millimeter wave radar 33, and the LiDAR 34; however, it suffices that the required detection can be performed, and not all of these sensors need to be provided.
  • The parking assistance device 5 constitutes an ECU (that is, an electronic control unit) that performs various controls for realizing the parking assistance method in the automatic parking system 1, and is composed of a microcomputer having a CPU, a storage unit 50, I/O, and the like.
  • The storage unit 50 includes ROM, RAM, EEPROM, and the like. That is, the storage unit 50 has a volatile memory such as RAM and a non-volatile memory such as EEPROM.
  • The storage unit 50 is composed of a non-transitory tangible recording medium.
  • Based on information about the travel route of the vehicle V and the surroundings of the vehicle V on that route when the vehicle V is parked by the user, the parking assistance device 5 generates a target route TP that the vehicle V should follow when the vehicle V is parked.
  • The "information about the surroundings of the vehicle V" includes, for example, information on dynamic targets such as people and other vehicles around the vehicle V, on static targets such as curbs and buildings around the vehicle V, and on road markings such as various signs and guide lines.
  • the parking assistance device 5 automatically moves the vehicle V from the assistance start position STP to the planned parking position SEP along the target route TP.
  • the planned parking position SEP is the end point of the target route TP.
  • the planned parking position SEP is registered in advance by the user as the parking space SP for the own vehicle.
  • When the user performs the parking operation of the vehicle V, the parking assistance device 5 stores the sensing information, which is the detection result of the periphery monitoring sensor 3, in the non-volatile memory of the storage unit 50.
  • the parking assistance device 5 generates the target route TP and performs various controls for parking assistance based on the sensing information stored in the storage unit 50 and the sensing information from the surrounding monitoring sensor 3 during parking assistance.
  • The learning process, which stores information about the driving route and the surroundings of the vehicle V during manual driving by the user, is executed when an instruction to perform the learning process is issued, for example, when a learning switch (not shown) is operated by the user. Parking assistance is executed when the user issues an instruction to perform parking assistance, for example, when a parking assistance start switch 35 is operated by the user.
  • The parking assistance device 5 recognizes targets, parking-available free spaces, parking positions, and the like on the travel route of the vehicle V based on sensing information from the surroundings monitoring sensor 3. These recognition results are sequentially stored in the non-volatile memory of the storage unit 50 and used for parking assistance.
  • When the user issues a parking assistance instruction, the parking assistance device 5 generates a target route TP based on the sensing information stored in the storage unit 50 and the sensing information of the surroundings monitoring sensor 3 during parking assistance, and performs route following control according to that route.
  • the parking assistance device 5 includes a recognition processing unit 51, a vehicle information acquisition unit 52, and a parking control unit 53 as functional units that execute various controls.
  • The recognition processing unit 51 receives sensing information from the surroundings monitoring sensor 3 and, based on that information, recognizes the surrounding environment of the vehicle to be parked, the scene in which parking is to be performed, and the objects existing around the vehicle.
  • the recognition processing section 51 is composed of an image recognition section 51a, a space recognition section 51b, and a free space recognition section 51c.
  • the image recognition unit 51a performs scene recognition, three-dimensional object recognition, and the like. Various recognitions by the image recognition unit 51a are realized by image analysis of image data from the peripheral monitoring camera 31 that is input as sensing information.
  • In scene recognition, the unit recognizes what kind of scene the parking scene is. For example, it recognizes whether the scene is a normal parking scene, in which there is no obstacle OB near the planned parking position SEP and the parking of the vehicle V is not particularly restricted, or a special parking scene, in which the parking of the vehicle V is restricted by an obstacle OB.
  • Since the imaging data input from the surroundings monitoring camera 31 shows the surroundings, analyzing the image makes it possible to determine whether the scene is a normal parking scene or a special parking scene. For example, if an object around the planned parking position SEP is detected from the imaging data and the object obstructs parking at the planned parking position SEP, the scene can be determined to be a special parking scene.
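The normal/special decision described above amounts to a proximity test around the planned parking position. A minimal sketch follows; it is not part of the disclosure, and the 2-D point representation of obstacles and the clearance threshold are illustrative assumptions:

```python
import math

def classify_parking_scene(obstacles, planned_position, clearance=2.5):
    """Classify a parking scene as 'normal' or 'special'.

    The scene is 'special' when any detected obstacle lies within the
    clearance radius (metres, an assumed value) of the planned parking
    position SEP, i.e. the obstacle may restrict parking there.
    """
    px, py = planned_position
    for ox, oy in obstacles:
        if math.hypot(ox - px, oy - py) <= clearance:
            return "special"
    return "normal"
```

In practice the obstacle list would come from the three-dimensional object recognition described above rather than being passed in directly.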
  • the scene recognition may be performed based on not only the sensing information of the perimeter monitoring camera 31 but also the sensing information of the survey wave sensor.
  • In three-dimensional object recognition, obstacles OB composed of three-dimensional objects existing around the vehicle, such as dynamic targets and static targets, are recognized as objects to be detected. Based on the detected objects recognized by this three-dimensional object recognition, preferably the shapes of the static targets among them, the scene recognition described above and the generation of a parking support map including the obstacles OB are performed.
  • the space recognition unit 51b performs three-dimensional object recognition and the like.
  • the space recognition unit 51b recognizes three-dimensional objects in the space around the vehicle based on sensing information from at least one of the sonar 32, the millimeter wave radar 33, and the LiDAR 34.
  • The three-dimensional object recognition here is the same as that performed by the image recognition unit 51a. Therefore, if either one of the image recognition unit 51a and the space recognition unit 51b is provided, three-dimensional object recognition can be performed.
  • In this embodiment, the space recognition unit 51b does not perform scene recognition, but the space recognition unit 51b may also perform scene recognition based on sensing information from at least one of the sonar 32, the millimeter wave radar 33, and the LiDAR 34.
  • Although three-dimensional object recognition and scene recognition can be performed by either the image recognition unit 51a or the space recognition unit 51b, using both makes it possible to perform these recognitions with higher accuracy.
  • For example, by complementing the three-dimensional object recognition and scene recognition by the image recognition unit 51a with those by the space recognition unit 51b, higher-accuracy recognition can be achieved.
  • the free space recognition unit 51c performs free space recognition to recognize a free space in the parking lot PL.
  • the free space means, for example, a space with a size and shape that allows the vehicle V to stop in the parking lot PL.
  • the space is not limited to a plurality of spaces in the parking lot PL, and there may be only one space.
  • the free space recognition unit 51c recognizes free spaces in the parking lot PL based on the recognition results of scene recognition and three-dimensional object recognition by the image recognition unit 51a and the space recognition unit 51b. For example, from the results of scene recognition and three-dimensional object recognition, the shape of the parking lot PL and the presence or absence of parking of other vehicles can be grasped, so based on this, the free space in the parking lot PL is recognized.
  • the free space recognizing unit 51c identifies free spaces in the image, for example, by using semantic segmentation to categorize each pixel in the image based on the peripheral information of each pixel.
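Once each cell of the surroundings has been labelled (for example from semantic segmentation output downsampled onto a grid), free-space recognition reduces to searching for regions large enough for the vehicle. The grid representation, the "free" label, and the rectangular footprint below are illustrative assumptions, not details from the disclosure:

```python
def find_free_spaces(label_grid, footprint_rows, footprint_cols):
    """Scan a per-cell label grid for windows large enough to hold the
    vehicle footprint. Returns the top-left (row, col) of every window
    whose cells are all labelled 'free'."""
    rows, cols = len(label_grid), len(label_grid[0])
    candidates = []
    for r in range(rows - footprint_rows + 1):
        for c in range(cols - footprint_cols + 1):
            window = [label_grid[r + i][c + j]
                      for i in range(footprint_rows)
                      for j in range(footprint_cols)]
            if all(cell == "free" for cell in window):
                candidates.append((r, c))
    return candidates
```

A real implementation would work at the resolution of the segmentation output and also account for the vehicle's orientation; the exhaustive window scan is kept for clarity.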
  • The vehicle information acquisition unit 52 acquires information on the amount of operation of the vehicle V from the other ECUs 4 and the like. Specifically, the vehicle information acquisition unit 52 acquires detection signals output from sensors mounted on the vehicle V, such as an accelerator position sensor, a brake depression force sensor, a steering angle sensor, a wheel speed sensor, and a shift position sensor.
  • the parking control unit 53 executes various controls required for parking assistance.
  • The parking control unit 53 includes, as functional units that execute various controls, a route storage unit 54, a route generation unit 55, a position estimation unit 56, a tracking control unit 57, an information providing unit 58, and an image generation unit 59.
  • The route storage unit 54 stores in the storage unit 50 the sensing information of the perimeter monitoring sensor 3 obtained when the user performs the parking operation of the vehicle V. For example, when the learning process is started, the route storage unit 54 stores, as route information in the storage unit 50, the targets on the travel route of the vehicle V, the parking-available free spaces, the parking position, and the like that are sequentially acquired by the recognition processing unit 51.
  • The route storage unit 54 also stores, as route information in the storage unit 50, the imaging data and the like sequentially input from the perimeter monitoring camera 31. Note that if the imaging data of the front camera 31a, the rear camera 31b, the left side camera 31c, and the right side camera 31d were sequentially stored in the storage unit 50 as they are, the amount of route information would increase and the capacity of the storage unit 50 would become tight. Therefore, the route storage unit 54 may store in the storage unit 50, for example, composite image data obtained by synthesizing the imaging data of the front camera 31a, the rear camera 31b, the left side camera 31c, and the right side camera 31d.
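The capacity-saving synthesis can be sketched as storing one reduced mosaic per control cycle instead of four full frames. The vertical mosaic layout and the decimation factor are illustrative assumptions about how the composition reduces stored volume; frames are modelled as 2-D pixel grids:

```python
def composite_frame(front, rear, left, right, step=2):
    """Build one downsampled mosaic from the four camera frames so that
    a single record per cycle is stored instead of four, shrinking the
    stored route information by roughly step**2 per camera."""
    def shrink(frame):
        # Keep every step-th row and column (simple decimation).
        return [row[::step] for row in frame[::step]]
    mosaic = []
    for cam in (front, left, right, rear):
        mosaic.extend(shrink(cam))
    return mosaic
```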
  • the route generation unit 55 generates a route based on the results of scene recognition, three-dimensional object recognition, and free space recognition.
  • The route generation unit 55 generates a target route TP that the vehicle V should follow when the vehicle V is parked, based on the travel route of the vehicle V during the learning process and information about the surroundings of the vehicle V on that route. For example, the route generation unit 55 sets the travel route of the vehicle V as a reference route and, if the reference route contains a section in which the distance between the vehicle V and an obstacle OB is equal to or less than a predetermined value, generates the target route TP by replacing that section with a route in which the distance between the vehicle V and the obstacle OB exceeds the predetermined value.
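The replacement of too-close sections can be sketched as pushing the affected route points away from the obstacle until the clearance is exceeded. The disclosure only states that such sections are replaced; the radial push used here is an illustrative assumption:

```python
import math

def adjust_route(reference_route, obstacle, min_clearance=1.0):
    """Return a target route in which every point of the reference
    route that lies within min_clearance (metres, assumed) of the
    obstacle is moved radially away until the clearance is exceeded."""
    ox, oy = obstacle
    target = []
    for x, y in reference_route:
        d = math.hypot(x - ox, y - oy)
        if 0 < d <= min_clearance:
            scale = (min_clearance * 1.01) / d  # just beyond the limit
            x, y = ox + (x - ox) * scale, oy + (y - oy) * scale
        target.append((x, y))
    return target
```

A practical generator would also smooth the adjusted section so it remains drivable; only the clearance rule is shown here.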
  • the obstacle OB is composed of a three-dimensional object recognized by three-dimensional object recognition.
  • the position estimation unit 56 estimates the current position of the vehicle V based on the sensing information stored in the storage unit 50 and the sensing information sequentially acquired by the surroundings monitoring sensor 3 during parking assistance.
  • the position estimation unit 56 compares, for example, the sensing information stored in the storage unit 50 with the sensing information acquired during parking assistance, and estimates the current position based on the difference between them.
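The "difference between them" can be sketched as the average displacement between landmarks stored during learning and the same landmarks observed during assistance, applied as a correction to a dead-reckoned position. The index-based pairing and the averaging are illustrative assumptions; a real system would use robust matching:

```python
def estimate_position(stored_landmarks, observed_landmarks, odometry_position):
    """Correct an odometry position from the mean displacement between
    landmark positions stored in the storage unit and the positions at
    which the same landmarks are observed now (both in the map frame)."""
    if not stored_landmarks:
        return odometry_position
    n = len(stored_landmarks)
    dx = sum(s[0] - o[0] for s, o in zip(stored_landmarks, observed_landmarks)) / n
    dy = sum(s[1] - o[1] for s, o in zip(stored_landmarks, observed_landmarks)) / n
    x, y = odometry_position
    return (x + dx, y + dy)
```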
  • By performing vehicle motion control such as acceleration/deceleration control and steering control of the vehicle V, the follow-up control unit 57 automatically moves the vehicle V from the support start position STP to the planned parking position SEP along the target route TP. Specifically, the follow-up control unit 57 outputs control signals to the various ECUs 4 so that the current position of the vehicle V estimated by the position estimation unit 56 reaches the planned parking position SEP along the target route TP.
  • the various ECUs 4 include a steering ECU 41 that controls steering, a brake ECU 42 that controls acceleration and deceleration, a power management ECU 43, and a body ECU 44 that controls various electrical components such as lights and door mirrors.
  • Via the vehicle information acquisition unit 52, the follow-up control unit 57 acquires the detection signals output from the sensors mounted on the vehicle V, such as the accelerator position sensor, the brake depression force sensor, the steering angle sensor, the wheel speed sensor, and the shift position sensor. The follow-up control unit 57 then detects the state of each unit from the acquired detection signals and outputs control signals to the various ECUs 4 in order to move the vehicle V so as to follow the target route TP.
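One control cycle of such route following can be sketched as computing a speed and steering command toward the next waypoint of the target route. The proportional heading law, the gains, and the limits are illustrative assumptions; the disclosure only specifies that control signals are sent to the steering and brake ECUs:

```python
import math

def follow_command(current_pose, waypoint, cruise_speed=0.8, k_heading=1.5):
    """Compute one (speed m/s, steering rad) command steering the
    vehicle toward the next waypoint. current_pose is (x, y, heading)
    with heading in radians."""
    x, y, heading = current_pose
    wx, wy = waypoint
    desired = math.atan2(wy - y, wx - x)
    # Wrap the heading error into (-pi, pi].
    error = math.atan2(math.sin(desired - heading), math.cos(desired - heading))
    steering = max(-0.6, min(0.6, k_heading * error))  # clamped command
    # Stop once the waypoint is essentially reached (0.3 m, assumed).
    speed = cruise_speed if math.hypot(wx - x, wy - y) > 0.3 else 0.0
    return speed, steering
```

In the device these two values would be translated into requests for the brake ECU 42 and the steering ECU 41 rather than returned directly.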
  • The information providing unit 58 provides information to the user using an HMI (an abbreviation for Human Machine Interface) 45.
  • the HMI 45 is a device for providing various types of support to the user.
  • The HMI 45 has a touch panel display unit 46 and a speaker 47.
  • the touch panel display unit 46 is a touch panel type display used in a navigation system or a meter system.
  • the information providing unit 58 provides the user with information regarding the planned parking position SEP included in the route information stored in the storage unit 50 in a visual manner before the follow-up control process is started. For example, the information providing unit 58 displays an image showing the surroundings of the planned parking position SEP on the touch panel display unit 46 before starting the follow-up control process.
  • the information providing unit 58 displays various buttons to prompt the user to perform touch operations on the touch panel display unit 46.
  • Various buttons are operation buttons touch-operated by the user.
  • The information providing unit 58 displays, for example, a start button STB for the follow-up control process, a selection button SLB for selecting the planned parking position SEP, and the like on the touch panel display unit 46.
  • the touch panel display section 46 of the present embodiment not only displays information, but also serves as an "operation section" operated by the user.
  • the information providing unit 58 changes the display contents of the touch panel display unit 46 according to the operation signal of the touch operation of the touch panel display unit 46.
  • the information providing unit 58 changes the viewpoint of the three-dimensional display (that is, 3D view) displayed on the touch panel display unit 46 in response to an operation signal of the touch panel display unit 46 by the user.
  • the image generation unit 59 generates image data to be displayed on the touch panel display unit 46 using the image data of the perimeter monitoring camera 31 .
  • the image generator 59 and the image recognition unit 51a are separate in this embodiment, the image generator 59 may be included in the image recognition unit 51a.
  • The image generation unit 59 periodically or irregularly generates peripheral image data (hereinafter also referred to as a peripheral image) using, for example, the imaging data from the front camera 31a, the rear camera 31b, the left side camera 31c, and the right side camera 31d.
  • the peripheral image is an image corresponding to at least a partial range of the area around the vehicle V, and includes the camera viewpoint image Gc, a synthesized image, and the like.
  • The camera viewpoint image Gc is an image whose viewpoint is at the arrangement position of each lens of the periphery monitoring camera 31.
  • One of the synthesized images is an image of the surroundings of the vehicle V viewed from a virtual viewpoint set at an arbitrary position around the vehicle V (hereinafter also referred to as a virtual viewpoint image). A method of generating a virtual viewpoint image will be described below.
  • The image generation unit 59 projects the information of each pixel included in the imaging data of the front camera 31a, the rear camera 31b, the left side camera 31c, and the right side camera 31d onto a predetermined projection surface (for example, a bowl-shaped curved surface) in a virtual three-dimensional space. Specifically, the image generation unit 59 projects the information of each pixel onto a portion of the projection surface other than its center.
  • The center of the projection surface is defined as the position of the vehicle V.
  • the image generation unit 59 sets a virtual viewpoint in a virtual three-dimensional space, and extracts a predetermined region of the projected curved surface included in a predetermined viewing angle as image data when viewed from the virtual viewpoint. Generate a viewpoint image.
  • The virtual viewpoint image obtained in this manner is a three-dimensional representation of an image showing the surroundings of the vehicle V.
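The bowl-shaped projection surface centered on the vehicle can be sketched by its height profile: flat near the vehicle so the ground looks undistorted, rising with distance so distant objects stay visible from the virtual viewpoint. The specific profile (flat radius, quadratic rise) is an illustrative assumption, not a detail of the disclosure:

```python
import math

def bowl_height(x, y, flat_radius=3.0, curvature=0.5):
    """Height of the bowl-shaped projection surface at ground point
    (x, y), with the vehicle position at the bowl center: zero inside
    flat_radius (metres, assumed), rising quadratically outside it."""
    r = math.hypot(x, y)
    if r <= flat_radius:
        return 0.0
    return curvature * (r - flat_radius) ** 2
```

Each camera pixel would be projected onto the 3-D point (x, y, bowl_height(x, y)) and then re-rendered from the chosen virtual viewpoint.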
  • The image generation unit 59 further generates an image in which a virtual vehicle image Gv showing the vehicle V, together with lines, frames, marks, and the like for supporting the parking operation, is superimposed on the camera viewpoint image Gc or the virtual viewpoint image.
  • the virtual vehicle image Gv is composed of, for example, opaque or translucent polygons representing the shape of the vehicle V.
  • the automatic parking system 1 is configured as described above. Next, the operation of the automatic parking system 1 configured in this manner will be described.
  • the case where the vehicle V is parked in the parking lot PL shown in FIG. 2 will be described as an example.
  • Four parking spaces SP for vehicles V are set in the parking lot PL shown in FIG.
  • a first parking space SP1 and a second parking space SP2 are vertically arranged along a passage PS that extends linearly from a vehicle entrance/exit B.
  • a third parking space SP3 and a fourth parking space SP4 are provided adjacent to each other so as to cross the passage PS.
  • a third parking space SP3 and a fourth parking space SP4 are provided between the building BL and the house HM.
  • In the third parking space SP3, the vehicle V can be parked facing forward by moving back and forth (that is, by turning around). The same applies to the fourth parking space SP4.
  • the third parking space SP3 is assumed to be the planned parking position SEP, and the vehicle V is parked facing forward at the planned parking position SEP.
  • The learning process shown in FIG. 3 is executed by the parking control unit 53 at each predetermined control cycle when an instruction to perform the learning process is issued, for example, when a learning switch (not shown) is operated by the user.
  • Each process shown in this flowchart is implemented by the corresponding functional unit of the parking assistance device 5. Each step for realizing this processing can also be regarded as a step for realizing the parking assistance method.
  • the parking control unit 53 starts recognition processing in step S100.
  • In the recognition processing, scene recognition, three-dimensional object recognition, and free space recognition by the recognition processing unit 51 are started based on the sensing information of the periphery monitoring sensor 3.
  • the parking control unit 53 determines whether or not the learning start condition is satisfied.
  • the learning start condition is, for example, a condition that is met when the vehicle V enters a learning start area designated in advance by the user around the parking lot PL.
  • the learning start condition may be a condition that is satisfied when a learning switch (not shown) is turned on.
  • the parking control unit 53 waits until the learning start condition is satisfied, and when the learning start condition is satisfied, in step S120, starts storing various information necessary for parking assistance.
  • the parking control unit 53 stores, for example, targets in the traveling route of the vehicle V, parking available free spaces, parking positions, etc., which are sequentially acquired by the recognition processing unit 51, in the storage unit 50 as route information.
  • the parking control unit 53 of the present embodiment stores in the storage unit 50 peripheral images when the vehicle V is running and when the vehicle is parked at the parking position. Specifically, the parking control unit 53 stores images captured by the front camera 31a, the rear camera 31b, the left side camera 31c, and the right side camera 31d in the storage unit 50 as surrounding images during parking.
• Note that a composite image obtained by synthesizing the images captured by the front camera 31a, the rear camera 31b, the left side camera 31c, and the right side camera 31d is preferably stored in the storage unit 50 as the surrounding image when parking.
  • the parking control unit 53 determines whether or not the learning stop condition is satisfied.
  • the learning stop condition is a condition that is met when the vehicle V stops at the planned parking position SEP designated in advance by the user or in the vicinity of the planned parking position SEP.
  • the learning stop condition may be a condition that is met when the shift position is switched to a position that means parking (for example, the P position).
• The parking control unit 53 continues storing various information in the storage unit 50 until the learning stop condition is satisfied. On the other hand, when the learning stop condition is satisfied, the parking control unit 53 stops storing various information in step S140.
• In step S150, the parking control unit 53 notifies the user via the HMI 45 that the storage of various information has been completed, and exits the learning process.
• In step S150, for example, the user may also be notified of the travel route of the vehicle V during the learning process and the circumstances around the travel route.
  • the learning process from step S100 to step S150 is performed by the route storage section 54 of the parking control section 53.
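The learning flow of steps S100 to S150 can be sketched as a small per-cycle loop. The sketch below is illustrative only: the predicates standing in for the learning start/stop conditions and the per-cycle data shape are assumptions, not the patent's implementation.

```python
def run_learning(frames, start_condition, stop_condition):
    """Sketch of the learning process (S100-S150).

    frames: iterable of per-cycle sensing snapshots (dicts).
    start_condition / stop_condition: predicates over a snapshot,
    standing in for the learning start/stop conditions in the text
    (e.g. "entered the learning start area", "shift switched to P").
    Returns the stored route information once the stop condition holds.
    """
    route_info = []          # stands in for the storage unit 50
    storing = False
    for frame in frames:     # one iteration per control cycle
        if not storing:
            if start_condition(frame):   # S110: learning start condition met
                storing = True
            else:
                continue
        # S120: store targets, free spaces, surrounding images, etc.
        route_info.append(frame)
        if stop_condition(frame):        # S130/S140: stop storing
            break
    return route_info                    # S150 would notify the user here
```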
• Next, an example of the support process for automatically moving the vehicle V from the support start position STP to the planned parking position SEP along the target route TP will be described with reference to the flowchart shown in FIG. 4.
  • the support process shown in FIG. 4 is executed by the parking control unit 53 in each predetermined control cycle under the condition that the learning process has been performed at least once.
• Each process shown in this flowchart is implemented by each functional unit of the parking assistance device 5. Further, each step for realizing this process can also be understood as a step for realizing the parking assistance method.
• In step S200, the parking control unit 53 determines whether or not the current position of the vehicle V is near the support start position STP using the sensing information of the surroundings monitoring sensor 3, a GPS (not shown), and a map database. The support start position STP is set near the vehicle entrance/exit B of the parking lot PL.
  • the vehicle entrance/exit B is a boundary portion between the public road OL and the parking lot PL.
  • the support start position STP may be set on the public road OL side instead of the parking lot PL side.
• In step S210, the parking control unit 53 notifies the user via the HMI 45 that the current position of the vehicle V is near the support start position STP.
• For example, the parking control unit 53 displays the camera viewpoint image Gc in the left area of the touch panel display unit 46 and the overhead image Gh in the right area, thereby notifying the user that the current position of the vehicle V is near the support start position STP.
• Alternatively, the notification to the user may be realized by displaying a message on the touch panel display unit 46 indicating that the vehicle is near the support start position STP, or by outputting from the speaker 47 a voice notifying that the vehicle is near the support start position STP.
  • the camera viewpoint image Gc shown in FIG. 5 is an image taken from the viewpoint of the arrangement position of the lens of the camera (the front camera 31a in this example) that captures the scenery in the direction in which the vehicle V is scheduled to move.
• The bird's-eye view image Gh shown in FIG. 5 is an image of the vehicle V and its surroundings as viewed from above, on which a virtual vehicle image Gv is superimposed.
  • the camera viewpoint image Gc and the bird's-eye view image Gh are generated by the image generation unit 59 based on the images captured by the surroundings monitoring camera 2 during execution of the support process.
  • the parking spaces SP, objects, and the like appearing in the camera viewpoint image Gc and the bird's-eye view image Gh are given the same reference numerals as those of the actual objects. This is the same for images other than the camera viewpoint image Gc and the overhead image Gh.
• In step S220, the parking control unit 53 determines whether or not an instruction to perform parking assistance has been issued by the user through operation of the parking assistance start switch 35.
• The parking control unit 53 skips the subsequent processing and exits this process when the user does not turn on the parking assistance start switch 35.
• On the other hand, the parking control unit 53 performs the process for generating the target route TP in step S230 when the parking assistance start switch 35 is turned on by the user. Details of the process in step S230 will be described below with reference to the flowchart shown in FIG.
• In step S300, the parking control unit 53 reads the route information stored in the storage unit 50 during the learning process.
• When a plurality of pieces of route information are stored in the storage unit 50, the parking control unit 53 reads each of the plurality of pieces of route information.
  • the parking control unit 53 starts recognition processing in step S310.
• In this recognition processing, scene recognition, three-dimensional object recognition, and free space recognition by the recognition processing unit 51 are started based on the sensing information of the periphery monitoring sensor 3.
• In step S320, the parking control unit 53 generates the target route TP based on the route information. Specifically, the parking control unit 53 generates the target route TP that the vehicle V should follow when the vehicle V is parked, based on the travel route of the vehicle V during the learning process and the information about the surroundings of the vehicle V on the travel route. As shown in FIG. 7, this target route TP is a route along which the vehicle V passes in front of the third parking space SP3 as in the learning process and is then turned back so as to be parked facing forward in the third parking space SP3.
• For example, when the route information in which the third parking space SP3 is the parking position is stored, the parking control unit 53 generates the target route TP with the third parking space SP3 as the planned parking position SEP.
• When the storage unit 50 stores route information for both the third parking space SP3 and the fourth parking space SP4, the parking control unit 53 sets each of the third parking space SP3 and the fourth parking space SP4 as a candidate position for the planned parking position SEP. Then, a target route TP is generated for each candidate position of the planned parking position SEP.
• In step S330, the parking control unit 53 determines whether or not there is a new obstacle OB on the target route TP that did not exist during the learning process. Specifically, the parking control unit 53 determines whether or not there is a new obstacle OB based on the recognition result of the three-dimensional object recognition at the support start position STP and the recognition result of the three-dimensional object recognition included in the route information.
  • the obstacle OB is composed of a three-dimensional object recognized by three-dimensional object recognition.
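The determination of step S330, which compares the three-dimensional objects recognized now against those recorded in the route information during learning, can be sketched as follows. The 2-D point representation of objects and the distance tolerance are illustrative assumptions, not the patent's data model.

```python
import math

def find_new_obstacles(current_objects, stored_objects, tol=0.5):
    """Illustrative check for step S330: an object recognized now,
    given as an (x, y) position, counts as a new obstacle OB if no
    object stored in the route information during the learning process
    lies within `tol` metres of it."""
    new = []
    for cx, cy in current_objects:
        if all(math.hypot(cx - sx, cy - sy) > tol for sx, sy in stored_objects):
            new.append((cx, cy))
    return new
```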
• When there is no new obstacle OB on the target route TP, the parking control unit 53 skips the subsequent processes and exits this process.
• In step S340, the parking control unit 53 searches for an object avoidance route that avoids the obstacle OB on the target route TP and reaches the planned parking position SEP, and attempts to generate the object avoidance route. Specifically, the parking control unit 53 searches the travel route of the vehicle V included in the route information for a section in which the distance between the vehicle V and the obstacle OB is equal to or less than a predetermined value, and generates the object avoidance route by replacing that section with a route in which the distance from the obstacle OB exceeds the predetermined value.
• The object avoidance route generated in this manner is, for example, a route that avoids a collision between the vehicle V and the obstacle OB, as shown in FIG. Among the obstacles OB, dynamic targets move; therefore, it is desirable that the object avoidance route be a route that avoids only static targets.
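The replacement described in step S340 can be sketched as follows, assuming 2-D route points and a single static obstacle. Pushing the affected points radially away from the obstacle until the clearance is exceeded is one illustrative strategy, not the patent's actual planner.

```python
import math

def avoid_obstacle(route, obstacle, clearance=1.0):
    """Sketch of step S340: sections of the travel route whose distance
    to the obstacle OB is at or below `clearance` are replaced by points
    pushed radially away from the obstacle until the distance exceeds it.
    Only a static target is handled, per the text's recommendation."""
    ox, oy = obstacle
    out = []
    for x, y in route:
        d = math.hypot(x - ox, y - oy)
        if d <= clearance:
            if d == 0.0:
                x, y = ox + clearance * 1.01, oy   # degenerate case: pick a direction
            else:
                scale = (clearance * 1.01) / d     # push just past the clearance
                x, y = ox + (x - ox) * scale, oy + (y - oy) * scale
        out.append((x, y))
    return out
```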
• In step S350, the parking control unit 53 determines whether the object avoidance route has been generated. If the object avoidance route could be generated, the parking control unit 53 sets the object avoidance route as the target route TP in step S360 and exits this process. By setting the object avoidance route as the target route TP in this way, the object avoidance route is visually provided to the user in the display process of the planned parking position SEP, which will be described later.
• On the other hand, if the object avoidance route could not be generated, the parking control unit 53 turns on, in step S370, the parking prohibition flag indicating that parking at the planned parking position SEP is impossible, and ends this process.
  • the processing of steps S320 to S370 is performed by the route generation section 55 of the parking control section 53.
• In step S240, the parking control unit 53 determines whether the vehicle can be parked at the planned parking position SEP. In this determination process, for example, it is determined that parking at the planned parking position SEP is possible when the parking prohibition flag is off, and that parking at the planned parking position SEP is not possible when the parking prohibition flag is on.
• When parking at the planned parking position SEP is possible, the parking control unit 53 performs the display process of the planned parking position SEP in step S250.
• Here, the process for displaying the planned parking position SEP will be described with reference to the flowchart shown in FIG.
  • the parking control unit 53 determines in step S400 whether there are multiple candidate positions for the planned parking position SEP. For example, the parking control unit 53 determines whether the storage unit 50 stores route information obtained when the vehicle V is parked in a different parking space SP.
• When there are not multiple candidate positions, in step S410 the parking control unit 53 visually provides the user with information on the planned parking position SEP included in the route information.
  • the processing of step S410 is performed by the information providing section 58 of the parking control section 53.
  • the parking control unit 53 provides the user with a virtual parking image Gp obtained as information about the vicinity of the planned parking position SEP among the route information before starting the follow-up control process. For example, the parking control unit 53 displays a virtual parking image Gp in the upper right area of the touch panel display unit 46, as shown in FIG.
  • the virtual parking image Gp is generated by the image generation unit 59 based on the image stored in the storage unit 50 as the route information during the learning process.
• Specifically, the parking control unit 53 displays, in the upper right area of the touch panel display section 46, as the virtual parking image Gp, an image in which the virtual vehicle image Gv and a parking frame image Gf indicating the planned parking position SEP are superimposed on a virtual viewpoint image showing the surroundings of the planned parking position SEP.
• The virtual vehicle image Gv is an image representing the vehicle V (a polygon image in this example).
  • the parking frame image Gf is a thick-line image colored in blue or red so that it can be distinguished from the parking frame shown in the virtual viewpoint image.
  • the virtual parking image Gp is a three-dimensional representation of an image showing the surroundings of the planned parking position SEP.
• In addition, the parking control unit 53 changes the viewpoint of the virtual parking image Gp according to an operation signal of a touch operation performed by the user on the touch panel display unit 46.
• For example, the parking control unit 53 acquires an operation signal corresponding to a touch operation, such as a flick or a drag, performed on the touch panel display unit 46 in the directions indicated by the vertical and horizontal rotation arrows R shown in the virtual parking image Gp, and changes the viewpoint of the virtual parking image Gp according to the operation signal.
• In addition, the parking control unit 53 enlarges and reduces the virtual parking image Gp in response to, for example, touch operations on the zoom-in icon ZI and the zoom-out icon ZO shown in the virtual parking image Gp.
  • enlargement and reduction of the virtual parking image Gp may be realized by operations other than icon operations.
• It is desirable that enlargement and reduction of the virtual parking image Gp be realized, for example, by a pinch-out operation that widens the distance between two fingers on the surface of the touch panel display unit 46 and a pinch-in operation that narrows that distance. If enlargement and reduction of the virtual parking image Gp are realized by such screen operations, it is possible to avoid a reduction in the display size of the virtual parking image Gp due to icon display and the obscuring of part of the image due to superimposed icons. That is, both the display size of the image and the visibility of the image can be ensured. The same applies not only to enlargement and reduction of an image but also to changing the viewpoint of an image.
• Further, the parking control unit 53 displays a vehicle surrounding image Ga including a peripheral image showing the surroundings of the vehicle V in the left area of the touch panel display unit 46, and displays an illustration image Gi in the lower right area of the touch panel display unit 46.
• The vehicle peripheral image Ga is an image in which a target route image Gt and an icon P are superimposed on a virtual viewpoint image obtained by viewing a projection curved surface from a virtual viewpoint set behind the vehicle V and extracting, as an image, the area on the projection curved surface included in a predetermined viewing angle.
  • This vehicle surroundings image Ga is generated by the image generating unit 59 based on the image captured by the surroundings monitoring camera 2 during the execution of the support process and the information regarding the target route TP.
  • the target route image Gt is an image showing the target route TP from the current position of the vehicle V to the planned parking position SEP.
  • the parking control unit 53 identifies a route in front of an object appearing in the vehicle peripheral image Ga on the target route TP as a front route, and identifies a route behind the object on the target route TP as a back route.
  • the parking control unit 53 uses semantic segmentation to identify an object appearing in the vehicle peripheral image Ga, and identifies the positional relationship between the object and the target route TP.
  • the image generation unit 59 of the parking control unit 53 superimposes the portion Gtb corresponding to the back route and the portion Gtf corresponding to the front route in the target route image Gt on the vehicle peripheral image Ga in different manners.
• For example, the image generating unit 59 superimposes on the vehicle peripheral image Ga, as the target route image Gt, an image in which the portion Gtf corresponding to the front route is a solid line and the portion Gtb corresponding to the back route is a broken line.
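The split of the target route image Gt into a front route (solid line Gtf) and a back route (broken line Gtb) can be sketched with a simplified occlusion test. Modelling the object as a horizontal span with a single depth is an illustrative assumption standing in for the semantic-segmentation result mentioned above.

```python
def split_route_by_occlusion(route_points, obj_span, obj_depth):
    """Illustrative split for the target route image Gt: a route point is
    part of the 'back route' (drawn as a broken line) when it falls behind
    an object in the image -- here modelled as lying within the object's
    horizontal span and deeper than the object from the virtual viewpoint.
    route_points: list of (x, depth); obj_span: (x_min, x_max)."""
    front, back = [], []
    x_min, x_max = obj_span
    for x, depth in route_points:
        occluded = x_min <= x <= x_max and depth > obj_depth
        (back if occluded else front).append((x, depth))
    return front, back   # front -> solid line Gtf, back -> broken line Gtb
```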
  • a virtual vehicle image Gv is displayed in an opaque manner in the vehicle peripheral image Ga.
  • a front portion of the virtual vehicle image Gv is superimposed on the vehicle peripheral image Ga so that the viewpoint can be easily grasped.
  • a virtual vehicle image Gv showing the entire vehicle V may be superimposed on the vehicle peripheral image Ga.
  • the icon P indicates the planned parking position SEP.
  • the icon P is superimposed near the planned parking position SEP.
  • the icon P is superimposed in a translucent manner.
  • the illustration image Gi is an image showing the relationship between the current position of the vehicle V, the target route TP, and the planned parking position SEP.
  • the illustration image Gi of this embodiment is composed of an overall bird's-eye view image including all of the current position of the vehicle V, the target route TP, and the planned parking position SEP.
  • the illustration image Gi is composed of pictures and diagrams.
  • the illustration image Gi is generated by the image generation unit 59 based on information about the target route TP and icons indicating the vehicle V and the planned parking position SEP prepared in advance. Note that the illustration image Gi shown in FIG. 10 shows only the current position of the vehicle V, the target route TP, and the planned parking position SEP, but other information such as an obstacle OB may also be shown.
  • the parking control unit 53 displays a start button STB in the area below the illustration image Gi on the touch panel display unit 46.
  • the start button STB is a button touched by the user when instructing the start of the follow-up control process.
  • the parking control unit 53 determines in step S420 whether or not the user has touched the start button STB. Then, the parking control unit 53 waits until the start button STB is touched, and when the start button STB is touched, exits this process and proceeds to the follow-up control process of step S260 shown in FIG.
• When there are multiple candidate positions, the parking control unit 53, in step S430, provides the user with information on the multiple candidate positions and information prompting the user to select the planned parking position SEP.
• The processing of step S430 is performed by the information providing section 58 of the parking control section 53.
• Specifically, the parking control unit 53 displays on the touch panel display section 46 an image in which icons P1 and P2 indicating the candidate positions are superimposed on the portions corresponding to the candidate positions of the planned parking position SEP shown in each of the virtual parking image Gp, the vehicle peripheral image Ga, and the illustration image Gi.
• Further, the parking control unit 53 displays the selection buttons SLB1 and SLB2 corresponding to the respective candidate positions in the area below the vehicle periphery image Ga on the touch panel display unit 46, and displays a decision button DB for determining the planned parking position SEP in the area below the illustration image Gi.
• In step S440, the parking control unit 53 determines whether or not the user has selected a candidate position. For example, when one of the selection buttons SLB1 and SLB2 is selected and the decision button DB is touched, the parking control unit 53 determines that the user has selected a candidate position. On the other hand, the parking control unit 53 determines that the user has not selected a candidate position when either the touch operation of one of the selection buttons SLB1 and SLB2 or the touch operation of the decision button DB has not been performed.
  • the parking control unit 53 waits until the user selects a candidate position, and when the user selects a candidate position, the process proceeds to step S450. After setting the candidate position selected by the user as the planned parking position SEP in step S450, the parking control unit 53 proceeds to step S410.
  • the parking control unit 53 displays an image showing all of the plurality of candidate positions on the touch panel display unit 46 before the user selects a candidate position.
  • the parking control unit 53 displays an image focused on the selected candidate position on the touch panel display unit 46 after the user selects the candidate position. According to this, the user can clearly grasp the planned parking position SEP.
• After the display process of the planned parking position SEP in step S250, the parking control unit 53 starts the follow-up control process in step S260.
  • the follow-up control process is a process of automatically moving the vehicle V to the planned parking position SEP along the target route TP. The follow-up control process will be described with reference to the flowchart shown in FIG.
  • the parking control unit 53 starts recognition processing in step S500.
• In this recognition processing, scene recognition, three-dimensional object recognition, and free space recognition by the recognition processing unit 51 are started based on the sensing information of the periphery monitoring sensor 3.
• In step S510, the parking control unit 53 estimates the current position of the vehicle V based on the sensing information stored in the storage unit 50 and the sensing information sequentially acquired by the periphery monitoring sensor 3 during parking assistance.
• The processing of step S510 is performed by the position estimation section 56 of the parking control section 53.
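The position estimation of step S510, which matches sensing information acquired now against that stored during learning, can be sketched as a closed-form 2-D rigid alignment of landmark points. Known correspondences and point landmarks are illustrative assumptions; a real matcher would have to establish them from the sensor data.

```python
import math

def estimate_pose(map_pts, obs_pts):
    """Sketch of step S510: given stored landmark positions (map frame,
    from the learning run) and the same landmarks observed in the vehicle
    frame, recover the vehicle pose (x, y, heading) by closed-form 2-D
    rigid alignment (least-squares rotation plus centroid translation)."""
    n = len(map_pts)
    mx = sum(p[0] for p in map_pts) / n
    my = sum(p[1] for p in map_pts) / n
    ox = sum(p[0] for p in obs_pts) / n
    oy = sum(p[1] for p in obs_pts) / n
    # rotation that best maps observed (vehicle-frame) points onto map points
    s_cos = sum((a - ox) * (c - mx) + (b - oy) * (d - my)
                for (a, b), (c, d) in zip(obs_pts, map_pts))
    s_sin = sum((a - ox) * (d - my) - (b - oy) * (c - mx)
                for (a, b), (c, d) in zip(obs_pts, map_pts))
    theta = math.atan2(s_sin, s_cos)
    # vehicle position = map centroid minus rotated observed centroid
    x = mx - (ox * math.cos(theta) - oy * math.sin(theta))
    y = my - (ox * math.sin(theta) + oy * math.cos(theta))
    return x, y, theta
```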
• In step S520, the parking control unit 53 starts automatic parking of the vehicle V at the planned parking position SEP by performing vehicle motion control such as acceleration/deceleration control and steering control of the vehicle V.
• The process of step S520 is performed by the follow-up control section 57 of the parking control section 53.
• In step S530, the parking control unit 53 determines whether there is a new obstacle OB that did not exist during the learning process on the target route TP or at the planned parking position SEP. Specifically, the parking control unit 53 determines whether or not there is a new obstacle OB based on the recognition result of the three-dimensional object recognition after the start of the follow-up control process and the recognition result of the three-dimensional object recognition included in the route information.
  • the obstacle OB is composed of a three-dimensional object recognized by three-dimensional object recognition.
• When there is no new obstacle OB, the parking control unit 53 provides the user with information on the planned parking position SEP in step S540. The processing of step S540 is performed by the information providing section 58 of the parking control section 53. For example, the parking control unit 53 displays, in the left area of the touch panel display unit 46, the vehicle peripheral image Ga with an icon P superimposed on the portion corresponding to the planned parking position SEP.
  • FIG. 14 illustrates an example of display contents on the touch panel display unit 46 at the start of the follow-up control process.
  • FIG. 15 illustrates an example of display contents on the touch panel display unit 46 after the follow-up control process is started.
• In the example shown in FIG. 14, the icon P is superimposed in a translucent manner. In the example shown in FIG. 15, the icon P is superimposed in an opaque manner. This makes it possible to grasp the position of the planned parking position SEP. In the example shown in FIG. 15, a part of the planned parking position SEP is cut off from the vehicle peripheral image Ga; therefore, the icon P is superimposed on the right end of the vehicle peripheral image Ga.
• In addition, the parking control unit 53 displays, in the right area of the touch panel display unit 46, the target route image Gt superimposed on the overhead image Gh. Furthermore, the parking control unit 53 displays a progress bar PB indicating the progress of automatic parking of the vehicle V in the area below the vehicle peripheral image Ga on the touch panel display unit 46.
  • This progress bar PB has a horizontally long bar shape, and as the distance from the support start position STP to the current position of the vehicle V increases, the colored portion inside the bar increases. This allows the user to visually grasp the progress of automatic parking.
• Instead of the progress bar PB, for example, the remaining distance to the planned parking position SEP may be displayed in the area below the vehicle periphery image Ga.
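The fill ratio of the progress bar PB can be sketched as follows. Using straight-line distances rather than arc length along the target route TP is a simplifying assumption for illustration.

```python
import math

def progress_fraction(start_pos, goal_pos, current_pos):
    """Illustrative computation for the progress bar PB: the fraction of
    the straight-line distance from the support start position STP to the
    planned parking position SEP that the vehicle has covered, clamped to
    [0, 1]. A real implementation would measure along the target route TP."""
    total = math.dist(start_pos, goal_pos)
    if total == 0:
        return 1.0
    done = math.dist(start_pos, current_pos)
    return max(0.0, min(1.0, done / total))
```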
• In addition, the parking control unit 53 changes the display mode of the virtual viewpoint image by increasing the angle of the virtual viewpoint of the virtual viewpoint image when approaching the planned parking position SEP and decreasing the angle of the virtual viewpoint when moving away from the planned parking position SEP. That is, as with the user's field of view, the closer the target is, the narrower the range displayed in the image. Since the display mode of the virtual viewpoint image is changed in the same manner as the user's field of view, it becomes easier for the user to grasp the distance to the planned parking position SEP.
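The viewpoint-angle adjustment described above can be sketched as a clamped linear interpolation over the distance to the planned parking position SEP; all numeric values below are illustrative assumptions.

```python
def virtual_viewpoint_angle(distance, near_d=2.0, far_d=20.0,
                            near_angle=60.0, far_angle=20.0):
    """Sketch of the viewpoint adjustment: the virtual viewpoint angle
    (degrees) grows as the vehicle nears the planned parking position SEP,
    narrowing the displayed range like a user's field of view, and shrinks
    as the vehicle moves away. Endpoints and distances are illustrative."""
    if distance <= near_d:
        return near_angle
    if distance >= far_d:
        return far_angle
    t = (far_d - distance) / (far_d - near_d)   # 0 when far -> 1 when near
    return far_angle + t * (near_angle - far_angle)
```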
  • the image displayed in the right area of the touch panel display section 46 is not limited to the overhead image Gh.
  • an illustration image Gi may be displayed in the right area of the touch panel display section 46 instead of the overhead image Gh.
• In step S550, the parking control unit 53 determines whether or not the vehicle V has reached the planned parking position SEP.
  • the parking control unit 53 returns to the process of step S510 when the vehicle V has not reached the planned parking position SEP, and exits the follow-up control process when the vehicle V reaches the planned parking position SEP.
• On the other hand, when there is a new obstacle OB, the parking control unit 53, in step S560, searches for an avoidance route that avoids the obstacle OB on the target route TP and reaches the planned parking position SEP, and attempts to generate the avoidance route. Since the avoidance route search and the like are the same as the processing in step S340, the description thereof is omitted.
  • the parking control unit 53 displays a message image Gm indicating that the avoidance route is being searched on the touch panel display unit 46, as shown in FIG. 18, for example. In this way, by notifying the user of the internal state of the system, it is possible to make the user ready to change the route during automatic parking.
• The notification that the avoidance route is being searched is not limited to the display of the message image Gm on the touch panel display unit 46. For example, the user may be notified by voice that the avoidance route is being searched.
• In step S570, the parking control unit 53 determines whether the avoidance route has been generated. If the avoidance route could be generated, the parking control unit 53 replaces the target route TP with the avoidance route and displays information about the avoidance route on the touch panel display unit 46 in step S580. For example, the parking control unit 53 displays, on the touch panel display unit 46, an image indicating the avoidance route superimposed on the vehicle peripheral image Ga and the overhead image Gh. In addition, the parking control unit 53 uses the speaker 47 to announce to the user that the target route TP is replaced with the avoidance route. A message indicating that the target route is to be changed to the avoidance route may also be displayed on the touch panel display section 46. Since changing the target route to the avoidance route is not intended by the user, it is desirable to combine the display of the avoidance route on the touch panel display unit 46 with the display of a message regarding the route change and a voice notification.
• When the avoidance route could not be generated, the parking control unit 53 identifies, in step S590, a stop position TSP where the vehicle V can be stopped within the parking lot PL. Specifically, the parking control unit 53 identifies the stop position TSP using the recognition result of the free space recognition. For example, as shown in FIG. 19, when the second parking space SP2 is vacant, the parking control unit 53 identifies the second parking space SP2 as the stop position TSP. This stop position TSP is a possible stop position different from the planned parking position SEP. Note that the parking control unit 53 may specify a free space other than the second parking space SP2 as the stop position TSP.
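The identification of the stop position TSP in step S590 can be sketched as choosing the nearest vacant free space other than the blocked planned parking position. Representing spaces as 2-D points and using nearest-distance selection are illustrative assumptions.

```python
import math

def choose_stop_position(vehicle_pos, free_spaces, blocked):
    """Illustrative version of step S590: pick, from the free spaces found
    by free space recognition, the nearest one that is not the blocked
    planned parking position SEP. Returns None when nothing is available."""
    candidates = [s for s in free_spaces if s != blocked]
    if not candidates:
        return None
    return min(candidates, key=lambda s: math.dist(vehicle_pos, s))
```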
• In step S600, the parking control unit 53 provides the user with information regarding the stop position TSP and the route to the stop position TSP, and recommends that the user stop at a position different from the planned parking position SEP.
• Specifically, the parking control unit 53 displays, in the left area of the touch panel display unit 46, the vehicle peripheral image Ga with an icon P superimposed on the portion corresponding to the stop position TSP, as shown in FIG.
  • the parking control unit 53 displays, in the right area of the touch panel display unit 46, an image showing the route to the stop position TSP superimposed on the bird's-eye view image Gh.
• Further, in step S600, the parking control unit 53 uses the speaker 47 to announce that the vehicle cannot be parked at the planned parking position SEP and will stop at a position different from the planned parking position SEP.
• The processing of step S600 is performed by the information providing section 58 of the parking control section 53. Then, the parking control unit 53 displays a start button STB in the area below the vehicle periphery image Ga on the touch panel display unit 46, and when the start button STB is touch-operated, proceeds to the another-position stop processing of step S610.
• In step S610, the parking control unit 53 executes the another-position stop processing for moving the vehicle V to the stop position TSP and stopping the vehicle there.
  • the parking control unit 53 starts automatic parking of the vehicle V at the stop position TSP by performing vehicle motion control such as acceleration/deceleration control and steering control of the vehicle V.
• The process of step S610 is performed by the follow-up control section 57 of the parking control section 53. Then, the parking control unit 53 stops the vehicle V at the stop position TSP and exits this process.
• When parking at the planned parking position SEP is not possible, the parking control unit 53 specifies, in step S270, a possible stop position where the vehicle V can be stopped.
• Specifically, the parking control unit 53 specifies the possible stop position when the object avoidance route could not be generated at the time of generating the target route TP.
• The parking control unit 53 uses the recognition result of the free space recognition to specify the possible stop position. For example, as shown in FIG. 19, when the second parking space SP2 is vacant, the parking control unit 53 identifies the second parking space SP2 as the possible stop position. This possible stop position is different from the planned parking position SEP.
  • the parking control unit 53 may specify a free space other than the second parking space SP2 as a possible stop position.
• In step S280, the parking control unit 53 provides the user with information on the possible stop position and the route to the possible stop position, and recommends that the user stop at a position different from the planned parking position SEP.
  • the touch panel display unit 46 displays an icon P superimposed on the stoppable position shown in the vehicle peripheral image Ga or the bird's-eye view image Gh.
  • the parking control unit 53 uses the speaker 47 to announce that the vehicle cannot be parked at the planned parking position SEP and will stop at a position different from the planned parking position SEP.
  • the processing of step S280 is performed by the information providing section 58 of the parking control section 53.
  • The parking control unit 53 displays the start button STB on the touch panel display unit 46, and when the start button STB is touch-operated, the process proceeds to the another-position stop processing of step S290.
  • In step S290, the parking control unit 53 executes another-position stop processing for moving the vehicle V to the stoppable position and stopping it there.
  • the parking control unit 53 performs vehicle motion control such as acceleration/deceleration control and steering control of the vehicle V to start automatic parking of the vehicle V at a position where the vehicle can be stopped.
  • The process of step S290 is performed by the follow-up control section 57 of the parking control section 53.
  • the parking control unit 53 stops the vehicle V at the stoppable position and exits from this process.
  • In the parking assistance device 5 and the parking assistance method described above, a target route TP that the vehicle V should follow when it is parked is generated based on route information that includes the travel route taken when the vehicle V was parked by the user and information about the surroundings of the vehicle V along that travel route.
  • the parking assistance device 5 and the parking assistance method perform follow-up control processing for automatically moving the vehicle V to the planned parking position SEP along the target route TP.
  • the parking assistance device 5 and the parking assistance method provide the information regarding the planned parking position SEP included in the above route information to the user in a visual manner before starting the follow-up control process.
  • the user can start automatically parking the vehicle V at the planned parking position SEP after clearly grasping the planned parking position SEP. Therefore, according to the parking assistance device 5 and the parking assistance method of the present disclosure, usability of automatic parking can be improved.
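The learn-then-replay principle summarized above (waypoints recorded during a manual parking operation later become the target route TP) can be sketched in Python. This is an illustrative sketch under assumed data structures (2-D waypoints and a 0.5 m thinning threshold), not the patented implementation:

```python
# Illustrative sketch: thin out the trajectory recorded while the user parks
# manually, keeping a waypoint only after the vehicle has moved at least
# 0.5 m (an assumed threshold). The kept waypoints form the target route TP
# that follow-up control later replays.

def record_route(poses):
    """Return a thinned list of (x, y) waypoints from a dense pose log."""
    route = [poses[0]]
    for p in poses[1:]:
        last = route[-1]
        if ((p[0] - last[0]) ** 2 + (p[1] - last[1]) ** 2) ** 0.5 >= 0.5:
            route.append(p)
    return route

# A dense pose log from a manual parking maneuver (metres).
manual_drive = [(0.0, 0.0), (0.1, 0.0), (0.6, 0.0), (0.7, 0.0), (1.2, 0.0)]
target_route = record_route(manual_drive)
```

A real system would also store heading, gear, and the surrounding-object information mentioned in the route information, but the thinning idea is the same.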
  • the information providing unit 58 superimposes a virtual vehicle image Gv showing the vehicle V on the planned parking position SEP in an image obtained as information about the planned parking position SEP in the route information, and provides the resulting image to the user before the start of the follow-up control process. According to this, the user can visually grasp the parking state of the vehicle V at the planned parking position SEP before the follow-up control process is started. That is, the user can easily visualize the parking position of the vehicle V resulting from automatic parking before the follow-up control process is started. This increases user satisfaction and greatly contributes to improving the usability of automatic parking.
  • the virtual vehicle image Gv is an image showing the vehicle V in a translucent manner. According to this, the user can be made to recognize that the image showing the parking state of the vehicle V at the planned parking position SEP does not show the current position of the vehicle V. That is, it is possible to prevent the misunderstanding that this image shows the current position of the vehicle V.
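The translucent superimposition of the virtual vehicle image Gv can be illustrated with plain alpha blending. The pixel format, sprite, and alpha value below are assumptions for illustration; a real system would render the overlay with a GPU or an image library rather than a pure-Python pixel loop:

```python
# Illustrative sketch: blend a semi-transparent vehicle sprite into a camera
# image. alpha=0.5 gives the translucent look that signals "planned parking
# position", not "current vehicle position".

def blend_pixel(background, overlay, alpha=0.5):
    """Blend one RGB pixel: result = alpha*overlay + (1-alpha)*background."""
    return tuple(round(alpha * o + (1.0 - alpha) * b)
                 for o, b in zip(overlay, background))

def superimpose(image, vehicle_sprite, top_left, alpha=0.5):
    """Blend vehicle_sprite into image (lists of RGB rows) at top_left."""
    r0, c0 = top_left
    out = [row[:] for row in image]
    for r, sprite_row in enumerate(vehicle_sprite):
        for c, px in enumerate(sprite_row):
            out[r0 + r][c0 + c] = blend_pixel(out[r0 + r][c0 + c], px, alpha)
    return out

asphalt = [[(100, 100, 100)] * 4 for _ in range(3)]   # grey background
sprite = [[(200, 0, 0)] * 2 for _ in range(2)]        # red vehicle silhouette
result = superimpose(asphalt, sprite, top_left=(1, 1))
```

With alpha = 1.0 the same routine would produce the opaque variant mentioned later in the document.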
  • the information providing unit 58 superimposes a parking frame image Gf indicating the planned parking position SEP on an image obtained as information about the planned parking position SEP in the route information, and provides it to the user before the follow-up control process is started. According to this, emphasizing the planned parking position SEP makes it easier for the user to visually grasp it before the follow-up control process is started.
  • the information providing unit 58 provides the user with a three-dimensional display of an image showing the surroundings of the planned parking position SEP, and changes the viewpoint of the three-dimensional display according to the user's operation signal from the touch panel display unit 46. According to this, it becomes possible to provide the user with detailed information regarding the planned parking position SEP. In particular, since the viewpoint of the three-dimensional display can be changed by touch operation of the touch panel display unit 46, information can be provided in accordance with the user's intention.
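The touch-driven viewpoint change can be sketched as an orbiting camera whose yaw and pitch follow the drag gesture. The sensitivity constant and pitch limits below are assumed values for illustration only, not parameters from the patent:

```python
# Illustrative sketch: map a touch drag (in pixels) to a new yaw/pitch of
# the virtual viewpoint orbiting the planned parking position. Pitch is
# clamped so the camera stays above the ground plane.

def update_viewpoint(yaw_deg, pitch_deg, drag_dx, drag_dy, sens=0.25):
    """Return the (yaw, pitch) in degrees after applying a drag delta."""
    yaw = (yaw_deg + drag_dx * sens) % 360.0          # wrap around freely
    pitch = min(85.0, max(10.0, pitch_deg + drag_dy * sens))
    return yaw, pitch

# Drag right 100 px and down 300 px from a 45-degree overhead view.
yaw, pitch = update_viewpoint(0.0, 45.0, drag_dx=100, drag_dy=-300)
```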
  • the information providing unit 58 visually presents information regarding the plurality of candidate positions included in the route information to the user. Then, the information providing unit 58 provides information prompting selection of the planned parking position SEP from among the plurality of candidate positions. If the user can thus select the intended parking position SEP, parking assistance that appropriately reflects the user's intention can be realized.
  • the follow-up control unit 57 identifies, based on information about the surroundings of the vehicle V obtained after the start of the follow-up control process, a possible stop position different from the planned parking position SEP. Then, when the vehicle V cannot be parked at the planned parking position SEP after the follow-up control process has started, the information providing unit 58 provides information recommending that the vehicle stop at the possible stop position. In this way, even if the vehicle V cannot be parked at the planned parking position SEP after automatic parking starts, the user is encouraged to stop at a possible stop position different from the planned parking position SEP.
  • a situation in which the vehicle V cannot be parked at the planned parking position SEP arises, for example, when another vehicle is parked there, or when an obstacle OB placed at the planned parking position SEP obstructs parking of the vehicle V.
  • After the follow-up control process is started, the information providing unit 58 superimposes a target route image Gt indicating the target route TP on a surrounding image showing the surroundings of the vehicle V obtained during execution of the follow-up control process, and provides it to the user. In this manner, it is desirable that the route along which the vehicle V is scheduled to travel be provided visually to the user during automatic parking. According to this, the user can clearly grasp the travel route of the vehicle V to the planned parking position SEP while the vehicle V is automatically parked there.
  • the information providing unit 58 identifies the path in front of the object appearing in the surrounding image on the target path TP as the front path, and identifies the path behind the object on the target path TP as the back path.
  • the information providing unit 58 superimposes the portion Gtb corresponding to the back route and the portion Gtf corresponding to the front route in the target route image Gt on the surrounding images in different manners. In this way, it is desirable that the target route TP of the vehicle V is provided in such a manner that the route that the user can actually see and the route that the user cannot actually see are distinguished.
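Splitting the target route TP into a visible front portion Gtf and an occluded back portion Gtb can be illustrated by classifying route points against an object's distance along the camera's line of sight. This simplified one-dimensional geometry is an assumption for illustration, not the patent's occlusion test:

```python
# Illustrative sketch: partition the target route into the part in front of
# an object (visible to the user, drawn one way) and the part behind it
# (occluded, drawn in a different manner).

def split_route(route_points, object_distance):
    """Classify each route point as 'front' (nearer than the object) or
    'back' (farther than the object, hence hidden behind it)."""
    front, back = [], []
    for p in route_points:
        (back if p["distance"] > object_distance else front).append(p["id"])
    return front, back

# Route points with their distance from the camera (metres, assumed values).
route = [{"id": i, "distance": d} for i, d in enumerate([1.0, 2.0, 3.5, 5.0])]
front_ids, back_ids = split_route(route, object_distance=3.0)
```

The renderer would then draw the `front_ids` segment solid and the `back_ids` segment, for example, dashed or desaturated.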
  • the information providing unit 58 provides the user with an illustration image Gi showing the relationship between the current position of the vehicle V, the target route TP, and the planned parking position SEP. Since the illustration image Gi has a smaller amount of extra information than the captured image or the like, the current position of the vehicle V, the target route TP, and the planned parking position SEP stand out. Therefore, by providing the user with an illustration image Gi showing the current position of the vehicle V, the target route TP, and the planned parking position SEP, it becomes easier to convey an overview of automatic parking to the user.
  • When an obstacle OB is found on the target route TP, the follow-up control unit 57 tries to generate an avoidance route that avoids the obstacle OB and reaches the planned parking position SEP. Then, when the follow-up control unit 57 generates an avoidance route, the information providing unit 58 provides the user with information regarding the avoidance route in a visual manner. In this way, it is desirable to provide the user with visual information on the avoidance route when there is an obstacle OB on the target route TP.
  • when the avoidance route cannot be generated, the follow-up control unit 57 identifies a stop position TSP where the vehicle V can be stopped.
  • the information providing unit 58 provides the user with information regarding the stop position TSP and the route to the stop position TSP. In this way, when the avoidance route cannot be generated, it is desirable to provide the user with information regarding the stop position TSP of the vehicle V, etc., as an alternative for the avoidance route.
  • the information providing unit 58 provides the user with information indicating that the follow-up control unit 57 is generating the avoidance route. In this way, it is desirable to notify the user that the avoidance route is being generated before providing the user with information on the avoidance route. It can be expected that the route change will be transmitted in stages rather than suddenly, thereby reducing the psychological burden on the user accompanying the route change.
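The staged behavior described above (notify that a search is in progress, then either present the avoidance route or recommend the stop position TSP) can be sketched as follows; the planner callbacks and message strings are stand-ins, not the patented algorithms:

```python
# Illustrative sketch of the staged decision flow: tell the user a new route
# is being searched for, try to generate an avoidance route, and fall back
# to a stop position when no route can be generated.

def handle_obstacle(plan_avoidance, find_stop_position, notify):
    notify("searching for avoidance route")         # staged, not sudden
    route = plan_avoidance()
    if route is not None:
        notify("avoidance route found")
        return ("follow", route)
    stop = find_stop_position()
    notify("cannot reach planned position; recommending stop position")
    return ("stop", stop)

messages = []
action, target = handle_obstacle(
    plan_avoidance=lambda: None,                    # no avoidance route exists
    find_stop_position=lambda: "TSP",
    notify=messages.append,
)
```

Announcing the search before announcing its result is what spreads the route change over two steps and, as the text argues, reduces the psychological burden on the user.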
  • when an obstacle OB is found on the target route TP before the start of the follow-up control process, the route generation unit 55 attempts to generate an object avoidance route that avoids the obstacle OB and reaches the planned parking position. When the route generation unit 55 generates the object avoidance route, the information providing unit 58 provides the user with information on the object avoidance route in a visual manner before the follow-up control process is started. In this way, when it is found before automatic parking starts that there is an obstacle OB on the target route TP, it is desirable to provide the user with visual information regarding the object avoidance route before automatic parking starts.
  • when the object avoidance route cannot be generated, the follow-up control unit 57 identifies a possible stop position where the vehicle V can be stopped. Then, the information providing unit 58 provides the user with information regarding the possible stop position and the route to it. In this manner, when it is determined before the start of automatic parking that parking at the planned parking position SEP is not possible, it is desirable to provide the user with information regarding possible stop positions of the vehicle V as an alternative to the object avoidance route.
  • Although the parking assistance device 5 displays the virtual parking image Gp on the touch panel display unit 46 as information about the planned parking position SEP before starting the follow-up control process, the present disclosure is not limited to this.
  • the parking assistance device 5 may provide the user with an image of the detection result of the search wave sensor at the planned parking position SEP, for example.
  • Although the above-described parking assistance device 5 displays not only the virtual parking image Gp but also the vehicle surrounding image Ga and the illustration image Gi on the touch panel display unit 46 before starting the follow-up control process, it is not limited to this; for example, only the virtual parking image Gp may be displayed.
  • the vehicle peripheral image Ga is displayed in the left area of the touch panel display section 46, and the overhead image Gh, the virtual parking image Gp, and the illustration image Gi are displayed in the right area of the touch panel display section 46.
  • the image display layout and the like are not limited to this.
  • the image display layout, image size, and the like on the touch panel display unit 46 may be different from those described above.
  • Although the HMI 45 has the touch panel display unit 46 in the above embodiment, the HMI 45 is not limited to this.
  • the HMI 45 may have a display operated by an operation device such as a remote controller instead of the touch panel display unit 46, for example.
  • the HMI 45 may be implemented using part of the navigation system.
  • Although the touch panel display unit 46 also serves as an operation unit, the operation unit and the display unit may be configured separately.
  • the operation unit is not limited to the touch operation, and may be operated by the user's voice, for example.
  • the parking assistance device 5 superimposes the virtual vehicle image Gv and the parking frame image Gf on an image obtained as information about the area around the planned parking position SEP, and provides the image to the user.
  • the parking assistance device 5 may provide the user with the image itself obtained as information about the vicinity of the planned parking position SEP, or with an image in which only one of the virtual vehicle image Gv and the parking frame image Gf is superimposed on it.
  • the virtual vehicle image Gv is not limited to showing the vehicle V in a translucent manner, and may also show the vehicle V in an opaque manner.
  • the parking assistance device 5 provides the user with a three-dimensional display of an image showing the surroundings of the planned parking position SEP, and changes the three-dimensional display according to the operation of the operation unit by the user.
  • the parking assistance device 5 may, for example, provide the user with a two-dimensional display of an image showing the surroundings of the planned parking position SEP.
  • when the route information includes a plurality of candidate positions for the planned parking position SEP, it is desirable that the parking assistance device 5 provide the user with visual information regarding those candidate positions, but it is not limited to this. For example, if the route information includes a plurality of candidate positions for the planned parking position SEP, the parking assistance device 5 may automatically set one of them as the planned parking position SEP based on predetermined criteria.
  • the parking assistance device 5 allows the user to visually grasp the progress of automatic parking by using the progress bar PB or the like, but it is not limited to this.
  • the parking assistance device 5 may, for example, allow the user to audibly grasp the progress of automatic parking.
  • the parking assistance device 5 changes the angle of the virtual viewpoint of the virtual viewpoint image according to the distance to the target, but it is not limited to this.
  • the parking assistance device 5 may keep the angle of the virtual viewpoint image constant regardless of the distance to the target.
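One simple way to realize the distance-dependent viewpoint angle mentioned above is linear interpolation between a near and a far setting. The specific angles and distances here are assumptions for illustration; the patent does not prescribe a particular mapping:

```python
# Illustrative sketch: tilt the virtual viewpoint lower (shallower angle)
# as the vehicle approaches the target, clamped between two limits.

def viewpoint_angle_deg(distance_m, near=(1.0, 30.0), far=(10.0, 80.0)):
    """Interpolate the camera tilt between a near setting (1 m -> 30 deg)
    and a far setting (10 m -> 80 deg); clamp outside that range."""
    d_near, a_near = near
    d_far, a_far = far
    if distance_m <= d_near:
        return a_near
    if distance_m >= d_far:
        return a_far
    t = (distance_m - d_near) / (d_far - d_near)
    return a_near + t * (a_far - a_near)

mid = viewpoint_angle_deg(5.5)   # halfway between the two settings
```

The "constant angle" variant mentioned in the text corresponds to simply returning a fixed value regardless of `distance_m`.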
  • the parking assistance device 5 instructs the user to stop the vehicle at a possible stop position different from the planned parking position SEP when the vehicle cannot be parked at the planned parking position SEP after the start of automatic parking.
  • the parking assistance device 5 may instead notify the driver of the situation, stop the vehicle V on the spot, and forcibly terminate automatic parking.
  • after the follow-up control process is started, the parking assistance device 5 superimposes the target route image Gt indicating the target route TP on the surrounding image obtained during execution of the follow-up control process.
  • the parking assistance device 5 may display the peripheral image obtained during the execution of the follow-up control process as it is.
  • Although the parking assistance device 5 displays the part behind an object and the part in front of the object in the target route image Gt in different manners, it is not limited to this. The parking assistance device 5 may, for example, display the part behind the object and the part in front of the object in the target route image Gt in the same manner.
  • the parking assistance device 5 preferably provides the user with an illustration image Gi showing the relationship between the current position of the vehicle V, the target route TP, and the planned parking position SEP during automatic parking.
  • the illustration image Gi may not be provided.
  • it is desirable that the parking assistance device 5 notify the user when searching for a route that avoids an obstacle OB on the target route TP, but it is not limited to this, and no notification may be given.
  • the parking assistance device 5 searches for a route that avoids the obstacle OB when there is an obstacle OB on the target route TP, but is not limited to this.
  • the parking assistance device 5 may, for example, prompt the vehicle to stop on the spot or request the designation of the parking position without searching for an avoidance route for the obstacle OB.
  • Although the parking assistance device 5 of the present disclosure is applied to parking assistance in a parking lot PL having a plurality of parking spaces SP, the application target of the parking assistance device 5 is not limited to this.
  • the parking assistance device 5 can also be applied to parking assistance on a site provided with a single parking space SP, such as in front of one's own house.
  • the controller and techniques of the present disclosure may be implemented by a dedicated computer provided by configuring a processor and memory programmed to perform one or more functions embodied by a computer program.
  • the controller and techniques of the present disclosure may be implemented in a dedicated computer provided by configuring the processor with one or more dedicated hardware logic circuits.
  • the control unit and method of the present disclosure may be implemented by one or more dedicated computers configured as a combination of a processor and memory programmed to perform one or more functions and a processor configured with one or more hardware logic circuits.
  • the computer program may also be stored as computer-executable instructions on a computer-readable non-transitional tangible recording medium.
  • A parking assistance device comprising:
a route generation unit (55) that generates a target route (TP) to be followed by the vehicle when the vehicle is parked, based on route information including a travel route of the vehicle when the vehicle (V) is parked by a user and information about the surroundings of the vehicle on the travel route;
a follow-up control unit (57) that performs a follow-up control process for automatically moving the vehicle to a planned parking position (SEP) along the target route; and
an information providing unit (58) that provides information to the user,
wherein the information providing unit provides the information regarding the planned parking position included in the route information to the user in a visual manner before the follow-up control process is started.
  • the information providing unit superimposes a virtual vehicle image (Gv) showing the vehicle at the planned parking position on an image obtained as information about the planned parking position in the route information, and provides it to the user before the follow-up control process is started.
  • the information providing unit superimposes a parking frame image (Gf) indicating the planned parking position on an image obtained as information about the surroundings of the planned parking position in the route information, and provides it to the user before the follow-up control process is started.
  • the information providing unit provides the user with a three-dimensional display of an image showing the surroundings of the planned parking position, and changes the viewpoint of the three-dimensional display in response to an operation signal of the operation unit (46) by the user; the parking assistance device according to any one of Disclosures 1 to 4.
  • the information providing unit visually displays information regarding the plurality of candidate positions included in the route information to the user, and provides information prompting selection of the planned parking position from among the plurality of candidate positions; the parking assistance device according to any one of Disclosures 1 to 5.
  • the follow-up control unit identifies, based on information about the vehicle's surroundings obtained after the start of the follow-up control process, a possible stop position different from the planned parking position, and the information providing unit provides information recommending that the vehicle be stopped at the possible stop position; the parking assistance device according to any one of the above Disclosures.
  • After the follow-up control process is started, the information providing unit superimposes a target route image (Gt) indicating the target route on a surrounding image showing the surroundings of the vehicle obtained during execution of the follow-up control process, and provides it to the user; the parking assistance device according to any one of Disclosures 1 to 7.
  • When an obstacle is found on the target route, the follow-up control unit tries to generate an avoidance route that avoids the obstacle and reaches the planned parking position, and when the follow-up control unit generates the avoidance route, the information providing unit provides the user with information regarding the avoidance route in a visual manner; the parking assistance device according to any one of Disclosures 1 to 9.
  • the follow-up control unit identifies a stop position where the vehicle can be stopped when the avoidance route cannot be generated.
  • Before the follow-up control process is started, the route generation unit attempts to generate an object avoidance route that avoids the obstacle and reaches the planned parking position, and the information providing unit provides the user with information regarding the object avoidance route in a visual manner before the follow-up control process is started; the parking assistance device according to any one of Disclosures 1 to 13.
  • the follow-up control unit identifies a possible stop position where the vehicle can be stopped when the object avoidance route cannot be generated by the route generation unit.
  • the parking assistance device according to Disclosure 14, wherein the information providing unit provides the user with information about the possible stop position and a route to the possible stop position.
  • A parking assistance method comprising: generating a target route (TP) to be followed by the vehicle when the vehicle is parked, based on route information including a travel route of the vehicle when a parking operation of the vehicle (V) is performed by a user and information about the surroundings of the vehicle on the travel route; performing a follow-up control process for automatically moving the vehicle to a planned parking position (SEP) along the target route; and providing information to the user, wherein providing information to the user includes providing the information regarding the planned parking position included in the route information to the user in a visual manner before the follow-up control process is started.

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)
  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)

Abstract

This parking assistance device (5) comprises a path generation unit (55) that generates a target path (TP) to be traveled by a vehicle during the parking of the vehicle on the basis of path information including a travel path of the vehicle when a parking operation of the vehicle (V) is performed by a user and information about the surroundings of the vehicle on the travel path. The parking assistance device comprises: a tracking control unit (57) that performs a tracking control process to automatically move the vehicle to a planned parking position along the target path; and an information providing unit (58) that provides information to the user. The information providing unit provides information relating to the planned parking position included in the path information, in a visual mode to the user before the start of the tracking control process.

Description

Parking assistance device and parking assistance method

Cross-reference to related applications
This application is based on Japanese Patent Application No. 2021-136598 filed on August 24, 2021, the contents of which are incorporated herein by reference.
The present disclosure relates to a parking assistance device and a parking assistance method.
Conventionally, there is known a parking assistance device that automatically parks a vehicle at a predetermined planned parking position (see, for example, Patent Document 1). The parking assistance device described in Patent Document 1 learns the travel route of the vehicle from a reference start position to the parking position when the vehicle is parked at the planned parking position by the driver's operation, and uses the learning result to automatically park the vehicle at the planned parking position.
Patent Document 1: Japanese Patent Publication No. 2013-530867
Incidentally, if, for example, the distance between the vehicle and the planned parking position is large, or there are obstacles around the planned parking position, the user cannot confirm the planned parking position before automatic parking of the vehicle at that position starts. This is undesirable because it degrades usability.
An object of the present disclosure is to provide a parking assistance device and a parking assistance method capable of improving usability.
According to one aspect of the present disclosure, a parking assistance device comprises:
a route generation unit that generates a target route to be followed by the vehicle when the vehicle is parked, based on route information including the travel route of the vehicle when the vehicle is parked by the user and information about the surroundings of the vehicle on the travel route;
a follow-up control unit that performs a follow-up control process for automatically moving the vehicle to a planned parking position along the target route; and
an information providing unit that provides information to the user,
wherein the information providing unit provides the information regarding the planned parking position included in the route information to the user in a visual manner before the follow-up control process is started.
According to another aspect of the present disclosure, a parking assistance method comprises:
generating a target route to be followed by the vehicle when the vehicle is parked, based on route information including the travel route of the vehicle when the vehicle is parked by the user and information about the surroundings of the vehicle on the travel route;
performing a follow-up control process for automatically moving the vehicle to a planned parking position along the target route; and
providing information to the user,
wherein providing information to the user includes providing the information regarding the planned parking position included in the route information to the user in a visual manner before the follow-up control process is started.
According to these, the user can start automatic parking of the vehicle at the planned parking position after clearly grasping that position. Therefore, according to the parking assistance device and parking assistance method of the present disclosure, the usability of automatic parking can be improved. Note that "usability" is the degree of effectiveness, efficiency, and user satisfaction with which a product can be used by a specific user to achieve a specified goal in a specific usage situation.
Note that the reference signs in parentheses attached to each component indicate an example of the correspondence between that component and the specific components described in the embodiments below.
FIG. 1 is a schematic configuration diagram of an automatic parking system according to an embodiment of the present disclosure.
FIG. 2 is an explanatory diagram for explaining a parking lot including a parking space for the vehicle.
FIG. 3 is a flowchart showing an example of learning processing executed by a parking control unit of the parking assistance device.
FIG. 4 is a flowchart showing an example of assistance processing executed by the parking control unit.
FIG. 5 is an explanatory diagram for explaining an example of the content displayed on the touch panel display unit before the start of the assistance processing.
FIG. 6 is a flowchart showing an example of target route generation processing executed by the parking control unit.
FIG. 7 is an explanatory diagram for explaining the target route.
FIG. 8 is an explanatory diagram for explaining an object avoidance route.
FIG. 9 is a flowchart showing an example of display processing of the planned parking position executed by the parking control unit.
FIG. 10 is an explanatory diagram for explaining an example of the content displayed on the touch panel display unit before the start of the follow-up control process.
FIG. 11 is an explanatory diagram for explaining an example of the display mode of the planned parking position on the touch panel display unit.
FIG. 12 is an explanatory diagram for explaining an example of the content displayed on the display unit when there are a plurality of candidates for the planned parking position.
FIG. 13 is a flowchart showing an example of the follow-up control process executed by the parking control unit.
FIG. 14 is an explanatory diagram for explaining an example of the content displayed on the touch panel display unit at the start of the follow-up control process.
FIG. 15 is an explanatory diagram for explaining an example of the content displayed on the touch panel display unit after the start of the follow-up control process.
FIG. 16 is an explanatory diagram for explaining an example of automatic adjustment of the angle of the virtual viewpoint of a virtual viewpoint image according to the change in position between a target and the vehicle.
FIG. 17 is an explanatory diagram for explaining another example of the content displayed on the touch panel display unit after the start of the follow-up control process.
FIG. 18 is an explanatory diagram for explaining an example of the content displayed on the touch panel display unit during the search for an avoidance route.
FIG. 19 is an explanatory diagram for explaining an example of the avoidance route.
FIG. 20 is an explanatory diagram for explaining an example of the display mode of the avoidance route and the like on the touch panel display unit.
 An embodiment of the present disclosure will be described with reference to FIGS. 1 to 20. In the present embodiment, an example in which a parking assistance device 5 and a parking assistance method of the present disclosure are applied to an automatic parking system 1 will be described. As shown in FIG. 1, the automatic parking system 1 includes a periphery monitoring sensor 3, various ECUs 4, and the parking assistance device 5. The parking assistance device 5 is communicably connected to the periphery monitoring sensor 3 and the various ECUs 4 either directly or via an in-vehicle LAN (Local Area Network).
 The periphery monitoring sensor 3 is an autonomous sensor that monitors the surrounding environment of the host vehicle V. For example, the periphery monitoring sensor 3 detects, as detection objects, obstacles OB formed of three-dimensional objects around the host vehicle, such as moving dynamic targets including pedestrians and other vehicles and stationary static targets including structures on the road, as well as parking assistance marks indicating parking information, which is information about a parking lot PL. The vehicle V is equipped, as the periphery monitoring sensor 3, with a periphery monitoring camera 31 that captures images of a predetermined range around the host vehicle, and with exploration-wave sensors such as a sonar 32 that transmits exploration waves to a predetermined range around the host vehicle, a millimeter-wave radar 33, and a LiDAR (Light Detection and Ranging) 34.
 The periphery monitoring camera 31 corresponds to an imaging device; it captures images of the surroundings of the host vehicle and outputs the imaging data to the parking assistance device 5 as sensing information. Here, a front camera 31a, a rear camera 31b, a left side camera 31c, and a right side camera 31d, which capture images of the front, rear, and left and right sides of the vehicle, are given as examples of the periphery monitoring camera 31, but the periphery monitoring camera 31 is not limited to these.
 Each exploration-wave sensor outputs an exploration wave and acquires its reflected wave, and sequentially outputs measurement results such as the relative speed and relative distance to a target and the azimuth angle at which the target exists to the parking assistance device 5 as sensing information. The sonar 32 performs measurement using ultrasonic waves as exploration waves; it is provided at a plurality of locations on the vehicle V, for example with a plurality of units arranged side by side in the vehicle left-right direction on the front and rear bumpers, and performs measurement by outputting exploration waves around the vehicle. The millimeter-wave radar 33 performs measurement using millimeter waves as exploration waves. The LiDAR 34 performs measurement using laser light as exploration waves. Both the millimeter-wave radar 33 and the LiDAR 34 output exploration waves, for example, within a predetermined range in front of the vehicle V and perform measurement within that output range.
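As a non-limiting illustration of the sensing information described above, one exploration-wave measurement (relative distance, relative speed, azimuth angle) could be represented as follows. The class and field names, and the range/bearing-to-Cartesian conversion, are illustrative assumptions and are not part of the disclosed embodiment:

```python
from dataclasses import dataclass
import math

@dataclass
class TargetMeasurement:
    """One detection result output by an exploration-wave sensor."""
    relative_distance_m: float  # range to the target
    relative_speed_mps: float   # relative speed along the line of sight
    azimuth_deg: float          # bearing of the target, 0 deg = straight ahead

    def position_xy(self):
        """Convert range/bearing into vehicle-frame x (forward) / y (left)."""
        rad = math.radians(self.azimuth_deg)
        return (self.relative_distance_m * math.cos(rad),
                self.relative_distance_m * math.sin(rad))
```

Such a record would be produced sequentially by the sonar 32, the millimeter-wave radar 33, or the LiDAR 34 and consumed by the recognition processing described below.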
 In the present embodiment, the periphery monitoring sensor 3 is exemplified as including the periphery monitoring camera 31, the sonar 32, the millimeter-wave radar 33, and the LiDAR 34; however, it suffices that periphery monitoring can be performed by one of these or a combination of two or more of them, and not all of them need to be provided.
 The parking assistance device 5 constitutes an ECU (that is, an electronic control unit) for performing various controls for realizing the parking assistance method in the automatic parking system 1, and is configured by a microcomputer including a CPU, a storage unit 50, I/O, and the like.
 The storage unit 50 includes a ROM, a RAM, an EEPROM, and the like. That is, the storage unit 50 includes a volatile memory such as a RAM and a nonvolatile memory such as an EEPROM. The storage unit 50 is configured as a non-transitory tangible storage medium.
 The parking assistance device 5 generates a target route TP that the vehicle V should follow when the vehicle V is parked, based on the travel route of the vehicle V taken when the user performed a parking operation of the vehicle V and on information about the surroundings of the vehicle V along that travel route. The "information about the surroundings of the vehicle V" is, for example, information on dynamic targets such as people and other vehicles around the vehicle V, static targets such as curbs and buildings around the vehicle V, various signs, and road markings such as guide lines. Then, during parking assistance, the parking assistance device 5 automatically moves the vehicle V from an assistance start position STP to a planned parking position SEP along the target route TP. The planned parking position SEP is the position serving as the end point of the target route TP. The planned parking position SEP is registered in advance by the user as a parking space SP for the host vehicle.
 Specifically, when the user performs a parking operation of the vehicle V, the parking assistance device 5 stores the sensing information obtained as the detection results of the periphery monitoring sensor 3 in the nonvolatile memory of the storage unit 50. The parking assistance device 5 then generates the target route TP and performs various controls for parking assistance based on the sensing information stored in the storage unit 50 and the sensing information from the periphery monitoring sensor 3 during parking assistance.
 The learning process of storing the travel route during manual driving by the user and the information about the surroundings of the vehicle V is executed when an instruction to perform the learning process is issued, for example when a learning switch (not shown) is operated by the user. Parking assistance is executed when the user issues an instruction to perform parking assistance, for example when a parking assistance start switch 35 is operated by the user.
 During execution of the learning process, the parking assistance device 5 recognizes targets on the travel route of the vehicle V, free spaces in which parking is possible, parking positions, and the like, based on the sensing information of the periphery monitoring sensor 3. These recognition results are sequentially stored in the nonvolatile memory of the storage unit 50 and used during parking assistance.
 When the user issues a parking assistance instruction, the parking assistance device 5 generates the target route TP based on the sensing information stored in the storage unit 50 and the sensing information of the periphery monitoring sensor 3 during parking assistance, and performs route-following control in accordance with that route. Specifically, the parking assistance device 5 includes a recognition processing unit 51, a vehicle information acquisition unit 52, and a parking control unit 53 as functional units that execute various controls.
 The recognition processing unit 51 receives sensing information from the periphery monitoring sensor 3 and, based on that sensing information, recognizes the surrounding environment of the host vehicle that is about to be parked, recognizes what kind of parking scene is taking place, and further recognizes objects existing around the host vehicle. Here, the recognition processing unit 51 is composed of an image recognition unit 51a, a space recognition unit 51b, and a free space recognition unit 51c.
 The image recognition unit 51a performs scene recognition, three-dimensional object recognition, and the like. The various kinds of recognition by the image recognition unit 51a are realized by image analysis of the imaging data input as sensing information from the periphery monitoring camera 31.
 In scene recognition, what kind of scene the parking scene is is recognized. For example, it is recognized whether the scene is a normal parking scene in which there is no obstacle OB near the planned parking position SEP and parking of the vehicle V is not particularly restricted, or a special parking scene in which parking of the vehicle V is restricted by an obstacle OB.
 Since the imaging data input from the periphery monitoring camera 31 shows the state of the surroundings, analyzing the image makes it possible to determine whether the scene is a normal parking scene or a special parking scene. For example, if an object around the planned parking position SEP is detected from the imaging data and that object obstructs parking at the planned parking position SEP, the scene can be determined to be a special parking scene. Note that the scene recognition may be performed based not only on the sensing information of the periphery monitoring camera 31 but also on the sensing information of the exploration-wave sensors.
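As a non-limiting sketch of the normal/special determination described above: the embodiment does not specify the obstruction criterion, so the proximity-radius test, function name, and default value below are illustrative assumptions only:

```python
import math

def classify_parking_scene(detected_objects, planned_position, restrict_radius=3.0):
    """Return "special" when any detected object lies within restrict_radius
    of the planned parking position SEP (i.e. it could restrict the parking
    manoeuvre), otherwise "normal".

    detected_objects: iterable of (x, y) object positions from recognition.
    planned_position: (x, y) of the planned parking position SEP.
    """
    px, py = planned_position
    for ox, oy in detected_objects:
        if math.hypot(ox - px, oy - py) <= restrict_radius:
            return "special"
    return "normal"
```

In practice the same decision could also fuse exploration-wave detections, as the paragraph above notes.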
 In three-dimensional object recognition, obstacles OB formed of three-dimensional objects existing around the host vehicle, such as dynamic targets and static targets, are recognized as detection objects. Based on the detection objects recognized by this three-dimensional object recognition, preferably on the shapes of the static targets among them, the scene recognition described above and the generation of a parking assistance map including the obstacles OB are performed.
 The space recognition unit 51b performs three-dimensional object recognition and the like. The space recognition unit 51b recognizes three-dimensional objects in the space around the host vehicle based on sensing information from at least one of the sonar 32, the millimeter-wave radar 33, and the LiDAR 34. The three-dimensional object recognition here is the same as the three-dimensional object recognition performed by the image recognition unit 51a. Therefore, three-dimensional object recognition can be performed if either one of the image recognition unit 51a and the space recognition unit 51b is provided. In the present embodiment, the space recognition unit 51b does not perform scene recognition; however, the space recognition unit 51b may also perform scene recognition based on sensing information from at least one of the sonar 32, the millimeter-wave radar 33, and the LiDAR 34.
 Although three-dimensional object recognition and scene recognition can be performed by either one of the image recognition unit 51a and the space recognition unit 51b, using both makes it possible to perform three-dimensional object recognition and scene recognition with higher accuracy. For example, by complementing the three-dimensional object recognition and scene recognition by the image recognition unit 51a with the three-dimensional object recognition and scene recognition by the space recognition unit 51b, it is possible to perform these recognitions with higher accuracy.
 The free space recognition unit 51c performs free space recognition, which recognizes places that are free spaces in the parking lot PL. A free space means, for example, a space in the parking lot PL whose size and shape allow the vehicle V to be stopped there. There may be a plurality of such spaces in the parking lot PL, or only one.
 The free space recognition unit 51c recognizes free spaces in the parking lot PL based on the recognition results of the scene recognition and the three-dimensional object recognition by the image recognition unit 51a and the space recognition unit 51b. For example, since the shape of the parking lot PL and the presence or absence of other parked vehicles can be grasped from the results of the scene recognition and the three-dimensional object recognition, free spaces in the parking lot PL are recognized based on them. In addition, the free space recognition unit 51c identifies free spaces in an image by using, for example, semantic segmentation, which classifies each pixel in the image into a category based on the surrounding information of that pixel.
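As a non-limiting sketch of the footprint test that could follow the per-pixel classification described above: assume semantic segmentation has already produced a grid of cell labels, with drivable cells marked by a "free" label. A free space is then any window of cells large enough for the vehicle. The grid representation, window test, and label convention below are illustrative assumptions:

```python
def find_free_spaces(label_grid, veh_rows, veh_cols, free_label=0):
    """Return the top-left cell of every veh_rows x veh_cols window whose
    cells are all labeled drivable (free_label), i.e. candidate free spaces
    sized for the vehicle footprint."""
    rows, cols = len(label_grid), len(label_grid[0])
    hits = []
    for r in range(rows - veh_rows + 1):
        for c in range(cols - veh_cols + 1):
            if all(label_grid[r + dr][c + dc] == free_label
                   for dr in range(veh_rows)
                   for dc in range(veh_cols)):
                hits.append((r, c))
    return hits
```

A real implementation would work on the segmentation output of the camera images and account for cell resolution and vehicle orientation; this sketch only shows the size-and-shape criterion stated above.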
 The vehicle information acquisition unit 52 acquires information about the operation amounts of the vehicle V from the other ECUs 4 and the like. Specifically, the vehicle information acquisition unit 52 acquires detection signals output from sensors mounted on the vehicle V, such as an accelerator position sensor, a brake pedal force sensor, a steering angle sensor, a wheel speed sensor, and a shift position sensor.
 The parking control unit 53 executes the various controls required for parking assistance. Specifically, the parking control unit 53 includes a route storage unit 54, a route generation unit 55, a position estimation unit 56, a follow-up control unit 57, an information providing unit 58, and an image generation unit 59 as functional units that execute the various controls.
 The route storage unit 54 stores, in the storage unit 50, the sensing information of the periphery monitoring sensor 3 obtained when the user performed a parking operation of the vehicle V. For example, when the learning process is started, the route storage unit 54 stores, in the storage unit 50 as route information, the targets on the travel route of the vehicle V, the free spaces in which parking is possible, the parking positions, and the like that are sequentially acquired by the recognition processing unit 51.
 When the learning process is started, the route storage unit 54 also stores, in the storage unit 50 as route information, the imaging data and the like sequentially input from the periphery monitoring camera 31. Note that if the imaging data of each of the front camera 31a, the rear camera 31b, the left side camera 31c, and the right side camera 31d is sequentially stored in the storage unit 50, the amount of route information increases and strains the capacity of the storage unit 50. For this reason, the route storage unit 54 may be configured to store, in the storage unit 50, composite image data obtained by combining the imaging data of the front camera 31a, the rear camera 31b, the left side camera 31c, and the right side camera 31d, for example.
 The route generation unit 55 performs route generation based on the results of the scene recognition, the three-dimensional object recognition, and the free space recognition. The route generation unit 55 generates the target route TP that the vehicle V should follow when the vehicle V is parked, based on the travel route of the vehicle V during the learning process and the information about the surroundings of the vehicle V along that travel route. For example, the route generation unit 55 takes the travel route of the vehicle V as a reference route, and when the reference route includes a section in which the distance between the vehicle V and an obstacle OB is equal to or less than a predetermined value, generates the target route TP by replacing that section with a route in which the distance between the vehicle V and the obstacle OB exceeds the predetermined value. The obstacle OB is formed of a three-dimensional object recognized by the three-dimensional object recognition.
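As a non-limiting sketch of the clearance rule stated above: the embodiment replaces whole sections of the reference route, whereas the simplification below shifts each too-close waypoint directly away from the offending obstacle until the clearance exceeds the threshold. The per-waypoint treatment, function name, and margin value are illustrative assumptions:

```python
import math

def generate_target_route(base_route, obstacles, min_clearance, margin=0.1):
    """For each (x, y) waypoint of the learned base route, if any recognized
    obstacle lies within min_clearance, push the waypoint radially away from
    that obstacle to min_clearance + margin; waypoints with sufficient
    clearance are kept exactly as driven."""
    target = []
    for x, y in base_route:
        for ox, oy in obstacles:
            d = math.hypot(x - ox, y - oy)
            if 0.0 < d <= min_clearance:
                scale = (min_clearance + margin) / d
                x, y = ox + (x - ox) * scale, oy + (y - oy) * scale
        target.append((x, y))
    return target
```

A production route generator would also smooth the adjusted section and check it against the vehicle's turning limits; this sketch only illustrates the "replace where clearance is insufficient" criterion.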
 The position estimation unit 56 estimates the current position of the vehicle V based on the sensing information stored in the storage unit 50 and the sensing information sequentially acquired by the periphery monitoring sensor 3 during parking assistance. For example, the position estimation unit 56 compares the sensing information stored in the storage unit 50 with the sensing information acquired during parking assistance, and estimates the current position based on the difference between them.
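As a non-limiting sketch of the stored-versus-current comparison described above: assume landmarks recorded during learning and the same landmarks observed now are both expressed in the map frame (using a provisional, e.g. dead-reckoned, pose) and paired by index. The correction is then the average residual between the two sets. The pairing convention and translation-only correction are illustrative assumptions:

```python
def estimate_position(stored_landmarks, observed_landmarks, provisional_pose):
    """Refine a provisional (x, y) pose by the mean offset between landmark
    positions stored during the learning process and the positions at which
    the same landmarks are observed during parking assistance."""
    n = len(stored_landmarks)
    dx = sum(s[0] - o[0] for s, o in zip(stored_landmarks, observed_landmarks)) / n
    dy = sum(s[1] - o[1] for s, o in zip(stored_landmarks, observed_landmarks)) / n
    x, y = provisional_pose
    return (x + dx, y + dy)
```

A full implementation would also estimate heading and reject mismatched landmark pairs; only the difference-based principle stated above is shown.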
 The follow-up control unit 57 automatically moves the vehicle V from the assistance start position STP to the planned parking position SEP along the target route TP by performing vehicle motion control such as acceleration/deceleration control and steering control of the vehicle V. Specifically, the follow-up control unit 57 outputs control signals to the various ECUs 4 so that the current position of the vehicle V estimated by the position estimation unit 56 reaches the planned parking position SEP along the target route TP.
 The various ECUs 4 include a steering ECU 41 that performs steering control, a brake ECU 42 that performs acceleration/deceleration control, a power management ECU 43, and a body ECU 44 that controls various electrical components such as lights and door mirrors.
 Specifically, the follow-up control unit 57 acquires, via the vehicle information acquisition unit 52, the detection signals output from the sensors mounted on the vehicle V, such as the accelerator position sensor, the brake pedal force sensor, the steering angle sensor, the wheel speed sensor, and the shift position sensor. The follow-up control unit 57 then detects the state of each part from the acquired detection signals, and outputs control signals to the various ECUs 4 so as to move the vehicle V following the target route TP.
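The patent only states that control signals are output to the ECUs; as one hedged illustration of how a steering command following the target route TP could be derived, the pure-pursuit-style law below steers toward a single look-ahead point of the route. The control law itself, the wheelbase value, and the coordinate convention (x forward, y left, heading in radians) are assumptions, not part of the disclosure:

```python
import math

def steering_command(pose, heading_rad, lookahead_point, wheelbase=2.7):
    """Pure-pursuit style front steering angle (radians) that drives the
    vehicle at `pose` toward one look-ahead point on the target route."""
    dx = lookahead_point[0] - pose[0]
    dy = lookahead_point[1] - pose[1]
    # Transform the look-ahead point into the vehicle frame.
    xv = math.cos(-heading_rad) * dx - math.sin(-heading_rad) * dy
    yv = math.sin(-heading_rad) * dx + math.cos(-heading_rad) * dy
    ld2 = xv * xv + yv * yv  # squared look-ahead distance
    curvature = 2.0 * yv / ld2
    return math.atan(wheelbase * curvature)
```

The resulting angle would correspond to the command sent to the steering ECU 41, with acceleration/deceleration handled separately by the brake ECU 42.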
 The information providing unit 58 provides information to the user using an HMI (Human Machine Interface) 45. The HMI 45 is a device for providing various kinds of assistance to the user. The HMI 45 includes a touch panel display unit 46 and a speaker 47. The touch panel display unit 46 is a touch-panel display used in a navigation system or a meter system.
 The information providing unit 58 visually provides the user with information about the planned parking position SEP included in the route information stored in the storage unit 50 before the follow-up control process is started. For example, the information providing unit 58 displays an image showing the surroundings of the planned parking position SEP on the touch panel display unit 46 before the follow-up control process is started.
 The information providing unit 58 displays various buttons to prompt the user to perform touch operations on the touch panel display unit 46. The various buttons are operation buttons touch-operated by the user. For example, the information providing unit 58 displays on the touch panel display unit 46 a start button STB for the follow-up control process, a selection button SLB for selecting the planned parking position SEP, and the like. The touch panel display unit 46 of the present embodiment not only displays information but also serves as an "operation unit" operated by the user.
 The information providing unit 58 changes the display contents of the touch panel display unit 46 in accordance with operation signals of touch operations on the touch panel display unit 46. For example, the information providing unit 58 changes the viewpoint of a three-dimensional display (that is, a 3D view) shown on the touch panel display unit 46 in accordance with an operation signal generated by the user's operation of the touch panel display unit 46.
 The image generation unit 59 generates image data to be displayed on the touch panel display unit 46 using the imaging data of the periphery monitoring camera 31. In the present embodiment, the image generation unit 59 and the image recognition unit 51a are separate; however, this is not limiting, and, for example, the image generation unit 59 may be included in the image recognition unit 51a.
 The image generation unit 59 periodically or irregularly generates peripheral image data (hereinafter also referred to as a peripheral image) using, for example, the imaging data of the front camera 31a, the rear camera 31b, the left side camera 31c, and the right side camera 31d. The peripheral image is an image corresponding to at least a partial range of the area around the vehicle V, and includes a camera viewpoint image Gc, a composite image, and the like. The camera viewpoint image Gc is an image whose viewpoint is the position at which the lens of each periphery monitoring camera 31 is disposed. One of the composite images is an image of the surroundings of the vehicle V viewed from a virtual viewpoint set at an arbitrary position around the vehicle V (hereinafter also referred to as a virtual viewpoint image). A method of generating the virtual viewpoint image will be described below.
 The image generation unit 59 projects the information of each pixel included in the imaging data of the front camera 31a, the rear camera 31b, the left side camera 31c, and the right side camera 31d onto a predetermined projection curved surface (for example, a bowl-shaped curved surface) in a virtual three-dimensional space. Specifically, the image generation unit 59 projects the information of each pixel included in the imaging data of the front camera 31a, the rear camera 31b, the left side camera 31c, and the right side camera 31d onto portions of the projection curved surface other than its center. The center of the projection curved surface is defined as the position of the vehicle V. The image generation unit 59 then sets a virtual viewpoint in the virtual three-dimensional space and cuts out, as image data, a predetermined region of the projection curved surface contained within a predetermined viewing angle as seen from the virtual viewpoint, thereby generating the virtual viewpoint image. The virtual viewpoint image obtained in this manner is a three-dimensional display of an image showing the surroundings of the vehicle V.
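As a non-limiting sketch of the two geometric ingredients described above: a bowl-shaped projection surface centered on the vehicle position, and a view-cone test that decides which surface points are cut out for the virtual viewpoint. The flat-then-quadratic bowl profile, the cone-shaped viewing angle, and all parameter values are illustrative assumptions:

```python
import math

def bowl_point(x, y, flat_radius=5.0, curve=0.1):
    """Height of the bowl-shaped projection surface at ground offset (x, y)
    from the vehicle: flat (ground plane) near the vehicle, rising
    quadratically beyond flat_radius."""
    r = math.hypot(x, y)
    if r <= flat_radius:
        return 0.0
    return curve * (r - flat_radius) ** 2

def in_view(point, cam_pos, cam_dir, half_angle_rad):
    """True if a surface point lies inside the virtual camera's view cone;
    such points belong to the region cut out as the virtual viewpoint image.
    cam_dir must be a unit vector."""
    vx, vy, vz = (point[i] - cam_pos[i] for i in range(3))
    norm = math.sqrt(vx * vx + vy * vy + vz * vz)
    dot = (vx * cam_dir[0] + vy * cam_dir[1] + vz * cam_dir[2]) / norm
    return dot >= math.cos(half_angle_rad)
```

A full renderer would additionally map each surface point back to a pixel of whichever camera observed it and blend overlapping camera regions; only the surface shape and the cut-out criterion are shown here.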
 Here, the image generation unit 59 further generates, for each of the camera viewpoint image Gc and the virtual viewpoint image, an image on which a virtual vehicle image Gv showing the vehicle V, together with lines, frames, marks, and the like for supporting the parking operation, is superimposed. The virtual vehicle image Gv is composed of, for example, opaque or translucent polygons representing the shape of the vehicle V.
 The automatic parking system 1 according to the present embodiment is configured as described above. Next, the operation of the automatic parking system 1 configured in this manner will be described. In the present embodiment, a case in which the vehicle V is parked in the parking lot PL shown in FIG. 2 will be described as an example. In the parking lot PL shown in FIG. 2, four parking spaces SP for the vehicle V are set. In the parking lot PL, a first parking space SP1 and a second parking space SP2 are arranged one behind the other along a passage PS extending linearly from a vehicle entrance B. In addition, a third parking space SP3 and a fourth parking space SP4 are provided in the parking lot PL adjacent to each other so as to intersect the passage PS. The third parking space SP3 and the fourth parking space SP4 are provided between a building BL and a house HM. In the third parking space SP3, the vehicle V can be parked facing forward by passing in front of the third parking space SP3 and then moving the vehicle V back and forth (that is, performing a turning maneuver). The same applies to the fourth parking space SP4. In FIG. 2, the third parking space SP3 is set as the planned parking position SEP, and the vehicle V is illustrated as parked facing forward at the planned parking position SEP.
 First, an example of the learning process will be described with reference to the flowchart shown in FIG. 3. The learning process shown in FIG. 3 is executed by the parking control unit 53 at every predetermined control cycle when an instruction to perform the learning process is issued, such as when a learning switch (not shown) is operated by the user. Each process shown in this flowchart is realized by the respective functional units of the parking assistance device 5. Each step realizing this process can also be understood as a step realizing the parking assistance method.
As shown in FIG. 3, the parking control unit 53 starts recognition processing in step S100. In this recognition processing, scene recognition, three-dimensional object recognition, and free-space recognition by the recognition processing unit 51 are started based on the sensing information of the surroundings monitoring sensor 3.
Next, in step S110, the parking control unit 53 determines whether a learning start condition is satisfied. The learning start condition is, for example, satisfied when the vehicle V enters a learning start area around the parking lot PL designated in advance by the user. The learning start condition may instead be satisfied when a learning switch (not shown) is turned on.
The parking control unit 53 waits until the learning start condition is satisfied. When it is satisfied, the parking control unit 53 starts storing various information necessary for parking assistance in step S120. For example, the parking control unit 53 stores, in the storage unit 50 as route information, the targets along the traveling route of the vehicle V, free spaces available for parking, the parking position, and the like, which are sequentially acquired by the recognition processing unit 51. The parking control unit 53 of the present embodiment also stores, in the storage unit 50, surrounding images captured while the vehicle V is traveling and while it is parked at the parking position. Specifically, the parking control unit 53 stores the images captured by the front camera 31a, the rear camera 31b, the left side camera 31c, and the right side camera 31d in the storage unit 50 as the surrounding images at the time of parking. If the storage capacity of the storage unit 50 is insufficient, it is desirable to store a composite image, obtained by combining the images captured by the front camera 31a, the rear camera 31b, the left side camera 31c, and the right side camera 31d, in the storage unit 50 as the surrounding image at the time of parking.
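The storage trade-off described above can be sketched as follows. This is an illustrative sketch only, not the disclosed implementation: images are modeled as 2D lists of pixel values, and the 2-by-2 tiling with half-scale views is an assumed compositing scheme.

```python
# Sketch: store one stitched composite instead of four full camera frames
# when the storage unit's capacity is limited. The tiling layout and the
# half-scale factor are illustrative assumptions.

def downscale_half(img):
    """Halve an image in both dimensions by keeping every other pixel."""
    return [row[::2] for row in img[::2]]

def compose_quad(front, rear, left, right):
    """Tile four half-scale views into one composite the size of a single frame."""
    f, r = downscale_half(front), downscale_half(rear)
    l, g = downscale_half(left), downscale_half(right)
    top = [fr + rr for fr, rr in zip(f, r)]        # front | rear
    bottom = [lr + gr for lr, gr in zip(l, g)]     # left  | right
    return top + bottom

def pixels(img):
    return sum(len(row) for row in img)

cam = [[0] * 8 for _ in range(8)]                  # stand-in 8x8 camera frame
composite = compose_quad(cam, cam, cam, cam)

# Four raw frames would cost 4 * 64 pixel values; the composite costs 64.
assert pixels(composite) == pixels(cam)
```

Under this assumption, the composite preserves a coarse view from all four cameras while using a quarter of the raw storage.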
Next, in step S130, the parking control unit 53 determines whether a learning stop condition is satisfied. The learning stop condition is satisfied when the vehicle V stops at the planned parking position SEP designated in advance by the user or in its vicinity. The learning stop condition may instead be satisfied when the shift position is switched to a position that indicates parking (for example, the P position).
The parking control unit 53 continues storing the various information in the storage unit 50 until the learning stop condition is satisfied. When the learning stop condition is satisfied, the parking control unit 53 stops storing the information in step S140.
Next, in step S150, the parking control unit 53 notifies the user via the HMI 45 that the storage of the various information is complete, and exits the learning process. In the process of step S150, for example, the traveling route of the vehicle V during the learning process and the surrounding conditions along that route may also be reported. The learning process from step S100 to step S150 is performed by the route storage unit 54 of the parking control unit 53.
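The flow of steps S100 to S150 can be sketched as a simple record-between-conditions loop. This is a minimal sketch under stated assumptions, not the disclosed implementation: the start and stop conditions are passed in as hypothetical predicates, and each control cycle's recognition result is an opaque value.

```python
# Sketch of the learning flow S100-S150: wait for the start condition,
# record route information each control cycle, stop on the stop condition,
# then report completion. Predicate signatures are illustrative assumptions.

def run_learning(cycles, start_condition, stop_condition):
    """cycles: iterable of per-cycle recognition results.
    start_condition / stop_condition: stand-ins for the learning start
    condition (S110) and learning stop condition (S130)."""
    route_info = []                      # storage unit 50 stand-in
    recording = False
    for cycle in cycles:
        if not recording:
            if start_condition(cycle):   # S110: start condition satisfied
                recording = True         # S120: begin storing
            else:
                continue
        route_info.append(cycle)         # targets, free spaces, images, ...
        if stop_condition(cycle):        # S130: stop condition satisfied
            break                        # S140: stop storing
    return route_info, "storage complete"  # S150: notify via the HMI

# Vehicle enters the start area at cycle 2 and stops at the SEP at cycle 5.
info, msg = run_learning(range(10), lambda c: c >= 2, lambda c: c == 5)
assert info == [2, 3, 4, 5] and msg == "storage complete"
```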
Next, an example of the support process for automatically moving the vehicle V from the support start position STP to the planned parking position SEP along the target route TP will be described with reference to the flowchart shown in FIG. 4. The support process shown in FIG. 4 is executed by the parking control unit 53 at every predetermined control cycle, provided that the learning process has been performed at least once. Each process shown in this flowchart is implemented by the corresponding functional unit of the parking assistance device 5. Each step that realizes this process can also be understood as a step that realizes the parking assistance method.
As shown in FIG. 4, in step S200, the parking control unit 53 determines whether the current position of the vehicle V is near the support start position STP, using the sensing information of the surroundings monitoring sensor 3, a GPS (not shown), and a map database. The support start position STP is set near the vehicle entrance/exit B of the parking lot PL. The vehicle entrance/exit B is the boundary between the public road OL and the parking lot PL. The support start position STP may instead be set on the public road OL side rather than the parking lot PL side.
If the determination in step S200 indicates that the current position of the vehicle V is near the support start position STP, the parking control unit 53 proceeds to step S210. In step S210, the parking control unit 53 notifies the user via the HMI 45 that the current position of the vehicle V is near the support start position STP. For example, as shown in FIG. 5, the parking control unit 53 notifies the user by displaying a camera viewpoint image Gc in the left area of the touch panel display unit 46 and an overhead image Gh in the right area. The notification may instead be realized by displaying a message on the touch panel display unit 46 indicating that the vehicle is near the support start position STP, by outputting a corresponding voice announcement from the speaker 47, or the like.
Here, the camera viewpoint image Gc shown in FIG. 5 is an image whose viewpoint is the position of the lens of the camera (the front camera 31a in this example) that captures the scenery in the direction in which the vehicle V is about to move. The overhead image Gh shown in FIG. 5 is a virtual viewpoint image, obtained by viewing a projection curved surface from a virtual viewpoint set directly above the vehicle V and cutting out as an image the region of the projection curved surface included in a predetermined viewing angle, on which a virtual vehicle image Gv is superimposed. The camera viewpoint image Gc and the overhead image Gh are generated by the image generation unit 59 based on the images captured by the surroundings monitoring camera 2 during execution of the support process. To aid understanding of the present disclosure, the parking spaces SP, objects, and the like appearing in the camera viewpoint image Gc and the overhead image Gh are given the same reference signs as the actual ones. The same applies to images other than the camera viewpoint image Gc and the overhead image Gh.
Next, in step S220, the parking control unit 53 determines whether the user has instructed parking assistance via operation of the parking assistance start switch 35. If the user has not turned on the parking assistance start switch 35, the parking control unit 53 skips the subsequent processing and exits this process. If the parking assistance start switch 35 has been turned on by the user, the parking control unit 53 performs the target route TP generation process in step S230. The details of step S230 will be described below with reference to the flowchart shown in FIG. 6.
As shown in FIG. 6, in step S300, the parking control unit 53 reads the route information stored in the storage unit 50 during the learning process. When multiple sets of route information with different parking positions are stored in the storage unit 50, the parking control unit 53 reads all of them.
Next, the parking control unit 53 starts recognition processing in step S310. In this recognition processing, scene recognition, three-dimensional object recognition, and free-space recognition by the recognition processing unit 51 are started based on the sensing information of the surroundings monitoring sensor 3.
Next, in step S320, the parking control unit 53 generates the target route TP based on the route information. Specifically, the parking control unit 53 generates the target route TP that the vehicle V should follow when parking, based on the traveling route of the vehicle V during the learning process and the information on the surroundings of the vehicle V along that route. As shown in FIG. 7, this target route TP is, as in the learning process, a route in which the vehicle V passes in front of the third parking space SP3 and then makes a turn so as to park facing forward in the third parking space SP3.
Here, for example, when route information obtained when the vehicle V was parked in the third parking space SP3 is stored in the storage unit 50, the parking control unit 53 generates the target route TP with the third parking space SP3 as the planned parking position SEP.
On the other hand, when route information obtained when the vehicle V was parked in the third parking space SP3 and route information obtained when it was parked in the fourth parking space SP4 are both stored in the storage unit 50, the parking control unit 53 treats the third parking space SP3 and the fourth parking space SP4 as candidate positions for the planned parking position SEP, and generates a target route TP for each candidate position.
Next, in step S330, the parking control unit 53 determines whether there is a new obstacle OB on the target route TP that was not present during the learning process. Specifically, the parking control unit 53 determines the presence or absence of a new obstacle OB based on the three-dimensional object recognition result at the support start position STP and the three-dimensional object recognition result included in the route information. An obstacle OB is a three-dimensional object recognized by the three-dimensional object recognition.
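The comparison in step S330 can be sketched as follows. This is an assumption-laden sketch, not the disclosed algorithm: recognized objects are reduced to (x, y) positions, and a fixed tolerance stands in for localization and recognition noise.

```python
# Sketch of the new-obstacle check (S330): any currently recognized
# three-dimensional object that has no nearby counterpart in the learned
# route information counts as a "new" obstacle OB. The position encoding
# and the tolerance value are illustrative assumptions.

def new_obstacles(current_objects, learned_objects, tol=1.0):
    """Return current objects farther than `tol` from every learned object.
    Positions are (x, y) tuples in an assumed common map frame."""
    def near(a, b):
        return abs(a[0] - b[0]) <= tol and abs(a[1] - b[1]) <= tol
    return [o for o in current_objects
            if not any(near(o, l) for l in learned_objects)]

learned = [(0.0, 5.0), (3.0, 5.0)]               # objects seen during learning
current = [(0.1, 5.0), (3.0, 4.9), (1.5, 2.0)]   # one extra object today
assert new_obstacles(current, learned) == [(1.5, 2.0)]
```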
If no new obstacle OB is found on the target route TP, the current situation is considered similar to that during the learning process. The parking control unit 53 therefore skips the subsequent processing and exits this process.
On the other hand, if there is a new obstacle OB on the target route TP, it becomes difficult to move the vehicle V to the planned parking position SEP along the target route TP generated in step S320. Therefore, in step S340, the parking control unit 53 searches for an object avoidance route that avoids the obstacle OB on the target route TP and reaches the planned parking position SEP, and attempts to generate that route. Specifically, the parking control unit 53 searches the traveling route of the vehicle V included in the route information for sections in which the distance between the vehicle V and the obstacle OB is equal to or less than a predetermined value, and generates the object avoidance route by replacing each such section with a route in which the distance between the vehicle V and the obstacle OB exceeds the predetermined value. The object avoidance route generated in this way is, for example, a route in which a collision between the vehicle V and the obstacle OB is avoided, as shown in FIG. 8. Since dynamic targets among the obstacles OB move over time, the object avoidance route is desirably a route that avoids only static targets.
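The section-replacement idea in step S340 can be sketched as follows. This is a hedged geometric sketch, not the disclosed planner: waypoints whose clearance to a static obstacle is at or below the predetermined value are pushed away from the obstacle until the clearance is exceeded, and the attempt is abandoned (corresponding to setting the parking-impossible flag) if no sufficient shift is found. The push-away heuristic and all numeric limits are assumptions.

```python
import math

# Sketch of object-avoidance route generation (S340): replace waypoints
# whose clearance to the obstacle is <= min_clearance with shifted points.

def avoidance_route(route, obstacle, min_clearance=1.0, step=0.5, max_shift=3.0):
    """route: list of (x, y) waypoints; obstacle: (x, y) static obstacle.
    Returns a route whose every waypoint clears the obstacle, or None if
    no sufficient shift exists (parking then deemed impossible, S370)."""
    def clearance(p):
        return math.hypot(p[0] - obstacle[0], p[1] - obstacle[1])
    out = []
    for x, y in route:
        shift = 0.0
        while clearance((x, y)) <= min_clearance:
            shift += step
            if shift > max_shift:
                return None            # give up: set the parking-impossible flag
            # heuristic: push the waypoint directly away from the obstacle
            dx, dy = x - obstacle[0], y - obstacle[1]
            n = math.hypot(dx, dy) or 1.0
            x, y = x + step * dx / n, y + step * dy / n
        out.append((x, y))
    return out

route = [(0.0, 0.0), (2.0, 0.0), (4.0, 0.0)]
detour = avoidance_route(route, obstacle=(2.0, 0.5))
assert detour is not None
assert all(math.hypot(x - 2.0, y - 0.5) > 1.0 for x, y in detour)
```

A real planner would also have to respect the vehicle's turning radius and keep the detour inside recognized free space; this sketch shows only the clearance-replacement step.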
Next, in step S350, the parking control unit 53 determines whether the object avoidance route could be generated. If it could, the parking control unit 53 sets the object avoidance route as the target route TP in step S360 and exits this process. By setting the object avoidance route as the target route TP, the object avoidance route is presented to the user in a visual manner during the display process for the planned parking position SEP described later.
On the other hand, if the object avoidance route could not be generated, the parking control unit 53 turns on a parking-impossible flag indicating that parking at the planned parking position SEP is impossible in step S370, and exits this process. The processing of steps S320 to S370 is performed by the route generation unit 55 of the parking control unit 53.
When the target route TP generation process of step S230 is complete, the parking control unit 53 proceeds to step S240 shown in FIG. 4. In step S240, the parking control unit 53 determines whether parking at the planned parking position SEP is possible. In this determination, for example, parking at the planned parking position SEP is determined to be possible when the parking-impossible flag is off, and impossible when the flag is on.
If parking at the planned parking position SEP is possible, the parking control unit 53 performs the display process for the planned parking position SEP in step S250. This display process will be described with reference to the flowchart shown in FIG. 9.
As shown in FIG. 9, in step S400, the parking control unit 53 determines whether there are multiple candidate positions for the planned parking position SEP. For example, the parking control unit 53 determines whether route information obtained when the vehicle V was parked in different parking spaces SP is stored in the storage unit 50.
If there are not multiple candidate positions for the planned parking position SEP, the parking control unit 53 provides the user, in a visual manner, with the information on the planned parking position SEP included in the route information in step S410. The processing of step S410 is performed by the information providing unit 58 of the parking control unit 53.
The parking control unit 53 provides the user with a virtual parking image Gp, obtained as information on the surroundings of the planned parking position SEP within the route information, before the start of the follow-up control process. For example, as shown in FIG. 10, the parking control unit 53 displays the virtual parking image Gp in the upper right area of the touch panel display unit 46. The virtual parking image Gp is generated by the image generation unit 59 based on the images stored in the storage unit 50 as route information during the learning process.
Specifically, the parking control unit 53 displays, in the upper right area of the touch panel display unit 46 as the virtual parking image Gp, a virtual viewpoint image showing the surroundings of the planned parking position SEP on which the virtual vehicle image Gv and a parking frame image Gf indicating the planned parking position SEP are superimposed.
As shown in FIG. 11, the virtual vehicle image Gv is an image (a polygon model in this example) showing the vehicle V in a semi-transparent manner so that the user can distinguish between the image showing the current position of the vehicle V and the virtual parking image Gp. The parking frame image Gf is a thick-line image colored blue or red so that it can be distinguished from the parking frames appearing in the virtual viewpoint image.
The virtual parking image Gp is a three-dimensional display of an image showing the surroundings of the planned parking position SEP. The parking control unit 53 changes the viewpoint of the virtual parking image Gp in response to the operation signal of the user's touch operation on the touch panel display unit 46.
For example, the parking control unit 53 acquires an operation signal corresponding to a touch operation, such as a flick or drag on the touch panel display unit 46 in the directions indicated by the vertical and horizontal rotation arrows R shown in the virtual parking image Gp, and changes the viewpoint of the virtual parking image Gp according to that signal. The parking control unit 53 also enlarges and reduces the virtual parking image Gp, for example, in response to touch operations on a zoom-in icon ZI and a zoom-out icon ZO shown in the virtual parking image Gp.
 Here, enlargement and reduction of the virtual parking image Gp may be realized by operations other than icon operations. They are desirably realized, for example, by a pinch-out gesture that spreads two fingers apart on the surface of the touch panel display unit 46 and a pinch-in gesture that pinches two fingers together. If enlargement and reduction of the virtual parking image Gp are realized by such screen gestures, the reduction in the display size of the virtual parking image Gp caused by displaying icons, and the partial occlusion of the image caused by superimposing icons, can be avoided. That is, both the display size and the visibility of the image can be ensured. The same applies not only to enlargement and reduction but also to viewpoint changes and other image operations.
In addition, the parking control unit 53 displays a vehicle surroundings image Ga, including a surrounding image showing the surroundings of the vehicle V, in the left area of the touch panel display unit 46, and displays an illustration image Gi in the lower right area of the touch panel display unit 46.
The vehicle surroundings image Ga is a virtual viewpoint image, obtained by viewing the projection curved surface from a virtual viewpoint set behind the vehicle V and cutting out as an image the region of the projection curved surface included in a predetermined viewing angle, on which the virtual vehicle image Gv, a target route image Gt, and an icon P are superimposed. The vehicle surroundings image Ga is generated by the image generation unit 59 based on the images captured by the surroundings monitoring camera 2 during execution of the support process and on information on the target route TP.
The target route image Gt is an image showing the target route TP from the current position of the vehicle V to the planned parking position SEP. The parking control unit 53 identifies the portion of the target route TP that lies in front of an object appearing in the vehicle surroundings image Ga as the front route, and the portion of the target route TP that lies behind the object as the back route. For example, the parking control unit 53 uses semantic segmentation to identify the object appearing in the vehicle surroundings image Ga and to determine the front-back positional relationship between the object and the target route TP. The image generation unit 59 of the parking control unit 53 then superimposes, on the vehicle surroundings image Ga, the portion Gtb of the target route image Gt corresponding to the back route and the portion Gtf corresponding to the front route in different manners. For example, the image generation unit 59 superimposes on the vehicle surroundings image Ga a target route image Gt in which the portion Gtf corresponding to the front route is drawn as a solid line and the portion Gtb corresponding to the back route is drawn as a broken line.
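The front/back split described above can be sketched as follows. This is an illustrative sketch under stated assumptions, not the disclosed renderer: the route is reduced to pixel coordinates, and the object's extent is given as a precomputed segmentation mask (the segmentation model itself is not implemented here).

```python
# Sketch: split the target route image Gt into a front portion Gtf (drawn
# solid) and a back portion Gtb (drawn broken). Route points falling inside
# an object's segmentation mask are treated as hidden behind the object.
# The pixel-set representation of the mask is an assumption.

def split_route(route_px, object_mask):
    """route_px: list of (u, v) pixel coordinates along the target route.
    object_mask: set of (u, v) pixels labeled as an object, e.g. by a
    semantic-segmentation model. Returns (front_points, back_points)."""
    front, back = [], []
    for p in route_px:
        (back if p in object_mask else front).append(p)
    return front, back

mask = {(3, 1), (4, 1)}                        # pixels covered by a parked car
route = [(1, 1), (2, 1), (3, 1), (4, 1), (5, 1)]
gtf, gtb = split_route(route, mask)
assert gtf == [(1, 1), (2, 1), (5, 1)]         # drawn as a solid line
assert gtb == [(3, 1), (4, 1)]                 # drawn as a broken line
```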
The virtual vehicle image Gv is displayed in an opaque manner in the vehicle surroundings image Ga. The front portion of the virtual vehicle image Gv is superimposed on the vehicle surroundings image Ga so that the viewpoint can be easily grasped. A virtual vehicle image Gv showing the entire vehicle V may instead be superimposed on the vehicle surroundings image Ga.
The icon P indicates the planned parking position SEP and is superimposed near the planned parking position SEP. In the vehicle surroundings image Ga shown in FIG. 10, the planned parking position SEP is not visible, so the icon P is superimposed in a semi-transparent manner.
The illustration image Gi is an image showing the relationship among the current position of the vehicle V, the target route TP, and the planned parking position SEP. The illustration image Gi of the present embodiment is an overall overhead image including all of the current position of the vehicle V, the target route TP, and the planned parking position SEP. Unlike the virtual parking image Gp and the vehicle surroundings image Ga, the illustration image Gi is composed of pictures and diagrams. It is generated by the image generation unit 59 based on information on the target route TP and on icons, prepared in advance, representing the vehicle V and the planned parking position SEP. Although the illustration image Gi shown in FIG. 10 shows only the current position of the vehicle V, the target route TP, and the planned parking position SEP, other information such as obstacles OB may also be shown.
Furthermore, the parking control unit 53 displays a start button STB in the area below the illustration image Gi on the touch panel display unit 46. The start button STB is a button touched by the user to instruct the start of the follow-up control process.
In step S420, the parking control unit 53 determines whether the user has touched the start button STB. The parking control unit 53 waits until the start button STB is touched; when it is touched, the parking control unit 53 exits this process and proceeds to the follow-up control process of step S260 shown in FIG. 4.
On the other hand, if there are multiple candidate positions for the planned parking position SEP, the parking control unit 53 provides the user, in step S430, with information on the multiple candidate positions and information prompting selection of the planned parking position SEP. The processing of step S430 is performed by the information providing unit 58 of the parking control unit 53.
For example, as shown in FIG. 12, the parking control unit 53 displays on the touch panel display unit 46 the virtual parking image Gp, the vehicle surroundings image Ga, and the illustration image Gi, with icons P1 and P2 indicating the candidate positions superimposed on the portions of each image corresponding to the candidate positions for the planned parking position SEP.
The parking control unit 53 also displays selection buttons SLB1 and SLB2 corresponding to the respective candidate positions in the area below the vehicle surroundings image Ga on the touch panel display unit 46, and displays a decision button DB for deciding the planned parking position SEP in the area below the illustration image Gi.
Thereafter, in step S440, the parking control unit 53 determines whether the user has selected a candidate position. For example, the parking control unit 53 determines that the user has selected a candidate position when one of the selection buttons SLB1 and SLB2 has been selected and the decision button DB has been touched. If either the touch operation on the selection buttons SLB1 and SLB2 or the touch operation on the decision button DB has not been performed, the parking control unit 53 determines that the user has not selected a candidate position.
The parking control unit 53 waits until the user selects a candidate position, and proceeds to step S450 when a selection is made. In step S450, the parking control unit 53 sets the candidate position selected by the user as the planned parking position SEP, and then proceeds to step S410. Here, when the candidate positions are far apart, the parking control unit 53 displays on the touch panel display unit 46 an image showing all of the candidate positions before the user selects one, and displays an image focused on the selected candidate position after the user has selected one. This allows the user to clearly grasp the planned parking position SEP.
When the display process for the planned parking position SEP is completed in this way, the parking control unit 53 proceeds to step S260 shown in FIG. 4. In step S260, the parking control unit 53 starts the follow-up control process. The follow-up control process is a process of automatically moving the vehicle V along the target route TP to the planned parking position SEP. The follow-up control process will be described with reference to the flowchart shown in FIG. 13.
 図13に示すように、駐車制御部53は、ステップS500にて、認識処理を開始する。この認識処理では、周辺監視センサ3のセンシング情報に基づいて、認識処理部51によるシーン認識、立体物認識、フリースペース認識を開始する。 As shown in FIG. 13, the parking control unit 53 starts recognition processing in step S500. In this recognition processing, scene recognition, three-dimensional object recognition, and free space recognition by the recognition processing unit 51 are started based on the sensing information of the periphery monitoring sensor 3.
 続いて、駐車制御部53は、ステップS510にて、記憶部50に記憶されたセンシング情報および駐車支援時に周辺監視センサ3によって逐次取得されるセンシング情報に基づいて、車両Vの現在位置を推定する。このステップS510の処理は、駐車制御部53の位置推定部56によって行われる。 Subsequently, in step S510, the parking control unit 53 estimates the current position of the vehicle V based on the sensing information stored in the storage unit 50 and the sensing information sequentially acquired by the periphery monitoring sensor 3 during parking assistance. The processing of step S510 is performed by the position estimation section 56 of the parking control section 53.
 続いて、駐車制御部53は、ステップS520にて、車両Vの加減速制御や操舵制御などの車両運動制御を行うことで駐車予定位置SEPへの車両Vの自動駐車を開始する。このステップS520の処理は、駐車制御部53の追従制御部57によって行われる。 Subsequently, in step S520, the parking control unit 53 starts automatic parking of the vehicle V at the planned parking position SEP by performing vehicle motion control such as acceleration/deceleration control and steering control of the vehicle V. The processing of step S520 is performed by the follow-up control section 57 of the parking control section 53.
 続いて、駐車制御部53は、ステップS530にて、目標経路TP上および駐車予定位置SEPに学習処理時にない新規な障害物OBがあるかを判定する。具体的には、駐車制御部53は、追従制御処理の開始後の立体物認識の認識結果と経路情報に含まれる立体物認識の認識結果に基づいて新規な障害物OBの有無を判定する。なお、障害物OBは、立体物認識で認識された立体物で構成される。 Subsequently, in step S530, the parking control unit 53 determines whether there is a new obstacle OB that did not exist during the learning process on the target route TP and the planned parking position SEP. Specifically, the parking control unit 53 determines whether or not there is a new obstacle OB based on the recognition result of the three-dimensional object recognition after the start of the tracking control process and the recognition result of the three-dimensional object recognition included in the route information. The obstacle OB is composed of a three-dimensional object recognized by three-dimensional object recognition.
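The comparison in step S530 — recognition results obtained during follow-up control versus those stored in the route information at learning time — can be sketched as follows (a simplified stand-in: obstacles are reduced to 2-D positions and matched by distance, and the tolerance value is an assumption, not taken from the specification):

```python
def find_new_obstacles(current_obs, learned_obs, tol=0.5):
    """Return obstacles recognized during the follow-up control process that
    were absent from the learning-time recognition result. Obstacles are
    (x, y) positions; two detections within `tol` metres of each other are
    treated as the same object."""
    new = []
    for cx, cy in current_obs:
        if all((cx - lx) ** 2 + (cy - ly) ** 2 > tol ** 2 for lx, ly in learned_obs):
            new.append((cx, cy))
    return new
```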
 目標経路TP上および駐車予定位置SEPに新規な障害物OBがない場合、駐車制御部53は、ステップS540にて、目標経路TPに関する情報を利用者に提供する。このステップS540の処理は、駐車制御部53の情報提供部58によって行われる。 If there is no new obstacle OB on the target route TP or at the planned parking position SEP, the parking control unit 53 provides information on the target route TP to the user in step S540. The processing of step S540 is performed by the information providing section 58 of the parking control section 53.
 駐車制御部53は、例えば、図14および図15に示すように、車両周辺画像Gaに映る駐車予定位置SEPに対応する部分にアイコンPを重畳したものをタッチパネル表示部46の左領域に表示する。 For example, as shown in FIGS. 14 and 15, the parking control unit 53 displays, in the left area of the touch panel display unit 46, an image in which an icon P is superimposed on the portion corresponding to the planned parking position SEP shown in the vehicle peripheral image Ga.
 ここで、図14は、追従制御処理の開始時のタッチパネル表示部46への表示内容の一例を図示している。また、図15は、追従制御処理の開始後のタッチパネル表示部46への表示内容の一例を図示している。 Here, FIG. 14 illustrates an example of display contents on the touch panel display unit 46 at the start of the follow-up control process. Also, FIG. 15 illustrates an example of display contents on the touch panel display unit 46 after the follow-up control process is started.
 図14に示す車両周辺画像Gaでは、駐車予定位置SEPの大部分がビルBLで隠れるため、アイコンPを半透明な態様で重畳させている。また、図15に示す車両周辺画像Gaでは、駐車予定位置SEPが視認できるため、アイコンPを不透明な態様で重畳させている。これにより、駐車予定位置SEPの位置を把握できるようになっている。なお、図15に示す車両周辺画像Gaでは、駐車予定位置SEPの一部が見切れている。このため、車両周辺画像Gaにおける右側端にアイコンPが重畳されている。 In the vehicle peripheral image Ga shown in FIG. 14, most of the planned parking position SEP is hidden by the building BL, so the icon P is superimposed in a translucent manner. In addition, in the vehicle peripheral image Ga shown in FIG. 15, since the planned parking position SEP can be visually recognized, the icon P is superimposed in an opaque manner. This makes it possible to grasp the position of the planned parking position SEP. In addition, in the vehicle periphery image Ga shown in FIG. 15, a part of the planned parking position SEP is cut off. Therefore, the icon P is superimposed on the right end of the vehicle peripheral image Ga.
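The switching between the semi-transparent rendering of FIG. 14 and the opaque rendering of FIG. 15 can be sketched as follows (the visibility threshold and the alpha values are illustrative assumptions; the specification only distinguishes "mostly hidden" from "visible"):

```python
def icon_alpha(visible_fraction: float) -> float:
    """Render the planned-parking-position icon P semi-transparent while
    the position is mostly hidden (e.g. behind the building BL, FIG. 14)
    and opaque once it becomes visible (FIG. 15)."""
    return 1.0 if visible_fraction >= 0.5 else 0.5
```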
 また、駐車制御部53は、俯瞰画像Ghに目標経路画像Gtを重畳したものをタッチパネル表示部46の右領域に表示する。さらに、駐車制御部53は、タッチパネル表示部46における車両周辺画像Gaの下領域に車両Vの自動駐車の進捗状況を示すプログレスバーPBを表示する。このプログレスバーPBは、横長の棒形状を有しており、支援開始位置STPから車両Vの現在位置までの距離が増加するにつれて棒の内側の色付き部分が増えるようになっている。これにより、利用者が自動駐車の進捗状況を視覚的に把握できるようになっている。なお、車両周辺画像Gaの下領域には、例えば、プログレスバーPBの代わりに、駐車予定位置SEPまでの残距離が表示されるようになっていてもよい。 In addition, the parking control unit 53 displays, in the right area of the touch panel display unit 46, the target route image Gt superimposed on the overhead image Gh. Furthermore, the parking control unit 53 displays a progress bar PB indicating the progress of automatic parking of the vehicle V in the area below the vehicle peripheral image Ga on the touch panel display unit 46. This progress bar PB has a horizontally long bar shape, and as the distance from the support start position STP to the current position of the vehicle V increases, the colored portion inside the bar increases. This allows the user to visually grasp the progress of automatic parking. In addition, the remaining distance to the planned parking position SEP may be displayed in the lower area of the vehicle periphery image Ga instead of the progress bar PB, for example.
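The filled fraction of the progress bar PB can be sketched as the ratio of the distance travelled from the support start position STP to the total route length (a simplifying assumption: the specification only states that the colored portion grows with that distance):

```python
def parking_progress(dist_from_start: float, route_length: float) -> float:
    """Filled fraction (0.0-1.0) of the progress bar PB: grows as the
    distance from the support start position STP to the current position
    of the vehicle V increases, clamped to the valid range."""
    if route_length <= 0:
        return 1.0
    return min(max(dist_from_start / route_length, 0.0), 1.0)
```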
 駐車制御部53は、図16に示すように、駐車予定位置SEPに近づくと仮想視点画像の仮想視点の角度が大きくなり、駐車予定位置SEPから遠ざかると仮想視点画像の仮想視点の角度が小さくなるように、仮想視点画像の表示態様を変化させる。すなわち、利用者の視野と同様に、目標が近づくと画像に表示すべき範囲も狭くなる。これによると、利用者の視野と同様に、仮想視点画像の表示態様を変化させることになるので、利用者が駐車予定位置SEPとの距離を把握し易くなる。 As shown in FIG. 16, the parking control unit 53 changes the display mode of the virtual viewpoint image so that the angle of the virtual viewpoint becomes larger as the vehicle approaches the planned parking position SEP and smaller as the vehicle moves away from it. That is, as with the user's field of view, the closer the target is, the narrower the range to be displayed in the image. Since the display mode of the virtual viewpoint image thus changes in the same manner as the user's field of view, it becomes easier for the user to grasp the distance to the planned parking position SEP.
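One way to realize this distance-dependent viewpoint is a linear mapping from the distance to SEP onto a viewpoint angle (the distance and angle ranges below are illustrative assumptions, not values from the specification):

```python
def viewpoint_angle(dist_to_sep, near=2.0, far=20.0, min_deg=20.0, max_deg=70.0):
    """Virtual-viewpoint angle in degrees: larger (narrower displayed range)
    as the vehicle nears the planned parking position SEP, smaller as it
    recedes, matching how a person's effective field of view narrows when
    approaching a target."""
    d = min(max(dist_to_sep, near), far)
    t = (far - d) / (far - near)        # 1.0 when near, 0.0 when far
    return min_deg + t * (max_deg - min_deg)
```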
 ここで、タッチパネル表示部46の右領域に表示する画像は、俯瞰画像Ghに限定されない。例えば、図17に示すように、俯瞰画像Ghに代えて、イラスト画像Giが、タッチパネル表示部46の右領域に表示されていてもよい。このように、イラスト画像Giをタッチパネル表示部46に表示する場合、イラスト画像Giに車両Vの現在位置、駐車予定位置SEP、支援開始位置STPを示すマークを重畳させることが望ましい。この場合、利用者が自動駐車の進捗状況を視覚的に把握できる。 Here, the image displayed in the right area of the touch panel display section 46 is not limited to the overhead image Gh. For example, as shown in FIG. 17, an illustration image Gi may be displayed in the right area of the touch panel display section 46 instead of the overhead image Gh. In this way, when the illustration image Gi is displayed on the touch panel display unit 46, it is desirable to superimpose marks indicating the current position of the vehicle V, the planned parking position SEP, and the support start position STP on the illustration image Gi. In this case, the user can visually grasp the progress of automatic parking.
 その後、駐車制御部53は、ステップS550にて、車両Vが駐車予定位置SEPに到達したか否かを判定する。駐車制御部53は、車両Vが駐車予定位置SEPに到達していない場合にステップS510の処理に戻り、車両Vが駐車予定位置SEPに到達すると追従制御処理を抜ける。 After that, in step S550, the parking control unit 53 determines whether or not the vehicle V has reached the planned parking position SEP. The parking control unit 53 returns to the process of step S510 when the vehicle V has not reached the planned parking position SEP, and exits the follow-up control process when the vehicle V reaches the planned parking position SEP.
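The control loop S510 (position estimation) → S520 (motion control) → S550 (arrival check) can be sketched with a toy one-dimensional stand-in (positions, step size, and tolerance are purely illustrative; real position estimation and vehicle motion control are of course far more involved):

```python
def run_follow_up(start, sep, step=1.0, tol=0.5, max_iters=1000):
    """Toy 1-D stand-in for the follow-up control loop: each cycle
    re-estimates the position, applies motion control toward the planned
    parking position SEP, and exits once within `tol` of it."""
    pos = start                               # S510: current position estimate
    for _ in range(max_iters):
        if abs(sep - pos) <= tol:             # S550: SEP reached -> exit loop
            return pos
        pos += step if sep > pos else -step   # S520: motion control toward SEP
    raise RuntimeError("SEP not reached within iteration limit")
```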
 一方、目標経路TP上および駐車予定位置SEPに新規な障害物OBがある場合、目標経路TPに沿って駐車予定位置SEPへ車両Vを移動させることが困難となる。このため、駐車制御部53は、ステップS560にて、目標経路TP上にある障害物OBを回避して駐車予定位置SEPに至る回避経路を探索し、当該回避経路の生成を試みる。この回避経路の探索等は、ステップS340の処理と同様であるため、その説明を省略する。 On the other hand, if there is a new obstacle OB on the target route TP and at the planned parking position SEP, it becomes difficult to move the vehicle V to the planned parking position SEP along the target route TP. Therefore, in step S560, the parking control unit 53 searches for an avoidance route that avoids the obstacle OB on the target route TP and reaches the planned parking position SEP, and attempts to generate the avoidance route. Since the avoidance route search and the like are the same as the processing in step S340, the description thereof will be omitted.
 ここで、回避経路の探索等に時間を要する場合がある。このため、駐車制御部53は、回避経路の探索を開始する場合は、例えば、図18に示すように、回避経路の探索中である旨のメッセージ画像Gmをタッチパネル表示部46に表示する。このように、システムの内部状態を利用者に通知することで、自動駐車時の経路変更等を利用者に心構えをさせることができる。 Here, it may take time to search for an avoidance route. Therefore, when starting to search for an avoidance route, the parking control unit 53 displays a message image Gm indicating that the avoidance route is being searched on the touch panel display unit 46, as shown in FIG. 18, for example. In this way, by notifying the user of the internal state of the system, it is possible to make the user ready to change the route during automatic parking.
 回避経路の探索中であることの通知は、タッチパネル表示部46へのメッセージ画像Gmの表示に限定されない。例えば、車両周辺画像Gaに示す目標経路TPの色を変更したり、目標経路TPを示す線を点滅したりすることで、回避経路の探索中であることを利用者に通知するようになっていてもよい。また、音声によって回避経路の探索中であることを利用者に通知するようになっていてもよい。 The notification that the avoidance route is being searched for is not limited to the display of the message image Gm on the touch panel display unit 46. For example, the user may be notified that the avoidance route is being searched for by changing the color of the target route TP shown in the vehicle peripheral image Ga or by blinking the line indicating the target route TP. Alternatively, the user may be notified by voice that the avoidance route is being searched for.
 続いて、駐車制御部53は、ステップS570にて、回避経路を生成できたかを判定する。回避経路を生成できた場合、駐車制御部53は、ステップS580にて、回避経路を目標経路TPに置き換えるとともに、当該回避経路に関する情報をタッチパネル表示部46に表示する。例えば、駐車制御部53は、車両周辺画像Gaおよび俯瞰画像Ghに回避経路を示す画像を重畳したものをタッチパネル表示部46に表示する。また、駐車制御部53は、スピーカ47を利用して回避経路を目標経路TPに置き換える旨を利用者にアナウンスする。なお、目標経路を回避経路に変更する旨のメッセージをタッチパネル表示部46に表示するようになっていてもよい。目標経路を回避経路に変更することは、ユーザが意図しないものであるため、タッチパネル表示部46への回避経路の表示、経路変更に関するメッセージの表示や音声通知を組み合わせることが望ましい。 Next, in step S570, the parking control unit 53 determines whether the avoidance route has been generated. If the avoidance route has been generated, the parking control unit 53 adopts the avoidance route as the new target route TP and displays information about the avoidance route on the touch panel display unit 46 in step S580. For example, the parking control unit 53 displays, on the touch panel display unit 46, images in which the avoidance route is superimposed on the vehicle peripheral image Ga and the overhead image Gh. In addition, the parking control unit 53 uses the speaker 47 to announce to the user that the target route TP is being replaced with the avoidance route. A message indicating that the target route is to be changed to the avoidance route may also be displayed on the touch panel display section 46. Since changing the target route to the avoidance route is not intended by the user, it is desirable to combine the display of the avoidance route on the touch panel display unit 46 with the display of a message regarding the route change and a voice notification.
 その後、駐車制御部53は、ステップS550にて、車両Vが駐車予定位置SEPに到達したか否かを判定する。駐車制御部53は、車両Vが駐車予定位置SEPに到達していない場合にステップS510の処理に戻り、車両Vが駐車予定位置SEPに到達すると追従制御処理を抜ける。 After that, in step S550, the parking control unit 53 determines whether or not the vehicle V has reached the planned parking position SEP. The parking control unit 53 returns to the process of step S510 when the vehicle V has not reached the planned parking position SEP, and exits the follow-up control process when the vehicle V reaches the planned parking position SEP.
 一方、回避経路を生成できなかった場合、駐車制御部53は、ステップS590にて、駐車場PL内において車両Vを停車させることが可能な停車位置TSPを特定する。具体的には、駐車制御部53は、フリースペース認識の認識結果を利用して停車位置TSPを特定する。例えば、図19に示すように、第2駐車スペースSP2が空いていた場合、駐車制御部53は、第2駐車スペースSP2を停車位置TSPとして特定する。この停車位置TSPは、駐車予定位置SEPとは異なる停止可能位置である。なお、駐車制御部53は、第2駐車スペースSP2以外のフリースペースを停車位置TSPとして特定するようになっていてもよい。 On the other hand, if the avoidance route could not be generated, the parking control unit 53 identifies, in step S590, a stop position TSP where the vehicle V can be stopped within the parking lot PL. Specifically, the parking control unit 53 identifies the stop position TSP using the recognition result of the free space recognition. For example, as shown in FIG. 19, when the second parking space SP2 is vacant, the parking control unit 53 identifies the second parking space SP2 as the stop position TSP. This stop position TSP is a stoppable position different from the planned parking position SEP. The parking control unit 53 may also identify a free space other than the second parking space SP2 as the stop position TSP.
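Picking the stop position TSP from the free-space recognition result can be sketched as follows (choosing the first vacant space is an assumption for illustration; the specification does not fix a particular selection rule):

```python
def choose_stop_position(spaces):
    """S590: select a stop position TSP from the free-space recognition
    result, here simply the first space flagged as free. `spaces` is a
    list of (space_id, is_free) pairs; returns None if nothing is free."""
    for space_id, is_free in spaces:
        if is_free:
            return space_id
    return None
```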
 続いて、駐車制御部53は、ステップS600にて、停車位置TSPおよび停車位置TSPまでの経路に関する情報を利用者に提供して、駐車予定位置SEPとは異なる位置での停車を勧める。例えば、第2駐車スペースSP2が停車位置TSPとして特定された場合、駐車制御部53は、図20に示すように、車両周辺画像Gaに映る停車位置TSPに対応する部分にアイコンPを重畳したものをタッチパネル表示部46の左領域に表示する。また、駐車制御部53は、俯瞰画像Ghに停車位置TSPまでの経路を示す画像を重畳したものをタッチパネル表示部46の右領域に表示する。さらに、駐車制御部53は、スピーカ47を利用して、駐車予定位置SEPに駐車できず、駐車予定位置SEPとは異なる位置で停車する旨をアナウンスする。このステップS600の処理は、駐車制御部53の情報提供部58によって行われる。さらに、駐車制御部53は、タッチパネル表示部46における車両周辺画像Gaの下領域にスタートボタンSTBを表示し、このスタートボタンSTBがタッチ操作されると、ステップS610の別位置停止処理に移行する。 Subsequently, in step S600, the parking control unit 53 provides the user with information regarding the stop position TSP and the route to the stop position TSP, and recommends stopping at a position different from the planned parking position SEP. For example, when the second parking space SP2 is identified as the stop position TSP, the parking control unit 53 displays, in the left area of the touch panel display unit 46, an image in which an icon P is superimposed on the portion corresponding to the stop position TSP shown in the vehicle peripheral image Ga, as shown in FIG. 20. In addition, the parking control unit 53 displays, in the right area of the touch panel display unit 46, an image showing the route to the stop position TSP superimposed on the bird's-eye view image Gh. Furthermore, the parking control unit 53 uses the speaker 47 to announce that the vehicle cannot be parked at the planned parking position SEP and will stop at a position different from the planned parking position SEP. The processing of step S600 is performed by the information providing section 58 of the parking control section 53. Further, the parking control unit 53 displays a start button STB in the area below the vehicle periphery image Ga on the touch panel display unit 46, and when the start button STB is touch-operated, the process proceeds to the another-position stop processing in step S610.
 続いて、駐車制御部53は、ステップS610にて、停車位置TSPに車両Vを移動させて停車させる別位置停車処理を実行する。駐車制御部53は、車両Vの加減速制御や操舵制御などの車両運動制御を行うことで停車位置TSPへの車両Vの自動駐車を開始する。このステップS610の処理は、駐車制御部53の追従制御部57によって行われる。車両Vが停車位置TSPに到達すると、駐車制御部53は、停車位置TSPに車両Vを停車させて本処理を抜ける。 Subsequently, in step S610, the parking control unit 53 executes another-position stop processing for moving the vehicle V to the stop position TSP and stopping it there. The parking control unit 53 starts automatic parking of the vehicle V at the stop position TSP by performing vehicle motion control such as acceleration/deceleration control and steering control of the vehicle V. The processing of step S610 is performed by the follow-up control section 57 of the parking control section 53. When the vehicle V reaches the stop position TSP, the parking control unit 53 stops the vehicle V at the stop position TSP and exits from this process.
 図4に戻り、ステップS240の判定結果が駐車予定位置SEPへの駐車が不可である場合、駐車制御部53は、ステップS270にて、車両Vを停車することが可能な停車可能位置を特定する。換言すれば、駐車制御部53は、目標経路TPの生成時に、物体回避経路を生成できなかった場合、停車可能位置を特定する。具体的には、駐車制御部53は、フリースペース認識の認識結果を利用して停車可能位置を特定する。例えば、図19に示すように、第2駐車スペースSP2が空いていた場合、駐車制御部53は、第2駐車スペースSP2を停車可能位置として特定する。この停車可能位置は、駐車予定位置SEPとは異なる停止可能位置である。なお、駐車制御部53は、第2駐車スペースSP2以外のフリースペースを停車可能位置として特定するようになっていてもよい。 Returning to FIG. 4, when the determination result in step S240 indicates that parking at the planned parking position SEP is not possible, the parking control unit 53 specifies, in step S270, a stoppable position where the vehicle V can be stopped. In other words, the parking control unit 53 specifies the stoppable position when the object avoidance route could not be generated at the time of generating the target route TP. Specifically, the parking control unit 53 uses the recognition result of the free space recognition to specify the stoppable position. For example, as shown in FIG. 19, when the second parking space SP2 is vacant, the parking control unit 53 identifies the second parking space SP2 as the stoppable position. This stoppable position differs from the planned parking position SEP. The parking control unit 53 may also specify a free space other than the second parking space SP2 as the stoppable position.
 続いて、駐車制御部53は、ステップS280にて、停車可能位置および停車可能位置までの経路に関する情報を利用者に提供して、駐車予定位置SEPとは異なる位置での停車を勧める。この処理では、例えば、車両周辺画像Gaや俯瞰画像Ghに映る停車可能位置にアイコンPを重畳したものをタッチパネル表示部46に表示する。また、駐車制御部53は、スピーカ47を利用して、駐車予定位置SEPに駐車できず、駐車予定位置SEPとは異なる位置で停車する旨をアナウンスする。このステップS280の処理は、駐車制御部53の情報提供部58によって行われる。さらに、駐車制御部53は、タッチパネル表示部46にスタートボタンSTBを表示し、このスタートボタンSTBがタッチ操作されると、ステップS290の別位置停止処理に移行する。 Subsequently, in step S280, the parking control unit 53 provides the user with information on the stoppable position and the route to the stoppable position, and recommends stopping at a position different from the planned parking position SEP. In this process, for example, the touch panel display unit 46 displays an image in which an icon P is superimposed on the stoppable position shown in the vehicle peripheral image Ga or the bird's-eye view image Gh. In addition, the parking control unit 53 uses the speaker 47 to announce that the vehicle cannot be parked at the planned parking position SEP and will stop at a position different from the planned parking position SEP. The processing of step S280 is performed by the information providing section 58 of the parking control section 53. Further, the parking control unit 53 displays the start button STB on the touch panel display unit 46, and when the start button STB is touch-operated, the process proceeds to the another-position stop processing in step S290.
 続いて、駐車制御部53は、ステップS290にて、停車可能位置に車両Vを移動させて停車させる別位置停車処理を実行する。駐車制御部53は、車両Vの加減速制御や操舵制御などの車両運動制御を行うことで停車可能位置への車両Vの自動駐車を開始する。このステップS290の処理は、駐車制御部53の追従制御部57によって行われる。車両Vが停車可能位置に到達すると、駐車制御部53は、停車可能位置に車両Vを停車させて本処理を抜ける。 Subsequently, in step S290, the parking control unit 53 executes another-position stop processing for moving the vehicle V to the stoppable position and stopping it there. The parking control unit 53 starts automatic parking of the vehicle V at the stoppable position by performing vehicle motion control such as acceleration/deceleration control and steering control of the vehicle V. The processing of step S290 is performed by the follow-up control section 57 of the parking control section 53. When the vehicle V reaches the stoppable position, the parking control unit 53 stops the vehicle V at the stoppable position and exits from this process.
 以上説明した駐車支援装置5および駐車支援方法は、利用者による車両Vの駐車操作が行われた際の車両Vの走行経路および走行経路における車両Vの周辺の情報を含む経路情報に基づいて車両Vの駐車時に車両Vが通るべき目標経路TPを生成する。駐車支援装置5および駐車支援方法は、目標経路TPに沿って駐車予定位置SEPまで車両Vを自動的に移動させる追従制御処理を行う。駐車支援装置5および駐車支援方法は、上記の経路情報に含まれる駐車予定位置SEPに関する情報を、追従制御処理の開始前に利用者に向けて視覚的な態様で提供する。 The parking assistance device 5 and the parking assistance method described above generate a target route TP that the vehicle V should follow when the vehicle V is parked, based on route information including the travel route of the vehicle V when the user performed the parking operation of the vehicle V and information about the surroundings of the vehicle V along that travel route. The parking assistance device 5 and the parking assistance method perform follow-up control processing for automatically moving the vehicle V to the planned parking position SEP along the target route TP. The parking assistance device 5 and the parking assistance method provide the information regarding the planned parking position SEP included in the above route information to the user in a visual manner before starting the follow-up control process.
 これによると、利用者は、駐車予定位置SEPを明確に把握した上で当該駐車予定位置SEPへの車両Vの自動駐車を開始させることができる。したがって、本開示の駐車支援装置5および駐車支援方法によれば、自動駐車のユーザビリティの向上を図ることができる。 According to this, the user can start automatically parking the vehicle V at the planned parking position SEP after clearly grasping the planned parking position SEP. Therefore, according to the parking assistance device 5 and the parking assistance method of the present disclosure, usability of automatic parking can be improved.
 また、本実施形態によれば、以下の効果を得ることができる。 Also, according to this embodiment, the following effects can be obtained.
 (1)情報提供部58は、経路情報のうち駐車予定位置SEPの周辺の情報として得られる画像に対して、当該画像に映る駐車予定位置SEPに車両Vを示す仮想車両画像Gvを重畳させたものを追従制御処理の開始前に利用者に向けて提供する。これによると、利用者は、追従制御処理の開始前に駐車予定位置SEPへの車両Vの駐車状態を視覚的に把握することが可能となる。すなわち、利用者は、追従制御処理の開始前に自動駐車による車両Vの駐車位置をイメージし易くなる。このことは、利用者の満足度を高める要因となり、自動駐車のユーザビリティの向上に大きく寄与する。 (1) The information providing unit 58 superimposes a virtual vehicle image Gv showing the vehicle V on the planned parking position SEP in the image obtained, from the route information, as information about the surroundings of the planned parking position SEP, and provides the result to the user before the start of the follow-up control process. According to this, the user can visually grasp the parking state of the vehicle V at the planned parking position SEP before the follow-up control process is started. That is, the user can easily visualize the parking position of the vehicle V by automatic parking before the follow-up control process is started. This is a factor that increases user satisfaction, and greatly contributes to improving the usability of automatic parking.
 (2)仮想車両画像Gvは、車両Vを半透明な態様で示した画像である。これによれば、駐車予定位置SEPへの車両Vの駐車状態を示す画像が車両Vの現在位置を示すものでないことを利用者に認識させることができる。すなわち、駐車予定位置SEPへの車両Vの駐車状態を示す画像が車両Vの現在位置を示すものと勘違いされることを抑制することができる。 (2) The virtual vehicle image Gv is an image showing the vehicle V in a translucent manner. According to this, the user can be made to recognize that the image showing the parking state of the vehicle V at the planned parking position SEP does not show the current position of the vehicle V. That is, it is possible to prevent the image showing the parking state of the vehicle V at the planned parking position SEP from being mistaken for one showing the current position of the vehicle V.
 (3)情報提供部58は、経路情報のうち駐車予定位置SEPの周辺の情報として得られる画像に対して、駐車予定位置SEPを示す駐車枠画像Gfを重畳させたものを追従制御処理の開始前に利用者に向けて提供する。これによると、駐車予定位置SEPが強調されることで、利用者は、追従制御処理の開始前に駐車予定位置SEPを視覚的に把握し易くなる。 (3) The information providing unit 58 superimposes a parking frame image Gf indicating the planned parking position SEP on the image obtained, from the route information, as information about the surroundings of the planned parking position SEP, and provides the result to the user before the start of the follow-up control process. According to this, by emphasizing the planned parking position SEP, it becomes easier for the user to visually grasp the planned parking position SEP before the follow-up control process is started.
 (4)情報提供部58は、駐車予定位置SEPの周辺が映る画像の三次元表示を利用者に向けて提供しつつ、利用者によるタッチパネル表示部46の操作信号に応じて三次元表示の視点変更を行う。これによると、駐車予定位置SEPに関する詳細な情報を利用者に提供することが可能になる。特に、タッチパネル表示部46のタッチ操作に応じて三次元表示の視点を変更することができるようになっていることで、利用者の意向に沿った情報の提供が可能となる。 (4) The information providing unit 58 provides the user with a three-dimensional display of an image showing the surroundings of the planned parking position SEP, and changes the viewpoint of the three-dimensional display according to an operation signal generated by the user's operation of the touch panel display unit 46. According to this, it becomes possible to provide the user with detailed information regarding the planned parking position SEP. In particular, since the viewpoint of the three-dimensional display can be changed according to a touch operation on the touch panel display section 46, it is possible to provide information in accordance with the user's intention.
 (5)情報提供部58は、駐車予定位置SEPの候補となる候補位置が経路情報に複数含まれている場合、経路情報に含まれる複数の候補位置に関する情報を利用者に向けて視覚的な態様で提供する。そして、情報提供部58は、複数の候補位置の中から駐車予定位置SEPの選択を促すための情報を提供する。このように、利用者が駐車予定位置SEPを選択することができるようになっていれば、利用者の意向を適切に反映した駐車支援を実現することができる。 (5) When the route information includes a plurality of candidate positions for the planned parking position SEP, the information providing unit 58 provides the user with information regarding the plurality of candidate positions included in the route information in a visual manner. The information providing unit 58 then provides information for prompting selection of the planned parking position SEP from among the plurality of candidate positions. If the user can thus select the planned parking position SEP, it is possible to realize parking assistance that appropriately reflects the user's intention.
 (6)追従制御部57は、追従制御処理の開始後に駐車予定位置SEPへ車両Vを駐車できない事態が生じた場合、追従制御処理の開始後に得られる車両Vの周辺の情報に基づいて駐車予定位置SEPとは異なる停止可能位置を特定する。そして、情報提供部58は、追従制御処理の開始後に駐車予定位置SEPへ車両Vを駐車できない事態が生じた場合、停止可能位置での停車を勧めるための情報を提供する。このように、自動駐車の開始後に駐車予定位置SEPへ車両Vを駐車できない事態が生じたとしても、利用者に向けて駐車予定位置SEPとは異なる停止可能位置での停車を勧めるようになっていることが望ましい。なお、駐車予定位置SEPへ車両Vを駐車できない事態は、例えば、他車両が駐車予定位置SEPに駐車されていたり、駐車予定位置SEPに車両Vの駐車を阻害する障害物OBが設置されていたりする場合が挙げられる。 (6) If a situation arises in which the vehicle V cannot be parked at the planned parking position SEP after the start of the follow-up control process, the follow-up control unit 57 specifies a stoppable position different from the planned parking position SEP based on information about the surroundings of the vehicle V obtained after the start of the follow-up control process. Then, in such a situation, the information providing unit 58 provides information for recommending that the vehicle stop at the stoppable position. In this way, even if the vehicle V cannot be parked at the planned parking position SEP after automatic parking has started, it is desirable that the user be encouraged to stop at a stoppable position different from the planned parking position SEP. A situation in which the vehicle V cannot be parked at the planned parking position SEP arises, for example, when another vehicle is parked at the planned parking position SEP, or when an obstacle OB that obstructs the parking of the vehicle V is placed at the planned parking position SEP.
 (7)情報提供部58は、追従制御処理の開始後、追従制御処理の実行中に得られた車両Vの周辺が映る周辺画像に目標経路TPを示す目標経路画像Gtを重畳させたものを利用者に向けて提供する。このように、自動駐車時に利用者に対して車両Vが走行する予定の経路が視覚的な態様で提供される構成になっていることが望ましい。これによれば、利用者は、駐車予定位置SEPまでの車両Vの走行経路を明確に把握した上で当該駐車予定位置SEPへの車両Vの自動駐車を提供することができる。 (7) After the start of the follow-up control process, the information providing unit 58 superimposes a target route image Gt indicating the target route TP on a surrounding image showing the surroundings of the vehicle V obtained during execution of the follow-up control process, and provides the result to the user. In this manner, it is desirable that the route along which the vehicle V is scheduled to travel be provided to the user in a visual manner during automatic parking. According to this, the user can have the vehicle V automatically parked at the planned parking position SEP after clearly grasping the travel route of the vehicle V to the planned parking position SEP.
 (8)情報提供部58は、目標経路TPにおいて周辺画像に映る物体の前にある経路を前経路として特定するとともに、目標経路TPにおいて物体の背後にある経路を背後経路として特定する。情報提供部58は、目標経路画像Gtのうち背後経路に対応する部分Gtbと前経路に対応する部分Gtfとを異なる態様で周辺画像に重畳する。このように、車両Vの目標経路TPは、利用者が実際に見える経路と実際には見えない経路とが区別された態様で提供される構成になっていることが望ましい。 (8) The information providing unit 58 identifies the path in front of the object appearing in the surrounding image on the target path TP as the front path, and identifies the path behind the object on the target path TP as the back path. The information providing unit 58 superimposes the portion Gtb corresponding to the back route and the portion Gtf corresponding to the front route in the target route image Gt on the surrounding images in different manners. In this way, it is desirable that the target route TP of the vehicle V is provided in such a manner that the route that the user can actually see and the route that the user cannot actually see are distinguished.
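The split of the target route image Gt into a front part Gtf and a back part Gtb can be sketched with a simplified occlusion test (the camera model and the occluder representation below are illustrative assumptions; a real implementation would use the recognized 3-D objects and the camera projection):

```python
def split_route_by_occlusion(route_pts, occluders):
    """Partition target-route points into the part drawn in front of objects
    (Gtf) and the part hidden behind them (Gtb). The camera looks along +y
    from the origin; an occluder is (x_min, x_max, depth) and hides any
    point whose x falls in its span and whose y lies farther than depth."""
    front, back = [], []
    for x, y in route_pts:
        hidden = any(x0 <= x <= x1 and y > d for x0, x1, d in occluders)
        (back if hidden else front).append((x, y))
    return front, back
```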
 (9)情報提供部58は、車両Vの現在位置、目標経路TP、駐車予定位置SEPの関係を示すイラスト画像Giを利用者に向けて提供する。イラスト画像Giは、撮像画像等に比べて余分な情報量が少ないので、車両Vの現在位置、目標経路TP、駐車予定位置SEPが際立つ。このため、車両Vの現在位置、目標経路TP、駐車予定位置SEPを示すイラスト画像Giを利用者に提供することで、自動駐車の概要が利用者に伝え易くなる。 (9) The information providing unit 58 provides the user with an illustration image Gi showing the relationship between the current position of the vehicle V, the target route TP, and the planned parking position SEP. Since the illustration image Gi has a smaller amount of extra information than the captured image or the like, the current position of the vehicle V, the target route TP, and the planned parking position SEP stand out. Therefore, by providing the user with an illustration image Gi showing the current position of the vehicle V, the target route TP, and the planned parking position SEP, it becomes easier to convey an overview of automatic parking to the user.
 (10)追従制御部57は、目標経路TP上に障害物OBが発見された場合、障害物OBを回避して駐車予定位置SEPに至る回避経路の生成を試みる。そして、情報提供部58は、追従制御部57によって回避経路が生成された場合に当該回避経路に関する情報を視覚的な態様で利用者に提供する。このように、目標経路TP上に障害物OBがある場合は回避経路に関する情報が視覚的な態様で利用者に提供される構成になっていることが望ましい。 (10) When an obstacle OB is found on the target route TP, the follow-up control unit 57 tries to generate an avoidance route that avoids the obstacle OB and reaches the planned parking position SEP. Then, when the follow-up control unit 57 generates an avoidance route, the information providing unit 58 provides the user with information regarding the avoidance route in a visual manner. In this way, it is desirable to provide the user with information on the avoidance route in a visual manner when there is an obstacle OB on the target route TP.
 (11)追従制御部57は、回避経路を生成できなかった場合、車両Vを停車することが可能な停車位置TSPを特定する。情報提供部58は、停車位置TSPおよび停車位置TSPまでの経路に関する情報を利用者に提供する。このように、回避経路を生成できない場合、回避経路の代替として車両Vの停車位置TSP等に関する情報が利用者に提供される構成になっていることが望ましい。 (11) If the avoidance route cannot be generated, the follow-up control unit 57 identifies a stop position TSP where the vehicle V can be stopped. The information providing unit 58 provides the user with information regarding the stop position TSP and the route to the stop position TSP. In this way, when the avoidance route cannot be generated, it is desirable to provide the user with information regarding the stop position TSP of the vehicle V, etc., as an alternative for the avoidance route.
 (12)情報提供部58は、追従制御部57による回避経路の生成が開始されると、追従制御部57による回避経路の生成中であること示す情報を利用者に提供する。このように、回避経路に関する情報等を利用者に提供する前に、回避経路の生成中であることを利用者に伝えることが望ましい。経路変更が唐突に伝えられるのではなく、段階的に伝えられることになることで、経路変更に伴う利用者の心理的な負担の軽減を図ることが期待できる。 (12) When the follow-up control unit 57 starts generating the avoidance route, the information providing unit 58 provides the user with information indicating that the follow-up control unit 57 is generating the avoidance route. In this way, it is desirable to notify the user that the avoidance route is being generated before providing the user with information on the avoidance route. It can be expected that the route change will be transmitted in stages rather than suddenly, thereby reducing the psychological burden on the user accompanying the route change.
 (13)経路生成部55は、追従制御処理が開始される前に目標経路TP上に障害物OBが発見された場合、追従制御処理の開始前に障害物OBを回避して前記駐車予定位置に至る物体回避経路の生成を試みる。情報提供部58は、経路生成部55によって物体回避経路が生成された場合、追従制御処理の開始前の段階で物体回避経路に関する情報を視覚的な態様で利用者に提供する。このように、自動駐車の開始前に目標経路TP上に障害物OBがあることが判った場合は、自動駐車の開始前に、物体回避経路に関する情報が視覚的な態様で利用者に提供される構成になっていることが望ましい。 (13) When an obstacle OB is found on the target route TP before the follow-up control process is started, the route generation unit 55 attempts, before the start of the follow-up control process, to generate an object avoidance route that avoids the obstacle OB and reaches the planned parking position. When the object avoidance route is generated by the route generation unit 55, the information providing unit 58 provides the user with information on the object avoidance route in a visual manner before the follow-up control process is started. In this way, when it is found before the start of automatic parking that there is an obstacle OB on the target route TP, it is desirable that information regarding the object avoidance route be provided to the user in a visual manner before automatic parking starts.
 (14)追従制御部57は、経路生成部55にて物体回避経路を生成できなかった場合、車両Vを停車することが可能な停車可能位置を特定する。そして、情報提供部58は、停車可能位置および停車可能位置までの経路に関する情報を利用者に提供する。このように、自動駐車の開始前に駐車予定位置SEPへの駐車ができないことが判った場合は、物体回避経路の代替として車両Vの停車可能位置等に関する情報が利用者に提供される構成になっていることが望ましい。 (14) If the route generation unit 55 fails to generate an object avoidance route, the follow-up control unit 57 identifies a possible stop position where the vehicle V can be stopped. Then, the information providing unit 58 provides the user with information regarding the possible stop position and the route to the possible stop position. In this manner, when it is determined that parking at the planned parking position SEP is not possible before the start of automatic parking, the user is provided with information regarding possible stop positions of the vehicle V as an alternative to the object avoidance route. It is desirable that
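As a rough illustration only, the fallback behavior described in items (11) through (14), in which an avoidance route is attempted first and a stop position is reported when route generation fails, could be sketched as follows. The function names, data shapes, and coordinates are hypothetical and are not part of the disclosure:

```python
# Hypothetical sketch of the avoidance-route / stop-position fallback.
# Both callables are assumed placeholders for the disclosed units.

def plan_with_fallback(generate_avoidance_route, find_stop_position):
    """Try the avoidance route first; otherwise report a stop position."""
    route = generate_avoidance_route()
    if route is not None:
        return {"kind": "avoidance_route", "route": route}
    stop = find_stop_position()
    return {"kind": "stop_position", "position": stop}

# Example: route generation fails, so a stop position is offered instead.
result = plan_with_fallback(
    generate_avoidance_route=lambda: None,   # generation failed
    find_stop_position=lambda: (12.0, 3.5),  # assumed stop point (x, y) in m
)
```

In this sketch the information providing unit would then present either the route or the stop position to the user, depending on which branch was taken.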
 (Other embodiments)
 Representative embodiments of the present disclosure have been described above; however, the present disclosure is not limited to those embodiments and can be modified in various ways, for example as follows.
 In the above embodiment, the detailed configuration of the parking assistance device 5 and the details of the learning process and of the follow-up control process were described; however, the disclosure is not limited to these, and parts of them may differ.
 The parking assistance device 5 described above displays the virtual parking image Gp on the touch panel display unit 46 as information on the planned parking position SEP before the follow-up control process is started, but the disclosure is not limited to this. The parking assistance device 5 may instead provide the user with, for example, an image rendered from the detection results of the search wave sensors at the planned parking position SEP.
 The parking assistance device 5 described above displays not only the virtual parking image Gp but also the vehicle surrounding image Ga and the illustration image Gi on the touch panel display unit 46 before the follow-up control process is started, but the disclosure is not limited to this; for example, only the virtual parking image Gp may be displayed.
 In the above embodiment, the vehicle surrounding image Ga is displayed in the left area of the touch panel display unit 46, and the overhead image Gh, the virtual parking image Gp, and the illustration image Gi are displayed in the right area; however, the display layout is not limited to this. The image layout, image sizes, and the like on the touch panel display unit 46 may differ from those described above.
 Although the HMI 45 has the touch panel display unit 46 in the above embodiment, the HMI 45 is not limited to this. Instead of the touch panel display unit 46, the HMI 45 may have, for example, a display operated via an operation device such as a remote controller. The HMI 45 may also be realized using part of a navigation system.
 Furthermore, although the touch panel display unit 46 also serves as the operation unit, this is not limiting; the operation unit and the display unit may be configured as separate bodies. Operation is not limited to touch input and may be performed, for example, by the user's voice.
 In the above embodiment, the parking assistance device 5 desirably provides the user with an image, obtained as information on the surroundings of the planned parking position SEP, on which both the virtual vehicle image Gv and the parking frame image Gf are superimposed; however, the disclosure is not limited to this. The parking assistance device 5 may instead provide the user with, for example, the image itself, or the image with only one of the virtual vehicle image Gv and the parking frame image Gf superimposed on it. Note that the virtual vehicle image Gv is not limited to showing the vehicle V in a translucent manner; it may show the vehicle V in an opaque manner.
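As a purely illustrative aside, the translucent and opaque variants of the virtual vehicle image Gv mentioned above both correspond to ordinary alpha blending of an overlay onto a camera image. A minimal per-pixel sketch, with assumed 8-bit RGB values that are not taken from the disclosure:

```python
# Illustrative alpha blending of one RGB pixel.
# alpha = 0.5 gives a translucent overlay, alpha = 1.0 an opaque one.

def blend_pixel(background, overlay, alpha):
    """Blend an overlay pixel onto a background pixel (8-bit RGB assumed)."""
    return tuple(
        round(alpha * o + (1.0 - alpha) * b)
        for b, o in zip(background, overlay)
    )

# Translucent vehicle pixel: the overlay color mixes with the road surface.
translucent = blend_pixel((100, 100, 100), (200, 0, 0), 0.5)
# Opaque vehicle pixel: the overlay fully replaces the background.
opaque = blend_pixel((100, 100, 100), (200, 0, 0), 1.0)
```

Applying the same blend over every pixel inside the vehicle silhouette would yield the semi-transparent or opaque renderings described in the text.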
 As in the above embodiment, the parking assistance device 5 desirably provides the user with a three-dimensional display of an image showing the surroundings of the planned parking position SEP and changes the viewpoint of the three-dimensional display in response to the user's operation of the operation unit; however, the disclosure is not limited to this. The parking assistance device 5 may instead provide the user with, for example, a two-dimensional display of an image showing the surroundings of the planned parking position SEP.
 As in the above embodiment, when the route information includes a plurality of candidate positions for the planned parking position SEP, the parking assistance device 5 desirably provides the user with information on those candidate positions in a visual manner; however, the disclosure is not limited to this. For example, when the route information includes a plurality of candidate positions, the parking assistance device 5 may automatically set one of them as the planned parking position SEP based on a predetermined criterion.
 As in the above embodiment, the parking assistance device 5 desirably allows the user to visually grasp the progress of automatic parking by means of the progress bar PB or the like, but the disclosure is not limited to this. The parking assistance device 5 may, for example, allow the user to grasp the progress of automatic parking audibly.
 As in the above embodiment, the parking assistance device 5 desirably changes the angle of the virtual viewpoint of the virtual viewpoint image according to the distance to the target, but the disclosure is not limited to this. The angle of the virtual viewpoint may be kept constant regardless of the distance to the target.
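As a hypothetical sketch of the distance-dependent viewpoint described above, the pitch of the virtual viewpoint could be interpolated between a near angle and a far angle. The specific angles, the 10 m reference distance, and the linear mapping below are assumptions for illustration only:

```python
# Illustrative mapping from distance-to-target to virtual-viewpoint pitch:
# far from the target, a high (bird's-eye) angle; close to it, a lower angle.

def viewpoint_angle_deg(distance_m, far_m=10.0, far_angle=80.0, near_angle=30.0):
    """Linearly interpolate the viewpoint pitch between near and far angles."""
    ratio = min(max(distance_m / far_m, 0.0), 1.0)  # clamp to [0, 1]
    return near_angle + ratio * (far_angle - near_angle)
```

The constant-angle variant also mentioned above would simply return the same pitch for every distance.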
 As in the above embodiment, when a situation arises after the start of automatic parking in which the vehicle cannot be parked at the planned parking position SEP, the parking assistance device 5 desirably recommends to the user that the vehicle stop at a possible stop position different from the planned parking position SEP; however, the disclosure is not limited to this. The parking assistance device 5 may instead, for example, report the situation, stop the vehicle V on the spot, and forcibly terminate automatic parking.
 As in the above embodiment, after the follow-up control process is started, the parking assistance device 5 desirably provides the user with the surrounding image obtained during execution of the follow-up control process, on which the target route image Gt indicating the target route TP is superimposed; however, the disclosure is not limited to this. The parking assistance device 5 may, for example, display the surrounding image obtained during execution of the follow-up control process as it is.
 As in the above embodiment, the parking assistance device 5 desirably displays the part of the target route image Gt behind an object and the part in front of the object in different manners, but the disclosure is not limited to this. The parking assistance device 5 may, for example, display the part behind the object and the part in front of the object in the same manner.
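As an illustrative simplification of the different display manners described above, the target route could be split into a front portion and a back portion by comparing each route point's depth with the object's depth along the camera axis, so that the two portions can then be drawn in different styles. The data layout and the single-depth test below are assumptions, not the disclosed rendering method:

```python
# Illustrative split of route points into front (Gtf-like) and back (Gtb-like)
# portions relative to an object's assumed depth along the camera axis.

def split_route_by_occlusion(route_points, object_depth):
    """Return (front_route, back_route) relative to an object's depth."""
    front = [p for p in route_points if p["depth"] <= object_depth]
    back = [p for p in route_points if p["depth"] > object_depth]
    return front, back

points = [{"depth": d} for d in (1.0, 2.0, 4.0, 6.0)]
front, back = split_route_by_occlusion(points, object_depth=3.0)
```

A renderer could then draw the front portion as a solid line and the back portion as, say, a dashed or dimmed line, matching the two-style display described in the text.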
 As in the above embodiment, the parking assistance device 5 desirably provides the user with the illustration image Gi showing the relationship among the current position of the vehicle V, the target route TP, and the planned parking position SEP during automatic parking; however, this is not limiting, and the illustration image Gi may be omitted.
 As in the above embodiment, when searching for a route that avoids an obstacle OB on the target route TP, the parking assistance device 5 desirably notifies the user that the search is in progress; however, this is not limiting, and it may notify the user of nothing.
 As in the above embodiment, when there is an obstacle OB on the target route TP, the parking assistance device 5 desirably searches for a route that avoids the obstacle OB, but the disclosure is not limited to this. The parking assistance device 5 may, without searching for an avoidance route, for example prompt the vehicle to stop on the spot or request designation of a parking position.
 In the above embodiment, an example was described in which the parking assistance device 5 of the present disclosure is applied to parking assistance in a parking lot PL having a plurality of parking spaces SP; however, the application of the parking assistance device 5 is not limited to this. The parking assistance device 5 can also be applied to parking assistance on a site with a single parking space SP, such as in front of one's home.
 It goes without saying that, in the above embodiments, the elements constituting the embodiments are not necessarily essential, except where explicitly stated to be essential or clearly considered essential in principle.
 In the above embodiments, when numerical values such as the number, amount, or range of components are mentioned, the disclosure is not limited to those specific numbers, except where explicitly stated to be essential or clearly limited to a specific number in principle.
 In the above embodiments, when the shape, positional relationship, or the like of components is mentioned, the disclosure is not limited to that shape or positional relationship, except where explicitly stated or limited in principle to a specific shape or positional relationship.
 The control unit and the method thereof described in the present disclosure may be realized by a dedicated computer provided by configuring a processor and a memory programmed to execute one or more functions embodied by a computer program. The control unit and the method thereof may also be realized by a dedicated computer provided by configuring a processor with one or more dedicated hardware logic circuits. Alternatively, the control unit and the method thereof may be realized by one or more dedicated computers configured by combining a processor and a memory programmed to execute one or more functions with a processor configured with one or more hardware logic circuits. The computer program may be stored in a computer-readable non-transitory tangible recording medium as instructions to be executed by a computer.
 [Features of the present disclosure]
 The present disclosure discloses the following technical features.
 [Disclosure 1]
 A parking assistance device comprising:
 a route generation unit (55) that generates a target route (TP) along which a vehicle should travel when the vehicle is parked, based on route information including a travel route of the vehicle taken when a user performed a parking operation of the vehicle (V) and information on the surroundings of the vehicle along the travel route;
 a follow-up control unit (57) that performs a follow-up control process of automatically moving the vehicle along the target route to a planned parking position (SEP); and
 an information providing unit (58) that provides information to the user,
 wherein the information providing unit provides the user with information on the planned parking position included in the route information, in a visual manner, before the follow-up control process is started.
 [Disclosure 2]
 The parking assistance device according to Disclosure 1, wherein the information providing unit provides the user, before the follow-up control process is started, with an image obtained as information on the surroundings of the planned parking position in the route information, on which a virtual vehicle image (Gv) showing the vehicle is superimposed at the planned parking position appearing in the image.
 [Disclosure 3]
 The parking assistance device according to Disclosure 2, wherein the virtual vehicle image is an image showing the vehicle in a translucent manner.
 [Disclosure 4]
 The parking assistance device according to any one of Disclosures 1 to 3, wherein the information providing unit provides the user, before the follow-up control process is started, with an image obtained as information on the surroundings of the planned parking position in the route information, on which a parking frame image (Gf) indicating the planned parking position is superimposed.
 [Disclosure 5]
 The parking assistance device according to any one of Disclosures 1 to 4, wherein the information providing unit provides the user with a three-dimensional display of an image showing the surroundings of the planned parking position and changes the viewpoint of the three-dimensional display in response to an operation signal from an operation unit (46) operated by the user.
 [Disclosure 6]
 The parking assistance device according to any one of Disclosures 1 to 5, wherein, when the route information includes a plurality of candidate positions for the planned parking position, the information providing unit provides the user with information on the plurality of candidate positions included in the route information in a visual manner, together with information prompting selection of the planned parking position from among the plurality of candidate positions.
 [Disclosure 7]
 The parking assistance device according to any one of Disclosures 1 to 6, wherein, when a situation arises after the start of the follow-up control process in which the vehicle cannot be parked at the planned parking position, the follow-up control unit identifies a possible stop position different from the planned parking position based on information on the surroundings of the vehicle obtained after the start of the follow-up control process, and
 the information providing unit provides information recommending that the vehicle stop at the possible stop position when such a situation arises.
 [Disclosure 8]
 The parking assistance device according to any one of Disclosures 1 to 7, wherein, after the follow-up control process is started, the information providing unit provides the user with a surrounding image showing the surroundings of the vehicle obtained during execution of the follow-up control process, on which a target route image (Gt) indicating the target route is superimposed.
 [Disclosure 9]
 The parking assistance device according to Disclosure 8, wherein the information providing unit
 identifies, in the target route, a route in front of an object appearing in the surrounding image as a front route and a route behind the object as a back route, and
 superimposes a portion (Gtb) of the target route image corresponding to the back route and a portion (Gtf) corresponding to the front route on the surrounding image in different manners.
 [Disclosure 10]
 The parking assistance device according to any one of Disclosures 1 to 9, wherein the information providing unit provides the user with an illustration image (Gi) showing the relationship among the current position of the vehicle, the target route, and the planned parking position.
 [Disclosure 11]
 The parking assistance device according to any one of Disclosures 1 to 9, wherein, when an obstacle is found on the target route, the follow-up control unit attempts to generate an avoidance route that avoids the obstacle and reaches the planned parking position, and
 the information providing unit, when the avoidance route is generated by the follow-up control unit, provides the user with information on the avoidance route in a visual manner.
 [Disclosure 12]
 The parking assistance device according to Disclosure 11, wherein, when the avoidance route cannot be generated, the follow-up control unit identifies a stop position at which the vehicle can be stopped, and
 the information providing unit provides the user with information on the stop position and the route to the stop position.
 [Disclosure 13]
 The parking assistance device according to Disclosure 11 or 12, wherein, when the follow-up control unit starts generating the avoidance route, the information providing unit provides the user with information indicating that the avoidance route is being generated by the follow-up control unit.
 [Disclosure 14]
 The parking assistance device according to any one of Disclosures 1 to 13, wherein, when an obstacle is found on the target route before the follow-up control process is started, the route generation unit attempts to generate, before the start of the follow-up control process, an object avoidance route that avoids the obstacle and reaches the planned parking position, and
 the information providing unit, when the object avoidance route is generated by the route generation unit, provides the user with information on the object avoidance route in a visual manner before the follow-up control process is started.
 [Disclosure 15]
 The parking assistance device according to Disclosure 14, wherein, when the object avoidance route cannot be generated by the route generation unit, the follow-up control unit identifies a possible stop position at which the vehicle can be stopped, and
 the information providing unit provides the user with information on the possible stop position and the route to the possible stop position.
 [Disclosure 16]
 A parking assistance method comprising:
 generating a target route (TP) along which a vehicle should travel when the vehicle is parked, based on route information including a travel route of the vehicle taken when a user performed a parking operation of the vehicle (V) and information on the surroundings of the vehicle along the travel route;
 performing a follow-up control process of automatically moving the vehicle along the target route to a planned parking position (SEP); and
 providing information to the user,
 wherein providing information to the user includes providing the user with information on the planned parking position included in the route information, in a visual manner, before the follow-up control process is started.

Claims (16)

  1.  A parking assistance device comprising:
     a route generation unit (55) that generates a target route (TP) along which a vehicle should travel when the vehicle is parked, based on route information including a travel route of the vehicle taken when a user performed a parking operation of the vehicle (V) and information on the surroundings of the vehicle along the travel route;
     a follow-up control unit (57) that performs a follow-up control process of automatically moving the vehicle along the target route to a planned parking position (SEP); and
     an information providing unit (58) that provides information to the user,
     wherein the information providing unit provides the user with information on the planned parking position included in the route information, in a visual manner, before the follow-up control process is started.
  2.  The parking assistance device according to claim 1, wherein the information providing unit provides the user, before the follow-up control process is started, with an image obtained as information on the surroundings of the planned parking position in the route information, on which a virtual vehicle image (Gv) showing the vehicle is superimposed at the planned parking position appearing in the image.
  3.  The parking assistance device according to claim 2, wherein the virtual vehicle image is an image showing the vehicle in a translucent manner.
  4.  The parking assistance device according to any one of claims 1 to 3, wherein the information providing unit provides the user, before the follow-up control process is started, with an image obtained as information on the surroundings of the planned parking position in the route information, on which a parking frame image (Gf) indicating the planned parking position is superimposed.
  5.  The parking assistance device according to any one of claims 1 to 3, wherein the information providing unit provides the user with a three-dimensional display of an image showing the surroundings of the planned parking position and changes the viewpoint of the three-dimensional display in response to an operation signal from an operation unit (46) operated by the user.
  6.  The parking assistance device according to any one of claims 1 to 3, wherein, when the route information includes a plurality of candidate positions for the planned parking position, the information providing unit provides the user with information on the plurality of candidate positions included in the route information in a visual manner, together with information prompting selection of the planned parking position from among the plurality of candidate positions.
  7.  The parking assistance device according to any one of claims 1 to 3, wherein, when a situation arises after the start of the follow-up control process in which the vehicle cannot be parked at the planned parking position, the follow-up control unit identifies a possible stop position different from the planned parking position based on information on the surroundings of the vehicle obtained after the start of the follow-up control process, and
     the information providing unit provides information recommending that the vehicle stop at the possible stop position when such a situation arises.
  8.  The parking assistance device according to any one of claims 1 to 3, wherein, after the follow-up control process is started, the information providing unit provides the user with a surrounding image showing the surroundings of the vehicle obtained during execution of the follow-up control process, on which a target route image (Gt) indicating the target route is superimposed.
  9.  The parking assistance device according to claim 8, wherein the information providing unit
     identifies, in the target route, a route in front of an object appearing in the surrounding image as a front route and a route behind the object as a back route, and
     superimposes a portion (Gtb) of the target route image corresponding to the back route and a portion (Gtf) corresponding to the front route on the surrounding image in different manners.
  10.  The parking assistance device according to any one of claims 1 to 3, wherein the information providing unit provides the user with an illustration image (Gi) showing the relationship among the current position of the vehicle, the target route, and the planned parking position.
  11.  The parking assistance device according to any one of claims 1 to 3, wherein, when an obstacle is found on the target route, the follow-up control unit attempts to generate an avoidance route that avoids the obstacle and reaches the planned parking position, and
     the information providing unit, when the avoidance route is generated by the follow-up control unit, provides the user with information on the avoidance route in a visual manner.
  12.  The parking assistance device according to claim 11, wherein, when the avoidance route cannot be generated, the follow-up control unit identifies a stop position at which the vehicle can be stopped, and
     the information providing unit provides the user with information on the stop position and the route to the stop position.
  13.  The parking assistance device according to claim 11, wherein, when the follow-up control unit starts generating the avoidance route, the information providing unit provides the user with information indicating that the avoidance route is being generated by the follow-up control unit.
  14.  The parking assistance device according to any one of claims 1 to 3, wherein, when an obstacle is found on the target route before the follow-up control process is started, the route generation unit attempts to generate, before the start of the follow-up control process, an object avoidance route that avoids the obstacle and reaches the planned parking position, and
     wherein, when the object avoidance route is generated by the route generation unit, the information providing unit provides the user with information regarding the object avoidance route in a visual manner before the follow-up control process is started.
  15.  The parking assistance device according to claim 14, wherein, when the object avoidance route cannot be generated by the route generation unit, the follow-up control unit identifies a possible stop position where the vehicle can be stopped, and
     wherein the information providing unit provides the user with information regarding the possible stop position and a route to the possible stop position.
  16.  A parking assistance method comprising:
     generating a target route (TP) along which a vehicle (V) should travel when the vehicle is parked, based on route information including a travel route of the vehicle when a parking operation of the vehicle was performed by a user and information about the surroundings of the vehicle on the travel route;
     performing follow-up control processing for automatically moving the vehicle along the target route to a planned parking position (SEP); and
     providing information to the user,
     wherein providing the information to the user includes providing information regarding the planned parking position included in the route information to the user in a visual manner before the follow-up control processing is started.
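The ordering that distinguishes method claim 16 — the planned parking position is shown to the user before, not during, follow-up control — can be sketched as a sequence of calls. This is an illustrative outline only; the function names and the `route_info` dictionary shape are assumptions, not part of the disclosure.

```python
from typing import Any, Callable, Dict, List

def parking_assist(
    route_info: Dict[str, Any],
    generate_target_route: Callable[[Dict[str, Any]], List[Any]],
    provide_info: Callable[[Any, List[Any]], None],
    follow_control: Callable[[List[Any], Any], None],
) -> List[Any]:
    """Outline of the claimed ordering of the three method steps."""
    # Step 1: generate the target route (TP) from the recorded drive,
    # i.e. the travel route plus surroundings captured during the
    # user's own parking operation.
    target_route = generate_target_route(route_info)
    # Step 2: visually provide the planned parking position (SEP)
    # *before* follow-up control starts.
    provide_info(route_info["planned_parking_position"], target_route)
    # Step 3: follow-up control moves the vehicle along the target
    # route to the planned parking position.
    follow_control(target_route, route_info["planned_parking_position"])
    return target_route
```

The point of the sketch is purely the call order: `provide_info` runs between route generation and the start of follow-up control, matching the "before starting the follow-up control process" limitation.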
PCT/JP2022/031613 2021-08-24 2022-08-22 Parking assistance device and parking assistance method WO2023027039A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2023543911A JPWO2023027039A1 (en) 2021-08-24 2022-08-22
CN202280055839.8A CN117836183A (en) 2021-08-24 2022-08-22 Parking support device and parking support method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021136598 2021-08-24
JP2021-136598 2021-08-24

Publications (1)

Publication Number Publication Date
WO2023027039A1 true WO2023027039A1 (en) 2023-03-02

Family

ID=85323198

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/031613 WO2023027039A1 (en) 2021-08-24 2022-08-22 Parking assistance device and parking assistance method

Country Status (3)

Country Link
JP (1) JPWO2023027039A1 (en)
CN (1) CN117836183A (en)
WO (1) WO2023027039A1 (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001133277A (en) * 1999-11-09 2001-05-18 Equos Research Co Ltd Navigation device
JP2004340827A (en) * 2003-05-16 2004-12-02 Xanavi Informatics Corp Route chart display method and display control device
JP2010034645A (en) * 2008-07-25 2010-02-12 Nissan Motor Co Ltd Parking assistance apparatus, and parking assistance method
JP2011035729A (en) * 2009-08-03 2011-02-17 Alpine Electronics Inc Apparatus and method for displaying vehicle-surrounding image
JP2011079372A (en) * 2009-10-05 2011-04-21 Sanyo Electric Co Ltd Parking assistance device
JP2012066614A (en) * 2010-09-21 2012-04-05 Aisin Seiki Co Ltd Parking support system
JP2018184091A (en) * 2017-04-26 2018-11-22 株式会社Jvcケンウッド Driving assistance device, driving assistance method and program
JP2018203214A (en) * 2017-06-09 2018-12-27 アイシン精機株式会社 Parking support device, parking support method, driving support device and driving support method
WO2019058781A1 (en) * 2017-09-20 2019-03-28 日立オートモティブシステムズ株式会社 Parking assistance device
DE102018220298A1 (en) * 2017-11-28 2019-05-29 Jaguar Land Rover Limited Parking assistance procedure and device
WO2020095636A1 (en) * 2018-11-09 2020-05-14 日立オートモティブシステムズ株式会社 Parking assistance device and parking assistance method
US20200307616A1 (en) * 2019-03-26 2020-10-01 DENSO TEN AMERICA Limited Methods and systems for driver assistance

Also Published As

Publication number Publication date
CN117836183A (en) 2024-04-05
JPWO2023027039A1 (en) 2023-03-02

Similar Documents

Publication Publication Date Title
EP3367367B1 (en) Parking support method and parking support device
CN108140311B (en) Parking assistance information display method and parking assistance device
JP6493545B2 (en) Information presenting apparatus and information presenting method
JP6547836B2 (en) Parking support method and parking support apparatus
RU2734643C1 (en) Parking assistance method for parking assistance device and parking assistance device
US11479238B2 (en) Parking assist system
JP4614005B2 (en) Moving locus generator
WO2020261781A1 (en) Display control device, display control program, and persistent tangible computer-readable medium
JP7218822B2 (en) display controller
CN110831818B (en) Parking assist method and parking assist device
JPWO2018012474A1 (en) Image control device and display device
JP7443705B2 (en) Peripheral monitoring device
CN111891119A (en) Automatic parking control method and system
JP6981433B2 (en) Driving support device
US20220309803A1 (en) Image display system
CN112124092A (en) Parking assist system
US20200398865A1 (en) Parking assist system
WO2023027039A1 (en) Parking assistance device and parking assistance method
US20220308345A1 (en) Display device
US11222552B2 (en) Driving teaching device
JP2022043996A (en) Display control device and display control program
JP7473087B2 (en) Parking assistance device and parking assistance method
WO2023002863A1 (en) Driving assistance device, driving assistance method
US20240075879A1 (en) Display system and display method
JP2023123353A (en) Information processing device, information processing method and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22861327

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023543911

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 202280055839.8

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE