WO2018017094A1 - Assisted self parking - Google Patents

Assisted self parking

Info

Publication number
WO2018017094A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
trajectory
controller
user
reference image
Application number
PCT/US2016/043281
Other languages
French (fr)
Inventor
Ramchandra Ganesh Karandikar
Jinesh J. Jain
Jonathan Thomas Mullen
Original Assignee
Ford Global Technologies, LLC
Application filed by Ford Global Technologies, LLC
Priority to PCT/US2016/043281
Publication of WO2018017094A1


Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B62: LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
    • B62D: MOTOR VEHICLES; TRAILERS
    • B62D15/00: Steering not otherwise provided for
    • B62D15/02: Steering position indicators; Steering position determination; Steering aids
    • B62D15/027: Parking aids, e.g. instruction means
    • B62D15/0285: Parking performed automatically
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/0011: Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot associated with a remote control arrangement
    • G05D1/0038: Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot associated with a remote control arrangement by providing the operator with simple or augmented images from one or more cameras located onboard the vehicle, e.g. tele-operation

Definitions

  • Step 802 may be performed repeatedly at multiple locations such that a library of feature sets and corresponding trajectories may be stored by the controller 102. Step 802 may be performed with respect to multiple users such that one user may benefit from the feature sets and trajectories of other users.
  • the method 800 may include receiving 804 an instruction to self-park. In response to the instruction, the controller 102 senses the surroundings of the vehicle in the same manner as for either of the method 300 and the method 600. The controller 102 then identifies 806 features in the outputs of the imaging devices 104 and sensors 106 ("the current feature set") and compares 808 the current feature set to one or more stored feature sets (a sketch of such a comparison follows this list). Comparing 808 the current feature set to one or more stored feature sets may include comparing the current feature set to one or more stored feature sets having corresponding locations closest to the current location of the controller 102, as determined by a GPS receiver included in the controller 102.
  • Comparing 808 may include any comparison method known in the art.
  • the locations of features in the current and stored feature sets may be compared to determine whether the shape and location of features in the stored feature set match those of features in the current feature set within a tolerance.
  • If a stored reference image is found 810 to match the current reference image within the tolerance, then the stored trajectory corresponding to the stored reference image is used 812, i.e. the controller 102 will cause the vehicle to autonomously follow the stored trajectory subject to a validation step and adjustments during execution as described above.
  • Otherwise, a trajectory is requested and received 814 from the user, such as according to the method 300 or the method 600.
  • Implementations of the systems, devices, and methods disclosed herein may comprise or utilize a special purpose or general-purpose computer including computer hardware, such as, for example, one or more processors and system memory, as discussed herein. Implementations within the scope of the present disclosure may also include physical and other computer-readable media for carrying or storing computer- executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. Computer-readable media that store computer-executable instructions are computer storage media (devices). Computer-readable media that carry computer- executable instructions are transmission media.
  • implementations of the disclosure can comprise at least two distinctly different kinds of computer-readable media: computer storage media (devices) and transmission media.
  • Computer storage media (devices) includes RAM, ROM, EEPROM, CD- ROM, solid state drives (“SSDs”) (e.g., based on RAM), Flash memory, phase-change memory (“PCM”), other types of memory, other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.
  • An implementation of the devices, systems, and methods disclosed herein may communicate over a computer network.
  • a "network" is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices.
  • Transmission media can include a network and/or data links, which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media.
  • Computer-executable instructions comprise, for example, instructions and data which, when executed at a processor, cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions.
  • the computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code.
  • the disclosure may be practiced in network computing environments with many types of computer system configurations, including an in-dash vehicle computer, personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, tablets, pagers, routers, switches, various storage devices, and the like.
  • the disclosure may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks.
  • program modules may be located in both local and remote memory storage devices.
  • sensor embodiments discussed above may comprise computer hardware, software, firmware, or any combination thereof to perform at least a portion of their functions.
  • a sensor may include computer code configured to be executed in one or more processors, and may include hardware logic/electrical circuitry controlled by the computer code.
  • These example devices are provided herein for purposes of illustration, and are not intended to be limiting. Embodiments of the present disclosure may be implemented in further types of devices, as would be known to persons skilled in the relevant art(s). At least some embodiments of the disclosure have been directed to computer program products comprising such logic (e.g., in the form of software) stored on any computer useable medium. Such software, when executed in one or more data processing devices, causes a device to operate as described herein.
  • Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++, or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
  • the program code may execute entirely on a computer system as a stand-alone software package, on a stand-alone hardware unit, partly on a remote computer spaced some distance from the computer, or entirely on a remote computer or server.
  • the remote computer may be connected to the computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • These computer program instructions may also be stored in a non-transitory computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
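As promised in the passage on steps 804-814 above, here is a minimal Python sketch of comparing the current feature set against stored feature sets and selecting a stored trajectory. The matching rule (every stored landmark must have a counterpart within a position tolerance) and the simple GPS-distance ordering are assumptions for illustration; the source only requires that shape and location match within a tolerance and that the closest stored locations be considered first.

```python
import math

def feature_sets_match(stored_features, current_features, tolerance_m=1.0):
    """True if every stored landmark appears in the current set at roughly the same place.

    Both arguments map a landmark label to its (x, y) position in the vehicle frame.
    """
    for name, (sx, sy) in stored_features.items():
        if name not in current_features:
            return False
        cx, cy = current_features[name]
        if math.hypot(sx - cx, sy - cy) > tolerance_m:
            return False
    return True

def select_stored_trajectory(stored_maneuvers, current_location, current_features,
                             tolerance_m=1.0):
    """Pick the trajectory of the nearest stored maneuver whose features match, if any.

    `stored_maneuvers` is a list of (gps_location, features, trajectory) tuples;
    GPS locations are (lat, lon) pairs and trajectories are lists of ground-plane waypoints.
    """
    lat, lon = current_location
    # Consider the stored maneuvers closest to the current GPS fix first.
    by_distance = sorted(stored_maneuvers,
                         key=lambda m: (m[0][0] - lat) ** 2 + (m[0][1] - lon) ** 2)
    for _, features, trajectory in by_distance:
        if feature_sets_match(features, current_features, tolerance_m):
            return trajectory
    return None  # no match: request a trajectory from the user instead (step 814)
```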

Abstract

A controller for an autonomous vehicle generates, from sensor outputs, a reference image showing the surroundings of the vehicle. The reference image is transmitted to the mobile device of a driver. While located outside of the vehicle, the driver specifies on the mobile device a trajectory for the vehicle to follow in order to self-park. The controller receives the trajectory, validates it, and executes it while making adjustments in response to detected obstacles or other conditions. The reference image and trajectory may be stored and subsequently used to self-park the vehicle. The controller may also receive an external image from the mobile device, fuse it with data from the sensor outputs, and transmit the result as the reference image to the mobile device.

Description

Title: ASSISTED SELF PARKING
BACKGROUND
FIELD OF THE INVENTION
[001] This invention relates to self-parking vehicles.
BACKGROUND OF THE INVENTION
[002] Most current auto-parking solutions require the driver to be present inside the car, with the possibility of having the driver engage the accelerator and brake controls of the car. Some systems involve the driver standing outside the car but give the driver only limited control over the parking maneuver. Parking a vehicle must often be done under tight constraints on available free space, sensor visibility, a temporary parking map, and the like. These constraints make it difficult for auto-parking solutions to execute reliably in some scenarios, such as 'tight parking spots' in city centers. Current systems also find it difficult to handle common parking scenarios like parking the vehicle in a cluttered driveway, parking the vehicle in a garage, and other non-conventional situations.
[003] The systems and methods disclosed herein provide an improved approach for incorporating driver assistance into an auto-parking solution.
BRIEF DESCRIPTION OF THE DRAWINGS
[004] In order that the advantages of the invention will be readily understood, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered limiting of its scope, the invention will be described and explained with additional specificity and detail through use of the accompanying drawings, in which:
[005] Fig. 1 is a schematic block diagram of a system for implementing embodiments of the invention;
[006] Fig. 2 is a schematic block diagram of an example computing device suitable for implementing methods in accordance with embodiments of the invention;
[007] Fig. 3 is a process flow diagram of a method for receiving user specification of a trajectory in a driver-assisted auto-parking system in accordance with an embodiment of the present invention;
[008] Fig. 4A is a schematic block diagram illustrating an example parking scenario;
[009] Fig. 4B is an example reference image showing a user-specified trajectory in accordance with an embodiment of the present invention;
[0010] Fig. 5 is a process flow diagram of a method for modifying a user- specified trajectory in accordance with an embodiment of the present invention;
[0011] Fig. 6 is a process flow diagram of a method for incorporating external camera images into a driver-assisted auto-parking system in accordance with an embodiment of the present invention;
[0012] Fig. 7A is a schematic block diagram an example auto parking scenario including an external camera;
[0013] Fig. 7B illustrates a reference image using an image from an external camera in accordance with an embodiment of the present invention; and
[0014] Fig. 8 is a process flow diagram of a method for using stored trajectories in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION
[0015] Referring to Fig. 1, a system 100 may include a controller 102 housed within a vehicle. The vehicle may include any vehicle known in the art. The vehicle may have all of the structures and features of any vehicle known in the art including, wheels, a drive train coupled to the wheels, an engine coupled to the drive train, a steering system, a braking system, and other systems known in the art to be included in a vehicle.
[0016] As discussed in greater detail herein, the controller 102 may perform autonomous navigation and collision avoidance. In particular, image data, other sensor data, and possibly audio data may be analyzed to identify obstacles.
[0017] The controller 102 may receive one or more image streams from one or more imaging devices 104. For example, one or more cameras may be mounted to the vehicle and output image streams received by the controller 102. The controller 102 may also receive outputs from one or more other sensors 106. Sensors 106 may include sensing devices such as RADAR (Radio Detection and Ranging), LIDAR (Light Detection and Ranging), SONAR (Sound Navigation and Ranging), and the like. Sensors 106 may include one or more microphones or microphone arrays providing one or more audio streams to the controller 102. For example, one or more microphones or microphone arrays may be mounted to an exterior of the vehicle. The microphones 106 may include directional microphones having a sensitivity that varies with angle.
[0018] The controller 102 may execute a collision avoidance module 108 that receives streams of information from the imaging devices 104 and sensors 106, identifies possible obstacles using the streams of information, and takes measures to avoid them while guiding the vehicle to a desired destination.
[0019] The collision avoidance module 108 may include an image sharing module 110a that builds a reference image representing surroundings of a vehicle housing the controller 102 based on the outputs of the imaging devices 104 and sensors 106. The image sharing module 110a then provides this reference image to a mobile device of a driver positioned outside of the vehicle according to the methods disclosed herein.
[0020] The collision avoidance module 108 may include a trajectory module 110b that receives a trajectory specified by a user with respect to the reference image. A validation module 110c then validates the trajectory, ensuring that no obstacles or other impassible areas are present that would preclude the vehicle from following the trajectory. The validation module 110c may adjust the trajectory to circumvent obstacles.
[0021] Once the trajectory is validated, the collision avoidance module 108 then guides the vehicle along the trajectory. In particular, the collision avoidance module 108 may include an obstacle identification module 110d, a collision prediction module 110e, and a decision module 110f.
[0022] The obstacle identification module 110d analyzes the streams of information from the imaging devices 104 and sensors 106 and identifies potential obstacles, including people, animals, vehicles, buildings, curbs, and other objects and structures.
[0023] The collision prediction module 110e predicts which obstacles are likely to collide with the vehicle based on its current trajectory. The collision prediction module 110e may evaluate the likelihood of collision with objects identified by the obstacle identification module 110d.
[0024] The decision module 110f may make a decision to follow the trajectory received from a user, stop, accelerate, deviate from the trajectory, etc. in order to avoid obstacles. The manner in which the collision prediction module 110e predicts potential collisions and the manner in which the decision module 110f takes action to avoid potential collisions may be according to any method or system known in the art of autonomous vehicles.
[0025] The decision module 110f may control the trajectory of the vehicle by actuating one or more actuators 112 controlling the direction and speed of the vehicle. For example, the actuators 112 may include a steering actuator 114a, an accelerator actuator 114b, and a brake actuator 114c. The configuration of the actuators 114a-114c may be according to any implementation of such actuators known in the art of autonomous vehicles.
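The modules and actuators above are described only at a block-diagram level. As a rough illustration of how the decision-to-actuation step might be structured in code, here is a minimal Python sketch; the class names, the proportional steering rule, and all numeric gains are assumptions for illustration, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class ActuatorCommand:
    """One control output: steering angle in radians, throttle and brake in [0, 1]."""
    steering_angle: float
    throttle: float
    brake: float

class DecisionModule:
    """Stand-in for decision module 110f: turns tracking errors and the collision
    prediction into commands for the steering, accelerator, and brake actuators."""

    def __init__(self, max_speed_mps: float = 2.0):
        self.max_speed_mps = max_speed_mps

    def step(self, cross_track_error_m: float, heading_error_rad: float,
             speed_mps: float, collision_predicted: bool) -> ActuatorCommand:
        # Stop immediately if the collision prediction module flags a likely collision.
        if collision_predicted:
            return ActuatorCommand(steering_angle=0.0, throttle=0.0, brake=1.0)
        # Illustrative proportional steering back toward the trajectory; gains are made up.
        steering = -0.5 * cross_track_error_m - 1.0 * heading_error_rad
        throttle = 0.3 if speed_mps < self.max_speed_mps else 0.0
        return ActuatorCommand(steering_angle=steering, throttle=throttle, brake=0.0)
```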
[0026] A mobile device 116 of a driver (or other user) may be in data communication with the controller 102, such as by means of BLUETOOTH, WI-FI, or some other wireless connection. The mobile device 116 may be embodied as a mobile phone, tablet computer, wearable computer, notebook computer, or any other type of portable computing device. Although the systems and methods disclosed herein are advantageously implemented with the user outside of the vehicle, the actions ascribed herein to the mobile device 116 may also be performed by an in-vehicle infotainment (IVI) system coupled to the controller 102.
[0027] Fig. 2 is a block diagram illustrating an example computing device 200. Computing device 200 may be used to perform various procedures, such as those discussed herein. The controller 102 and mobile device 116 may have some or all of the attributes of the computing device 200.
[0028] Computing device 200 includes one or more processor(s) 202, one or more memory device(s) 204, one or more interface(s) 206, one or more mass storage device(s) 208, one or more Input/Output (I/O) device(s) 210, and a display device 230 all of which are coupled to a bus 212. Processor(s) 202 include one or more processors or controllers that execute instructions stored in memory device(s) 204 and/or mass storage device(s) 208. Processor(s) 202 may also include various types of computer-readable media, such as cache memory.
[0029] Memory device(s) 204 include various computer-readable media, such as volatile memory (e.g., random access memory (RAM) 214) and/or nonvolatile memory (e.g., read-only memory (ROM) 216). Memory device(s) 204 may also include rewritable ROM, such as Flash memory.
[0030] Mass storage device(s) 208 include various computer readable media, such as magnetic tapes, magnetic disks, optical disks, solid-state memory (e.g., Flash memory), and so forth. As shown in Fig. 2, a particular mass storage device is a hard disk drive 224. Various drives may also be included in mass storage device(s) 208 to enable reading from and/or writing to the various computer readable media. Mass storage device(s) 208 include removable media 226 and/or non-removable media.
[0031] I/O device(s) 210 include various devices that allow data and/or other information to be input to or retrieved from computing device 200. Example I/O device(s) 210 include cursor control devices, keyboards, keypads, microphones, monitors or other display devices, speakers, printers, network interface cards, modems, lenses, CCDs or other image capture devices, and the like.
[0032] Display device 230 includes any type of device capable of displaying information to one or more users of computing device 200. Examples of display device 230 include a monitor, display terminal, video projection device, and the like.
[0033] Interface(s) 206 include various interfaces that allow computing device 200 to interact with other systems, devices, or computing environments. Example interface(s) 206 include any number of different network interfaces 220, such as interfaces to local area networks (LANs), wide area networks (WANs), wireless networks, and the Internet. Other interface(s) include user interface 218 and peripheral device interface 222. The interface(s) 206 may also include one or more peripheral interfaces such as interfaces for printers, pointing devices (mice, track pad, etc.), keyboards, and the like.
[0034] Bus 212 allows processor(s) 202, memory device(s) 204, interface(s) 206, mass storage device(s) 208, I/O device(s) 210, and display device 230 to communicate with one another, as well as other devices or components coupled to bus 212. Bus 212 represents one or more of several types of bus structures, such as a system bus, PCI bus, IEEE 1394 bus, USB bus, and so forth.
[0035] For purposes of illustration, programs and other executable program components are shown herein as discrete blocks, although it is understood that such programs and components may reside at various times in different storage components of computing device 200, and are executed by processor(s) 202. Alternatively, the systems and procedures described herein can be implemented in hardware, or a combination of hardware, software, and/or firmware. For example, one or more application specific integrated circuits (ASICs) can be programmed to carry out one or more of the systems and procedures described herein.
[0036] Referring to Fig. 3, the illustrated method 300 may be executed by the controller 102. The method 300 may be executed in response to an instruction to self-park. The instruction to self-park may be received by way of an input device coupled to the controller 102 or the mobile device 116. Self-parking may be performed while the user is located outside of the vehicle.
[0037] The method 300 includes receiving 302 sensor outputs from the imaging devices 104 and one or more other sensors 106 and identifying 304 features in the outputs. This may include performing any method known in the art to identify and classify objects in images and point clouds obtained using LIDAR, RADAR, or any other imaging system.
[0038] The method 300 may include building 306 a reference image. This may include generating the reference image based on an original image obtained from an imaging device wherein features identified at step 304 are highlighted (outline added, highlighted with a noticeable color, lightened, darkened, or otherwise modified). In some embodiments, building 306 a reference image may include generating a computer rendering of objects sensed using the sensors 106 or imaging devices 104.
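As one concrete way to realize the highlighting described for step 306, the sketch below outlines detected obstacles in a camera frame and lightens their interiors using OpenCV. The `detections` list of bounding boxes is a hypothetical input standing in for whatever the feature-identification step 304 produces; the colors and weights are illustrative.

```python
import cv2
import numpy as np

def build_reference_image(frame: np.ndarray,
                          detections: list[tuple[int, int, int, int]]) -> np.ndarray:
    """Return a copy of `frame` with each detected obstacle outlined and lightened.

    `detections` is a list of (x, y, w, h) pixel bounding boxes produced by the
    feature-identification step; any detector could supply them.
    """
    reference = frame.copy()
    for (x, y, w, h) in detections:
        # Outline the obstacle so the user sees both that it exists and that it was detected.
        cv2.rectangle(reference, (x, y), (x + w, y + h), color=(0, 0, 255), thickness=3)
        # Lighten the interior of the box to make the highlight noticeable.
        roi = reference[y:y + h, x:x + w]
        reference[y:y + h, x:x + w] = cv2.addWeighted(roi, 0.7, np.full_like(roi, 255), 0.3, 0)
    return reference
```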
[0039] The reference image is then transmitted 308 to the mobile device 116. The mobile device 116 displays 310 the reference image in an interface displayed on the mobile device 116. A trajectory is then received 312 through this interface.
[0040] Referring to Fig. 4A, while still referring to Fig. 3, a vehicle 400 housing the controller 102 may be located in the illustrated scenario. A user may instruct the controller 102 to self-park in the garage 402. This requires the vehicle 400 to drive along the driveway 404, bypass obstacles 406, 408 and avoid curbs 410.
[0041] The vehicle 400 includes a forward facing camera 104a, a rearward facing camera 104b, and may include one or more lateral cameras 104c, 104d. Other sensors 106, such as LIDAR and RADAR sensors, are also mounted to the vehicle and have the garage 402, driveway 404, and obstacles 406, 408, 410 in their fields of view. A user may wish to cause the vehicle to traverse trajectory 412 to enter the garage 402.
[0042] Referring to Fig. 4B, the illustrated image represents a reference image that may be generated by the controller 102 at step 306 based on the scenario shown in Fig. 4A. As is apparent, the image is an output of the forward facing camera 104a. Images of the obstacles 406, 408 and curbs 410 may be highlighted in the reference image both to make the user aware of their presence and to indicate that they were successfully detected by the controller 102.
[0043] The user may specify the trajectory 412 by tapping the screen of the mobile device 116 to leave a trail of markers 414. As the user taps the screen, the interface may display the markers 414 to show the selected trajectory. The trajectory 412 may then be generated such that it passes through the locations of the markers 414. The real-world locations corresponding to the markers 414 may be assumed to be locations on a ground plane (the driveway 404 in this example) corresponding to the location in the image that was tapped. Using a three-dimensional point cloud generated using the sensors 106 or images from the imaging devices 104, the corresponding real-world path of the trajectory 412 may be determined based on the locations of the markers 414 in the reference image.
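Paragraph [0043] maps each tapped pixel to a ground-plane location. One standard way to do this (the patent does not prescribe a method) is to back-project the pixel through a pinhole camera model and intersect the viewing ray with the ground plane. The sketch below assumes known intrinsics K, a camera-to-world rotation R, the camera position, and flat ground at z = 0; in practice the point cloud from the sensors 106 could replace the flat-ground assumption.

```python
import numpy as np

def tap_to_ground_point(u: float, v: float, K: np.ndarray,
                        R: np.ndarray, cam_pos: np.ndarray) -> np.ndarray:
    """Project the tapped pixel (u, v) onto the ground plane z = 0.

    K is the 3x3 camera intrinsic matrix, R the 3x3 rotation from camera to world
    coordinates, and cam_pos the camera position in world coordinates (so cam_pos[2]
    is the camera height above the ground).
    """
    # Back-project the pixel into a viewing ray, first in camera then in world coordinates.
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_world = R @ ray_cam
    # Intersect the ray cam_pos + t * ray_world with the plane z = 0.
    t = -cam_pos[2] / ray_world[2]
    return cam_pos + t * ray_world

# Example: markers tapped on the screen become an ordered list of ground-plane waypoints.
# trajectory = [tap_to_ground_point(u, v, K, R, cam_pos)[:2] for (u, v) in tapped_pixels]
```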
[0044] In some embodiments, a user may specify the trajectory 412 by tracing a line on the screen rather than tapping discrete locations. Again, the trajectory 412 will be assumed to be the path along the ground plane 404 corresponding to the line traced by the user.
[0045] Referring again to Fig. 3, after receiving 312 the trajectory 412, the trajectory is transmitted 314 to the controller 102. The controller 102 then determines the real-world path corresponding to the trajectory 412 as discussed above. The controller 102 may then validate 316 the trajectory 412. The trajectory 412 may be validated by evaluating the features identified at step 304 and ensuring that the trajectory 412 will not cause the vehicle 400 to impact an obstacle, collide with a moving obstacle, or otherwise be damaged while traversing the trajectory 412.
[0046] Validating 316 the trajectory may include determining an alternative trajectory 416 (see Fig. 4B) that varies from the trajectory 412 as needed to avoid obstacles 406, 408. In some embodiments, where the alternative trajectory 416 varies from the original trajectory 412 by more than a threshold distance (e.g., 0.1 to 0.5 meters), validation is requested from the user before traversing the alternative trajectory. For example, the controller 102 may transmit a message to the mobile device 116 requesting acceptance of the new trajectory 416. This may include superimposing an image of the new trajectory 416 on the reference image along with an image of the original trajectory 412. The user may then input acceptance or rejection of the new trajectory 416, and this input is then returned to the controller 102. If the new trajectory 416 is not accepted, then the method 300 may end. If the new trajectory 416 is accepted, then the new trajectory 416 is executed 318. Where no adjustment is needed, the original trajectory 412 is executed 318. Executing 318 the trajectory may include causing the vehicle 400 to traverse the trajectory 412, 416 by actuating one or more of the actuators 112.
[0047] During execution 318 of the trajectory, the controller 102 may instruct the mobile device 116 to display an interface enabling a user to invoke an emergency stop. If a user input is received that invokes the emergency stop, the mobile device 116 transmits this input to the controller 102, which will then cause the vehicle 400 to stop.
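A minimal sketch of the obstacle check behind validation step 316 in [0046], assuming the trajectory has been densely sampled into ground-plane waypoints and detected obstacles have been projected down to ground-plane points; the 1.2 m corridor half-width is an illustrative value, not from the source. When this check fails and an alternative trajectory is planned, the separation measure sketched after paragraph [0052] can be compared against the 0.1 to 0.5 m threshold mentioned above to decide whether to request the user's acceptance.

```python
import numpy as np

def trajectory_is_clear(waypoints: np.ndarray, obstacle_points: np.ndarray,
                        half_width_m: float = 1.2) -> bool:
    """Check a trajectory's swept corridor against detected obstacles.

    `waypoints` is an (N, 2) array of densely sampled ground-plane points along the
    trajectory; `obstacle_points` is an (M, 2) array of obstacle locations, e.g.,
    projected down from a LIDAR point cloud. `half_width_m` approximates half the
    vehicle width plus a safety margin.
    """
    if obstacle_points.size == 0:
        return True
    # Distance from every obstacle point to its nearest trajectory waypoint.
    d = np.linalg.norm(obstacle_points[:, None, :] - waypoints[None, :, :], axis=2)
    return bool(d.min() > half_width_m)
```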
[0048] Executing 318 the trajectory 412, 416 may include traversing the trajectory subject to constraints on speed, acceleration, proximity to obstacles, or any other criteria, which may be programmed into the controller 102 by default or specified by the user, such as through an interface on the mobile device 116.
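One plausible way to hold the constraints mentioned in [0048] is a small settings object that the execution code consults; the field names and default values below are assumptions for illustration, whether supplied as controller defaults or entered by the user on the mobile device.

```python
from dataclasses import dataclass

@dataclass
class ExecutionConstraints:
    """Hypothetical limits applied while traversing a parking trajectory."""
    max_speed_mps: float = 1.5          # crawl speed while self-parking
    max_accel_mps2: float = 0.5         # gentle acceleration and braking
    min_obstacle_clearance_m: float = 0.4

    def clamp_speed(self, requested_mps: float) -> float:
        """Limit a requested speed to the configured maximum (never negative)."""
        return min(max(requested_mps, 0.0), self.max_speed_mps)
```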
[0049] Referring to Fig. 5, executing 318 the trajectory 412, 416 may include executing the illustrated method 500. The method 500 may be executed with respect to a "current trajectory," which is initially either the original trajectory 412 or the new trajectory 416 if the controller 102 determined it to be necessary as discussed above.
[0050] The method 500 may include traversing 502 the current trajectory. The method 500 may be executed iteratively, such that traversing 502 the current trajectory comprises traversing an incremental distance, e.g., 0.1 to 0.5 meters, along the current trajectory, or traversing for an incremental period of time, e.g., 0.1 to 0.5 seconds. As the current trajectory is traversed, the controller 102 may update 504 the reference image to show the current location of the vehicle 400 along the current trajectory, such as by superimposing a marker on the reference image. The controller 102 then transmits updates to the reference image to the mobile device 116, which then displays them to the user. In some embodiments, this may include transmitting an image that is the current output of an imaging device, e.g., a video feed that is the output of the forward facing camera 104a in the illustrated example, having obstacles highlighted and the current trajectory superimposed on the images of the video feed.
[0051] The method 500 may include sensing 506 the surroundings of the vehicle using the imaging devices 104 and the sensors 106. The method 500 may then include evaluating 508 the current trajectory with respect to the outputs of the sensors. In particular, evaluating 508 may include detecting obstacles, detecting potential collisions based on the current velocity of the vehicle 400 and the velocity of any moving obstacles, and any other action known in the art for detecting potential collisions of an autonomous vehicle.
[0052] If the current trajectory is determined 510 to have a high likelihood of resulting in a collision such that adjustment is necessary, the method 500 may include evaluating 512 whether the adjustment is large enough to exceed a threshold condition. The size of an adjustment may be measured as an angle, e.g., whether the adjustment will change the angle of the vehicle 400 by more than a threshold angle. The size of an adjustment may also be measured as a distance, i.e., the separation between the adjusted trajectory and the original trajectory 412, 416 (or, in some embodiments, the current trajectory) at their point of greatest separation may be compared to a distance threshold.
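The two measures of adjustment size described in [0052] might be computed as follows; both trajectories are (N, 2) arrays of ground-plane waypoints (at least two points each), and the threshold defaults are illustrative only.

```python
import numpy as np

def heading_change_deg(trajectory: np.ndarray, adjusted: np.ndarray) -> float:
    """Angle, in degrees, between the initial headings of two waypoint paths."""
    def initial_heading(path: np.ndarray) -> np.ndarray:
        d = path[1] - path[0]
        return d / np.linalg.norm(d)
    cos_angle = np.clip(np.dot(initial_heading(trajectory), initial_heading(adjusted)), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_angle)))

def greatest_separation_m(trajectory: np.ndarray, adjusted: np.ndarray) -> float:
    """Largest distance from any adjusted waypoint to the nearest original waypoint."""
    d = np.linalg.norm(adjusted[:, None, :] - trajectory[None, :, :], axis=2)
    return float(d.min(axis=1).max())

def exceeds_threshold(trajectory: np.ndarray, adjusted: np.ndarray,
                      max_angle_deg: float = 15.0, max_separation_m: float = 0.5) -> bool:
    """True if the adjustment is large enough to require user confirmation."""
    return (heading_change_deg(trajectory, adjusted) > max_angle_deg
            or greatest_separation_m(trajectory, adjusted) > max_separation_m)
```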
[0053] If the adjustment is not found 512 to exceed the threshold, then the current trajectory is changed 514 to the adjusted trajectory and processing continues at step 502. If the adjustment is found 512 to exceed the threshold, then the method 500 may include requesting 516 confirmation from the user. This may include sending a request for confirmation to the mobile device 116. The request for confirmation may include a reference image, i.e. a current image captured using an imaging device 104, having the user-specified trajectory 412 and the adjusted trajectory superimposed on it. The reference image may then be displayed on the mobile device 116 with an interface for receiving an input either accepting or rejecting the adjusted trajectory. A user input to the interface is then returned to the controller 102. If the input is found 518 to be acceptance of the adjusted trajectory, then the current trajectory is changed 514 to the adjusted trajectory and processing continues at step 502.
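Putting [0049] through [0053] together, the execution loop might be organized as below. Every method on `controller` and `mobile_device` is a placeholder for behavior described in the text (traversal, sensing, evaluation, threshold check, confirmation), not a real API; the threshold check could be the one sketched after [0052].

```python
def execute_trajectory(current_trajectory, controller, mobile_device, increment_m=0.3):
    """Iteratively traverse a trajectory, re-sensing and adjusting as described for Fig. 5."""
    while not controller.at_end_of(current_trajectory):
        # Step 502: advance a small increment (e.g., 0.1 to 0.5 m) along the trajectory.
        controller.traverse_increment(current_trajectory, increment_m)
        # Step 504: show the vehicle's progress on the driver's phone.
        mobile_device.show(controller.updated_reference_image())
        # Steps 506-508: sense the surroundings and re-evaluate the current trajectory.
        surroundings = controller.sense()
        adjusted = controller.evaluate_trajectory(current_trajectory, surroundings)
        if adjusted is None:
            continue  # no adjustment needed this iteration
        # Steps 510-518: small adjustments are applied directly; large ones need consent.
        if not controller.exceeds_threshold(current_trajectory, adjusted):
            current_trajectory = adjusted
        elif mobile_device.confirm(adjusted):
            current_trajectory = adjusted
        # A rejected large adjustment leaves the current trajectory unchanged.
    return current_trajectory
```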
[0054] Referring to Fig. 6, the illustrated method 600 may be executed to control self-parking of the vehicle 400 using an external image. The method 600 may be invoked by a user either while the user is in the vehicle 400 or out of the vehicle by way of the mobile device 116. The user may be located outside of the vehicle 400 during execution of the method 600.
[0055] The method 600 may include receiving 602 sensor outputs from the imaging devices 104 and sensors 106. The method 600 may further include receiving 604 an image from an external camera ("external image"). The external image may be received from the mobile device 116. For example, the user may exit the vehicle 400, take a picture of a desired parking spot and an area surrounding the vehicle 400, and transmit the picture to the controller 102 for use as the external image. The external image may come from other sources, such as security cameras, cameras mounted to other vehicles, and the like.
[0056] The method 600 may include analyzing the outputs of the sensors and identifying 606 obstacles. Identifying 606 obstacles may include any method for identifying obstacles known in the art. The method 600 may further include identifying 608 obstacles in the external camera image. Identifying 608 obstacles may include performing image analysis to identify and classify objects in the external image, and may include performing any method for image analysis known in the art.

[0057] The method 600 may include fusing 610 the external image and the sensor outputs. In particular, the locations of features in the external image may be mapped to the locations of corresponding features in the sensor outputs, thereby relating locations in the external image to the coordinate system of the sensor outputs, e.g. the point cloud of a LIDAR sensor. Accordingly, objects identified in the sensor outputs may be mapped to locations in the external image and these locations may then be highlighted to generate the reference image. The reference image may then be transmitted 612 to the mobile device 116.
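As one hedged illustration of the fusing step 610, the sketch below fits a least-squares affine map from external-image pixel coordinates to the vehicle/sensor coordinate system, given features that have already been matched between the two. The affine model, the numpy usage, and the function names are assumptions and are not specified in the disclosure.

    import numpy as np

    def fit_image_to_vehicle_transform(image_points, vehicle_points):
        """Fit a 3x2 affine matrix mapping external-image pixels (u, v) to vehicle (x, y)."""
        image_points = np.asarray(image_points, dtype=float)      # shape (N, 2), pixels
        vehicle_points = np.asarray(vehicle_points, dtype=float)  # shape (N, 2), meters
        ones = np.ones((image_points.shape[0], 1))
        A = np.hstack([image_points, ones])                       # rows of [u, v, 1]
        # Solve A @ M ~= vehicle_points in the least-squares sense.
        M, _, _, _ = np.linalg.lstsq(A, vehicle_points, rcond=None)
        return M

    def image_to_vehicle(M, pixel_point):
        """Map a single external-image pixel into vehicle/sensor coordinates."""
        u, v = pixel_point
        return np.array([u, v, 1.0]) @ M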
[0058] The mobile device 116 receives the reference image, displays 614 the reference image, receives 616 a user trajectory, and transmits 618 the user trajectory to the controller 102. Steps 614 to 618 may be performed in the same manner as steps 310 to 314 of the method 300.
[0059] The controller 102 then validates 620 the trajectory and executes 622 the trajectory in the same manner as for the method 300, except that the trajectory is specified with reference to the external image. Accordingly, the trajectory may be translated to the coordinate system of the point clouds output by the sensors 106 using the relationship determined at step 610. In addition, obstacles and their locations as identified in the external image may be added to obstacles sensed using the outputs of the imaging devices 104 and sensors 106 such that the field of view of the controller is enhanced by the external image.
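Continuing the assumed affine-map sketch above, translating a user trajectory drawn on the external image into the sensor coordinate system for steps 620 and 622 could look like the following; the waypoint representation is an assumption for illustration.

    def translate_trajectory(M, pixel_waypoints):
        """Translate external-image pixel waypoints into vehicle/sensor coordinates."""
        return [image_to_vehicle(M, waypoint) for waypoint in pixel_waypoints]

    # Illustrative usage: fit the map from matched features (step 610), then
    # translate the trajectory the user traced on the external image.
    # M = fit_image_to_vehicle_transform(matched_pixels, matched_vehicle_xy)
    # vehicle_trajectory = translate_trajectory(M, user_pixel_trajectory)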
[0060] Figs. 7A and 7B illustrate an example application of the method 600. As shown in Fig. 7A, a user at a hotel 700 may exit the vehicle and take a picture of a desired parking location using the mobile device 116. As shown in Fig. 7B, the reference image may therefore include an image from the perspective of the mobile device 116, with the locations of vehicles 702-706 shown. As for the method 300, a user may input a trajectory 708 showing a path for the vehicle 400 to follow to arrive at a desired parking location. The trajectory 708 may be validated and executed in the same manner as for the method 300. In particular, this may include adjustments to the trajectory 708 as part of the validation step or during execution of the trajectory 708 as described above.
[0061] Referring to Fig. 8, the illustrated method 800 may be executed by the controller 102 to identify previously used trajectories. The method 800 could be used, for example, at a user's home. In this manner, a user may position the vehicle at or near a previous starting point and the controller 102 may then use a stored trajectory to self-park without requiring the user to specify a trajectory.
[0062] For example, the method 800 may include storing 802 identified features detected in the outputs of the imaging devices 104 and sensors 106 during an iteration of the method 300 or 600. Features may include landmarks such as buildings, signs, topology, trees, etc. This set of features and the trajectory followed during the iteration of the method 300 or 600 may then be stored. The features and trajectory may also be stored with the location where the method 300 or 600 was executed, e.g. a GPS (global positioning system) coordinate. The trajectory that is stored may be the user trajectory 412, the adjusted trajectory 416 after the validation step, or the trajectory that was actually followed by the vehicle, taking into account adjustments during execution of a trajectory 412 or 416.
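By way of a hedged sketch of the record created at step 802, the stored feature set, trajectory, and GPS coordinate might be grouped as follows; the field names and the simple list-based library are assumptions for illustration and are not part of the disclosure.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class StoredParkingManeuver:
        gps_location: Tuple[float, float]             # (latitude, longitude) where the maneuver was recorded
        feature_set: List[Tuple[str, float, float]]   # (label, x, y) landmarks such as buildings, signs, trees
        trajectory: List[Tuple[float, float, float]]  # (x, y, heading) waypoints that were followed

    @dataclass
    class TrajectoryLibrary:
        maneuvers: List[StoredParkingManeuver] = field(default_factory=list)

        def store(self, maneuver: StoredParkingManeuver) -> None:
            # Step 802: add one recorded maneuver; repeated at multiple locations
            # and across users to build the library described in paragraph [0063].
            self.maneuvers.append(maneuver)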
[0063] Step 802 may be performed repeatedly at multiple locations such that a library of feature sets and corresponding trajectories may be stored by the controller 102. Step 802 may be performed with respect to multiple users such that one user may benefit from the feature sets and trajectories of other users.

[0064] The method 800 may include receiving 804 an instruction to self-park. In response to the instruction, the controller 102 senses the surroundings of the vehicle in the same manner as for either the method 300 or the method 600. The controller 102 then identifies 806 features in the outputs of the imaging devices 104 and sensors 106 ("the current feature set") and compares 808 the current feature set to one or more stored feature sets. Comparing 808 the current feature set to one or more stored feature sets may include comparing the current feature set to one or more stored feature sets having corresponding locations closest to the current location of the controller 102, as determined by a GPS receiver included in the controller 102.
[0065] Comparing 808 may include any comparison method known in the art. In particular, the locations of features in the current and stored feature sets may be compared to determine whether the shape and location of features in the stored feature set match those of features in the current feature set within a tolerance.
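To make steps 806-810 concrete, the sketch below reuses the assumed record types from the earlier sketch and applies a crude matching rule in which each stored landmark must have a same-label landmark within a position tolerance; the tolerance value and the GPS-distance ordering are illustrative assumptions only.

    import math

    POSITION_TOLERANCE_M = 2.0  # assumed tolerance on feature location

    def feature_sets_match(stored_features, current_features, tol=POSITION_TOLERANCE_M):
        # Every stored landmark must be matched by a current landmark of the same
        # label whose location agrees within the tolerance (paragraph [0065]).
        for label, sx, sy in stored_features:
            if not any(label == c_label and math.hypot(sx - cx, sy - cy) <= tol
                       for c_label, cx, cy in current_features):
                return False
        return True

    def find_stored_trajectory(library, gps_location, current_features):
        # Steps 806-810: consider stored maneuvers nearest the current GPS fix first.
        def gps_distance(maneuver):
            return math.hypot(maneuver.gps_location[0] - gps_location[0],
                              maneuver.gps_location[1] - gps_location[1])
        for maneuver in sorted(library.maneuvers, key=gps_distance):
            if feature_sets_match(maneuver.feature_set, current_features):
                return maneuver.trajectory   # step 812: reuse the stored trajectory
        return None                          # step 814: request a trajectory from the user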
[0066] If a stored feature set is found 810 to match the current feature set within the tolerance, then the stored trajectory corresponding to the stored feature set is used 812, i.e. the controller 102 will cause the vehicle to autonomously follow the stored trajectory, subject to a validation step and adjustments during execution as described above.
[0067] If no stored feature set is found 810 to match the current feature set, then a trajectory is requested and received 814 from the user, such as according to the method 300 or the method 600.
[0068] In the above disclosure, reference has been made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration specific implementations in which the disclosure may be practiced. It is understood that other implementations may be utilized and structural changes may be made without departing from the scope of the present disclosure. References in the specification to "one embodiment," "an embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
[0069] Implementations of the systems, devices, and methods disclosed herein may comprise or utilize a special purpose or general-purpose computer including computer hardware, such as, for example, one or more processors and system memory, as discussed herein. Implementations within the scope of the present disclosure may also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. Computer-readable media that store computer-executable instructions are computer storage media (devices). Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example, and not limitation, implementations of the disclosure can comprise at least two distinctly different kinds of computer-readable media: computer storage media (devices) and transmission media.

[0070] Computer storage media (devices) includes RAM, ROM, EEPROM, CD-ROM, solid state drives ("SSDs") (e.g., based on RAM), Flash memory, phase-change memory ("PCM"), other types of memory, other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.
[0071] An implementation of the devices, systems, and methods disclosed herein may communicate over a computer network. A "network" is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a transmission medium. Transmission media can include a network and/or data links, which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media.
[0072] Computer-executable instructions comprise, for example, instructions and data which, when executed at a processor, cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. The computer-executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims.
[0073] Those skilled in the art will appreciate that the disclosure may be practiced in network computing environments with many types of computer system configurations, including an in-dash vehicle computer, personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, tablets, pagers, routers, switches, various storage devices, and the like. The disclosure may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices.
[0074] Further, where appropriate, functions described herein can be performed in one or more of: hardware, software, firmware, digital components, or analog components. For example, one or more application specific integrated circuits (ASICs) can be programmed to carry out one or more of the systems and procedures described herein. Certain terms are used throughout the description and claims to refer to particular system components. As one skilled in the art will appreciate, components may be referred to by different names. This document does not intend to distinguish between components that differ in name, but not function.

[0075] It should be noted that the sensor embodiments discussed above may comprise computer hardware, software, firmware, or any combination thereof to perform at least a portion of their functions. For example, a sensor may include computer code configured to be executed in one or more processors, and may include hardware logic/electrical circuitry controlled by the computer code. These example devices are provided herein for purposes of illustration, and are not intended to be limiting. Embodiments of the present disclosure may be implemented in further types of devices, as would be known to persons skilled in the relevant art(s). At least some embodiments of the disclosure have been directed to computer program products comprising such logic (e.g., in the form of software) stored on any computer useable medium. Such software, when executed in one or more data processing devices, causes a device to operate as described herein.
[0076] Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++, or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on a computer system as a stand-alone software package, on a stand-alone hardware unit, partly on a remote computer spaced some distance from the computer, or entirely on a remote computer or server. In the latter scenario, the remote computer may be connected to the computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).

[0077] The present invention is described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions or code. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
[0078] These computer program instructions may also be stored in a non-transitory computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
[0079] The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.

[0080] While various embodiments of the present disclosure have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the spirit and scope of the disclosure. Thus, the breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents. The foregoing description has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. Further, it should be noted that any or all of the aforementioned alternate implementations may be used in any combination desired to form additional hybrid implementations of the disclosure.

Claims

CLAIMS:
1. A method comprising, using a controller housed in a vehicle:
receiving outputs of sensors coupled to the vehicle controller;
generating a reference image of an area adjacent the vehicle using the outputs;
transmitting the reference image to a mobile device;
receiving, from the mobile device, a user trajectory with respect to the reference image; and
autonomously driving the vehicle along the user trajectory.
2. The method of claim 1, wherein autonomously driving the vehicle along the user trajectory comprises:
identifying, by the controller, obstacles around the vehicle using the outputs of the sensors;
evaluating, by the controller, the user trajectory with respect to the obstacles;
validating, by the controller, the user trajectory as not causing collision with any of the obstacles; and
autonomously driving, by the controller, the vehicle along the user trajectory in response to validating the user trajectory.
3. The method of claim 1, wherein the sensors comprise an imaging device and at least one other sensor;
wherein generating the reference image comprises:
receiving, by the controller, an image from the imaging device;
receiving, by the controller, an output of the at least one other sensor;
identifying, by the controller, a feature and a location of the feature from the output of the at least one other sensor; and
generating, by the controller, the reference image as the image from the imaging device with a highlighted region corresponding to the feature and the location of the feature.
4. The method of claim 1, wherein autonomously driving the vehicle along the user trajectory comprises:
identifying, by the controller, obstacles around the vehicle using the outputs of the sensors; and
adjusting, by the controller, an actual trajectory of the vehicle with respect to the user trajectory according to locations of the obstacles.
5. The method of claim 1, wherein autonomously driving the vehicle along the trajectory comprises:
identifying, by the controller, obstacles around the vehicle using the outputs of the sensors;
determining, by the controller, that a new trajectory of the vehicle must be used that is different from the user trajectory by an amount exceeding a threshold in order to avoid one or more of the obstacles;
in response to determining that the new trajectory of the vehicle must be used that is different from the user trajectory by the amount exceeding the threshold in order to avoid one or more of the obstacles, requesting, by the controller, confirmation of the new trajectory from a user;
receiving, by the controller, confirmation of the new trajectory from the user; and
in response to receiving the confirmation of the new trajectory from the user, autonomously driving the vehicle along the new trajectory.
6. The method of claim 1, wherein generating the reference image of the area adjacent the vehicle using the outputs comprises:
receiving, by the controller, an external image from a camera external to the vehicle and not mounted to the vehicle;
identifying, by the controller, first obstacles in the outputs of the sensors; and
generating, by the controller, the reference image as the external image with highlighted regions corresponding to the first obstacles.
7. The method of claim 1, wherein the sensors include at least one of a camera, a light detection and ranging (LIDAR) sensor, and a radio detection and ranging (RADAR) sensor.
8. The method of claim 1, further comprising:
storing, by the controller, the reference image and the user trajectory;
receiving, by the controller, an instruction to self-park subsequent to storing the reference image and the user trajectory;
determining, by the controller, that the outputs of the sensors match the reference image; and
in response to determining that the outputs of the sensors match the reference image, autonomously guiding the vehicle along the user trajectory without requesting user input of a trajectory.
9. The method of claim 1, further comprising:
receiving, by the controller, an emergency stop instruction from the mobile device; and
in response to receiving the emergency stop instruction from the mobile device, causing, by the controller, stopping of the vehicle.
10. The method of claim 1, wherein the mobile device is located outside of the vehicle.
11. A vehicle comprising:
a plurality of sensors mounted to the vehicle;
a controller housed in the vehicle, the controller programmed to—
receive outputs of sensors coupled to the vehicle controller;
generate a reference image of an area adjacent the vehicle using the outputs;
transmit the reference image to a mobile device;
receive, from the mobile device, a user trajectory with respect to the reference image; and
autonomously drive the vehicle along the user trajectory.
12. The vehicle of claim 11, wherein the controller is further programmed to autonomously drive the vehicle along the user trajectory by:
identifying obstacles around the vehicle using the outputs of the sensors;
evaluating the user trajectory with respect to the obstacles;
validating the user trajectory as not causing collision with any of the obstacles; and
autonomously driving the vehicle along the user trajectory in response to validating the user trajectory.
13. The vehicle of claim 11, wherein the sensors comprise an imaging device and at least one other sensor;
wherein the controller is further programmed to generate the reference image by:
receiving an image from the imaging device;
receiving an output of the at least one other sensor;
identifying a feature and a location of the feature from the output of the at least one other sensor; and
generating the reference image as the image from the imaging device with a highlighted region corresponding to the feature and the location of the feature.
14. The vehicle of claim 13, wherein the at least one other sensor comprises at least one of a light detection and ranging (LIDAR) sensor and a radio detection and ranging (RADAR) sensor.
15. The vehicle of claim 11, wherein the controller is further programmed to autonomously drive the vehicle along the user trajectory by:
identifying obstacles around the vehicle using the outputs of the sensors; and
adjusting an actual trajectory of the vehicle with respect to the user trajectory according to locations of the obstacles.
16. The vehicle of claim 11, wherein the controller is further programmed to autonomously drive the vehicle along the user trajectory by:
identifying obstacles around the vehicle using the outputs of the sensors; and
if a new trajectory of the vehicle must be used that is different from the user trajectory by an amount exceeding a threshold in order to avoid one or more of the obstacles, requesting confirmation of the new trajectory from a user;
if confirmation of the new trajectory is received from the user, autonomously driving the vehicle along the new trajectory.
17. The vehicle of claim 11, wherein the controller is further programmed to generate the reference image of the area adjacent the vehicle using the outputs by:
receiving an external image from a camera external to the vehicle and not mounted to the vehicle;
identifying first obstacles in the outputs of the sensors; and
generating the reference image as the external image with highlighted regions corresponding to the first obstacles.
18. The vehicle of claim 11, wherein the controller is further programmed to:
store the reference image and the user trajectory;
receive an instruction to self-park subsequent to storing the reference image and the user trajectory; and
if the outputs of the sensors match the reference image, autonomously guide the vehicle along the user trajectory without requesting user input of a trajectory.
19. The vehicle of claim 11, wherein the controller is further programmed to:
receive an emergency stop instruction from the mobile device; and
in response to receiving the emergency stop instruction from the mobile device, cause stopping of the vehicle.
20. The vehicle of claim 11, further comprising an accelerator actuator, steering actuator, and brake actuator;
wherein the controller is further programmed to autonomously drive the vehicle along the user trajectory by activating one or more of the accelerator actuator, steering actuator, and brake actuator.
PCT/US2016/043281 2016-07-21 2016-07-21 Assisted self parking WO2018017094A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2016/043281 WO2018017094A1 (en) 2016-07-21 2016-07-21 Assisted self parking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2016/043281 WO2018017094A1 (en) 2016-07-21 2016-07-21 Assisted self parking

Publications (1)

Publication Number Publication Date
WO2018017094A1 true WO2018017094A1 (en) 2018-01-25

Family

ID=60996003

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2016/043281 WO2018017094A1 (en) 2016-07-21 2016-07-21 Assisted self parking

Country Status (1)

Country Link
WO (1) WO2018017094A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2568749A (en) * 2017-11-28 2019-05-29 Jaguar Land Rover Ltd Imaging apparatus and method
CN110795253A (en) * 2018-08-03 2020-02-14 奥迪股份公司 Operating system and operating method for remotely controlling a motor vehicle
CN111103874A (en) * 2018-10-26 2020-05-05 百度在线网络技术(北京)有限公司 Method, apparatus, device, and medium for controlling automatic driving of vehicle
DE102019127259A1 (en) * 2019-10-10 2021-04-29 Ford Global Technologies, Llc Method for operating a motor vehicle with a self-parking function
CN114613180A (en) * 2022-02-22 2022-06-10 恒大新能源汽车投资控股集团有限公司 Autonomous parking method, device, vehicle and parking lot end server

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050043871A1 (en) * 2003-07-23 2005-02-24 Tomohiko Endo Parking-assist device and reversing-assist device
US20110001614A1 (en) * 2009-07-01 2011-01-06 Ghneim Maher M Rear Camera Backup Assistance With Touchscreen Display
US20120188100A1 (en) * 2011-01-25 2012-07-26 Electronics And Telecommunications Research Institute Terminal, apparatus and method for providing customized auto-valet parking service
US20140052336A1 (en) * 2012-08-15 2014-02-20 GM Global Technology Operations LLC Directing vehicle into feasible region for autonomous and semi-autonomous parking
US20140197940A1 (en) * 2011-11-01 2014-07-17 Aisin Seiki Kabushiki Kaisha Obstacle alert device
WO2014139821A1 (en) * 2013-03-15 2014-09-18 Volkswagen Aktiengesellschaft Automatic driving route planning application
US20150346718A1 (en) * 2014-05-27 2015-12-03 Here Global B.V. Autonomous Vehicle Monitoring and Control


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2568749A (en) * 2017-11-28 2019-05-29 Jaguar Land Rover Ltd Imaging apparatus and method
GB2568749B (en) * 2017-11-28 2020-09-02 Jaguar Land Rover Ltd Imaging apparatus and method
CN110795253A (en) * 2018-08-03 2020-02-14 奥迪股份公司 Operating system and operating method for remotely controlling a motor vehicle
CN111103874A (en) * 2018-10-26 2020-05-05 百度在线网络技术(北京)有限公司 Method, apparatus, device, and medium for controlling automatic driving of vehicle
DE102019127259A1 (en) * 2019-10-10 2021-04-29 Ford Global Technologies, Llc Method for operating a motor vehicle with a self-parking function
CN114613180A (en) * 2022-02-22 2022-06-10 恒大新能源汽车投资控股集团有限公司 Autonomous parking method, device, vehicle and parking lot end server

Similar Documents

Publication Publication Date Title
US10943485B2 (en) Perception assistant for autonomous driving vehicles (ADVs)
US10668925B2 (en) Driver intention-based lane assistant system for autonomous driving vehicles
EP3324332B1 (en) Method and system to predict vehicle traffic behavior for autonomous vehicles to make driving decisions
US10810872B2 (en) Use sub-system of autonomous driving vehicles (ADV) for police car patrol
US11269352B2 (en) System for building a vehicle-to-cloud real-time traffic map for autonomous driving vehicles (ADVS)
US11260855B2 (en) Methods and systems to predict object movement for autonomous driving vehicles
CN110268413B (en) Low level sensor fusion
EP3342666B1 (en) Method and system for operating autonomous driving vehicles using graph-based lane change guide
US10807599B2 (en) Driving scenario based lane guidelines for path planning of autonomous driving vehicles
US10712746B2 (en) Method and system to construct surrounding environment for autonomous vehicles to make driving decisions
RU2656933C2 (en) Method and device for early warning during meeting at curves
JP7355877B2 (en) Control methods, devices, electronic devices, and vehicles for road-cooperative autonomous driving
US10908608B2 (en) Method and system for stitching planning trajectories from consecutive planning cycles for smooth control execution of autonomous driving vehicles
US10054945B2 (en) Method for determining command delays of autonomous vehicles
WO2018017094A1 (en) Assisted self parking
KR20160009828A (en) Apparatus and Method for controlling Vehicle using Vehicle Communication
US11551373B2 (en) System and method for determining distance to object on road
US20200189569A1 (en) Driver verified self parking
US11904853B2 (en) Apparatus for preventing vehicle collision and method thereof
US20200391729A1 (en) Method to monitor control system of autonomous driving vehicle with multiple levels of warning and fail operations
US11221405B2 (en) Extended perception based on radar communication of autonomous driving vehicles
US10860868B2 (en) Lane post-processing in an autonomous driving vehicle
KR102359497B1 (en) A vehicle-platoons implementation under autonomous driving system designed for single vehicle
KR20230005034A (en) Autonomous Vehicle, Control system for remotely controlling the same, and method thereof
KR102531722B1 (en) Method and apparatus for providing a parking location using vehicle's terminal

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16909687

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16909687

Country of ref document: EP

Kind code of ref document: A1