CN112002032A - Method, device, equipment and computer readable storage medium for guiding vehicle driving - Google Patents

Method, device, equipment and computer readable storage medium for guiding vehicle driving

Info

Publication number
CN112002032A
CN112002032A (application CN201910373007.0A)
Authority
CN
China
Prior art keywords
vehicle
information
dimensional
sensing device
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910373007.0A
Other languages
Chinese (zh)
Inventor
孙占娥 (Sun Zhan'e)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201910373007.0A
Publication of CN112002032A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G07: CHECKING-DEVICES
    • G07C: TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00: Registering or indicating the working of vehicles
    • G07C5/08: Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles

Abstract

According to embodiments of the present disclosure, a method, an apparatus, a device, and a computer-readable storage medium for guiding the driving of a vehicle are provided. The method includes: acquiring, by a sensing device outside the vehicle, sensor data of the vehicle and its surroundings; extracting first region data of the vehicle and second region data of the vehicle's surroundings from the sensor data; establishing first, second and third constraint equations among the sensing device, the vehicle and the map; acquiring position and attitude information of the vehicle and state information of obstacles; determining the driving information that takes the vehicle to its destination; and transmitting the driving information to the vehicle. The method realizes real-time guided driving of the vehicle through cooperation between a movable external sensing device and the vehicle, a new way of implementing intelligent driving.

Description

Method, device, equipment and computer readable storage medium for guiding vehicle driving
Technical Field
Embodiments of the present disclosure relate generally to the field of intelligent driving, and more particularly, to a method, apparatus, device, and computer-readable storage medium for guiding driving of a vehicle.
Background
In recent years, intelligent driving and even automatic driving technologies have developed rapidly. The main research direction in the industry aims to improve the intelligence level of the single vehicle, enhancing its perception of the surrounding environment by means of high-precision sensor devices installed on the vehicle. A typical case is an autonomous vehicle from a company such as Baidu, which carries a powerful lidar, for example a 128-line lidar, mounted high in the middle of the roof in order to maximize the range that can be perceived. The adverse effect is that the area close around the vehicle cannot be effectively covered, producing a measurement blind zone. In an open, simple scene, obstacles such as pedestrians do not appear in the blind zone; but in a complex scene where pedestrians and vehicles mix, such as a parking lot, a pedestrian or a child may enter the blind zone, the vehicle cannot perceive the obstacle and keeps driving, and dangerous situations such as collisions can occur. To solve these problems of the single-vehicle intelligent automatic driving scheme, research has begun on solutions that assist the driving of a vehicle by means of sensor devices outside the vehicle.
Disclosure of Invention
According to an embodiment of the present disclosure, a solution for guiding a vehicle driving is provided.
In a first aspect of the present disclosure, a method of guiding vehicle driving is provided. The method comprises the following steps: first, acquiring scene information of a vehicle and its surrounding environment from a sensing device outside the vehicle; second, determining state information of the vehicle from the scene information; third, determining state information of obstacles other than the vehicle in the surrounding environment from the scene information; fourth, determining vehicle driving information based on the state information of the vehicle, the state information of the obstacles, and the destination information of the vehicle; and fifth, transmitting the vehicle information, the obstacle information and the driving information to the vehicle.
In a second aspect of the present disclosure, an apparatus for guiding the driving of a vehicle is provided. The apparatus includes: a movable carrier configured to mount a sensing device and to move in synchronization with a target vehicle; a sensing device configured to acquire scene information of the target vehicle and its surrounding environment; a data processing module configured to plan the vehicle's driving information from the scene information; and a communication device configured to transmit the driving information to the vehicle.
In a third aspect of the present disclosure, an electronic device is provided, comprising: one or more processors; and a memory storing one or more programs that, when executed by the one or more processors, cause the electronic device to implement a method according to the first aspect of the disclosure. A computer-readable storage medium is also provided, on which a computer program is stored which, when executed by a processor, implements the method of the first aspect of the disclosure.
It should be understood that the statements in this section are not intended to identify key or critical features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters designate like or similar elements, and wherein:
FIG. 1 illustrates a schematic diagram of an example environment, in accordance with some embodiments of the present disclosure;
FIG. 2 illustrates scene information acquired by a sensing device according to some embodiments of the present disclosure;
FIG. 3 illustrates a schematic view of an example environment after guiding a device movement, in accordance with some embodiments of the present disclosure;
FIG. 4 illustrates the sensing device acquiring bottom scene information after directing the device movement according to some embodiments of the present disclosure;
FIG. 5 illustrates a schematic diagram of determining vehicle three-dimensional position and attitude information in accordance with some embodiments of the present disclosure;
FIG. 6 illustrates a method of moving a vehicle guidance device according to one embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the disclosure are for illustration only and are not intended to limit the scope of the disclosure.
In describing embodiments of the present disclosure, the term "include" and its variants should be interpreted as open-ended, i.e., "including but not limited to". The term "based on" should be understood as "based at least in part on". The terms "one embodiment" and "the embodiment" should be understood as "at least one embodiment". The terms "first", "second", and the like may refer to different or to the same objects. Other explicit and implicit definitions may also be included below.
As mentioned above, the research direction of intelligent driving is beginning to expand from single-vehicle intelligence toward assisting vehicle driving through sensors outside the vehicle. Take automated valet parking as an example: it is an important part of high-level intelligent driving and has high practical application value. Two technical routes exist for automated valet parking. One is to raise the intelligence level of the vehicle end, installing high-precision sensor devices on the vehicle so that it can accurately perceive the surrounding environment. The other is to enhance the intelligence of the parking lot, densely deploying sensor devices at the field end to perceive and localize vehicles entering the field and transmit the key information back to the vehicles. Both approaches give the vehicle the ability to perceive its environment and ensure accuracy and safety during driving.
Both embodiments face technical problems that are difficult to overcome. In the vehicle-end intelligence scheme, as mentioned above, additional cameras or lidars must be installed on both sides and at the front and rear of the vehicle to eliminate the close-range blind zone. Sensing the full 360 degrees around the vehicle without dead angles typically requires 6 cameras and 3 lidars. The throughput and processing rate of the vehicle-end computing platform are far from sufficient to process so many sensor devices in parallel. In addition, no automotive-grade lidar is available at the current level of manufacturing technology, so the overall vehicle-end scheme faces a bottleneck. In the field-end intelligence scheme, the parking lot is densely covered with lidar sensors and likewise faces insufficient data processing capacity of the computing platform. Where a cloud computing scheme is adopted instead, the communication delay it introduces is typically tens of milliseconds, which cannot meet the requirement of real-time vehicle control. For these reasons, both schemes carry significant technical problems and safety risks.
According to an embodiment of the present disclosure, a solution for guiding vehicle driving based on an external sensing device is proposed. In this solution, the external sensing device measures and perceives the vehicle from an objective third-person perspective, which solves the blind-zone problem of active sensing by vehicle-mounted devices. On the other hand, for the parking lot, the device that guides vehicle driving is an autonomous, independently mobile device; there is no need for covering deployment over the whole area, since a single device installed in the whole parking lot suffices to guide vehicle driving globally. In terms of computational load, because there is only a single device carrying a small number of sensors, a fairly ordinary computing platform can complete the computing tasks.
In this solution, the first step acquires scene information of the vehicle and its surrounding environment from a sensing device outside the vehicle, where the scene information may include image data acquired by a camera; the second step determines state information of the vehicle, such as its position, from the scene information; the third step determines state information of obstacles other than the vehicle in the surrounding environment from the scene information; the fourth step determines vehicle driving information, including steering-wheel, throttle and brake states, based on the state information of the vehicle, the state information of the obstacles, and the destination information of the vehicle; and the fifth step transmits the vehicle information, the obstacle information and the driving information to the vehicle. This completes the flow of guiding the vehicle's driving.
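The five-step flow above can be sketched as a small program. The following Python sketch is purely illustrative; all function names, data-structure fields, and the trivial "brake if an obstacle is near" planning policy are assumptions of this example, not the patent's implementation:

```python
import math
from dataclasses import dataclass

@dataclass
class VehicleState:
    position: tuple   # (x, y) in map coordinates
    heading: float    # radians

@dataclass
class DrivingCommand:
    steering: float   # steering-wheel angle request
    throttle: float   # 0..1
    brake: float      # 0..1

def extract_vehicle_state(scene):
    # Step 2: read the vehicle's state out of the scene information.
    return VehicleState(scene["vehicle_xy"], scene["vehicle_heading"])

def extract_obstacles(scene):
    # Step 3: everything sensed that is not the vehicle itself.
    return [o for o in scene["objects"] if o != scene["vehicle_xy"]]

def plan_driving(vehicle, obstacles, destination):
    # Step 4: trivial placeholder policy -- brake if an obstacle is close.
    near = any(math.dist(vehicle.position, o) < 2.0 for o in obstacles)
    return DrivingCommand(0.0, 0.0 if near else 0.3, 1.0 if near else 0.0)

def guide_vehicle(scene, destination):
    # Steps 2-5; the returned tuple is what gets transmitted to the vehicle.
    vehicle = extract_vehicle_state(scene)
    obstacles = extract_obstacles(scene)
    command = plan_driving(vehicle, obstacles, destination)
    return vehicle, obstacles, command
```

A real planner would of course consider the destination, road geometry, and vehicle dynamics; the sketch only shows how the five steps chain together.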
Embodiments of the present disclosure will be described below in detail with reference to the accompanying drawings.
FIG. 1 illustrates a schematic diagram of an example environment 100 in which embodiments of the present disclosure may be implemented. Some typical objects are schematically shown in the example environment 100, including a road 102, a vehicle 101, a sliding guide 103, a vehicle guiding apparatus 104, a sensing device 105, and a measurement area 106 of the sensing device. It should be understood that these illustrated facilities and objects are examples only; the objects present in different traffic environments will vary with the actual situation. The scope of the present disclosure is not limited in this respect.
In the example of FIG. 1, a vehicle 101 is traveling on a road 102. The vehicle 101 may be any type of vehicle that can carry people and/or things and is moved by a power system such as an engine, including but not limited to a car, truck, bus, electric vehicle, motorcycle, recreational vehicle, or train. The vehicle 101 may be a vehicle with some autonomous driving capability, also referred to as an autonomous vehicle; of course, the vehicle 101 may also be a vehicle without autonomous driving capability.
In some embodiments, the sensing device 105 in the environment 100 may be a field-end device independent of the vehicle 101, used to monitor the condition of the environment 100 and obtain perception information related to it. In some embodiments, the sensing device 105 may be mounted on the vehicle guiding apparatus 104. In some embodiments, the vehicle guiding apparatus 104 may be slidably mounted on the sliding guide 103. In some embodiments, the measurement area 106 of the sensing device 105 covers the position of the vehicle 101. In an embodiment of the present disclosure, the sensing device 105 comprises an image sensor to acquire image information of the road 102 and the vehicle 101 in the environment 100. In some embodiments, the sensing device may also include one or more other types of sensors, such as lidar, millimeter-wave radar, and the like.
A process of guiding vehicle driving according to an embodiment of the present disclosure will be described below with reference to fig. 2 to 6.
FIG. 2 shows image information 200 of the vehicle 101 and its surrounding environment acquired by the sensing device 105 according to an embodiment of the present disclosure. The image information 200 is divided into two regions by image processing methods such as object detection and semantic segmentation algorithms: region 202 is the first region data corresponding to the vehicle 101, region 203 is the second region data corresponding to the vehicle's surroundings, and line 201 is the dividing boundary between the first and second region data. In some embodiments, the sensing device is a lidar, and the measured point cloud is correspondingly segmented into first region data of the vehicle 101 and second region data of the surrounding scene.
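The splitting of sensor data into first and second region data can be illustrated with a simple mask-based partition. This is a minimal sketch assuming a per-pixel 0/1 vehicle mask has already been produced by some detector or segmenter; the function name and data layout are hypothetical:

```python
def split_regions(image, vehicle_mask):
    """Partition pixels into first region (vehicle) and second region (surroundings).

    image: 2-D list of pixel values.
    vehicle_mask: 2-D list of 0/1 flags of the same shape, e.g. the output
    of an object-detection or segmentation model.
    """
    first_region, second_region = [], []
    for row, mask_row in zip(image, vehicle_mask):
        for pixel, is_vehicle in zip(row, mask_row):
            # Pixels flagged as vehicle go to the first region data,
            # everything else to the second region data.
            (first_region if is_vehicle else second_region).append(pixel)
    return first_region, second_region
```

The same partitioning idea applies to a lidar point cloud, with points in place of pixels.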
FIG. 3 shows the guiding apparatus 104 having moved a distance along the sliding guide 103 in an embodiment of the present disclosure, and FIG. 4 shows the scene information 400 collected in the state of FIG. 3. The image information 400 is likewise divided into two regions: region 402 is the first region data corresponding to the vehicle 101, region 403 is the second region data corresponding to the vehicle's surroundings, and line 401 is the dividing boundary between them.
Continuing with FIGS. 2, 3 and 4: because the guiding apparatus 104 has moved, the position of the sensing device 105 mounted on it moves as well, and the collected scene information changes; intuitively, the first region corresponding to the vehicle 101 shrinks from region 202 to region 402. From the change in the first region data combined with the distance moved by the guiding apparatus 104, the size of the vehicle in the image can be obtained; key-point data are then extracted from the image, and the three-dimensional information of the vehicle can be reconstructed using mature visual projection methods. In some embodiments, the three-dimensional information is a three-dimensional model, a three-dimensional point cloud, or three-dimensional point color information.
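The core geometric idea, that a known sensor displacement plus the observed change in apparent size yields metric scale, can be shown with pinhole-camera similar triangles. This is a simplified sketch assuming the sensor moves directly away from the vehicle along the optical axis; the function names are illustrative:

```python
def depth_from_size_change(apparent_w1, apparent_w2, baseline):
    """Recover depth from the shrinkage of apparent width under a known move.

    apparent_w1: vehicle width in pixels at the first observation position
    apparent_w2: vehicle width in pixels after moving `baseline` metres
                 directly away from the vehicle (so apparent_w2 < apparent_w1)
    Returns the depth d at the first position.
    """
    if apparent_w2 >= apparent_w1:
        raise ValueError("moving away must shrink the apparent width")
    # Pinhole model: w1 = f*W/d and w2 = f*W/(d + baseline)
    # => d = baseline * w2 / (w1 - w2)
    return baseline * apparent_w2 / (apparent_w1 - apparent_w2)

def true_width(apparent_w, depth, focal_px):
    """Recover the metric width W from w = f*W/d."""
    return apparent_w * depth / focal_px
```

A full reconstruction would triangulate many key points rather than a single width, but the scale comes from the same known-baseline relation.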
FIG. 5 illustrates a method of determining the three-dimensional position and attitude information of the vehicle 101 in an embodiment of the present disclosure. Map information of the area containing the vehicle 101 and the scene 100 is first obtained from an external device, which in some embodiments is a cloud server, a local area network server, or another terminal device reachable over a communication protocol. The map information is a three-dimensional map of the scene; in some embodiments it may be a point cloud map, a geographic information map, or a map composed of color texture information.
Referring to FIGS. 2 and 5, using the first region data 202 of the vehicle 101 in the scene 200 and the three-dimensional information of the vehicle 101 obtained by three-dimensional reconstruction, a constraint between the scene 200 and the vehicle's three-dimensional information is established with a PnP feature matching algorithm. Since the scene 200 is information acquired by the sensing device 105 in the scene shown in FIG. 1, the scene 200 corresponds to the position and attitude of the sensing device 105; in an embodiment of the present disclosure, this correspondence is embodied in the intrinsic parameters of the camera or of the lidar. Combining these, a first constraint relationship 511 between the position and attitude 501 of the sensing device 105 and the position and attitude 502 of the vehicle 101 is obtained.
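The quantity a PnP solver works with is the reprojection error: known 3-D points are projected through a candidate camera pose and compared to the observed pixels. The following minimal sketch uses a simplified 4-degree-of-freedom pose (translation plus yaw) and an undistorted pinhole model; real systems solve all 6 degrees of freedom with a library routine such as OpenCV's solvePnP, and all names here are illustrative:

```python
import math

def project(point3d, cam_pose, focal_px, cx, cy):
    """Project a 3-D point (in the vehicle frame) into the camera image.

    cam_pose: (tx, ty, tz, yaw) -- a simplified 4-DoF pose; a full PnP
    solver would estimate all 6 degrees of freedom.
    """
    tx, ty, tz, yaw = cam_pose
    x, y, z = point3d
    # Rotate about the vertical axis, then translate into the camera frame.
    xc = math.cos(yaw) * x - math.sin(yaw) * y + tx
    yc = math.sin(yaw) * x + math.cos(yaw) * y + ty
    zc = z + tz
    # Pinhole projection with principal point (cx, cy).
    return (focal_px * xc / zc + cx, focal_px * yc / zc + cy)

def reprojection_error(points3d, pixels, cam_pose, focal_px, cx, cy):
    """Sum of squared pixel residuals -- the quantity PnP minimises."""
    err = 0.0
    for p3, (u, v) in zip(points3d, pixels):
        pu, pv = project(p3, cam_pose, focal_px, cx, cy)
        err += (pu - u) ** 2 + (pv - v) ** 2
    return err
```

Minimising this error over the pose yields the relative pose between sensing device and vehicle, i.e. the first constraint relationship.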
Similarly, using the map information and the second region data 203 of the vehicle's surroundings in the scene 200, a constraint between the scene 200 and the map information is established with the PnP feature matching algorithm, from which a second constraint relationship 512 between the position and attitude 501 of the sensing device 105 and the map coordinate system 503 is derived.
Continuing with FIG. 5, common knowledge and physical rules also apply. While a vehicle is driving, its tires stay in contact with the ground, so the tire contact points must lie on the ground plane of the map coordinate system 503; this can be expressed by geometric analysis as points constrained to a plane. Likewise, the horizontal plane of the vehicle stays parallel to the horizontal plane of the map coordinate system, which can be expressed as two parallel planes. These relationships constitute a third constraint relationship 513 between the position and attitude 502 of the vehicle and the map coordinate system 503.
Continuing with FIG. 5, denoting the position and attitude 501 of the sensing device 105 by x_s and the vehicle position and attitude 502 by x, the first constraint relationship 511, the second constraint relationship 512, and the third constraint relationship 513 can each be expressed as an equation, giving a system of the form
f_1(x_s, x) = 0, f_2(x_s) = 0, f_3(x) = 0,
where x is the vehicle position and attitude 502; an accurate solution for x is obtained by solving this system of equations.
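Solving the stacked constraints amounts to a nonlinear least-squares problem over the unknown pose. The following toy sketch reduces the pose to a single scalar and minimises the sum of squared residuals by numerical gradient descent; production systems use Gauss-Newton or Levenberg-Marquardt over the full 6-DoF pose, and everything below (function names, the example residuals) is an illustrative assumption:

```python
def solve_constraints(residuals, x0=0.0, lr=0.1, iters=500):
    """Minimise sum of r(x)^2 over scalar x for a list of residual functions.

    A toy 1-D stand-in for jointly solving the first, second and third
    constraint equations; each residual r should be zero when its
    constraint is satisfied.
    """
    x = x0
    h = 1e-6
    for _ in range(iters):
        grad = 0.0
        for r in residuals:
            # d/dx r(x)^2 = 2 * r(x) * r'(x), with r' by central difference.
            deriv = (r(x + h) - r(x - h)) / (2.0 * h)
            grad += 2.0 * r(x) * deriv
        x -= lr * grad
    return x
```

With consistent constraints the residuals all vanish at the solution; with slightly inconsistent measurements, as below, the solver settles on the least-squares compromise.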
According to an embodiment of the present disclosure, the sensing device 105 is mounted on a movable vehicle guiding apparatus 104. Referring to FIGS. 1 and 3, when the vehicle 101 changes position, the vehicle guiding apparatus 104 moves adaptively so that the measurement area 106 of the sensing device 105 continues to cover the position of the vehicle 101. In some embodiments, the sensing device 105 is mounted on a wheeled or other self-propelled mobile platform to accomplish this adaptive movement.
FIG. 6 illustrates how the vehicle guiding apparatus 104 moves adaptively in an embodiment of the present disclosure. The vehicle 101 is traveling forward and will sweep through the grey area 601 at the next moment, so the position of the vehicle guiding apparatus 104 is adjusted and the measurement area 106 of the sensing device 105 is pointed at the grey area. With this adjustment, although the sensing device 105 does not cover the area behind the vehicle, it can guarantee the perception of any obstacle in the traveling direction of the vehicle 101, thereby ensuring safety.
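For the rail-mounted configuration of FIG. 1, the adaptive repositioning can be sketched as aiming the device a fixed lead distance ahead of the vehicle along the rail, clamped to the rail's ends. The lead distance and all names below are assumptions of this illustration:

```python
def reposition_on_rail(rail_min, rail_max, vehicle_x, heading_sign, lead=5.0):
    """Place the guiding apparatus so the sensor looks ahead of the vehicle.

    rail_min, rail_max: ends of the sliding guide (map x-coordinates)
    vehicle_x: vehicle position projected onto the rail axis
    heading_sign: +1 if the vehicle travels toward increasing x, else -1
    lead: how far ahead of the vehicle to centre the measurement area
    Returns the clamped target position for the guiding apparatus.
    """
    target = vehicle_x + heading_sign * lead
    # Clamp to the physical extent of the rail.
    return max(rail_min, min(rail_max, target))
```

A self-propelled (wheeled) platform would apply the same aim-ahead rule in two dimensions instead of along a rail.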
In some embodiments, the scene in which the vehicle operates is an open road in an outdoor area or an indoor road.
According to an embodiment of the present disclosure, the electronic device includes a computing unit that can perform various appropriate actions and processes according to computer program instructions stored in a read-only memory (ROM) or loaded from a storage unit into a random-access memory (RAM). The RAM can also store the various programs and data required for the operation of the device. The computing unit, the ROM, and the RAM are connected to one another by a bus. An input/output (I/O) interface is also connected to the bus.
A number of components in the device are connected to the I/O interface, including: an input unit such as a keyboard or mouse; an output unit such as various types of displays and speakers; a storage unit such as a magnetic disk or optical disk; and a communication unit such as a network card, modem, or wireless communication transceiver. The communication unit allows the device to exchange information/data with other devices via a computer network such as the Internet and/or various telecommunication networks.
The computing unit may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of computing units include, but are not limited to, central processing units (CPUs), graphics processing units (GPUs), various specialized artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, digital signal processors (DSPs), and any suitable processors, controllers, or microcontrollers. The computing unit may perform the respective methods and processes described above.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, and without limitation, exemplary types of hardware logic components that may be used include: field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), and the like.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the discussion above, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (10)

1. A method of directing vehicle driving, comprising:
s11, acquiring scene information of the vehicle and the surrounding environment of the vehicle by a sensing device outside the vehicle;
s12, determining the state information of the vehicle from the scene information;
s13, determining the state information of obstacles except the vehicle in the surrounding environment of the vehicle from the scene information;
s14, determining the vehicle running information based on the state information of the vehicle, the obstacle state information and the vehicle destination information;
s15, the vehicle information, the obstacle information and the running information are sent to the vehicle.
2. The method of claim 1, wherein determining the state information of the vehicle from the scene information comprises:
s21, extracting first area data containing the vehicle from the scene information;
s22, extracting second area data which contain the vehicle surrounding environment but do not contain the vehicle from the scene information;
s23, acquiring the three-dimensional position and posture information of the vehicle.
3. The method of claim 1, wherein the sensing device is mounted on a movable carrier, and the movable carrier adjusts the position of the sensing device to maintain a predetermined observation distance and observation angle with respect to the vehicle.
4. The method of claims 2 and 3, wherein obtaining the three-dimensional information of the vehicle comprises:
s41, moving the sensing device to obtain the first area data containing the vehicle at different observation positions;
s42, obtaining three-dimensional information of the vehicle by using a three-dimensional reconstruction method, wherein the three-dimensional information comprises at least one of the following items: three-dimensional model, three-dimensional point cloud, three-dimensional point color information.
5. The method of claims 2 and 4, wherein acquiring the three-dimensional position and attitude information of the vehicle comprises:
s51, obtaining map information of the surrounding environment of the vehicle from external data;
s52, based on the first area data of the vehicle and the three-dimensional information of the vehicle, acquiring a first constraint equation of the position and the attitude between the sensing device and the vehicle by using a three-dimensional matching method;
s53, obtaining a second constraint equation of position and attitude between the sensing device and the map coordinate system using a three-dimensional matching method based on the second area data of the vehicle surroundings and the map information;
s54, determining a third constraint equation for the position and attitude between the vehicle and the map information, the third constraint equation using at least one of the following conditions: the vehicle tire is contacted with the ground, and the horizontal plane of the vehicle is parallel to the ground;
s55, determining three-dimensional position and posture information of the vehicle based on the first constraint equation, the second constraint equation and the third constraint equation;
s56, iteratively acquiring the three-dimensional information of the vehicle and acquiring the three-dimensional position and posture information of the vehicle so as to further accurately acquire the three-dimensional information and the three-dimensional position and posture information;
the three-dimensional matching method is a PnP algorithm.
6. The method of claims 1 and 3, wherein the sensing device moves adaptively with the vehicle, enabling real-time acquisition of status information of the vehicle.
7. The method of claim 1, wherein the scene information is sensor data of the sensing device, the sensor data comprising at least one of: lidar scan point cloud data, image data, ultrasonic data, millimeter-wave data;
the status information includes at least one of: position information, attitude information, (angular) velocity information, (angular) acceleration information;
the travel information includes at least one of: position information, attitude information, (angular) velocity information, (angular) acceleration information, steering wheel angle, throttle control amount, brake control amount.
8. An apparatus for guiding the driving of a vehicle, comprising:
a movable carrier configured to carry a sensing device and to move in synchronization with a target vehicle;
a sensing device configured to acquire scene information of the target vehicle and the vehicle's surrounding environment;
a data processing module configured to extract, from the scene information, travel information for planning the driving of the vehicle; and
a communication device configured to transmit the travel information to the vehicle.
9. The apparatus of claim 8, wherein the movable carrier is a rail-mounted moving platform or a wheeled moving platform.
10. An electronic device, comprising:
one or more processors; and
a memory storing one or more programs that, when executed by the one or more processors, cause the electronic device to implement the method of any one of claims 1-9.
A computer-readable storage medium on which a computer program is stored which, when executed by a processor, carries out the method according to any one of claims 1-9.
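Claims 8-9 describe the apparatus as a pipeline: a sensing device on a movable carrier acquires scene information, a data processing module turns it into travel information, and a communication device sends that information to the vehicle. The sketch below is purely illustrative; the names (`SceneInfo`, `TravelInfo`, `GuidanceDevice`, `plan_travel`) are hypothetical, and the planning rule is a toy stand-in for the method of claims 1-7.

```python
from dataclasses import dataclass

@dataclass
class SceneInfo:
    # Hypothetical minimal scene observation from the sensing device.
    vehicle_position: tuple  # (x, y) in the map frame, metres
    obstacle_ahead: bool

@dataclass
class TravelInfo:
    # A small subset of the travel information listed in claim 7.
    position: tuple
    brake_control: float
    throttle_control: float

def plan_travel(scene: SceneInfo) -> TravelInfo:
    """Toy data-processing step: brake if an obstacle is ahead,
    otherwise apply gentle throttle."""
    if scene.obstacle_ahead:
        return TravelInfo(scene.vehicle_position, brake_control=1.0,
                          throttle_control=0.0)
    return TravelInfo(scene.vehicle_position, brake_control=0.0,
                      throttle_control=0.3)

class GuidanceDevice:
    """Hypothetical device mirroring claim 8: sensing feeds a data
    processing module, whose output is handed to a communication
    device for transmission to the vehicle."""
    def __init__(self, transmit):
        self.transmit = transmit  # stands in for the communication device

    def step(self, scene: SceneInfo) -> TravelInfo:
        travel = plan_travel(scene)  # data processing module
        self.transmit(travel)        # send travel information to the vehicle
        return travel
```

One design point the claims imply: planning happens off-vehicle, so the vehicle only needs a receiver for `TravelInfo`, not its own perception stack.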
CN201910373007.0A 2019-05-07 2019-05-07 Method, device, equipment and computer readable storage medium for guiding vehicle driving Pending CN112002032A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910373007.0A CN112002032A (en) 2019-05-07 2019-05-07 Method, device, equipment and computer readable storage medium for guiding vehicle driving


Publications (1)

Publication Number Publication Date
CN112002032A (en) 2020-11-27

Family

ID=73461186

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910373007.0A Pending CN112002032A (en) 2019-05-07 2019-05-07 Method, device, equipment and computer readable storage medium for guiding vehicle driving

Country Status (1)

Country Link
CN (1) CN112002032A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102022203276A1 (en) 2022-04-01 2023-10-05 Robert Bosch Gesellschaft mit beschränkter Haftung Automated valet parking with rail-based sensor system

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104574952A (en) * 2013-10-15 2015-04-29 Ford Global Technologies LLC Aerial data for vehicle navigation
CN105702083A (en) * 2016-04-13 2016-06-22 Chongqing University of Posts and Telecommunications Distributed vision-based parking lot-vehicle cooperative intelligent parking system and method
CN106781675A (en) * 2017-01-21 2017-05-31 Gu Hongbo System and method for collecting parking lot information
CN106845491A (en) * 2017-01-18 2017-06-13 Zhejiang University Unmanned-aerial-vehicle-based automatic correction method in a parking lot scene
CN106846870A (en) * 2017-02-23 2017-06-13 Chongqing University of Posts and Telecommunications Intelligent parking system and method for parking lot-vehicle cooperation based on centralized vision
CN107169468A (en) * 2017-05-31 2017-09-15 Beijing Jingdong Shangke Information Technology Co., Ltd. Method and device for controlling a vehicle
CN107833473A (en) * 2017-11-30 2018-03-23 Shanghai Haiziguo Science and Education Equipment Co., Ltd. Guidance system and vehicle based on an unmanned aerial vehicle
CN108803604A (en) * 2018-06-06 2018-11-13 Shenzhen Yicheng Autonomous Driving Technology Co., Ltd. Automatic vehicle driving method and apparatus, and computer-readable storage medium



Similar Documents

Publication Publication Date Title
US20210365750A1 (en) Systems and methods for estimating future paths
CN108572663B (en) Target tracking
JP7073315B2 (en) Vehicles, vehicle positioning systems, and vehicle positioning methods
US20200026282A1 (en) Lane/object detection and tracking perception system for autonomous vehicles
US10394243B1 (en) Autonomous vehicle technology for facilitating operation according to motion primitives
US11308391B2 (en) Offline combination of convolutional/deconvolutional and batch-norm layers of convolutional neural network models for autonomous driving vehicles
US20200346662A1 (en) Information processing apparatus, vehicle, mobile object, information processing method, and program
WO2021217420A1 (en) Lane tracking method and apparatus
EP3710980A1 (en) Autonomous vehicle lane boundary detection systems and methods
US10509412B2 (en) Movable body control system
US11460851B2 (en) Eccentricity image fusion
CN112512887B (en) Driving decision selection method and device
CN113791621B (en) Automatic steering tractor and airplane docking method and system
CN114442101B (en) Vehicle navigation method, device, equipment and medium based on imaging millimeter wave radar
US11214160B2 (en) System for automated charging of autonomous vehicles
EP4222035A1 (en) Methods and systems for performing outlet inference by an autonomous vehicle to determine feasible paths through an intersection
WO2022142839A1 (en) Image processing method and apparatus, and intelligent vehicle
CN113674355A (en) Target identification and positioning method based on camera and laser radar
WO2021000787A1 (en) Method and device for road geometry recognition
CN112002032A (en) Method, device, equipment and computer readable storage medium for guiding vehicle driving
CN116135654A (en) Vehicle running speed generation method and related equipment
WO2023009794A1 (en) Three-dimensional object detection based on image data
CN116710809A (en) System and method for monitoring LiDAR sensor health
DE112021006760T5 (en) Methods and systems for creating a longitudinal plan for an autonomous vehicle based on the behavior of unsafe road users
CN115508841A (en) Road edge detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
Application publication date: 20201127