CN111959511B - Vehicle control method and device - Google Patents

Vehicle control method and device

Info

Publication number: CN111959511B
Application number: CN202010871155.8A
Authority: CN (China)
Prior art keywords: vehicle, image, view, position information, target vehicle
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN111959511A
Inventor: 刘畅
Current Assignee: Tencent Technology Shenzhen Co Ltd (the listed assignees may be inaccurate)
Original Assignee: Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd; priority to CN202010871155.8A; application granted; publication of CN111959511A (application) and CN111959511B (grant)

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00: Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units, or advanced driver assistance systems for ensuring comfort, stability and safety or drive control systems for propelling or retarding the vehicle
    • B60W30/18: Propelling the vehicle
    • B60W30/18009: Propelling the vehicle related to particular drive situations
    • B60W30/18163: Lane change; Overtaking manoeuvres
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00: Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60W60/00: Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001: Planning or execution of driving tasks
    • B60W60/0015: Planning or execution of driving tasks specially adapted for safety
    • B60R2300/00: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/30: Details of viewing arrangements characterised by the type of image processing
    • B60R2300/303: Details of viewing arrangements using joined images, e.g. multiple camera images
    • B60R2300/80: Details of viewing arrangements characterised by the intended use of the viewing arrangement
    • B60R2300/8066: Details of viewing arrangements for monitoring rearward traffic
    • B60W2420/408
    • B60W2552/00: Input parameters relating to infrastructure
    • B60W2552/50: Barriers

Abstract

The invention provides a vehicle control method, a vehicle control apparatus, an electronic device, and a storage medium. The method includes the following steps: acquiring a dynamic detection result and a static detection result for a rear-view vehicle of a target vehicle; determining, by combining the dynamic detection result and the static detection result, obstacle position information corresponding to the target vehicle and vehicle position information of the rear-view vehicle of the target vehicle; determining relative position information of the target vehicle and the rear-view vehicle based on the vehicle position information and the obstacle position information; and controlling the target vehicle to perform lane switching based on the relative position information. In this way, lane changing of an autonomous vehicle during driving can be realized, and the autonomy level of the autonomous vehicle is improved.

Description

Vehicle control method and device
Technical Field
The invention relates to the technical fields of artificial intelligence and automatic driving, and in particular to a vehicle control method, a vehicle control apparatus, an electronic device, and a storage medium.
Background
Artificial Intelligence (AI) encompasses theories, methods, techniques, and application systems that use a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that machines have the capabilities of perception, reasoning, and decision-making. As one of the important application directions of artificial intelligence, automatic driving technology is maturing and has completed the transition from exploratory research to commercial application.
In the related art, information acquisition and detection during the driving of an autonomous vehicle, such as obstacle detection and lane line detection, are mostly performed through a forward-view camera mounted on the vehicle, so that driving behaviors such as car following and lane keeping can be completed effectively. However, because such an autonomous vehicle has only a forward field of view, it can perceive only forward obstacles and cannot perform lane changes during automatic driving.
Disclosure of Invention
Embodiments of the present invention provide a vehicle control method and apparatus, an electronic device, and a storage medium, which can implement lane changing of an autonomous vehicle during driving and improve the autonomy level of the autonomous vehicle.
The technical solutions of the embodiments of the invention are implemented as follows:
the embodiment of the invention provides a vehicle control method, which comprises the following steps:
acquiring a dynamic detection result and a static detection result for a rear-view vehicle of a target vehicle;
determining, by combining the dynamic detection result and the static detection result, obstacle position information corresponding to the target vehicle and vehicle position information of the rear-view vehicle of the target vehicle;
determining relative position information of the target vehicle and the rear-view vehicle based on the vehicle position information and the obstacle position information;
and controlling the target vehicle to perform lane switching based on the relative position information.
An embodiment of the present invention further provides a vehicle control apparatus, including:
an acquisition module configured to acquire a dynamic detection result and a static detection result for a rear-view vehicle of the target vehicle;
a first determining module configured to determine, by combining the dynamic detection result and the static detection result, obstacle position information corresponding to the target vehicle and vehicle position information of the rear-view vehicle of the target vehicle;
a second determining module configured to determine relative position information of the target vehicle and the rear-view vehicle based on the vehicle position information and the obstacle position information;
and a control module configured to control the target vehicle to perform lane switching based on the relative position information.
In the above scheme, the acquisition module is further configured to perform radar detection on the rear-view vehicle of the target vehicle and use the obtained radar detection result as the dynamic detection result;
and to acquire an image including the rear-view vehicle of the target vehicle and use the acquired image as the static detection result.
In the above scheme, the first determining module is further configured to project the radar detection result onto the image to obtain, in the image, the obstacle position information detected by the radar for the target vehicle;
and to perform vehicle identification on the image to obtain the vehicle position information of the rear-view vehicle in the image.
In the above scheme, the image includes: a first image corresponding to the left side of the body of the target vehicle and including a rear-view vehicle, and a second image corresponding to the right side of the body of the target vehicle and including a rear-view vehicle;
the first determining module is further configured to stitch the first image and the second image to obtain a stitched image;
and to perform vehicle identification on the stitched image to obtain the vehicle position information of the rear-view vehicle in the image.
In the above scheme, the first determining module is further configured to obtain image parameters of the first image and the second image respectively;
when the image parameter is the image channel, stack the first image and the second image along the channel dimension to obtain a stitched image;
when the image parameter is the image height, stitch the first image and the second image side by side according to their image heights to obtain a stitched image;
and when the image parameter is the image width, stitch the first image and the second image one above the other according to their image widths to obtain a stitched image.
In the above scheme, the first determining module is further configured to perform feature extraction on the image through a neural network model to obtain a feature map corresponding to the image;
perform vehicle identification on the feature map through the neural network model to predict the coordinate information of the rear-view vehicle in the feature map;
and obtain the vehicle position information of the rear-view vehicle in the image based on the coordinate information of the rear-view vehicle in the feature map and the downsampling ratio of the feature map relative to the image.
In the above scheme, the second determining module is further configured to match the vehicle position information with the obstacle position information to obtain, from the obstacle position information, the obstacle position information corresponding to the rear-view vehicle;
and to perform back projection on the vehicle position information and the obstacle position information corresponding to the rear-view vehicle to obtain the relative position information of the target vehicle and the rear-view vehicle.
In the above scheme, the vehicle position information is acquired through a camera sensor, and the obstacle position information is acquired through radar detection;
the second determining module is further configured to obtain calibration parameters of the radar and the camera sensor respectively;
and to perform back projection on the vehicle position information and the obstacle position information corresponding to the rear-view vehicle based on the acquired calibration parameters, to obtain the relative position information of the target vehicle and the rear-view vehicle.
In the above scheme, the control module is further configured to obtain the relative speed between the target vehicle and the rear-view vehicle of the target vehicle;
determine a safe lane among the lanes adjacent to the current driving lane of the target vehicle based on the relative position information and the relative speed;
and control the target vehicle to switch from the current driving lane to the safe lane.
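Taken together, the modules above form a simple perception-and-control pipeline. The following Python sketch illustrates that decomposition only; the class and function names are hypothetical, and the concrete signal types are assumptions rather than definitions from the patent.

```python
class VehicleControlDevice:
    """Hedged sketch of the four-module decomposition described above."""

    def __init__(self, acquisition, first_determining, second_determining, control):
        self.acquisition = acquisition                # radar + camera acquisition
        self.first_determining = first_determining    # projection + vehicle identification
        self.second_determining = second_determining  # matching + back projection
        self.control = control                        # lane-change decision and execution

    def step(self):
        # One perception/control cycle, mirroring steps 401-404 described below.
        dynamic, static = self.acquisition()
        obstacles, vehicles = self.first_determining(dynamic, static)
        relative = self.second_determining(vehicles, obstacles)
        self.control(relative)
```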
An embodiment of the present invention further provides an electronic device, including:
a memory for storing executable instructions;
and a processor configured to implement the vehicle control method provided by the embodiments of the invention when executing the executable instructions stored in the memory.
An embodiment of the invention further provides a computer-readable storage medium storing executable instructions which, when executed by a processor, implement the vehicle control method provided by the embodiments of the invention.
The embodiment of the invention has the following beneficial effects:
A dynamic detection result and a static detection result for a rear-view vehicle of the autonomous vehicle are obtained; obstacle position information within the rear-view range of the target vehicle and vehicle position information of the rear-view vehicle are then derived from the dynamic and static detection results; relative position information of the autonomous vehicle and the rear-view vehicle is determined based on the vehicle position information and the obstacle position information; and the autonomous vehicle is controlled to change lanes based on the relative position information. In this way, by detecting the relative position of the autonomous vehicle and the rear-view vehicle, lane changing during driving can be realized, and the autonomy level of the autonomous vehicle is improved.
Drawings
Fig. 1 is a schematic diagram of a vehicle control method provided in the related art;
FIG. 2 is a schematic diagram of an implementation scenario of a vehicle control method provided by an embodiment of the invention;
FIG. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart diagram of a vehicle control method provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of a rear view sensor of an autonomous vehicle according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating the projection of radar detection results provided by an embodiment of the present invention;
FIG. 7 is a schematic diagram of an image stitching result provided by an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a vehicle recognition neural network model of an image provided by an embodiment of the invention;
FIG. 9 is a schematic diagram of a vehicle identification result of a left image of an autonomous vehicle provided by an embodiment of the invention;
FIG. 10 is a schematic diagram illustrating rear view vehicle identification results for an autonomous vehicle provided by an embodiment of the invention;
FIG. 11 is a schematic flow chart diagram of a vehicle control method provided by an embodiment of the present invention;
FIG. 12 is a schematic structural diagram of a vehicle control method according to an embodiment of the present invention;
fig. 13 is a schematic structural diagram of a vehicle control device according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present invention, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
In the following description, reference is made to "some embodiments", which describe a subset of all possible embodiments. It should be understood that "some embodiments" may refer to the same subset or different subsets of all possible embodiments, and that they may be combined with each other where there is no conflict.
In the following description, the terms "first", "second", and "third" merely distinguish similar objects and do not denote a particular order. Where permissible, the specific order or sequence may be interchanged, so that the embodiments of the invention described herein can be implemented in an order other than that shown or described.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein is for the purpose of describing embodiments of the invention only and is not intended to be limiting of the invention.
Before the embodiments of the present invention are described in further detail, the terms and expressions mentioned in the embodiments are explained; the following explanations apply throughout.
1) Automatic driving guides and makes decisions for the vehicle driving task without relying on the driver's physical driving operations, replacing the driver's control behavior so that the vehicle can complete safe driving.
2) An autonomous vehicle, also known as an unmanned vehicle or a computer-driven vehicle, is an intelligent vehicle for realizing autonomous traveling along a road in an unmanned state.
3) A rear-view vehicle is a vehicle located in the rear-view field of view of the autonomous vehicle.
4) The lane change function has two implementation modes: triggered lane changing and autonomous lane changing. A triggered lane change is initiated by a signal from the driver; the autonomous vehicle performs the corresponding lane change after receiving the signal. An autonomous lane change means that the autonomous vehicle determines by itself whether a lane change is required, without receiving a lane change signal from the driver, and performs the change when it is required.
In the related art, most automatic driving perception designs use a forward field of view, as shown in fig. 1. Fig. 1 is a schematic diagram of a vehicle control method provided in the related art, which collects and detects forward-view information, such as obstacles and lane lines, through a forward-view camera installed on the autonomous vehicle, ensuring that driving behaviors such as car following and lane keeping are completed effectively. However, when the autonomous vehicle is required to have a lane change function, a forward-view detection algorithm alone is not enough. Since the field of view of the forward-view camera does not cover the vehicles behind, the sensor field of view cannot support obstacle detection when the autonomous vehicle changes lanes. As a result, the autonomous vehicle has only forward obstacle perception and cannot realize the lane change function.
Based on the above explanations of the terms and expressions involved in the embodiments of the present invention, an implementation scenario of the vehicle control method is described below with reference to fig. 2. Fig. 2 is a schematic diagram of an implementation scenario of the vehicle control method provided by an embodiment of the invention. To support an exemplary application, an autonomous vehicle 200 (provided with an automatic driving controller) is connected to an automatic driving server 100 through a network 30, where the network 30 may be a wide area network, a local area network, or a combination of the two, and data transmission is implemented over wireless or wired links.
The autonomous vehicle 200 acquires the dynamic detection result and the static detection result for a rear-view vehicle of the target vehicle, and uploads them to the automatic driving server 100.
The automatic driving server 100 receives the dynamic detection result and the static detection result uploaded by the autonomous vehicle 200; determines, by combining them, the obstacle position information corresponding to the target vehicle and the vehicle position information of the rear-view vehicle of the target vehicle; determines the relative position information of the target vehicle and the rear-view vehicle based on the vehicle position information and the obstacle position information; and returns the relative position information to the autonomous vehicle 200.
The autonomous vehicle 200 then receives the relative position information sent by the automatic driving server and controls the target vehicle to perform lane switching based on it.
In practical applications, the automatic driving server 100 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, middleware service, a domain name service, a security service, a CDN, and a big data and artificial intelligence platform. The autonomous vehicle and the autonomous server may be directly or indirectly connected through wired or wireless communication, and the present invention is not limited thereto.
The following describes in detail the hardware structure of an electronic device implementing the vehicle control method according to an embodiment of the present invention. Referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. The electronic device 300 shown in fig. 3 includes: at least one processor 310, a memory 350, at least one network interface 320, and a user interface 330. The components in the electronic device 300 are coupled together by a bus system 340, which is used to enable connection and communication between them. In addition to a data bus, the bus system 340 includes a power bus, a control bus, and a status signal bus. For clarity of illustration, however, the various buses are all labeled as the bus system 340 in fig. 3.
The processor 310 may be an integrated circuit chip having signal processing capabilities, such as a general-purpose processor, a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, where the general-purpose processor may be a microprocessor or any conventional processor.
The user interface 330 includes one or more output devices 331, including one or more speakers and/or one or more visual display screens, that enable presentation of media content. The user interface 330 also includes one or more input devices 332, including user interface components to facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 350 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 350 optionally includes one or more storage devices physically located remote from processor 310.
The memory 350 may include either volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory. The nonvolatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a Random Access Memory (RAM). The memory 350 described in embodiments of the invention is intended to comprise any suitable type of memory.
In some embodiments, memory 350 is capable of storing data, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below, to support various operations.
An operating system 351 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 352 for reaching other computing devices via one or more (wired or wireless) network interfaces 320, exemplary network interfaces 320 including: Bluetooth, Wi-Fi, Universal Serial Bus (USB), etc.;
a presentation module 353 for enabling presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 331 (e.g., a display screen, speakers, etc.) associated with the user interface 330;
an input processing module 354 for detecting one or more user inputs or interactions from one of the one or more input devices 332 and translating the detected inputs or interactions.
In some embodiments, the vehicle control apparatus provided by the embodiment of the present invention may be implemented in software. Fig. 3 shows the vehicle control apparatus 355 stored in the memory 350, which may be software in the form of programs, plug-ins, and the like, and includes the following software modules: an acquisition module 3551, a first determining module 3552, a second determining module 3553, and a control module 3554. These modules are logical, and thus may be arbitrarily combined or further divided according to the functions implemented. The function of each module is described below.
In other embodiments, the vehicle control device provided by the embodiments of the present invention may be implemented by a combination of hardware and software. As an example, it may be a processor in the form of a hardware decoding processor programmed to execute the vehicle control method provided by the embodiments of the present invention; for example, the processor may be one or more application-specific integrated circuits (ASICs), DSPs, programmable logic devices (PLDs), complex programmable logic devices (CPLDs), field-programmable gate arrays (FPGAs), or other electronic components.
Based on the above description of the implementation scenario and the electronic device, the vehicle control method provided by the embodiment of the present invention is described below. Referring to fig. 4, fig. 4 is a schematic flowchart of the vehicle control method provided by the embodiment of the invention. In some embodiments, the method may be implemented by the automatic driving controller alone, or by a server and the automatic driving controller in cooperation. Taking the automatic driving controller as an example, the vehicle control method includes:
step 401: the autonomous driving controller obtains a dynamic detection result and a static detection result of a rear-view vehicle of the target vehicle.
Here, in practical applications, an autonomous vehicle is provided with an autonomous controller. During the driving process of the automatic driving vehicle, the automatic driving controller needs to detect the driving environment, such as obstacle detection, lane line detection and the like in the forward view visual field range and the backward view visual field range, so as to ensure the normal driving of the automatic driving vehicle.
In the embodiment of the present invention, the automatic driving controller obtains the dynamic detection result and the static detection result for a rear-view vehicle of the autonomous vehicle (i.e., the target vehicle). Specifically, the dynamic detection result may be obtained through a distance sensor, and the static detection result may be an image captured by an image sensor. The target vehicle is the autonomous vehicle, and the rear-view vehicle is a vehicle in the rear-view direction of the autonomous vehicle.
In some embodiments, the autonomous driving controller may obtain the dynamic detection results and the static detection results of the rear-view vehicle of the target vehicle by: radar detection is carried out on a rearview vehicle of a target vehicle, and an obtained radar detection result is used as a dynamic detection result; and acquiring an image of the rear-view vehicle including the target vehicle, and taking the acquired image of the rear-view vehicle including the target vehicle as a static detection result.
In practical applications, the embodiment of the present invention provides sensors for the rear view of the autonomous vehicle, such as radar sensors and camera sensors. Radar detection is then performed on the rear-view vehicle of the target vehicle through a radar sensor, and the obtained radar detection result is used as the dynamic detection result; an image including the rear-view vehicle is acquired and used as the static detection result.
In some embodiments, the automatic driving controller may perform radar detection on the rear-view vehicles of the target vehicle and acquire images including them as follows: perform radar detection on the rear-view vehicles on the left and right sides of the vehicle body respectively, and acquire images including the rear-view vehicles on each side.
Here, a sensing system including radar sensors and camera sensors is provided for the rear-view field of the autonomous vehicle to ensure detection capability for dynamic and static obstacles, respectively. Specifically, referring to fig. 5, fig. 5 is a schematic diagram of the architecture of the rear-view sensors of an autonomous vehicle according to an embodiment of the present invention. For the autonomous vehicle, the embodiment of the present invention adds four sensors for the rear-view field, i.e., two rear-view camera sensors and two rear-view radar sensors. As shown in sub-diagram (1) of fig. 5, C1 denotes the left rear camera sensor and C2 the right rear camera sensor; R1 denotes the left rear radar sensor and R2 the right rear radar sensor. Accordingly, the rear-view coverage of the autonomous vehicle is shown in sub-diagram (2) of fig. 5.
Therefore, when the autonomous vehicle senses vehicles within its rear-view field, it can acquire them simultaneously through the rear-view radar sensors and camera sensors arranged on the left and right sides of the vehicle body. Specifically, the rear-view vehicles on the left and right sides of the vehicle body are detected by the radar sensors, and images including these rear-view vehicles are acquired by the camera sensors.
Step 402: and determining the position information of the obstacle corresponding to the target vehicle and the vehicle position information of the rear-view vehicle of the target vehicle by combining the dynamic detection result and the static detection result.
After the dynamic detection result and the static detection result of the automatic driving vehicle are obtained, the obstacle position information corresponding to the automatic driving vehicle and the vehicle position information of the rear-view vehicle are analyzed by combining the dynamic detection result and the static detection result.
In some embodiments, the automatic driving controller may analyze the obtained dynamic detection result and the static detection result to determine obstacle position information corresponding to the target vehicle and vehicle position information of a rear-view vehicle of the target vehicle by: projecting the radar detection result to the image to obtain obstacle position information corresponding to the target vehicle detected by the radar in the image; and carrying out vehicle identification on the image to obtain vehicle position information of the rear-view vehicle in the image.
After the radar detection result of the rear-view vehicle of the target vehicle (i.e., the dynamic detection result) and the image including the rear-view vehicle (i.e., the static detection result) are obtained, the radar detection result must first be projected onto the image, because the position information it contains is expressed in three-dimensional coordinates. This projection yields the obstacle position information detected by the radar in the image, from which the obstacle position information corresponding to the autonomous vehicle and the vehicle position information of the rear-view vehicle can then be analyzed.
When the autonomous vehicle senses vehicles within the rear-view field, it acquires them simultaneously through the rear-view radar sensors and camera sensors on the left and right sides of the vehicle body. Therefore, during projection, the automatic driving controller can obtain the obstacle position information detected by the radar in the image as follows: project the radar detection results corresponding to the left and right sides of the body of the target vehicle onto the corresponding images including the rear-view vehicles. Referring to fig. 6, fig. 6 is a schematic diagram of the projection of radar detection results provided by an embodiment of the present invention, where the black dots are the visualization of radar detections projected onto the image.
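As an illustration of this projection step, the following is a minimal Python sketch assuming a pinhole camera model and a known radar-to-camera calibration; the function name and the parameters K, R, and t are illustrative assumptions, not the patent's actual implementation.

```python
import numpy as np

def project_radar_to_image(points_3d, K, R, t):
    """Project radar detections (N x 3 array in the radar frame) onto the
    image plane of a calibrated pinhole camera.

    K: 3x3 camera intrinsic matrix; R, t: rotation and translation from the
    radar frame to the camera frame (assumed known from calibration).
    """
    pts_cam = points_3d @ R.T + t              # transform into the camera frame
    in_front = pts_cam[:, 2] > 0               # keep points in front of the camera
    pixels = (K @ pts_cam[in_front].T).T       # apply the intrinsics
    pixels = pixels[:, :2] / pixels[:, 2:3]    # perspective division by depth
    return pixels, in_front                    # pixel coordinates + validity mask
```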
Here, in practical applications, after obtaining the obstacle position information corresponding to the autonomous driving vehicle in the image, it is also necessary to perform vehicle recognition on the acquired image including the rear-view vehicle to obtain the vehicle position information of the rear-view vehicle in the image.
In some embodiments, since images of the rear-view vehicles on the left and right sides of the body of the target vehicle are acquired separately, the images include a first image corresponding to the left side of the vehicle body and a second image corresponding to the right side, each including a rear-view vehicle. Accordingly, the automatic driving controller may perform vehicle identification as follows: stitch the first image and the second image to obtain a stitched image, then perform vehicle identification on the stitched image to obtain the vehicle position information of the rear-view vehicle in the image.
In some embodiments, the automatic driving controller may obtain the stitched image as follows: obtain the image parameters of the first image and the second image respectively; when the image parameter is the image channel, stack the two images along the channel dimension to obtain a stitched image; when the image parameter is the image height, stitch them side by side according to their heights; and when the image parameter is the image width, stitch them one above the other according to their widths.
In practical applications, the acquired left and right images including the rear-view vehicles can be stitched. Specifically, image parameters of the first image and the second image, such as the image channel, image height, and image width, are obtained respectively. When the image parameter is the image channel, the two images are stacked along the channel dimension to obtain a stitched image; when the image parameter is the image height, the two images are stitched side by side according to their heights; when the image parameter is the image width, the two images are stitched one above the other according to their widths. For example, referring to fig. 7, fig. 7 is a schematic diagram of image stitching results provided by an embodiment of the present invention: fig. 7(1) shows stitching based on the image channel, fig. 7(2) shows stitching based on the image height, and fig. 7(3) shows stitching based on the image width.
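As a concrete illustration, the three stitching modes above can be sketched with NumPy as follows, assuming HWC image arrays of compatible sizes; this is an assumed sketch rather than the patent's code.

```python
import numpy as np

def stitch_rear_views(left_img, right_img, mode="width"):
    """Combine the left and right rear-view images into a single input so
    that one forward pass can detect vehicles in both views.

    mode="channel": stack along the channel axis (same height and width);
    mode="height":  place the images side by side (same height);
    mode="width":   place the images one above the other (same width).
    """
    if mode == "channel":
        return np.concatenate([left_img, right_img], axis=2)
    if mode == "height":
        return np.concatenate([left_img, right_img], axis=1)  # horizontal join
    if mode == "width":
        return np.concatenate([left_img, right_img], axis=0)  # vertical join
    raise ValueError(f"unknown stitch mode: {mode}")
```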
It should be noted that the image stitching methods of the embodiment of the present invention are not limited to the above, and are not repeated here. With this parallel detection scheme, the time needed to detect rear-view vehicles in the rear-view images is greatly reduced, which lowers the latency of the sensing system and ensures real-time rear-view vehicle detection for the autonomous vehicle.
After the stitched image is obtained, vehicle identification needs to be performed on it. In some embodiments, the automatic driving controller may perform vehicle identification as follows: extract features from the image through a neural network model to obtain a feature map corresponding to the image; perform vehicle identification on the feature map through the neural network model to predict the coordinate information of the rear-view vehicle in the feature map; and obtain the vehicle position information of the rear-view vehicle in the image based on the coordinate information in the feature map and the downsampling ratio of the feature map relative to the image.
In practical applications, the stitched rear-view images can be detected in parallel by a pre-trained vehicle detection neural network model. Specifically, the vehicle detection neural network model is constructed and trained based on a deep convolutional neural network; in actual implementation, it can be built on a ResNet18 backbone network. See fig. 8, which is a schematic structural diagram of the vehicle identification neural network model provided by the embodiment of the present invention.
When detecting rear-view vehicles, feature extraction is performed on the stitched image through the feature extraction layer of the trained vehicle detection neural network model to obtain the corresponding feature map; the rear-view vehicle is then identified through the prediction layer of the model, which predicts the coordinate information of the rear-view vehicle in the feature map. For example, referring to fig. 9, fig. 9 is a schematic diagram of the vehicle identification result for the left image of an autonomous vehicle according to an embodiment of the present invention, where the area within the dashed box is the position of the identified rear-view vehicle.
After obtaining the prediction output of the vehicle detection neural network model, the predicted coordinate information in the feature map is mapped back, according to the original stitching method, to the image coordinate systems of the two original rear-view images, yielding the rear-view vehicle detection results shown in fig. 10. Fig. 10 is a schematic diagram of the rear-view vehicle identification results of the autonomous vehicle provided by an embodiment of the invention; fig. 10(1) shows the identification result for the right rear-view image and fig. 10(2) the result for the left rear-view image, where the area within the solid box is the position of the identified rear-view vehicle. In actual implementation, the vehicle position information of the rear-view vehicle in the original image can be calculated from the predicted coordinate information in the feature map and the downsampling ratio of the feature map relative to the original image.
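The coordinate restoration described here can be sketched as follows, assuming the two views were stitched one above the other (width-based stitching) with the left view on top; the stride value and the handling of the seam are illustrative assumptions.

```python
def feature_to_image_coords(box_feat, stride, left_height):
    """Map a predicted box from feature-map coordinates back to one of the
    two original rear-view images.

    box_feat: (x1, y1, x2, y2) in feature-map cells; stride: downsampling
    ratio of the feature map relative to the stitched image (e.g. 16 or 32
    for a ResNet-18 style backbone, depending on the prediction layer).
    """
    x1, y1, x2, y2 = (v * stride for v in box_feat)  # back to stitched-image pixels
    if y1 >= left_height:                            # box lies in the lower (right) view
        return "right", (x1, y1 - left_height, x2, y2 - left_height)
    return "left", (x1, y1, x2, y2)                  # boxes straddling the seam not handled
```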
Step 403: and determining relative position information of the target vehicle and the rearview vehicle based on the vehicle position information and the obstacle position information.
After vehicle position information of the rear-view vehicle in the image and obstacle position information detected by a radar in the image are obtained, relative position information of the target vehicle and the rear-view vehicle is determined based on the obtained vehicle position information and the obstacle position information.
In some embodiments, the automatic driving controller may determine the relative position information of the target vehicle and the rear-view vehicle as follows: match the vehicle position information with the obstacle position information to find, within the obstacle position information, the entries corresponding to the rear-view vehicle; then back-project the vehicle position information and the matched obstacle position information to obtain the relative position information of the target vehicle and the rear-view vehicle.
In practical applications, all objects detected by the radar are referred to as obstacles. The radar detection result for the rear-view vehicles of the target vehicle therefore includes information on all obstacles in the rear-view field, so when the radar detection result is projected onto the image, the obstacle position information detected by the radar in the image also includes the position information of the rear-view vehicle.
Based on this, when determining the relative position information of the target vehicle and the rear-view vehicle, the vehicle position information of the rear-view vehicle in the image can be associated and matched with the obstacle position information detected by the radar in the image, yielding the obstacle position information corresponding to the rear-view vehicle. This obstacle position information is expressed in two-dimensional coordinates; therefore, in practical applications, the two-dimensional position of the rear-view vehicle must also be back-projected to obtain its position in three-dimensional coordinates, and thus the relative position of the target vehicle and the rear-view vehicle.
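The patent does not specify the association rule, so the following point-in-box heuristic is only one plausible sketch of the matching step: each radar detection projected into the image is assigned to the first vehicle bounding box that contains it.

```python
def match_radar_to_boxes(radar_pixels, boxes):
    """Associate projected radar detections with detected vehicle boxes.

    radar_pixels: iterable of (u, v) image points from the radar projection;
    boxes: list of (x1, y1, x2, y2) rear-view vehicle detections.
    Returns a dict mapping box index -> list of matched radar point indices.
    """
    matches = {}
    for i, (u, v) in enumerate(radar_pixels):
        for j, (x1, y1, x2, y2) in enumerate(boxes):
            if x1 <= u <= x2 and y1 <= v <= y2:   # point falls inside this box
                matches.setdefault(j, []).append(i)
                break                             # assign each point at most once
    return matches
```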
In some embodiments, the image including the rear-view vehicle may be acquired by a camera sensor (i.e., vehicle position information is acquired by a camera sensor), and the obstacle position information is acquired by radar detection; accordingly, the automatic driving controller may obtain the relative position information of the target vehicle and the rear-view vehicle by: respectively acquiring calibration parameters of a radar sensor and a camera sensor; and carrying out back projection processing on the vehicle position information and the obstacle position information corresponding to the rear-view vehicle based on the acquired calibration parameters to obtain the relative position information of the target vehicle and the rear-view vehicle.
In practical applications, the images including the rear-view vehicle are captured by the camera sensor. Calibration parameters of the radar sensor and the camera sensor can therefore be obtained respectively; these are the parameters set when the camera sensor or the radar sensor is installed, such as the height of the camera's optical center and the emission angle of the radar signals. Based on the acquired calibration parameters, back projection is performed on the two-dimensional vehicle position information and the obstacle position information of the rear-view vehicle to obtain its position in three-dimensional coordinates, and thus the relative position information of the target vehicle and the rear-view vehicle.
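To make the back projection concrete, the sketch below intersects the camera ray through a pixel with the ground plane. It relies on a strong simplifying assumption (flat ground, camera optical axis parallel to the road, known mounting height); a full implementation would use the complete radar and camera calibration described above.

```python
import numpy as np

def backproject_to_ground(pixel, K, cam_height):
    """Recover the 3D position (camera coordinates, metres) of a point on
    the ground, e.g. the bottom edge of a rear-view vehicle's bounding box.

    Assumes the camera looks parallel to a flat ground plane and is mounted
    cam_height metres above it, with the y axis pointing downward.
    """
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    # The pixel must lie below the horizon (ray[1] > 0) for the ray to
    # intersect the ground plane y = cam_height.
    scale = cam_height / ray[1]
    return ray * scale
```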
Step 404: based on the relative position information, the control target vehicle performs lane switching.
In some embodiments, the automatic driving controller may control the target vehicle to change lanes as follows: obtain the relative speed between the target vehicle and its rear-view vehicle; determine a safe lane among the lanes adjacent to the current driving lane of the target vehicle based on the relative position information and the relative speed; and control the target vehicle to switch from the current driving lane to the safe lane.
After the relative position information of the target vehicle and the rear-view vehicle is obtained, the relative speed between the target vehicle and the rear-view vehicle may also be obtained, for example from the radar detections. A safe lane is then determined among the lanes adjacent to the current driving lane based on the relative position information and relative speed; a safe lane is one into which the target vehicle can change without collision or other accidents. The target vehicle is then controlled to switch from the current driving lane to the safe lane. In actual implementation, during the lane change the obstacle detection results, lane line detection results, and so on within the forward-view range must also be combined to ensure the safe driving of the autonomous vehicle.
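As an illustration of this decision step, the sketch below gates a lane change on a minimum gap and a minimum time-to-collision with the nearest rear-view vehicle in each adjacent lane; the threshold values min_gap and min_ttc, and all names, are hypothetical.

```python
def is_lane_safe(rel_long_dist, rel_speed, min_gap=10.0, min_ttc=3.0):
    """Check one adjacent lane against its closest rear-view vehicle.

    rel_long_dist: longitudinal distance to the rear-view vehicle (m);
    rel_speed: closing speed (m/s, positive when the vehicle approaches).
    """
    if rel_long_dist < min_gap:
        return False                                  # already too close
    if rel_speed > 0 and rel_long_dist / rel_speed < min_ttc:
        return False                                  # gap closes too quickly
    return True

def choose_safe_lane(candidates):
    """candidates: {lane: (rel_long_dist, rel_speed)} for each adjacent lane;
    returns the first safe lane, or None if no lane change is possible."""
    for lane, (dist, speed) in candidates.items():
        if is_lane_safe(dist, speed):
            return lane
    return None
```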
By applying the embodiment of the invention, the dynamic detection result and the static detection result for the rear-view vehicle of the autonomous vehicle are obtained; the obstacle position information within the rear-view range of the target vehicle and the vehicle position information of the rear-view vehicle are then derived from these results; the relative position information of the autonomous vehicle and the rear-view vehicle is determined based on the vehicle position information and the obstacle position information; and the autonomous vehicle is controlled to change lanes based on the relative position information. In this way, by detecting the relative position of the autonomous vehicle and the rear-view vehicle, lane changing during driving can be realized, and the autonomy level of the autonomous vehicle is improved.
Next, the description of the vehicle control method provided by an embodiment of the invention continues. In some embodiments, the vehicle control method may be implemented by the autonomous vehicle and the automatic driving server in cooperation. Referring to fig. 11, fig. 11 is a schematic flowchart of the vehicle control method provided by the embodiment of the present invention; the method includes:
step 1101: the automatic driving vehicle carries out radar detection on rear-view vehicles on the left side and the right side of the vehicle body through radar sensors to obtain radar detection results; and rear view images including a rear view vehicle on the left and right sides of the vehicle body are acquired by the camera sensors.
Here, the vehicle control method provided by the embodiment of the invention may be specifically implemented by an automatic driving controller of an automatic driving vehicle.
Here, a sensing system including radar sensors and camera sensors is provided for the rear-view field of the autonomous vehicle to ensure detection capability for dynamic and static obstacles, respectively. Specifically, referring to fig. 5, fig. 5 is a schematic diagram of the architecture of the rear-view sensors of an autonomous vehicle according to an embodiment of the present invention. For the autonomous vehicle, the embodiment of the present invention adds four sensors for the rear-view field, i.e., two rear-view camera sensors and two rear-view radar sensors. As shown in sub-diagram (1) of fig. 5, C1 denotes the left rear camera sensor and C2 the right rear camera sensor; R1 denotes the left rear radar sensor and R2 the right rear radar sensor. Accordingly, the rear-view coverage of the autonomous vehicle is shown in sub-diagram (2) of fig. 5.
Therefore, when the autonomous vehicle senses vehicles within its rear-view field, it can acquire them simultaneously through the rear-view radar sensors and camera sensors arranged on the left and right sides of the vehicle body. Specifically, radar detection is performed on the rear-view vehicles on the left and right sides of the vehicle body by the radar sensors, and images including these rear-view vehicles are acquired by the camera sensors.
Step 1102: and sending the radar detection result and a rear view image comprising a rear view vehicle to an automatic driving server.
Step 1103: the autopilot server receives the radar detection results and a rear view image including a rear view vehicle.
Step 1104: and projecting the radar detection result to the rear view image to obtain the position information of the obstacle detected by the radar in the rear view image.
Step 1105: and carrying out vehicle identification on the rear view image to obtain vehicle position information of the rear view vehicle in the rear view image.
Here, the rear-view images include a first image corresponding to the left side of the body of the target vehicle and a second image corresponding to the right side, each including a rear-view vehicle. Before identification, the first image and the second image can be stitched. Specifically, image parameters of the two images are obtained respectively; when the image parameter is the image channel, the two images are stacked along the channel dimension to obtain a stitched image; when the image parameter is the image height, they are stitched side by side according to their heights; when the image parameter is the image width, they are stitched one above the other according to their widths. For example, referring to fig. 7, fig. 7 is a schematic diagram of image stitching results provided by an embodiment of the present invention: fig. 7(1) shows stitching based on the image channel, fig. 7(2) stitching based on the image height, and fig. 7(3) stitching based on the image width.
The stitched rear-view image is then identified, for example through a pre-trained neural network model, to obtain the vehicle position information of the rear-view vehicle in the rear-view images. Specifically, features can be extracted from the stitched image through the feature extraction layer of the neural network model to obtain the corresponding feature map; the rear-view vehicle is then identified through the prediction layer of the model, which predicts the coordinate information of the rear-view vehicle in the feature map. The predicted coordinate information is mapped back to the image coordinate systems of the two original rear-view images, yielding the rear-view vehicle detection results shown in fig. 10. In actual implementation, the vehicle position information of the rear-view vehicle in the original image can be calculated from the predicted coordinate information in the feature map and the downsampling ratio of the feature map relative to the original image.
Step 1106: and matching the vehicle position information with the obstacle position information to obtain the obstacle position information corresponding to the rearview vehicle in the obstacle position information.
Step 1107: and carrying out back projection processing on the vehicle position information and the obstacle position information corresponding to the rear-view vehicle to obtain the relative position information of the target vehicle and the rear-view vehicle.
Here, the calibration parameters of the radar and the camera sensor can be obtained respectively; based on these parameters, back projection is performed on the vehicle position information and the obstacle position information corresponding to the rear-view vehicle to obtain the relative position information of the autonomous vehicle and the rear-view vehicle.
Step 1108: the relative position information is returned to the autonomous vehicle.
Step 1109: the autonomous vehicle receives the relative position information and obtains the relative speed of the autonomous vehicle and the rear-view vehicle.
Step 1110: determine a safe lane among the lanes adjacent to the current driving lane of the target vehicle based on the relative position information and the relative speed, and switch from the current driving lane to the safe lane.
In practical implementation, in the process of controlling the autonomous vehicle to change lanes, the obstacle detection result, the lane line detection result, and the like within the forward-view range also need to be combined to ensure the safe driving of the autonomous vehicle.
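Purely as an illustrative sketch, not the decision rule of this disclosure (the thresholds, the helper name, and the distance/speed representation are all assumptions), a candidate adjacent lane could be treated as safe when every rear-view vehicle in it keeps a sufficient distance and time gap:

    def is_lane_safe(rear_vehicles, min_gap_m=10.0, min_time_gap_s=2.0):
        # rear_vehicles: list of (distance_m, closing_speed_mps) pairs for
        # rear-view vehicles detected in the candidate adjacent lane, where
        # distance_m is measured behind the ego vehicle and a positive
        # closing_speed_mps means the rear-view vehicle is approaching.
        for distance, closing_speed in rear_vehicles:
            if distance < min_gap_m:
                return False            # already too close to merge ahead of
            if closing_speed > 0 and distance / closing_speed < min_time_gap_s:
                return False            # gap would close too quickly
        return True

    # Example: one vehicle 25 m behind, approaching at 5 m/s (5 s time gap).
    print(is_lane_safe([(25.0, 5.0)]))  # True under the assumed thresholds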
By applying the embodiment of the invention, a rear-view sensing system is provided for the autonomous vehicle, and other rear-view vehicles within its rear-view field are detected, so that the relative position information of the autonomous vehicle and the other rear-view vehicles is obtained and the lane-changing function of the autonomous vehicle is realized based on the relative position information.
An exemplary application of the embodiments of the present invention in a practical application scenario will be described below.
As one of the important application directions in the field of artificial intelligence, automatic driving technology is becoming mature and has completed the conversion from exploratory research to commercial application. The perception system, serving as the "eyes and ears" of an autonomous vehicle, is an important part of automatic driving. Safe automatic driving depends on the correctness, real-time performance, and robustness of the perception system, and the performance of the perception system in turn depends strongly on the vehicle-end sensor deployment scheme.
In practical applications, one of the important capabilities of an autonomous vehicle is the lane-change capability. The lane-change capability further embodies the autonomy level of the autonomous vehicle and is one of the key technical difficulties in moving from L2 to L3 automatic driving. For an autonomous vehicle, there are two ways to implement the lane-change function: the triggered lane change and the autonomous lane change. A triggered lane change means that the lane-change behavior of the autonomous vehicle is triggered by a signal issued by the driver, and the vehicle performs the corresponding lane change after receiving the signal. An autonomous lane change means that the vehicle autonomously determines whether a lane change is required without receiving a lane-change signal from the driver, and performs the lane change when it is required.
Either lane-change function requires the autonomous vehicle to be able to effectively detect other vehicles in its rear-view region of interest (ROI). However, most automatic-driving sensing designs in the related art are forward-view vision designs, as shown in FIG. 1, which acquire and detect forward-view visual information, such as obstacle detection and lane line detection, through a forward-view camera mounted on the autonomous vehicle, so as to ensure that driving behaviors such as car following and lane keeping are effectively completed. When the autonomous vehicle is required to have a lane-changing function, however, a forward-view vision detection algorithm alone is not enough. As described above, whether the lane change is triggered or autonomous, the autonomous vehicle must be able to detect vehicles to its left rear and right rear. Since the field of view of the forward-view camera does not include vehicles behind, such a sensor field of view cannot satisfy the obstacle detection function when the autonomous vehicle changes lanes. An autonomous vehicle with only forward-view obstacle sensing capability therefore cannot realize the lane-changing function.
Based on this, embodiments of the present invention provide a vehicle control method to solve at least the above problems, described below. Referring to FIG. 12, FIG. 12 is a schematic structural diagram of a vehicle control method according to an embodiment of the present invention, comprising a rear-view sensing unit, a rear-view image parallel detection unit, and a rear-view obstacle detection unit of an autonomous vehicle, which are described separately below.
(1) Rear view sensing unit of automatic driving vehicle
For the rear-view field of the autonomous vehicle, the embodiment of the invention provides a sensing scheme that includes both radar and camera, so that the same spatial position within the rear-view field of the autonomous vehicle can be observed by the radar and the camera simultaneously, thereby providing a guarantee for subsequent rear-view obstacle detection.
As shown in FIG. 5, for an autonomous vehicle, the embodiment of the present invention proposes adding 4 rear-view sensors, i.e., two rear-view camera sensors and two rear-view radar sensors. As shown in FIG. 5(1), C1 denotes the left rear camera sensor and C2 the right rear camera sensor; R1 denotes the left rear radar sensor and R2 the right rear radar sensor. Accordingly, the rear-view coverage of the autonomous vehicle is shown in FIG. 5(2).
In this way, when the autonomous vehicle senses vehicles within its rear-view field, it can acquire them simultaneously through the rear-view radar sensors and camera sensors arranged on the left and right sides of the vehicle body. Specifically, radar detection is performed on the rear-view vehicles on the left and right sides of the vehicle body through the radar sensors, and images including the rear-view vehicles on the left and right sides of the vehicle body are acquired through the camera sensors.
Here, referring to FIG. 5, for obstacles to the left rear and right rear of the autonomous vehicle, at least one sensor observation is available, which provides a sensed observation input of the obstacle for the subsequent obstacle detection algorithm; meanwhile, except for a range of about 1 meter beside the vehicle body, both sensors, camera and radar, can provide effective obstacle observation for any given point in space, thus providing effective sensing observation for subsequent obstacle fusion detection.
(2) Rear view image parallel detection unit
On the basis of the rear-view sensing unit in (1), the embodiment of the invention provides a parallelized rear-view vehicle detection scheme for the rear-view field detection of the autonomous vehicle. The scheme performs parallelized rear-view vehicle detection on the left and right rear-view images captured at the same moment and provides rear-view vehicle detection results for that moment. Compared with a scheme in which the left and right images must be detected serially, the parallel detection scheme provided by the embodiment of the invention greatly saves the rear-view vehicle detection time, thereby reducing the delay of the sensing system and ensuring the real-time performance of rear-view vehicle detection for the autonomous vehicle.
In practical application, the embodiment of the present invention first stitches the rear-view images acquired by the camera sensors that include the rear-view vehicles on the left and right sides of the vehicle body. Specifically, image parameters of the left and right rear-view images, such as the image channel, image height, and image width, are respectively obtained; when the image parameter is the image channel, the left and right rear-view images are overlapped according to their image channels to obtain a stitched rear-view image; when the image parameter is the image height, the left and right rear-view images are transversely stitched according to their image heights; and when the image parameter is the image width, the left and right rear-view images are longitudinally stitched according to their image widths. Illustratively, the stitched rear-view images can be seen in FIG. 7. It should be noted that the image stitching method of the embodiment of the present invention is not limited to the above; since the proposed rear-view image detection unit introduces fully convolutional operations, it can perform parallelized rear-view vehicle detection on the stitched rear-view image no matter which stitching method is used.
After the stitched rear-view image is obtained, parallel detection is carried out on it through a pre-trained rear-view vehicle detection neural network model. Specifically, the rear-view vehicle detection neural network model is constructed and trained based on a deep convolutional neural network; in actual implementation, it can be built on a network structure whose backbone is ResNet18, as shown in FIG. 8. When detecting the rear-view vehicle, features of the rear-view image are extracted through the feature extraction layer of the constructed model to obtain a feature map corresponding to the rear-view image; rear-view vehicle identification is then carried out through the prediction layer (i.e., the prediction head) of the model, and the coordinate information of the rear-view vehicle in the feature map is obtained by prediction.
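As a hedged sketch only (the output format, layer sizes, and class name are assumptions; the disclosure specifies only a ResNet18 backbone, as in FIG. 8, with a prediction layer), such a fully convolutional detector could be shaped as follows in Python (PyTorch):

    import torch
    import torch.nn as nn
    from torchvision.models import resnet18

    class RearViewDetector(nn.Module):
        # Fully convolutional rear-view vehicle detector: a ResNet18
        # backbone followed by a 1x1 convolutional prediction layer.
        # Being fully convolutional, it accepts stitched images of any
        # spatial size.
        def __init__(self, num_outputs=5):   # e.g. (score, cx, cy, w, h)
            super().__init__()
            backbone = resnet18(weights=None)   # random init; torchvision >= 0.13 API
            # Keep everything up to the last residual stage (stride-32 features).
            self.features = nn.Sequential(*list(backbone.children())[:-2])
            self.head = nn.Conv2d(512, num_outputs, kernel_size=1)

        def forward(self, x):
            return self.head(self.features(x))   # (N, num_outputs, H/32, W/32)

    model = RearViewDetector()
    stitched = torch.randn(1, 3, 384, 1280)   # a height-stitched left+right image
    pred = model(stitched)                    # shape (1, 5, 12, 40)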
After the prediction result output by the rear-view vehicle detection neural network model is obtained, the embodiment of the invention further restores the predicted coordinate information in the feature map to the image coordinate systems of the two original rear-view images according to the stitching method originally used, thereby obtaining the rear-view vehicle detection result shown in FIG. 10. Specifically, the vehicle position information of the rear-view vehicle in the original image is calculated from the predicted coordinate information of the rear-view vehicle in the feature map and the downsampling magnification of the feature map relative to the original image.
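Illustratively, a sketch of this restoration for boxes already scaled to stitched-image pixels (the side-assignment rule below is one plausible reading; boxes straddling the seam would need extra handling not shown, and all names are assumptions):

    def split_back(box, mode, left_shape):
        # Assign a box (x1, y1, x2, y2) predicted on the stitched image
        # back to the left or right original rear-view image, according to
        # the stitching mode used; left_shape is (height, width) of the
        # left image.
        h, w = left_shape
        x1, y1, x2, y2 = box
        if mode == "height":   # images were placed side by side
            return ("left", box) if x2 <= w else ("right", (x1 - w, y1, x2 - w, y2))
        if mode == "width":    # images were stacked vertically
            return ("left", box) if y2 <= h else ("right", (x1, y1 - h, x2, y2 - h))
        if mode == "channel":  # overlapped: both images share one coordinate frame
            return ("both", box)
        raise ValueError("unknown stitching mode: " + mode)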
Here, it should be noted that the rear-view vehicle detection scheme proposed by the embodiment of the present invention is not limited to a specific detection algorithm: a single-stage detection algorithm (e.g., SSD, YOLO) or a two-stage detection algorithm (e.g., Fast R-CNN) may equally be applied within the rear-view vehicle detection framework provided by the embodiment of the present invention.
(3) Rear-view obstacle detection unit
All objects detected by the radar are referred to herein as obstacles. The radar detection result is projected onto the rear-view image acquired by the camera sensor to obtain the position information, in the rear-view image, of the obstacles detected by the radar.
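The disclosure does not fix a camera model; as one common sketch, assuming the radar detection has already been transformed into the camera coordinate frame and that a standard 3x3 intrinsic matrix K is available from calibration (both assumptions, as are the numeric values below):

    import numpy as np

    def project_to_image(point_cam, K):
        # Project a 3-D point (x, y, z) given in camera coordinates, with
        # the z axis pointing forward, onto the image plane using the
        # pinhole model.
        x, y, z = point_cam
        if z <= 0:
            return None                       # behind the camera, not visible
        u, v, _ = (K @ np.asarray(point_cam, dtype=float)) / z
        return float(u), float(v)

    K = np.array([[800.0,   0.0, 640.0],      # assumed focal lengths
                  [  0.0, 800.0, 360.0],      # and principal point
                  [  0.0,   0.0,   1.0]])
    pixel = project_to_image((2.0, 0.5, 10.0), K)   # -> (800.0, 400.0)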
Correlation matching is performed between the vehicle position information of the rear-view vehicle in the rear-view image obtained in (2) and the obstacle position information detected by the radar in the rear-view image, so as to obtain the obstacle position information corresponding to the rear-view vehicle. Here, since the obstacle position information corresponding to the rear-view vehicle is position information in two-dimensional coordinates, in practical applications it is also necessary to back-project the two-dimensional position information of the rear-view vehicle to obtain its position information in three-dimensional coordinates, thereby obtaining the relative position information between the autonomous vehicle and the rear-view vehicle.
In practical application, calibration parameters of the radar sensor and the camera sensor can be respectively obtained, where the calibration parameters are parameters set when the camera sensor or the radar sensor is installed, such as the height of the camera center point and the emission angle of the radar signal; back-projection processing is then carried out on the two-dimensional vehicle position information and obstacle position information of the rear-view vehicle based on the acquired calibration parameters to obtain the position information of the rear-view vehicle in three-dimensional coordinates, thereby obtaining the relative position information of the autonomous vehicle and the rear-view vehicle.
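One common way to realize such a back projection, sketched here under a flat-ground assumption (the method, the intrinsic values, and the 1.2 m mounting height are assumptions; the disclosure only states that calibration parameters are used):

    import numpy as np

    def back_project_ground(pixel, K, cam_height_m):
        # Back-project an image point assumed to lie on the ground plane
        # into camera coordinates, using the calibrated intrinsics K and
        # the mounting height of the camera; the camera y axis is assumed
        # to point downward toward the ground.
        ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
        if ray[1] <= 0:
            return None                 # viewing ray never meets the ground
        scale = cam_height_m / ray[1]
        return ray * scale              # (x, y, z) of the ground contact point

    K = np.array([[800.0,   0.0, 640.0],
                  [  0.0, 800.0, 360.0],
                  [  0.0,   0.0,   1.0]])
    point = back_project_ground((800.0, 440.0), K, cam_height_m=1.2)
    # -> about (2.4, 1.2, 12.0): the rear-view vehicle's ground contact
    #    point roughly 12 m away and 2.4 m to the side of the camera.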
After the relative position information of the autonomous vehicle and the rear-view vehicle is obtained, the relative speed of the autonomous vehicle and its rear-view vehicle can be acquired; a safe lane is determined among the lanes adjacent to the current driving lane of the target vehicle based on the relative position information and the relative speed; and the target vehicle is controlled to switch from the current driving lane to the safe lane. In practical implementation, in the process of controlling the autonomous vehicle to change lanes, the obstacle detection result, the lane line detection result, and the like within the forward-view range also need to be combined to ensure safe driving.
By applying the embodiment of the invention, in the first aspect, a rear-view obstacle sensing system for the lane-change scenario of an autonomous vehicle is provided by adding sensors for the left and right rear-view fields, which effectively provides multiple sensing observations for the rear-view obstacle detection scheme, ensures the robustness of the sensing system, and realizes the lane-change function of the autonomous vehicle while driving; in the second aspect, through the parallelized rear-view image vehicle detection scheme, parallelized prediction processing can be performed simultaneously on the rear-view images acquired by the left and right rear camera sensors, and the vehicle detection results of the camera sensors on both sides are obtained at the same time; in the third aspect, the rear-view vehicle detection scheme provided by the embodiment of the invention is not limited to sensors of a specific specification or model, and has strong extensibility.
Continuing with the description of the vehicle control device 355 provided by embodiments of the present invention, in some embodiments the vehicle control device can be implemented as software modules. Referring to FIG. 13, FIG. 13 is a schematic structural diagram of a vehicle control device 355 according to an embodiment of the present invention; the vehicle control device 355 according to the embodiment of the present invention includes:
an obtaining module 3551, configured to obtain a dynamic detection result and a static detection result of a rear-view vehicle of a target vehicle;
a first determining module 3552, configured to determine obstacle position information corresponding to the target vehicle and vehicle position information of a rear-view vehicle of the target vehicle by combining the dynamic detection result and the static detection result;
a second determining module 3553, configured to determine relative position information of the target vehicle and the rear-view vehicle based on the vehicle position information and the obstacle position information;
a control module 3554 configured to control the target vehicle to perform lane switching based on the relative position information.
In some embodiments, the obtaining module 3551 is further configured to perform radar detection on the rear-view vehicle of the target vehicle and use the resulting radar detection result as the dynamic detection result;
and to acquire an image including the rear-view vehicle of the target vehicle and use the acquired image as the static detection result.
In some embodiments, the first determining module 3552 is further configured to project the radar detection result onto the image, so as to obtain obstacle position information corresponding to a target vehicle detected by the radar in the image;
and carrying out vehicle identification on the image to obtain the vehicle position information of the rear-view vehicle in the image.
In some embodiments, the image comprises: a first image corresponding to the left side of the body of the target vehicle and including a rear-view vehicle, and a second image corresponding to the right side of the body of the target vehicle and including a rear-view vehicle;
the first determining module 3552 is further configured to stitch the first image and the second image to obtain a stitched image;
and carrying out vehicle identification on the spliced image to obtain the vehicle position information of the rear-view vehicle in the image.
In some embodiments, the first determining module 3552 is further configured to obtain image parameters of the first image and the second image, respectively;
when the image parameter is an image channel, overlapping the first image and the second image according to the image channels of the first image and the second image to obtain a spliced image;
when the image parameter is the image height, transversely splicing the first image and the second image according to the image heights of the first image and the second image to obtain a spliced image;
and when the image parameter is the image width, longitudinally splicing the first image and the second image according to the image widths of the first image and the second image to obtain a spliced image.
In some embodiments, the first determining module 3552 is further configured to perform feature extraction on the image through a neural network model, so as to obtain a feature map corresponding to the image;
carrying out vehicle identification on the feature map corresponding to the image through the neural network model, and predicting to obtain coordinate information of the rear-view vehicle in the feature map;
and obtaining the vehicle position information of the rear-view vehicle in the image based on the coordinate information of the rear-view vehicle in the feature map and the downsampling magnification of the feature map relative to the image.
In some embodiments, the second determining module 3553 is further configured to match the vehicle position information with the obstacle position information to obtain obstacle position information corresponding to the rear-view vehicle in the obstacle position information;
and carrying out back projection processing on the vehicle position information and the obstacle position information corresponding to the rear-view vehicle to obtain the relative position information of the target vehicle and the rear-view vehicle.
In some embodiments, the vehicle position information is acquired by a camera sensor, and the obstacle position information is acquired by radar detection;
the second determining module 3553 is further configured to obtain calibration parameters of the radar and the camera sensor, respectively;
and carrying out back projection processing on the vehicle position information and the obstacle position information corresponding to the rear-view vehicle based on the acquired calibration parameters to obtain the relative position information of the target vehicle and the rear-view vehicle.
In some embodiments, the control module 3554 is further configured to obtain a relative speed of the target vehicle with a rear-view vehicle of the target vehicle;
determining a safe lane among lanes adjacent to a current driving lane of the target vehicle based on the relative position information and the relative speed;
controlling the target vehicle to switch from a current driving lane to the safe lane.
By applying the embodiment of the invention, the dynamic detection result and the static detection result of the rear-view vehicle of the autonomous vehicle are obtained; the obstacle position information within the rear-view range of the target vehicle and the vehicle position information of the rear-view vehicle of the target vehicle are then obtained based on the dynamic and static detection results; the relative position information of the autonomous vehicle and the rear-view vehicle is determined based on the vehicle position information and the obstacle position information; and the autonomous vehicle is controlled to switch lanes based on the relative position information. In this way, by detecting the relative position of the autonomous vehicle and the rear-view vehicle, lane-change processing of the autonomous vehicle during driving can be realized, and the autonomy level of the autonomous vehicle is improved.
An embodiment of the present invention further provides an electronic device, where the electronic device includes:
a memory for storing executable instructions;
and the processor is used for realizing the vehicle control method provided by the embodiment of the invention when executing the executable instructions stored in the memory.
Embodiments of the present invention also provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the vehicle control method provided by the embodiment of the invention.
The embodiment of the invention also provides a computer-readable storage medium, which stores executable instructions, and when the executable instructions are executed by a processor, the vehicle control method provided by the embodiment of the invention is realized.
In some embodiments, the computer-readable storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories. The computer may be a variety of computing devices including intelligent terminals and servers.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, for example in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
The above description is only an example of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present invention are included in the protection scope of the present invention.

Claims (6)

1. A vehicle control method, characterized by comprising:
performing radar detection respectively on rear-view vehicles on the left and right sides of the vehicle body of a target vehicle through radars on the left and right sides of the vehicle body of the target vehicle, to obtain radar detection results; and
acquiring, through camera sensors on the left and right sides of the vehicle body of the target vehicle, rear-view images including rear-view vehicles on the left and right sides of the vehicle body, the rear-view images including a first image corresponding to the left side of the vehicle body and a second image corresponding to the right side of the vehicle body;
stitching the first image and the second image according to one of the following stitching modes to obtain a stitched image: superposing the first image and the second image according to the image channel; transversely stitching the first image and the second image according to the image height; longitudinally stitching the first image and the second image according to the image width;
carrying out vehicle identification on the stitched image to obtain coordinate information of the rear-view vehicle in a feature map of the stitched image, and restoring the coordinate information to the image coordinates of the rear-view image in combination with the stitching mode adopted, to obtain the vehicle position information of the rear-view vehicle in the rear-view image;
projecting the radar detection result to the rear-view image to obtain obstacle position information corresponding to the target vehicle detected by the radar in the rear-view image;
matching the vehicle position information with the obstacle position information to obtain obstacle position information corresponding to the rearview vehicle in the obstacle position information;
carrying out back projection processing on the vehicle position information and the obstacle position information corresponding to the rearview vehicle based on calibration parameters of the radar and the camera sensor to obtain the relative position information of the target vehicle and the rearview vehicle;
and controlling the target vehicle to perform lane switching based on the relative position information.
2. The method according to claim 1, wherein the carrying out vehicle identification on the stitched image to obtain the coordinate information of the rear-view vehicle in the feature map of the stitched image, and restoring the coordinate information to the image coordinates of the rear-view image in combination with the stitching mode adopted, to obtain the vehicle position information of the rear-view vehicle in the rear-view image, comprises:
performing downsampling processing on the stitched image according to a downsampling magnification, and performing feature extraction on the downsampled stitched image through a neural network model to obtain a feature map corresponding to the stitched image;
carrying out vehicle identification on the feature map corresponding to the stitched image through the neural network model, and predicting the coordinate information of the rear-view vehicle in the feature map;
and restoring the coordinate information to the image coordinates of the rear-view image based on the downsampling magnification and in combination with the stitching mode adopted, to obtain the vehicle position information of the rear-view vehicle in the rear-view image.
3. The method of claim 1, wherein the controlling the target vehicle for lane switching based on the relative position information comprises:
acquiring relative speeds of the target vehicle and a rearview vehicle of the target vehicle;
determining a safe lane in a lane adjacent to the current driving lane of the target vehicle based on the relative position information and the relative speed;
controlling the target vehicle to switch from a current driving lane to the safe lane.
4. A vehicle control apparatus, characterized in that the apparatus comprises:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for respectively carrying out radar detection on rear-view vehicles on the left side and the right side of a vehicle body of a target vehicle through radars on the left side and the right side of the vehicle body of the target vehicle to obtain radar detection results, and acquiring rear-view images including the rear-view vehicles on the left side and the right side of the vehicle body of the target vehicle through camera sensors on the left side and the right side of the vehicle body of the target vehicle, wherein the rear-view images include a first image corresponding to the left side of the vehicle body and a second image corresponding to the right side of the vehicle body;
the first determining module is configured to stitch the first image and the second image according to one of the following stitching manners to obtain a stitched image: superposing the first image and the second image according to an image channel; transversely splicing the first image and the second image according to the image height; longitudinally splicing the first image and the second image according to the image width;
the first determining module is further configured to perform vehicle identification on the stitched image to obtain coordinate information of the rear-view vehicle in the feature map of the stitched image, and restore the coordinate information to the image coordinate of the rear-view image in combination with the adopted stitching manner to obtain vehicle position information of the rear-view vehicle in the rear-view image; projecting the radar detection result to the rear-view image to obtain obstacle position information corresponding to the target vehicle detected by the radar in the rear-view image;
the second determining module is used for matching the vehicle position information with the obstacle position information to obtain obstacle position information corresponding to the rearview vehicle in the obstacle position information; carrying out back projection processing on the vehicle position information and the obstacle position information corresponding to the rearview vehicle based on calibration parameters of the radar and the camera sensor to obtain the relative position information of the target vehicle and the rearview vehicle;
and the control module is used for controlling the target vehicle to carry out lane switching based on the relative position information.
5. An electronic device, characterized in that the electronic device comprises:
a memory for storing executable instructions;
a processor for implementing the vehicle control method of any one of claims 1 to 3 when executing executable instructions stored in the memory.
6. A computer readable storage medium storing executable instructions, wherein the executable instructions, when executed by a processor, implement the vehicle control method of any one of claims 1 to 3.
CN202010871155.8A 2020-08-26 2020-08-26 Vehicle control method and device Active CN111959511B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010871155.8A CN111959511B (en) 2020-08-26 2020-08-26 Vehicle control method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010871155.8A CN111959511B (en) 2020-08-26 2020-08-26 Vehicle control method and device

Publications (2)

Publication Number Publication Date
CN111959511A CN111959511A (en) 2020-11-20
CN111959511B (en) 2022-06-03

Family

ID=73390457

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010871155.8A Active CN111959511B (en) 2020-08-26 2020-08-26 Vehicle control method and device

Country Status (1)

Country Link
CN (1) CN111959511B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113619606A (en) * 2021-09-17 2021-11-09 中国第一汽车股份有限公司 Obstacle determination method, apparatus, device and storage medium
CN117077407B (en) * 2023-08-18 2024-01-30 北京华如科技股份有限公司 Target detection method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110032949A (en) * 2019-03-22 2019-07-19 北京理工大学 A kind of target detection and localization method based on lightweight convolutional neural networks
CN110466512A (en) * 2019-07-25 2019-11-19 东软睿驰汽车技术(沈阳)有限公司 A kind of vehicle lane change method, apparatus and system
CN110827197A (en) * 2019-10-08 2020-02-21 武汉极目智能技术有限公司 Method and device for detecting and identifying vehicle all-round looking target based on deep learning

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160115448A (en) * 2015-03-27 2016-10-06 주식회사 만도 Driving assistant system in a vehicle and method thereof
CN106926779B (en) * 2017-03-09 2019-10-29 吉利汽车研究院(宁波)有限公司 A kind of vehicle lane change auxiliary system
CN107154022B (en) * 2017-05-10 2019-08-27 北京理工大学 A kind of dynamic panorama mosaic method suitable for trailer
CN110356339B (en) * 2018-03-26 2022-07-15 比亚迪股份有限公司 Lane change blind area monitoring method and system and vehicle
CN110386065B (en) * 2018-04-20 2021-09-21 比亚迪股份有限公司 Vehicle blind area monitoring method and device, computer equipment and storage medium
CN108639048B (en) * 2018-05-15 2020-03-03 智车优行科技(北京)有限公司 Automobile lane change assisting method and system and automobile
CN110533958A (en) * 2018-05-24 2019-12-03 上海博泰悦臻电子设备制造有限公司 Vehicle lane change based reminding method and system
CN110936893B (en) * 2018-09-21 2021-12-14 驭势科技(北京)有限公司 Blind area obstacle processing method and device, vehicle-mounted equipment and storage medium
KR20190083317A (en) * 2019-06-20 2019-07-11 엘지전자 주식회사 An artificial intelligence apparatus for providing notification related to lane-change of vehicle and method for the same

Also Published As

Publication number Publication date
CN111959511A (en) 2020-11-20

Similar Documents

Publication Publication Date Title
CN109117709B (en) Collision avoidance system for autonomous vehicles
CN109421738B (en) Method and apparatus for monitoring autonomous vehicles
US20230236602A1 (en) Systems and Methods for Controlling an Autonomous Vehicle with Occluded Sensor Zones
CN111442776B (en) Method and equipment for sequential ground scene image projection synthesis and complex scene reconstruction
US9815462B2 (en) Path determination for automated vehicles
CN111959511B (en) Vehicle control method and device
CN111874006A (en) Route planning processing method and device
CN103730026A (en) Apparatus and method for determining parking area
CN105684039B (en) Condition analysis for driver assistance systems
US11415997B1 (en) Autonomous driving simulations based on virtual simulation log data
CN108944920A (en) It is generated in road vehicle application program and using the method and system of perception scene figure
DE112018004891T5 (en) IMAGE PROCESSING DEVICE, IMAGE PROCESSING PROCESS, PROGRAM AND MOBILE BODY
CN113561963A (en) Parking method and device and vehicle
US11702044B2 (en) Vehicle sensor cleaning and cooling
CN114194190A (en) Lane maneuver intention detection system and method
US10860868B2 (en) Lane post-processing in an autonomous driving vehicle
CN115257768A (en) Intelligent driving vehicle environment sensing method, system, equipment and medium
DE102023104789A1 (en) TRACKING OF MULTIPLE OBJECTS
CN117130298A (en) Method, device and storage medium for evaluating an autopilot system
CN113386738A (en) Risk early warning system, method and storage medium
CN113650607B (en) Low-speed scene automatic driving method, system and automobile
CN113071515B (en) Movable carrier control method, device, movable carrier and storage medium
CN113895429A (en) Automatic parking method, system, terminal and storage medium
CN116626670A (en) Automatic driving model generation method and device, vehicle and storage medium
CN111655542A (en) Data processing method, device and equipment and movable platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant