CN113077428A - Collision detection method and device, vehicle-mounted terminal and storage medium


Info

Publication number
CN113077428A
CN113077428A, CN113077428B (application number CN202110332876.6A)
Authority
CN
China
Prior art keywords
grid
image
detected
obstacle
area
Prior art date
Legal status
Granted
Application number
CN202110332876.6A
Other languages
Chinese (zh)
Other versions
CN113077428B (en)
Inventor
王路遥
王俊杰
Current Assignee
Shanghai OFilm Smart Car Technology Co Ltd
Original Assignee
Shanghai OFilm Smart Car Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai OFilm Smart Car Technology Co Ltd
Priority to CN202110332876.6A
Publication of CN113077428A
Application granted
Publication of CN113077428B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/20Drawing from basic elements, e.g. lines or circles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The application discloses a collision detection method and device, a vehicle-mounted terminal and a storage medium, belonging to the technical field of data processing. The method includes: generating a multi-layer grid image with a pyramid structure from an obstacle image; taking the topmost grid image as the current grid image and acquiring the region to be detected corresponding to the target vehicle on the current grid image; detecting whether each grid in the region to be detected lies within an obstacle grid where an obstacle is located; when a first grid in the region to be detected lies within an obstacle grid, determining the target grid area corresponding to the first grid in the next-layer grid image, taking the next-layer grid image as the current grid image and the target grid area as the region to be detected, and detecting the region to be detected again; and if such a first grid exists in the region to be detected of the bottommost grid image, determining that the target vehicle collides. The scheme can reduce the amount of data used in collision detection and improve detection efficiency.

Description

Collision detection method and device, vehicle-mounted terminal and storage medium
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a collision detection method and apparatus, a vehicle-mounted terminal, and a storage medium.
Background
With the rapid development of science and technology, most existing vehicles are equipped with vehicle-mounted terminals, through which the vehicle can sense its surroundings, plan paths and perform other functions. During automatic parking, automatic driving, automatic path planning and similar processes, information about surrounding obstacles needs to be detected by the vehicle-mounted terminal to judge whether the vehicle will collide with them. At present, detecting whether a vehicle collides with a surrounding obstacle is done with a preset collision detection algorithm: on a grid map, every grid generally has to be checked, and the result is obtained only after all grids have been traversed.
In the above technical solution, every grid needs to be detected, so the amount of data to be processed is huge, the detection process takes a long time, and detection efficiency is reduced.
Disclosure of Invention
The embodiments of the present application provide a collision detection method and device, a vehicle-mounted terminal and a storage medium, which can reduce the time consumed by the detection process and improve the efficiency of detecting whether a vehicle will collide.
In one aspect, an embodiment of the present application provides a collision detection method, where the method includes:
generating a multilayer raster image with a pyramid structure according to an obstacle image, wherein the image precision and the number of layers of each layer of raster image in the multilayer raster image are in an inverse correlation relationship;
taking the topmost grid image as a current grid image, and acquiring a to-be-detected area corresponding to the target vehicle on the current grid image;
detecting whether each grid in the area to be detected is in the obstacle grid or not according to the obstacle grid in the current grid image;
when a first grid in the area to be detected is located in the obstacle grid, determining a target grid area corresponding to the first grid in a next grid image, taking the next grid image as a new current grid image, taking the target grid area as an area to be detected of the new current grid image, and re-executing the step of detecting whether each grid in the area to be detected is located in the obstacle grid according to the obstacle grid in the current grid image until the current grid image is a bottommost grid image;
and if the first grid in the barrier grid exists in the area to be detected of the bottommost grid image, determining that the target vehicle collides.
In the embodiment of the application, a multi-layer grid image with a pyramid structure is established. Starting from the topmost image, the topmost image is taken as the current grid image, the region to be detected of the target vehicle on the current grid image is acquired, and whether each grid in the region to be detected lies within an obstacle grid is detected. When a first grid in the region to be detected lies within an obstacle grid, the target grid area corresponding to the first grid in the next-layer grid image is determined, the next-layer grid image is taken as the current grid image and the target grid area as the region to be detected, and the process repeats until the current grid image is the bottommost grid image, thereby completing the collision detection of the vehicle on the grid images. If a grid of the topmost grid image lies within an obstacle grid, the collision condition of the corresponding grids in the next layer continues to be judged; for grids that do not lie within an obstacle grid, no collision detection is needed at the next layer. Because collision detection is only refined for grids that lie within obstacle grids, the amount of data processed is reduced, detection time is shortened, and detection efficiency is improved.
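As an illustration of this coarse-to-fine loop, the following is a minimal sketch rather than code from the patent: the grid images are assumed to be boolean numpy masks with True marking obstacle grids, layer 0 is the bottommost full-precision image, each coarse grid is assumed to cover a 2×2 block one layer down, and cells_in_next_layer is a hypothetical helper expressing that mapping. For simplicity all hit grids of a layer are refined in one batch, which yields the same outcome as the step-by-step loop described above.

```python
from typing import List, Tuple
import numpy as np

Cell = Tuple[int, int]        # (row, col) of a grid cell
Region = List[Cell]           # the cells to test on one layer


def cells_in_next_layer(cell: Cell, scale: int = 2) -> Region:
    """Map a coarse-layer cell to the block of cells it covers one layer down."""
    r, c = cell
    return [(r * scale + dr, c * scale + dc)
            for dr in range(scale) for dc in range(scale)]


def pyramid_collision_check(pyramid: List[np.ndarray], top_region: Region) -> bool:
    """Coarse-to-fine check: True if an obstacle hit survives down to the bottommost layer.

    pyramid[0] is the bottommost (highest-precision) obstacle mask,
    pyramid[-1] the topmost (lowest-precision) one; True marks an obstacle grid.
    """
    layer = len(pyramid) - 1              # start on the topmost grid image
    region = top_region                   # region to be detected on that layer
    while True:
        h, w = pyramid[layer].shape
        hits = [(r, c) for (r, c) in region
                if 0 <= r < h and 0 <= c < w and pyramid[layer][r, c]]
        if layer == 0:                    # bottommost grid image reached
            return len(hits) > 0          # collision iff a "first grid" remains here
        if not hits:                      # no grid lies in an obstacle grid: no collision
            return False
        # refine only the hit grids on the next (finer) layer
        region = [f for cell in hits for f in cells_in_next_layer(cell)]
        layer -= 1
```

A call such as pyramid_collision_check(pyramid, top_region), where top_region is the region to be detected on the topmost grid image, then reports whether the target vehicle would collide.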
As an optional implementation manner, in an aspect of the embodiment of the present application, before the detecting, according to an obstacle grid in the current grid image, whether each grid in the area to be detected is within the obstacle grid, the method further includes:
screening each grid in the region to be detected to obtain each grid in a model grid region, wherein the model grid region is a grid region occupied by a vehicle model of the target vehicle on the current grid image;
the detecting whether each grid in the area to be detected is located in the obstacle grid according to the obstacle grid in the current grid image includes:
and detecting whether each grid in the to-be-detected region, which is located in the model grid region, is located in the obstacle grid or not according to the obstacle grid in the current grid image.
In the embodiment of the application, before detecting whether each grid in the to-be-detected area is in the obstacle grid according to the obstacle grid in the current grid image, whether each grid is in the grid area occupied by the vehicle model is also detected, so that the scheme only needs to detect the grids in the grid area occupied by the vehicle model, the data volume of the collision detection grid is reduced, the detection time is shortened, and the detection efficiency is improved.
As an optional implementation manner, in an aspect of the embodiment of the present application, the vehicle model is a quadrilateral, and the screening the grids in the region to be detected to obtain the grids in the model grid region includes:
for the angular points of each grid contained in the region to be detected, obtaining a vector cross product result between the angular point of each grid and the four angular points of the quadrangle;
detecting whether the corner points of each grid are in the grid region of the model or not according to the vector cross multiplication result;
when a first corner point of the corner points of each grid is in the model grid region, determining that each grid adjacent to the first corner point is in the model grid region;
and when a first corner point of the corner points of each grid is not in the model grid region, determining that each grid adjacent to the first corner point is not in the model grid region.
In the embodiment of the application, whether a certain corner point is in the grid region of the model is judged through the cross multiplication result of the vector, and when the corner point is in the grid region of the model, the grid adjacent to the corner point is determined to be in the grid region of the model, so that the accuracy of judging whether the certain grid is in the vehicle model can be improved.
As an optional implementation manner, in an aspect of the embodiment of the present application, before obtaining a result of cross-multiplication of vectors between a corner point of each grid and four corner points of the quadrangle, for a corner point of each grid included in the region to be detected, the method further includes:
and sequentially acquiring the corner points of each grid contained in the area to be detected according to the step length of two grids.
In the embodiment of the application, the corner points of the grids in the region to be detected can be extracted at intervals of two grids, and only the extracted corner points are detected, which reduces the amount of data needed to determine whether grids lie in the grid area occupied by the vehicle model and improves detection efficiency.
As an optional implementation manner, in an aspect of the embodiment of the present application, the obstacle image is a raster image at a bottommost layer in the multi-layer raster images, and before the obtaining, by using the topmost raster image as a current raster image, a corresponding to-be-detected region on the current raster image of the target vehicle, the method further includes:
acquiring four corner point grids of a vehicle model of the target vehicle in the obstacle image;
determining that the target vehicle has collided when a first corner grid is within the obstacle grid in the obstacle image, the first corner grid being any one of the four corner grids;
and when the grids of all the corner points are not positioned in the barrier grids in the barrier image, executing the step of taking the topmost grid image as the current grid image and acquiring the corresponding to-be-detected area of the target vehicle on the current grid image.
In the embodiment of the application, the collision condition of the corner grids is judged in advance on the obstacle image, and detection starts from the topmost grid image only when no corner grid collides, which avoids spending time on the subsequent detection steps when one of the four corner grids of the vehicle already collides and improves detection efficiency.
As an optional implementation manner, in an aspect of the embodiment of the present application, the acquiring a corresponding region to be detected of the target vehicle on the current raster image includes:
acquiring a grid area corresponding to a circumscribed rectangle of the vehicle model of the target vehicle on the current grid image;
and taking the grid region corresponding to the circumscribed rectangle as the region to be detected.
In the embodiment of the application, a circumscribed rectangle is constructed around the vehicle model of the target vehicle and the grid region corresponding to the circumscribed rectangle is obtained, which avoids the detection errors caused by the irregular grid region that the vehicle model occupies on the grid map and improves detection accuracy.
As an optional implementation manner, in an aspect of the embodiment of the present application, the detecting whether each grid in the area to be detected is located in an obstacle grid according to the obstacle grid in the current grid image includes:
acquiring each grid in the area to be detected according to a first preset sequence;
and detecting whether the acquired grid is in the barrier grid or not according to the barrier grid in the current grid image.
In the embodiment of the application, the grids in the region to be detected are detected one by one in the first preset order, which improves the efficiency and accuracy of the detection process.
As an optional implementation manner, in an aspect of the embodiments of the present application, the method further includes:
if no first grid lying within an obstacle grid exists in the region to be detected of the bottommost grid image, detecting whether the step of detecting, according to the obstacle grid in the current grid image, whether the acquired grid lies within the obstacle grid has been executed for every grid in the region to be detected of the topmost grid image;
when the step has been executed for every grid in the region to be detected of the topmost grid image, determining that the target vehicle does not collide;
when the step has not yet been executed for every grid in the region to be detected of the topmost grid image, executing the step of acquiring each grid in the region to be detected according to the first preset order.
In the embodiment of the application, if no first grid lying within an obstacle grid exists in the region to be detected of the bottommost grid image, it is further checked whether collision detection has been performed on all grids in the region to be detected of the topmost grid image; if it has, no first grid within an obstacle grid exists down to the bottommost grid image, so it is determined that the target vehicle does not collide, which improves the accuracy of the judgment that the target vehicle does not collide.
As an optional implementation manner, in an aspect of this embodiment of the present application, before the generating a multi-layer raster image of a pyramid structure according to an obstacle image, the method further includes:
acquiring the number of layers of a multilayer raster image of a pyramid structure to be generated;
obtaining each scaling value corresponding to each layer number according to the layer number;
the generating of the multi-layer raster image of the pyramid structure according to the obstacle image includes:
and scaling the obstacle image according to each scaling value to generate a multilayer grid image with the pyramid structure with the layer number.
In the embodiment of the application, the vehicle-mounted terminal can further generate the grid image of the pyramid structure by acquiring the number of layers of the pyramid structure and acquiring the corresponding number of scaling values and scaling the obstacle image according to the scaling values, so that the flexibility and diversity of the pyramid structure generation are improved.
As an optional implementation manner, in an aspect of the embodiments of the present application, the acquiring the number of layers of the multilayer raster image of the pyramid structure to be generated includes:
acquiring environmental parameters of the surrounding environment of the target vehicle, wherein the environmental parameters comprise one or more of the movement speed of the surrounding environment relative to the target vehicle and the weather condition of the surrounding environment;
and acquiring the number of layers of the multilayer raster image of the pyramid structure to be generated according to the environment parameters.
In the embodiment of the application, the vehicle-mounted terminal acquires the number of layers of the pyramid structure to be generated by combining the environment parameters, so that the generated grid image of the pyramid structure is more consistent with the environment where the vehicle is located, and the accuracy and flexibility of detection are improved.
As an optional implementation manner, in an aspect of the embodiment of the present application, after determining that the target vehicle collides if the first grid exists in the region to be detected of the lowermost grid image, the method further includes:
acquiring a first coincidence region in a bottommost grid image, wherein the first coincidence region is a grid region where a vehicle model of the target vehicle coincides with the obstacle grid;
and generating a first driving instruction according to the first overlapping area and the model grid area, wherein the first driving instruction is used for controlling the target vehicle to drive so that the vehicle model of the target vehicle does not collide with the obstacle.
In the embodiment of the application, after the vehicle-mounted terminal judges that the target vehicle collides, the vehicle-mounted terminal can also generate the first driving instruction to control the target vehicle to drive again, so that the target vehicle does not collide with the obstacle, and the driving safety of the vehicle is improved.
In another aspect, an embodiment of the present application provides a collision detection apparatus, including:
the image generation module is used for generating a multilayer raster image with a pyramid structure according to an obstacle image, wherein the image precision of each layer of raster image in the multilayer raster image is in an inverse correlation relation with the number of layers;
the area acquisition module is used for taking the topmost grid image as a current grid image and acquiring a corresponding area to be detected of the target vehicle on the current grid image;
the area detection module is used for detecting whether each grid in the area to be detected is located in the obstacle grid or not according to the obstacle grid in the current grid image;
a circular execution module, configured to determine, when a first grid in the area to be detected is located in the obstacle grid, a target grid area corresponding to a next-layer grid image of the first grid, use the next-layer grid image as a new current grid image, use the target grid area as an area to be detected of the new current grid image, and execute the step of detecting, according to the obstacle grid in the current grid image, whether each grid in the area to be detected is located in the obstacle grid again until the current grid image is a bottom-layer grid image;
and the collision determining module is used for determining that the target vehicle collides if a first grid in the barrier grid exists in the to-be-detected region of the bottommost grid image.
In another aspect, an embodiment of the present application provides an in-vehicle terminal, including a memory and a processor, where the memory stores a computer program, and when the computer program is executed by the processor, the processor is enabled to implement the collision detection method according to the above aspect and any optional implementation manner thereof.
In another aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the collision detection method according to the above aspect and its optional implementation manners.
In another aspect, the present application provides a computer program product, which when run on a computer, causes the computer to execute the collision detection method according to the above one aspect.
In another aspect, an application publishing platform is provided, and is configured to publish a computer program product, where when the computer program product runs on a computer, the computer is caused to perform the collision detection method according to the above aspect.
The technical scheme provided by the embodiment of the application can at least comprise the following beneficial effects:
by establishing a multi-layer grid image with a pyramid structure, starting from the topmost image, taking the topmost image as a current grid image, acquiring a to-be-detected area of a target vehicle on the current grid image, detecting whether each grid in the to-be-detected area is in an obstacle grid, when a first grid in the to-be-detected area is in the obstacle grid, determining a target grid area corresponding to the first grid in a next grid image, taking the next grid image as the current grid image, taking the target grid area as the to-be-detected area, continuing to perform collision detection on the to-be-detected area until the current grid image is the bottommost grid image, and completing collision detection of the vehicle on each grid image, wherein the image precision of each grid image in the multi-layer grid image is in an anti-correlation relation with the number of layers, if a certain grid in the topmost grid image is not in the obstacle grid, the grid is determined to have no collision condition in each grid on the next layer corresponding to the grid, and collision detection in the next layer is not needed.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flow chart of a method of collision detection provided by an exemplary embodiment of the present application;
FIG. 2 is a schematic diagram of a grid image of a two-layer pyramid structure according to an exemplary embodiment of the present application;
FIG. 3 is a flow chart of a method of collision detection provided by an exemplary embodiment of the present application;
FIG. 4 is a schematic diagram of a structure of a region to be detected according to an exemplary embodiment of the present application;
FIG. 5 is a schematic diagram of a current raster image structure to which an exemplary embodiment of the present application relates;
FIG. 6 is a flow chart of a method of collision detection provided by an exemplary embodiment of the present application;
fig. 7 is a block diagram of a collision detection apparatus according to an exemplary embodiment of the present application;
fig. 8 is a schematic structural diagram of a vehicle-mounted terminal according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
Reference herein to "a plurality" means two or more. "And/or" describes an association relationship between associated objects and means that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
It should be noted that the terms "first", "second", "third" and "fourth", etc. in the description and claims of the present application are used for distinguishing different objects, and are not used for describing a specific order. The terms "comprises," "comprising," and "having," and any variations thereof, of the embodiments of the present application, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The scheme provided by the application can be used in scenarios such as automatic parking, automatic driving and automatic path planning, in which a vehicle-mounted terminal needs to judge whether the vehicle will collide with surrounding obstacles. For ease of understanding, the application architecture related to the embodiments of the application is briefly introduced below.
In daily life, safe driving of a vehicle is crucial. In particular, the change in the position of an obstacle in the environment around the vehicle has a great influence on the safety of the vehicle in traveling, regardless of whether the vehicle is traveling or is parked, and therefore, it is important to detect a collision between the vehicle and the obstacle in the environment around the vehicle.
For example, the vehicle may use a camera to capture a fisheye image of its surrounding environment, and the vehicle-mounted terminal may identify the fisheye image to learn which obstacles the surrounding environment contains, display the area occupied by the obstacles on a grid map, and judge whether the vehicle collides with the obstacles by combining this with the area the vehicle model occupies on the grid map. For example, in automatic parking based on vision or ultrasonic radar, the vehicle-mounted terminal needs to plan a parking path that satisfies vehicle kinematics according to the detected parking space; while generating the parking path, it refers to the obstacle information around the vehicle obtained by visual detection (for example, the above-mentioned fisheye image captured by the camera) and ultrasonic detection (for example, data acquired by the ultrasonic radar), updates the obstacle information on the map, and calls a corresponding collision detection algorithm so that the parking path avoids the obstacles and no collision occurs.
In one collision detection scenario, for memory parking or automated valet parking projects, the vehicle is required to drive from point A to point B along a specified learned path or global path. When the collision detection algorithm finds that the vehicle would collide if it kept running along the original path, a path re-planning algorithm is called to plan an adjusted, obstacle-avoiding path; while generating this adjusted path, the collision detection algorithm must again be called to judge whether the vehicle collides with obstacles at different poses on the grid map. Generally, on a grid map, the passable area and the non-passable area (i.e. the area where obstacles are located) are identified by different pixel values, and the collision detection algorithm judges whether any of the grids occupied by the vehicle belongs to the non-passable area.
A common path planning algorithm is the hybrid A* algorithm, which must call the collision detection algorithm every time it expands a node, to judge whether the vehicle pose of the newly expanded node collides with an obstacle. In the path planning process, most of the time is consumed by the collision detection algorithm, while generating, sorting and inserting nodes takes only a small share, so improving the efficiency of the collision detection algorithm is an extremely important problem. For the collision detection process in the hybrid A* algorithm, most grids on the grid map are passable and collision-free, and most of the time consumed by the collision detection algorithm is spent judging grids in the passable area, which makes collision detection slow and time-consuming. As a result, the real-time requirement cannot be met when many nodes are expanded, and the vehicle has to stop in place and wait for the path re-planning algorithm to finish planning the path, which greatly reduces the continuity and convenience of vehicle running.
At present, the related art shortens the time consumed by collision detection simply by reducing the precision of the grid map, but as the map precision decreases, the accuracy of collision detection also decreases and false detections occur; the resulting low accuracy in turn affects the efficiency of collision detection.
In order to improve the detection efficiency of the collision detection algorithm on the grid map, the present application provides a solution that can reduce the map precision while maintaining the detection accuracy of the collision detection algorithm and improving its detection efficiency.
Referring to fig. 1, a method flow chart of a collision detection method according to an exemplary embodiment of the present application is shown. The collision detection method is applied to the vehicle-mounted terminal, and as shown in fig. 1, the collision detection method may include the following steps.
Step 101, generating a multilayer raster image with a pyramid structure according to an obstacle image, wherein the image precision of each layer of raster image in the multilayer raster image is in an inverse correlation relation with the number of layers.
The obstacle image may be a raster image generated by the in-vehicle terminal from an obstacle in the surrounding environment. For example, a passable area and a non-passable area (i.e., an area where an obstacle is located) are identified by different pixel values to generate an obstacle image, and the in-vehicle terminal generates a multi-layer raster image based on the obstacle image, where the higher the number of layers, the lower the image accuracy of each layer of raster image, that is, as the number of layers increases, the larger the scale of the raster image becomes, but the lower the accuracy becomes.
And 102, taking the topmost grid image as a current grid image, and acquiring a corresponding to-be-detected area of the target vehicle on the current grid image.
The topmost grid image is a layer of grid image with the highest layer number in the multilayer grid images with the pyramid structure, and the image accuracy of the layer of grid image is also the lowest.
Starting from the topmost raster image of the multi-layer raster images, the vehicle-mounted terminal takes the topmost raster image as the current raster image and acquires the region to be detected corresponding to the target vehicle on the current raster image. The region to be detected may be the grid region where the vehicle model of the target vehicle is located, or a grid region obtained by regularizing the shape of the vehicle model, where regularization may mean taking a circumscribed rectangle or circumscribed square.
Step 103, detecting whether each grid in the area to be detected is in the obstacle grid or not according to the obstacle grid in the current grid image.
In the process of generating the multi-layer grid image of the pyramid structure from the obstacle image, the grid images of the upper layers are generated in turn, starting from the bottommost obstacle image: for each grid of an upper-layer image, the corresponding grid area in the image one layer below is found, and if any grid in that area is an obstacle grid, the upper-layer grid is marked as an obstacle grid. The vehicle-mounted terminal detects whether each grid in the region to be detected of the current grid map lies within an obstacle grid and then executes the subsequent steps: if yes, go to step 104; otherwise go to step 107.
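A minimal sketch of this construction rule follows; it is not code from the patent and assumes each upper-layer grid covers a 2×2 block of the layer below, with obstacle grids marked True in a boolean numpy array:

```python
import numpy as np


def build_obstacle_pyramid(obstacle_image: np.ndarray, num_layers: int) -> list:
    """Build a pyramid of obstacle masks; layer 0 is the original (bottommost) image.

    An upper-layer grid is marked as an obstacle grid if any grid in the
    corresponding 2x2 block of the layer below is an obstacle grid.
    """
    pyramid = [obstacle_image.astype(bool)]
    for _ in range(1, num_layers):
        prev = pyramid[-1]
        h, w = prev.shape
        # pad to even size so every coarse cell covers a full 2x2 block
        padded = np.zeros(((h + 1) // 2 * 2, (w + 1) // 2 * 2), dtype=bool)
        padded[:h, :w] = prev
        coarse = (padded[0::2, 0::2] | padded[0::2, 1::2] |
                  padded[1::2, 0::2] | padded[1::2, 1::2])
        pyramid.append(coarse)
    return pyramid
```

With this construction, an upper-layer grid is clear only when every grid it covers below is clear, so skipping clear upper-layer grids can never miss a collision.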
And 104, when the first grid in the area to be detected is located in the barrier grid, determining a target grid area of the first grid corresponding to the next grid image, taking the next grid image as a new current grid image, and taking the target grid area as a new area to be detected of the current grid image.
Step 105, detecting whether the current raster image is the bottommost raster image.
If yes, go to step 106, otherwise return to step 103.
A first grid is any grid in the region to be detected that lies within an obstacle grid; that is, every grid in the region to be detected that falls inside an obstacle grid is a first grid. The vehicle-mounted terminal determines the target grid area corresponding to the first grid in the next-layer grid image, takes the next-layer grid image as the new current grid image and the target grid area as the region to be detected of the new current grid image, and returns to step 103 to perform collision detection on the new current grid image and its region to be detected.
For example, please refer to fig. 2, which illustrates a schematic diagram of a two-layer pyramid grid image according to an exemplary embodiment of the present application. The first-layer grid map 201 includes an area to be detected 202, a first grid 203 in the area to be detected 202 is located in an obstacle grid, the vehicle-mounted terminal can acquire a target grid area 205 corresponding to the first grid 203 in a next-layer grid image 204, take the next-layer grid image 204 as a new current grid image, take the target grid area 205 as a new area to be detected, and perform the step of detecting whether each grid in the area to be detected is located in the obstacle grid according to the obstacle grid in the current grid image again until the current grid image is the bottommost grid image.
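To make the FIG. 2 example concrete, the following self-contained sketch uses made-up values (not from the patent) to build a two-layer pyramid and map a hit coarse grid to its target grid area on the finer layer:

```python
import numpy as np

# bottommost (fine) 4x4 obstacle mask: True marks an obstacle grid
fine = np.array([[0, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]], dtype=bool)

# coarse 2x2 layer: a coarse grid is an obstacle grid if any grid of its 2x2 block is
coarse = (fine[0::2, 0::2] | fine[0::2, 1::2] |
          fine[1::2, 0::2] | fine[1::2, 1::2])

# coarse grid (0, 0) is hit, so only its 2x2 block on the fine layer is re-checked
r, c = 0, 0
target_area = fine[2 * r:2 * r + 2, 2 * c:2 * c + 2]
print(coarse[r, c], target_area.any())   # True True -> refine, and a collision survives
```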
And step 106, if the first grid exists in the to-be-detected region of the bottommost grid image, determining that the target vehicle collides.
Optionally, if a first grid exists in the region to be detected of the bottommost grid image, the vehicle model of the target vehicle intersects an obstacle grid in the bottommost grid image, and it can be determined that the target vehicle may collide.
Step 107, it is determined that the target vehicle does not collide.
That is, if no first grid exists among all the grids to be detected of the current grid image, there is no grid shared between the vehicle model of the target vehicle and the obstacle grids in the current grid image, and it is determined that the target vehicle does not collide.
In summary, a multi-layer grid image with a pyramid structure is established and detection starts from the topmost image: the topmost image is taken as the current grid image, the region to be detected of the target vehicle on the current grid image is acquired, and whether each grid in the region to be detected lies within an obstacle grid is detected. When a first grid in the region to be detected lies within an obstacle grid, the target grid area corresponding to the first grid in the next-layer grid image is determined, the next-layer grid image is taken as the current grid image and the target grid area as the region to be detected, and collision detection continues until the current grid image is the bottommost grid image, completing the collision detection of the vehicle on the grid images. In the loop of steps 103 to 105, if no grid of the region to be detected on the next-layer grid image lies within an obstacle grid, the method returns to the upper layer and continues checking the remaining grids of that layer's region to be detected. If a grid of the topmost grid image does not lie within an obstacle grid, the grids corresponding to it on the lower layers cannot collide and need no collision detection at the next layer.
In a possible implementation manner, in the scheme provided by the application, whether a certain grid is located inside the vehicle model or not can be detected, collision detection is performed on the grid located inside the vehicle model, and collision detection is not performed on the grid outside the vehicle model, so that the detected data volume is further reduced, and the collision detection efficiency is improved.
Referring to fig. 3, a flowchart of a method of collision detection provided by an exemplary embodiment of the present application is shown. The collision detection method is applied to the vehicle-mounted terminal, and as shown in fig. 3, the collision detection method may include the following steps.
Step 301, generating a multilayer raster image with a pyramid structure according to the obstacle image, wherein the image precision of each layer of raster image in the multilayer raster image is in an inverse correlation relation with the number of layers.
Optionally, the obstacle image may be a grid image obtained by fusing visual data acquired by the camera and ultrasonic data acquired by the radar of the vehicle-mounted terminal. Each grid in the obstacle image is classified as passable or non-passable, and the non-passable area can be regarded as the position of an obstacle. On the obstacle image, the vehicle-mounted terminal can simulate the vehicle model along an upcoming section of the planned driving route ahead of the current driving position, and perform collision detection for each grid of the simulated vehicle model, thereby realizing collision prediction. Optionally, the driving route may be obtained in advance by the vehicle-mounted terminal through a path planning algorithm, and the upcoming section may be the not-yet-travelled portion of the route corresponding to a preset time length.
Optionally, the obstacle image may be a bottom layer image of the multi-layer raster image, and the upper layer image may be obtained by scaling the precision of the obstacle image according to a preset ratio, where the raster images with different precisions are each layer of raster images in the pyramid structure. In the application, the accuracy of the bottommost layer is the highest, and the accuracy of the topmost layer is the lowest.
Optionally, the vehicle-mounted terminal may actively acquire the number of layers of the multilayer raster image of the pyramid structure to be generated, and acquire each scaling value corresponding to each number of layers according to the number of layers; and scaling the obstacle image according to each scaling value to generate a multilayer raster image with a pyramid structure with the number of layers. The vehicle-mounted terminal may generate a multi-layer raster image according to a preset number of layers, or may acquire the multi-layer raster image according to an environmental parameter.
For example, each time the vehicle-mounted terminal generates a multilayer raster image with a pyramid structure based on an obstacle image in a path planning process, the default number of layers is 4, the default ratio includes a first preset ratio, a second preset ratio and a third preset ratio, and the vehicle-mounted terminal zooms the obstacle image according to each default ratio to generate the multilayer raster image with the pyramid structure with 4 layers. The default number of layers and the default ratio may be preset by a developer.
In a possible implementation manner, the vehicle-mounted terminal may also obtain an environmental parameter of a surrounding environment where the target vehicle is located, and obtain the number of layers of the multilayer raster image of the pyramid structure to be generated according to the environmental parameter. Wherein the environmental parameter includes one or more of a speed of movement of the surrounding environment relative to the target vehicle, a weather condition of the surrounding environment. For example, the environment parameter includes a movement speed of the surrounding environment relative to the target vehicle, after the vehicle-mounted terminal acquires the movement speed of the surrounding environment relative to the target vehicle, it is determined in which speed interval the movement speed is located, and the corresponding number of layers is acquired by querying a correspondence table between the speed interval and the number of layers.
Please refer to table 1, which shows a table of correspondence between speed intervals and layer numbers according to an exemplary embodiment of the present application.
Speed interval    Number of layers
0~40 km/h         7
40~80 km/h        5
80~120 km/h       3
……                ……

TABLE 1
As shown in table 1, if the moving speed is 50km/h (kilometers per hour), the in-vehicle terminal determines that the moving speed is in a speed interval of 40 to 80km/h, and can acquire that the number of layers corresponding to the speed interval is 5.
Optionally, if the environment parameter includes the weather condition of the surrounding environment, after the vehicle-mounted terminal acquires the weather condition, it may obtain the corresponding number of layers by querying a correspondence table between weather conditions and layer numbers. Please refer to table 2, which shows a table of correspondence between weather conditions and layer numbers provided in an exemplary embodiment of the present application.

Weather condition    Number of layers
Sunny                5
Rainy                4
Heavy fog            6
……                   ……

TABLE 2

As shown in table 2, if the weather condition of the surrounding environment is heavy fog, the vehicle-mounted terminal may determine that the number of layers corresponding to this weather condition is 6.
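A minimal sketch of such a lookup is given below; the thresholds and return values simply mirror the example entries of Tables 1 and 2, and the function names are illustrative rather than taken from the patent:

```python
def layers_from_speed(speed_kmh: float) -> int:
    """Pick the pyramid depth from the relative movement speed (example values of Table 1)."""
    if speed_kmh < 40:
        return 7
    if speed_kmh < 80:
        return 5
    return 3


def layers_from_weather(weather: str) -> int:
    """Pick the pyramid depth from the weather condition (example values of Table 2)."""
    return {"sunny": 5, "rainy": 4, "heavy_fog": 6}.get(weather, 5)


print(layers_from_speed(50))             # 5
print(layers_from_weather("heavy_fog"))  # 6
```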
Optionally, when the scaling value of each layer is obtained according to the number of layers, a preset formula may be used, for example:

scale_n = (1/2)^(n-1)

where n is the layer number; that is, the raster image precision is halved for each layer going up. Taking the above result of 5 layers as an example, the vehicle-mounted terminal obtains a scaling value of (1/2)^0 = 1 for the first layer, (1/2)^1 = 1/2 for the second layer and (1/2)^2 = 1/4 for the third layer, and in the same way obtains the scaling values of all 5 layers; the obstacle map is then scaled according to each layer's scaling value to obtain the multi-layer raster image.
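A short sketch of this computation, under the same assumption that the precision halves per layer and the bottommost layer keeps the original precision (illustrative only):

```python
def scaling_values(num_layers: int) -> list:
    """Scaling value per layer: 1 for the bottommost layer, halved for each layer above."""
    return [0.5 ** (n - 1) for n in range(1, num_layers + 1)]


print(scaling_values(5))   # [1.0, 0.5, 0.25, 0.125, 0.0625]
```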
Step 302, taking the topmost grid image as the current grid image, and acquiring the corresponding to-be-detected area of the target vehicle on the current grid image.
Optionally, the vehicle-mounted terminal performs collision detection from the topmost grid image, uses the topmost grid image as the current grid image, and determines the to-be-detected area corresponding to the target vehicle on the current grid image.
In a possible implementation manner, the vehicle-mounted terminal may obtain the grid area corresponding to the circumscribed rectangle of the vehicle model of the target vehicle on the current grid image, and take this grid area as the region to be detected. For example, please refer to fig. 4, which shows a schematic structural diagram of a region to be detected according to an exemplary embodiment of the present application. As shown in FIG. 4, the topmost grid image 400 contains a vehicle model 401 and its circumscribed rectangle 402. The vehicle-mounted terminal can construct the circumscribed rectangle 402 on the topmost grid image 400 according to the grids where the outermost corner points of the vehicle model 401 are located, and take the region of the circumscribed rectangle 402 as the region to be detected. For example, if the grid coordinates of the corner points of the vehicle model 401 on the topmost grid image 400 are (3,2), (4,1), (6,5) and (8,3), the area of the circumscribed rectangle 402 is the region on the grid map enclosed by y = 1, y = 5, x = 3 and x = 8.
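A minimal sketch of deriving that circumscribed-rectangle region from the four corner grids (the helper is illustrative, reusing the FIG. 4 example coordinates):

```python
def bounding_box(corners):
    """Axis-aligned circumscribed rectangle of the vehicle model's corner grids."""
    xs = [x for x, _ in corners]
    ys = [y for _, y in corners]
    return min(xs), max(xs), min(ys), max(ys)   # x_min, x_max, y_min, y_max


corners = [(3, 2), (4, 1), (6, 5), (8, 3)]       # corner grids from the FIG. 4 example
x_min, x_max, y_min, y_max = bounding_box(corners)
region_to_detect = [(x, y) for x in range(x_min, x_max + 1)
                    for y in range(y_min, y_max + 1)]
print(x_min, x_max, y_min, y_max)                # 3 8 1 5
```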
Optionally, the obstacle image is a raster image at a lowermost layer of the multi-layer raster image. In one possible implementation manner, before step 302, the vehicle-mounted terminal may acquire each corner grid of the vehicle model of the target vehicle in the obstacle image; detecting whether each corner point grid is in an impassable area in an obstacle image, and determining that a target vehicle collides when a first corner point grid is in the obstacle grid in the obstacle image, wherein the first corner point grid is any one of the corner point grids; and when the grids of all the corner points are not positioned in the barrier grids in the barrier image, the step of taking the topmost grid image as the current grid image and acquiring the corresponding to-be-detected area of the target vehicle on the current grid image is executed.
That is, the vehicle-mounted terminal may also first determine on the original accuracy map, check whether grids at each corner point of the vehicle model on the original accuracy map are passable, and if one of the grids is not passable, may directly determine that the target vehicle may collide, without performing subsequent steps, reduce time consumption for collision detection, and if grids at each corner point of the vehicle model on the original accuracy map are passable, continue to perform step 302, and subsequently detect the internal grids. For example, in fig. 4, grid coordinates of each corner grid of the vehicle model are (3,2), (4,1), (6,5), (8,3), the vehicle-mounted terminal may detect the four corner grids respectively, and when all of the four corner grids are in the passable region, step 302 is performed again, and if any one corner grid is in the non-passable region, it is directly determined that the target vehicle may collide with the obstacle.
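A sketch of this corner pre-check on the original-precision obstacle image (a boolean mask with True marking obstacle grids, as assumed in the earlier sketches; values are made up):

```python
import numpy as np


def corners_collide(obstacle_image: np.ndarray, corner_grids) -> bool:
    """Return True if any corner grid of the vehicle model lies in an obstacle grid."""
    return any(obstacle_image[r, c] for r, c in corner_grids)


# example: check the four corner grids before starting the pyramid search
obstacle_image = np.zeros((10, 10), dtype=bool)
obstacle_image[5, 6] = True
print(corners_collide(obstacle_image, [(2, 3), (1, 4), (5, 6), (3, 8)]))  # True -> collision
```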
And 303, screening each grid in the area to be detected to obtain each grid in the grid area of the model, wherein the grid area of the model is the grid area occupied by the vehicle model of the target vehicle on the current grid image.
Optionally, the obtained region to be detected is a circumscribed rectangle of the vehicle model, and the region to be detected also contains grids outside the vehicle model and inside the vehicle model.
In one possible implementation manner, the vehicle-mounted terminal can judge whether a certain grid is in the interior of the vehicle model through the sign consistency of vector cross multiplication. For example, the vehicle model is a quadrangle, and the vehicle-mounted terminal acquires vector cross-multiplication results between corner points of each grid and four corner points of the quadrangle for the corner points of each grid contained in the region to be detected; detecting whether the corner points of each grid are in the grid region of the model or not according to the vector cross multiplication result; when a first corner point of corner points of each grid is in the model grid region, determining that each grid adjacent to the first corner point is in the model grid region; and when a first corner point of the corner points of each grid is not in the model grid region, determining that each grid adjacent to the first corner point is not in the model grid region.
Optionally, for any corner point of any grid in the region to be detected, the vehicle-mounted terminal may generate, from that corner point and the four corner points of the vehicle model, the vectors pointing from each model corner point to the grid corner point, and may also obtain the vectors along the four sides of the vehicle model by traversing its corner points in clockwise or counter-clockwise order. The vehicle-mounted terminal then forms the vector groups to be cross-multiplied, each group consisting of two vectors with the same starting point, so that four vector groups are obtained, and computes the cross product of each group. It then checks whether the signs of the cross products of the two groups containing opposite sides of the vehicle model are consistent: if so, the corner point lies between those two opposite sides; if not, it does not. When the corner point lies between both pairs of opposite sides, it is determined to be within the model grid area; otherwise it is determined not to be within the model grid area.
Referring to fig. 5, a schematic structural diagram of a current raster image according to an exemplary embodiment of the present application is shown. As shown in fig. 5, the grid image 500 includes a vehicle model 501, a region to be detected 502, a grid corner point 503 and vehicle model corner points 504. From the coordinates of the grid corner point 503 and the vehicle model corner points 504 on the current grid map, the vehicle-mounted terminal may obtain a first vector 505, a second vector 506, a third vector 507, a fourth vector 508, a fifth vector 509, a sixth vector 510, a seventh vector 511 and an eighth vector 512. As can be seen from fig. 5, the first vector points from the first model corner point to the second, the second vector from the second model corner point to the third, the third vector from the third model corner point to the fourth, and the fourth vector from the fourth model corner point back to the first; the fifth, sixth, seventh and eighth vectors point from the first, second, third and fourth model corner points, respectively, to the grid corner point 503.
The vehicle-mounted terminal may take the first and fifth vectors as one vector group, the second and sixth vectors as another, the third and seventh vectors as another, and the fourth and eighth vectors as another, and compute the cross product of each group. It then checks whether the signs of the cross products of the two groups containing opposite sides of the vehicle model are consistent, i.e. whether the cross product of the first and fifth vectors has the same direction as the cross product of the third and seventh vectors; if so, the grid corner point 503 lies between the sides along the first and third vectors. Correspondingly, the vehicle-mounted terminal checks whether the cross product of the second and sixth vectors has the same direction as the cross product of the fourth and eighth vectors; if so, the grid corner point 503 lies between the sides along the second and fourth vectors. In this way it is determined whether the grid corner point 503 is within the model grid area occupied by the vehicle model 501.
If the grid corner 503 is in the model grid region occupied by the vehicle model 501, all four adjacent grids around the corner are regarded as grids in the model grid region, and if the grid corner 503 is not in the model grid region occupied by the vehicle model 501, all four adjacent grids around the corner are regarded as grids not in the model grid region, so that each grid in the model grid region is obtained.
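The opposite-side sign check can be sketched as follows; this is an illustrative, self-contained sketch (not code from the patent) that assumes the vehicle model is a convex quadrilateral with its corner points given in order around the model, which for an essentially rectangular vehicle model matches the usual point-in-convex-polygon test. The example coordinates reuse the FIG. 4 corner grids.

```python
def cross(o, a, b):
    """z-component of the cross product of vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])


def corner_in_model(p, quad):
    """Check whether grid corner point p lies inside the quadrilateral vehicle model.

    quad = [c1, c2, c3, c4]: the model corner points in clockwise or
    counter-clockwise order. For each side ci->c(i+1), its cross product with
    ci->p is computed; p is treated as inside when the signs agree for both
    pairs of opposite sides, as described in the embodiment above.
    """
    c1, c2, c3, c4 = quad
    s1 = cross(c1, c2, p)   # side c1->c2 vs c1->p
    s2 = cross(c2, c3, p)   # side c2->c3 vs c2->p
    s3 = cross(c3, c4, p)   # side c3->c4 vs c3->p
    s4 = cross(c4, c1, p)   # side c4->c1 vs c4->p
    same_sign = lambda a, b: (a >= 0) == (b >= 0)
    return same_sign(s1, s3) and same_sign(s2, s4)


quad = [(3, 2), (4, 1), (8, 3), (6, 5)]   # FIG. 4 corner grids, ordered around the model
print(corner_in_model((5, 3), quad))      # True: a corner point inside the model
print(corner_in_model((0, 0), quad))      # False: a corner point outside the model
```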
In a possible implementation manner, when obtaining the corner points of each grid, the vehicle-mounted terminal may further sequentially obtain the corner points of each grid included in the area to be detected according to the step length of two grids apart. That is, each corner point is acquired by spacing two grids, and the acquired corner points are detected in the above manner.
And 304, detecting whether each grid in the model grid area in the area to be detected is in the obstacle grid according to the obstacle grid in the current grid image.
Optionally, for a certain grid, if the grid is in the model grid region of the vehicle model, it is further detected whether the grid is in the obstacle grid.
In a possible implementation manner, the vehicle-mounted terminal may obtain each grid in the area to be detected according to a first preset sequence; detecting whether the obtained grid is in an obstacle grid or not according to the obstacle grid in the current grid image; and when the acquired grid is not in the barrier grid, acquiring another grid in the area to be detected, and executing the step of detecting whether the acquired grid is in the barrier grid according to the barrier grid in the current grid image. For example, the first preset sequence is that the ordinate increases and the abscissa increases in the region to be detected, the first acquired grid of the vehicle-mounted terminal is the grid with the smallest ordinate and abscissa in the region to be detected, the grid is detected, if the grid is located in an obstacle grid, step 305 is executed, and if the grid is not located in an obstacle grid, the ordinate of the grid is added by one to acquire the next grid.
Step 305, when a first grid in the area to be detected is in the obstacle grid, determining the target grid area corresponding to the first grid on the next-layer grid image, taking the next-layer grid image as the new current grid image, and taking the target grid area as the area to be detected of the new current grid image.
Step 306, detecting whether the current grid image is the bottommost grid image.
If yes, go to step 307, otherwise return to step 304.
When a first grid in the area to be detected is in an obstacle grid, the target grid area corresponding to the first grid on the next-layer grid image is determined according to the correspondence between the grid images, the next-layer grid image is taken as the current grid image, the target grid area is taken as the area to be detected, and each grid in the area to be detected is detected again, until the current grid image is the bottommost grid image. In a possible implementation manner, the vehicle-mounted terminal records a pixel value on each grid of the obstacle grid in each grid image; for example, each grid of the obstacle grid has a pixel value of 255, so if the pixel value of the first grid in the area to be detected is also 255, it can be determined that the grid is within the obstacle grid. Alternatively, when the vehicle-mounted terminal obtains the grid maps of each layer, it may also obtain the grid coordinates of the obstacle grid therein, and if the grid coordinates of a certain grid fall within the grid coordinate range of the obstacle grid, that grid is determined to be within the obstacle grid. The present application does not limit how to determine whether a certain grid is within an obstacle grid.
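For illustration, if obstacle grids are recorded with pixel value 255 as in the example above, and if each layer is assumed to be twice as fine as the layer above it (the actual scaling values are chosen per layer and are not fixed by the patent), the obstacle check and the mapping from a first grid to its target grid area on the next layer could look like the following sketch; the grid image is assumed to be stored as a 2D array of pixel values, and the names is_obstacle_grid and target_grid_area are assumptions:

```python
OBSTACLE_VALUE = 255  # pixel value recorded on grids of the obstacle grid

def is_obstacle_grid(layer, x, y):
    # a grid is within the obstacle grid if its recorded pixel value is 255
    return layer[y][x] == OBSTACLE_VALUE

def target_grid_area(x, y, scale=2):
    """Target grid area covered on the next (finer) layer by grid (x, y) of
    the current layer, assuming a scaling factor of 'scale' between layers."""
    return (x * scale, y * scale, x * scale + scale - 1, y * scale + scale - 1)
```

With a factor of 2, a first grid (x, y) of the current layer becomes the 2 x 2 block from (2x, 2y) to (2x + 1, 2y + 1) on the next-layer grid image, and that block is taken as the new area to be detected.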
Optionally, the first grid in the area to be detected has a corresponding target grid area on the next-layer grid image. The vehicle-mounted terminal continues to perform steps 303 to 306 on that target grid area, that is, it detects the area to be detected on the next-layer grid image in the manner described in steps 303 to 306 (not repeated here), until the current grid image is the bottommost grid image.
Step 307, detecting whether a first grid exists in the region to be detected of the bottommost grid image.
That is, if step 306 determines that the current grid image is the bottommost grid image, it is detected whether a first grid (a grid within the obstacle grid) exists in the area to be detected of the bottommost grid image. If so, go to step 308; otherwise, go to step 309.
Step 308, the target vehicle is determined to have collided.
That is, if the first grid exists in the area to be detected of the current grid image, it is determined that the target vehicle collides. Optionally, when the current grid image is the bottommost grid image and a grid within the obstacle grid still exists in its area to be detected, the vehicle model collides with the obstacle, and the vehicle-mounted terminal obtains the collision detection result accordingly.
In a possible implementation manner, the vehicle-mounted terminal may further acquire a first overlapping area in the bottommost grid image, where the first overlapping area is the grid area in which the vehicle model of the target vehicle overlaps the obstacle grid, and generate a first driving instruction according to the first overlapping area and the model grid area. The first driving instruction is used to control the target vehicle to drive so that the vehicle model of the target vehicle does not collide with the obstacle. That is, once it is determined that the vehicle model would collide with the obstacle grid, the vehicle-mounted terminal can generate the first driving instruction in time, control the target vehicle accordingly, and thereby adjust the driving route in time.
Step 309, detecting whether every grid in the area to be detected of the topmost grid image has executed the step of detecting, according to the obstacle grid in the current grid image, whether the acquired grid is in the obstacle grid.
That is, if step 307 finds that no first grid within the obstacle grid exists in the area to be detected of the bottommost grid image, it is further detected whether every grid in the area to be detected of the topmost grid image has already executed the step of detecting, according to the obstacle grid in the current grid image, whether the acquired grid is in the obstacle grid.
When every grid in the area to be detected of the topmost grid image has executed the step of detecting, according to the obstacle grid in the current grid image, whether the acquired grid is in the obstacle grid, step 310 is executed; otherwise, the flow returns to step 304.
In step 310, it is determined that the target vehicle will not collide.
That is, the target vehicle is determined not to collide when every grid in the area to be detected of the topmost grid image has executed the step of detecting, according to the obstacle grid in the current grid image, whether the acquired grid is in the obstacle grid. In other words, before obtaining the collision detection result that the vehicle model does not collide with the obstacle, the vehicle-mounted terminal needs to determine whether every grid in the area to be detected of the topmost grid image has been detected. If all of them have been detected and no grid within the obstacle grid exists in the area to be detected of the bottommost grid image, the vehicle model does not collide with the obstacle, and the vehicle-mounted terminal obtains the collision detection result accordingly. If not all of them have been detected, another grid in the topmost grid image is acquired according to the above acquisition order, and the collision detection process of steps 303 to 309 is performed on that grid.
In summary, a multi-layer grid image with a pyramid structure is established. Starting from the topmost layer, the topmost grid image is taken as the current grid image, the area to be detected corresponding to the target vehicle on the current grid image is acquired, and whether each grid in the area to be detected is in an obstacle grid is detected. When a first grid in the area to be detected is in the obstacle grid, the target grid area corresponding to the first grid in the next-layer grid image is determined, the next-layer grid image is taken as the current grid image, the target grid area is taken as the area to be detected, and collision detection continues on this area until the current grid image is the bottommost grid image, so that the collision detection of the vehicle on each grid image is completed. Because the image accuracy of each layer of grid image in the multi-layer grid image is inversely correlated with the number of layers, if a certain grid in the topmost grid image is not in the obstacle grid, it is determined that none of the grids corresponding to it on the next layer can collide, and collision detection on the next layer is not needed for them.
In addition, by first detecting whether each grid lies in the grid area occupied by the vehicle model, the scheme only needs to detect the grids within the grid area occupied by the vehicle model and does not need to detect the grids outside the vehicle model, which reduces the data volume of collision detection, shortens the detection time, and improves the detection efficiency.
In addition, the vehicle-mounted terminal determines the number of layers of the pyramid structure to be generated in combination with the environment parameters, so that the generated pyramid-structured grid images better match the environment in which the vehicle is located, which can improve the accuracy and flexibility of detection.
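The patent does not state how the environment parameters are mapped to a layer count, so the following Python snippet is only one arbitrary, illustrative policy; the function name pyramid_layer_count, the thresholds, and the direction of the adjustments are all assumptions:

```python
def pyramid_layer_count(relative_speed_mps, weather="clear",
                        min_layers=2, max_layers=5):
    """Pick a pyramid layer count from the movement speed of the surroundings
    relative to the target vehicle and the weather condition. Purely
    illustrative: here fast relative motion or adverse weather reduces the
    number of layers so that each detection cycle stays short."""
    layers = max_layers
    if relative_speed_mps > 10:
        layers -= 1
    if relative_speed_mps > 20:
        layers -= 1
    if weather in ("rain", "snow", "fog"):
        layers -= 1
    return max(min_layers, layers)
```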
Taking as an example a multi-layer grid map built as a three-layer pyramid structure, the following describes how the flow of this scheme is applied to the collision detection process during path planning; a code sketch of the complete flow is given after the step list below. Referring to fig. 6, a flowchart of a collision detection method provided by an exemplary embodiment of the present application is shown. The collision detection method is applied to the vehicle-mounted terminal, and as shown in fig. 6, the collision detection method may include the following steps.
Step 601, generating a multilayer raster image with a pyramid structure according to the obstacle image.
Step 602, four grids corresponding to four corner points of the vehicle model are determined on the obstacle image.
Step 603, detecting whether any of the four grids has collided, that is, whether any of them is in an obstacle grid.
If none of the four grids has collided, step 604 is executed; if any of the four grids has collided, step 614 is executed, that is, it is determined that the vehicle model will collide with the obstacle.
Step 604, taking the topmost grid map as the current grid map, determining the grid area of the circumscribed quadrangle of the vehicle model on the current grid map, and taking the grid area of the circumscribed quadrangle as the area to be detected.
Step 605, obtain the next grid in the area to be detected.
Step 606, it is checked whether the grid is inside the vehicle model.
If so, go to step 607, otherwise go to step 605.
Step 607, it is checked whether the grid is in an impassable area (that is, an obstacle grid).
If so, go to step 608, otherwise, go to step 605.
Step 608, a target grid area corresponding to the grid is determined on the next grid map.
And step 609, taking the next grid map as the current grid map, and taking the determined target grid area corresponding to the grid as the area to be detected.
Step 610, detecting whether the current grid map is the bottommost grid map.
If yes, go to step 611, otherwise go to step 605.
Step 611, detecting whether a grid in the impassable area exists in the area to be detected on the bottommost grid map.
If so, it is determined that the vehicle model will collide with the obstacle and step 614 is executed; if not, step 612 is executed.
Step 612, checking whether every grid in the area to be detected in the topmost layer has executed step 606.
If yes, go to step 613, otherwise go to step 605.
Step 613, it is determined that the vehicle model will not collide with the obstacle.
Step 614, determining that the vehicle model will collide with the obstacle.
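The following Python sketch puts steps 601 to 614 together for a pyramid whose layers are each assumed to be scaled by a factor of 2. The patent only states that the obstacle image is scaled by per-layer scaling values, so the factor, the helper names build_pyramid, vehicle_collides and in_vehicle_model, and the rule that a coarse grid is impassable when any finer grid it covers is impassable are all assumptions made here for illustration.

```python
def build_pyramid(obstacle_image, num_layers=3, scale=2):
    """Step 601: layer 0 is the obstacle image itself (bottommost, full
    accuracy); every higher layer is a coarser grid map in which a grid is
    marked impassable when any finer grid it covers is impassable."""
    layers = [obstacle_image]
    for _ in range(num_layers - 1):
        prev = layers[-1]
        h = (len(prev) + scale - 1) // scale
        w = (len(prev[0]) + scale - 1) // scale
        coarse = [[any(prev[y][x]
                       for y in range(r * scale, min((r + 1) * scale, len(prev)))
                       for x in range(c * scale, min((c + 1) * scale, len(prev[0]))))
                   for c in range(w)]
                  for r in range(h)]
        layers.append(coarse)
    return layers  # layers[-1] is the topmost (coarsest) grid map


def vehicle_collides(layers, vehicle_quad, in_vehicle_model, scale=2):
    """Steps 602-614. vehicle_quad holds the four vehicle model corner points
    in bottommost-layer grid coordinates; in_vehicle_model(point, quad) is an
    assumed helper (it could be built from the corner point test above)."""
    # steps 602-603: corner grids of the vehicle model on the obstacle image
    if any(layers[0][int(y)][int(x)] for x, y in vehicle_quad):
        return True  # step 614

    def quad_at(layer_idx):
        f = scale ** layer_idx
        return [(x / f, y / f) for x, y in vehicle_quad]

    def check(layer_idx, area):
        grid, quad = layers[layer_idx], quad_at(layer_idx)
        x0, y0, x1, y1 = area
        for y in range(y0, y1 + 1):                      # step 605: next grid
            for x in range(x0, x1 + 1):
                if not in_vehicle_model((x, y), quad):   # step 606
                    continue
                if not grid[y][x]:                       # step 607: passable
                    continue
                if layer_idx == 0:                       # steps 610-611
                    return True                          # step 614
                finer = (x * scale, y * scale,           # steps 608-609
                         min(x * scale + scale - 1, len(layers[layer_idx - 1][0]) - 1),
                         min(y * scale + scale - 1, len(layers[layer_idx - 1]) - 1))
                if check(layer_idx - 1, finer):
                    return True
        return False                                     # steps 612-613

    top = len(layers) - 1
    xs = [min(int(x), len(layers[top][0]) - 1) for x, _ in quad_at(top)]
    ys = [min(int(y), len(layers[top]) - 1) for _, y in quad_at(top)]
    # step 604: circumscribed quadrangle of the vehicle model on the top layer
    return check(top, (min(xs), min(ys), max(xs), max(ys)))
```

For example, layers = build_pyramid(obstacle_image, num_layers=3) followed by vehicle_collides(layers, vehicle_quad, corner_in_vehicle_model) would return True when the flow reaches step 614 and False when it ends at step 613.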
In summary, a multi-layer grid image with a pyramid structure is established. Starting from the topmost layer, the topmost grid image is taken as the current grid image, the area to be detected corresponding to the target vehicle on the current grid image is acquired, and whether each grid in the area to be detected is in an obstacle grid is detected. When a first grid in the area to be detected is in the obstacle grid, the target grid area corresponding to the first grid in the next-layer grid image is determined, the next-layer grid image is taken as the current grid image, the target grid area is taken as the area to be detected, and collision detection continues on this area until the current grid image is the bottommost grid image, so that the collision detection of the vehicle on each grid image is completed. Because the image accuracy of each layer of grid image in the multi-layer grid image is inversely correlated with the number of layers, if a certain grid in the topmost grid image is not in the obstacle grid, it is determined that none of the grids corresponding to it on the next layer can collide, and collision detection on the next layer is not needed for them.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 7, a block diagram of a collision detection apparatus provided in an exemplary embodiment of the present application is shown, where the collision detection apparatus 700 may be applied to a vehicle-mounted terminal; the collision detection apparatus includes:
the image generation module 701 is configured to generate a multilayer raster image with a pyramid structure according to an obstacle image, where image accuracy of each layer of raster image in the multilayer raster image is in an inverse correlation with the number of layers;
the area acquisition module 702 is configured to take the topmost grid image as a current grid image and acquire a to-be-detected area corresponding to the target vehicle on the current grid image;
an area detection module 703, configured to detect whether each grid in the area to be detected is located in an obstacle grid according to the obstacle grid in the current grid image;
a loop execution module 704, configured to determine, when a first grid in the area to be detected is located in the obstacle grid, a target grid area corresponding to a next-layer grid image of the first grid, use the next-layer grid image as a new current grid image, use the target grid area as an area to be detected of the new current grid image, and re-execute the step of detecting, according to the obstacle grid in the current grid image, whether each grid in the area to be detected is located in the obstacle grid until the current grid image is a bottom-layer grid image;
a collision determination module 705, configured to determine that the target vehicle collides if a first grid in the obstacle grid exists in the to-be-detected region of the bottom grid image.
In summary, a multi-layer grid image with a pyramid structure is established. Starting from the topmost layer, the topmost grid image is taken as the current grid image, the area to be detected corresponding to the target vehicle on the current grid image is acquired, and whether each grid in the area to be detected is in an obstacle grid is detected. When a first grid in the area to be detected is in the obstacle grid, the target grid area corresponding to the first grid in the next-layer grid image is determined, the next-layer grid image is taken as the current grid image, the target grid area is taken as the area to be detected, and collision detection continues on this area until the current grid image is the bottommost grid image, so that the collision detection of the vehicle on each grid image is completed. Because the image accuracy of each layer of grid image in the multi-layer grid image is inversely correlated with the number of layers, if a certain grid in the topmost grid image is not in the obstacle grid, it is determined that none of the grids corresponding to it on the next layer can collide, and collision detection on the next layer is not needed for them.
Optionally, the apparatus further comprises:
a first obtaining module, configured to, before the area detection module 703 detects whether each grid in the area to be detected is located in an obstacle grid according to the obstacle grid in the current grid image, screen each grid in the area to be detected, and obtain each grid in a model grid area, where the model grid area is a grid area occupied by a vehicle model of the target vehicle on the current grid image;
the area detection module 703 is configured to detect whether each grid in the model grid area in the area to be detected is located in the obstacle grid according to the obstacle grid in the current grid image.
Optionally, the vehicle model is a quadrilateral, and the first obtaining module is configured to,
for the corner points of the grids contained in the area to be detected, obtaining a vector cross product result between each grid corner point and the four corner points of the quadrangle;
detecting whether each grid corner point is in the model grid region or not according to the vector cross product result;
when a first corner point of the corner points of each grid is in the model grid region, determining that each grid adjacent to the first corner point is in the model grid region;
and when a first corner point of the corner points of each grid is not in the model grid region, determining that each grid adjacent to the first corner point is not in the model grid region.
Optionally, the apparatus further comprises:
and the second acquisition module is used for sequentially acquiring, with a step of two grids, the corner points of the grids contained in the area to be detected, before the vector cross product result between each grid corner point and the four corner points of the quadrangle is obtained.
Optionally, the obstacle image is a raster image at a lowermost layer in the multi-layer raster image, and the apparatus further includes:
a third obtaining module, configured to obtain each grid occupied by four corner points of a vehicle model of a target vehicle in the obstacle image before the region obtaining module 702 uses a topmost grid image as a current grid image and obtains a corresponding region to be detected of the target vehicle on the current grid image;
a first determination module configured to determine that the target vehicle collides when a first corner grid is located within the obstacle grid in the obstacle image, the first corner grid being any one of grids occupied by four corner points of a vehicle model of the target vehicle in the obstacle image;
and the first execution module is used for executing the step of taking the topmost grid image as the current grid image and acquiring the corresponding to-be-detected area of the target vehicle on the current grid image when all the corner point grids are not positioned in the obstacle grids in the obstacle image.
Optionally, the area obtaining module 702 is configured to,
acquiring a grid area corresponding to a circumscribed rectangle of the vehicle model of the target vehicle on the current grid image;
and taking the grid region corresponding to the circumscribed rectangle as the region to be detected.
Optionally, the area detection module is configured to,
acquiring each grid in the area to be detected according to a first preset sequence;
and detecting whether the acquired grid is in the obstacle grid or not according to the obstacle grid in the current grid image.
Optionally, the apparatus further comprises:
a first detection module, configured to, if no first grid located in the obstacle grid exists in the area to be detected of the bottommost grid image, detect whether each grid in the area to be detected in the topmost grid image has executed the step of detecting whether the acquired grid is located in the obstacle grid according to the obstacle grid in the current grid image;
a first determining module, configured to determine that the target vehicle does not collide when each grid in the area to be detected in the topmost grid image has executed the step of detecting whether the acquired grid is located in the obstacle grid according to the obstacle grid in the current grid image;
a second execution module, configured to, when the grids in the area to be detected in the topmost grid image have not all executed the step of detecting whether the acquired grid is located in the obstacle grid according to the obstacle grid in the current grid image, execute the step of acquiring the grids in the area to be detected according to the first preset order.
Optionally, the apparatus further comprises:
a fourth obtaining module, configured to obtain the number of layers of the multilayer raster image of the pyramid structure to be generated before the image generating module 701 generates the multilayer raster image of the pyramid structure according to the obstacle image;
a fifth obtaining module, configured to obtain, according to the layer number, each scaling value corresponding to each layer number;
the image generation module 701 is configured to generate, for each image,
and scaling the obstacle image according to each scaling value to generate a multilayer grid image with the pyramid structure with the layer number.
Optionally, the fourth obtaining module is configured to,
acquiring environmental parameters of the surrounding environment of the target vehicle, wherein the environmental parameters comprise one or more of the movement speed of the surrounding environment relative to the target vehicle and the weather condition of the surrounding environment;
and acquiring the number of layers of the multilayer raster image of the pyramid structure to be generated according to the environment parameters.
Optionally, the apparatus further comprises:
a sixth obtaining module, configured to obtain a first overlapping region in the bottommost grid image after the collision determination module 705 determines that the target vehicle collides because a first grid exists in the area to be detected of the bottommost grid image, where the first overlapping region is the grid region in which the vehicle model of the target vehicle overlaps the obstacle grid;
and the instruction generating module is used for generating a first driving instruction according to the first overlapping area and the model grid area, wherein the first driving instruction is used for controlling the target vehicle to drive so that the vehicle model of the target vehicle does not collide with the obstacle.
Referring to fig. 8, a schematic structural diagram of a vehicle-mounted terminal according to an exemplary embodiment of the present application is shown. As shown in fig. 8, the in-vehicle terminal 800 may be provided as the terminal device relating to the above-described embodiment. Referring to FIG. 8, the in-vehicle terminal 800 includes a processing component 822, which further includes one or more processors and memory resources, represented by memory 832, for storing instructions, such as application programs, that are executable by the processing component 822. The application programs stored in memory 832 may include one or more modules that each correspond to a set of instructions. Furthermore, the processing component 822 is configured to execute instructions to perform all or part of the steps performed by the in-vehicle terminal in the above-mentioned collision detection method, i.e. the in-vehicle terminal comprises a memory in which a computer program is stored and a processor, and the computer program, when executed by the processor, causes the processor to implement the collision detection method as shown in any one or more of fig. 1, fig. 3 or fig. 4.
The in-vehicle terminal 800 may further include a power supply component 826 configured to perform power management of the in-vehicle terminal 800, a wired or wireless network interface 850 configured to connect the in-vehicle terminal 800 to a network, and an input/output (I/O) interface 838. The in-vehicle terminal 800 may operate based on an operating system stored in memory 832, such as Windows Server, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
The embodiment of the application also discloses a computer readable storage medium which stores a computer program, wherein the computer program realizes the method in the embodiment of the method when being executed by a processor.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art should also appreciate that the embodiments described in this specification are all alternative embodiments and that the acts and modules involved are not necessarily required for this application.
In various embodiments of the present application, it should be understood that the size of the serial number of each process described above does not mean that the execution sequence is necessarily sequential, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated units, if implemented as software functional units and sold or used as a stand-alone product, may be stored in a computer accessible memory. Based on such understanding, the technical solution of the present application, which is a part of or contributes to the prior art in essence, or all or part of the technical solution, may be embodied in the form of a software product, stored in a memory, including several requests for causing a computer device (which may be a personal computer, a server, a network device, or the like, and may specifically be a processor in the computer device) to execute part or all of the steps of the above-described method of the embodiments of the present application.
It will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium. The storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disc memory, magnetic disk memory, tape memory, or any other computer-readable medium that can be used to carry or store data.
The foregoing describes by way of example the collision detection method and apparatus, vehicle-mounted terminal and storage medium disclosed in the embodiments of the present application, and explains the principle and implementation of the present application through specific examples. The description of the above embodiments is only used to help understand the method and core ideas of the present application. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as a limitation of the present application.

Claims (14)

1. A collision detection method, characterized in that the method comprises:
generating a multilayer raster image with a pyramid structure according to an obstacle image, wherein the image precision and the number of layers of each layer of raster image in the multilayer raster image are in an inverse correlation relationship;
taking the topmost grid image as a current grid image, and acquiring a to-be-detected area corresponding to the target vehicle on the current grid image;
detecting whether each grid in the area to be detected is in the obstacle grid or not according to the obstacle grid in the current grid image;
when a first grid in the area to be detected is located in the obstacle grid, determining a target grid area corresponding to the first grid in a next grid image, taking the next grid image as a new current grid image, taking the target grid area as an area to be detected of the new current grid image, and re-executing the step of detecting whether each grid in the area to be detected is located in the obstacle grid according to the obstacle grid in the current grid image until the current grid image is a bottommost grid image;
and if the first grid in the obstacle grid exists in the area to be detected of the bottommost grid image, determining that the target vehicle collides.
2. The method of claim 1, further comprising, before the detecting whether each grid in the area to be detected is within an obstacle grid according to the obstacle grid in the current grid image:
screening each grid in the region to be detected to obtain each grid in a model grid region, wherein the model grid region is a grid region occupied by a vehicle model of the target vehicle on the current grid image;
the detecting whether each grid in the area to be detected is located in the obstacle grid according to the obstacle grid in the current grid image includes:
and detecting whether each grid in the to-be-detected region, which is located in the model grid region, is located in the obstacle grid according to the obstacle grid in the current grid image.
3. The method according to claim 2, wherein the vehicle model is a quadrilateral, and the screening of each grid in the area to be detected to obtain each grid in the model grid region comprises:
for the corner points of the grids contained in the area to be detected, obtaining a vector cross product result between each grid corner point and the four corner points of the quadrangle;
detecting whether each grid corner point is in the model grid region or not according to the vector cross product result;
when a first corner point of the corner points of each grid is in the model grid region, determining that each grid adjacent to the first corner point is in the model grid region;
and when a first corner point of the corner points of each grid is not in the model grid region, determining that each grid adjacent to the first corner point is not in the model grid region.
4. The method according to claim 3, further comprising, before the obtaining, for the corner points of the grids contained in the area to be detected, of the vector cross product result between each grid corner point and the four corner points of the quadrangle:
and sequentially acquiring, with a step of two grids, the corner points of the grids contained in the area to be detected.
5. The method according to claim 1, wherein the obstacle image is a raster image at a bottommost layer in the multi-layer raster images, and further comprising, before taking a topmost layer raster image as a current raster image and acquiring a corresponding region to be detected of a target vehicle on the current raster image:
acquiring each grid occupied by four corner points of the vehicle model of the target vehicle in the obstacle image;
determining that the target vehicle collides when a first corner grid is within the obstacle grid in the obstacle image, the first corner grid being any one of grids occupied by four corner points of a vehicle model of the target vehicle in the obstacle image;
and when none of the corner point grids is within the obstacle grid in the obstacle image, executing the step of taking the topmost grid image as the current grid image and acquiring the corresponding area to be detected of the target vehicle on the current grid image.
6. The method according to claim 1, wherein the acquiring of the corresponding to-be-detected region of the target vehicle on the current raster image comprises:
acquiring a grid area corresponding to a circumscribed rectangle of the vehicle model of the target vehicle on the current grid image;
and taking the grid region corresponding to the circumscribed rectangle as the region to be detected.
7. The method according to any one of claims 1 to 6, wherein the detecting whether each grid in the area to be detected is within an obstacle grid according to the obstacle grid in the current grid image comprises:
acquiring each grid in the area to be detected according to a first preset sequence;
and detecting whether the acquired grid is in the obstacle grid or not according to the obstacle grid in the current grid image.
8. The method of claim 7, further comprising:
if no first grid within the obstacle grid exists in the area to be detected of the bottommost grid image, detecting whether each grid in the area to be detected in the topmost grid image has executed the step of detecting whether the acquired grid is in the obstacle grid according to the obstacle grid in the current grid image;
when each grid in the area to be detected in the topmost grid image has executed the step of detecting whether the acquired grid is in the obstacle grid according to the obstacle grid in the current grid image, determining that the target vehicle does not collide;
when the grids in the area to be detected in the topmost grid image have not all executed the step of detecting whether the acquired grid is in the obstacle grid according to the obstacle grid in the current grid image, executing the step of acquiring each grid in the area to be detected according to the first preset sequence.
9. The method of any of claims 1 to 6, further comprising, prior to the generating a multi-layered raster image of a pyramidal structure from an obstacle image:
acquiring the number of layers of a multilayer raster image of a pyramid structure to be generated;
obtaining, according to the number of layers, a scaling value corresponding to each layer;
the generating of the multi-layer raster image of the pyramid structure according to the obstacle image includes:
and scaling the obstacle image according to each scaling value to generate a multilayer grid image with the pyramid structure with the layer number.
10. The method of claim 9, wherein the obtaining the number of layers of the multi-layer raster image of the pyramid structure to be generated comprises:
acquiring environmental parameters of the surrounding environment of the target vehicle, wherein the environmental parameters comprise one or more of the movement speed of the surrounding environment relative to the target vehicle and the weather condition of the surrounding environment;
and acquiring the number of layers of the multilayer raster image of the pyramid structure to be generated according to the environment parameters.
11. The method according to any one of claims 2 to 6, further comprising, after the determining that the target vehicle collides if the first grid exists in the area to be detected of the bottommost grid image:
acquiring a first coincidence region in a bottommost grid image, wherein the first coincidence region is a grid region where a vehicle model of the target vehicle coincides with the obstacle grid;
and generating a first driving instruction according to the first overlapping area and the model grid area, wherein the first driving instruction is used for controlling the target vehicle to drive so that the vehicle model of the target vehicle does not collide with the obstacle.
12. A collision detecting apparatus, characterized in that the apparatus comprises:
the image generation module is used for generating a multilayer raster image with a pyramid structure according to an obstacle image, wherein the image precision of each layer of raster image in the multilayer raster image is in an inverse correlation relation with the number of layers;
the area acquisition module is used for taking the topmost grid image as a current grid image and acquiring a corresponding area to be detected of the target vehicle on the current grid image;
the area detection module is used for detecting whether each grid in the area to be detected is located in the obstacle grid or not according to the obstacle grid in the current grid image;
a circular execution module, configured to determine, when a first grid in the area to be detected is located in the obstacle grid, a target grid area corresponding to a next-layer grid image of the first grid, use the next-layer grid image as a new current grid image, use the target grid area as an area to be detected of the new current grid image, and execute the step of detecting, according to the obstacle grid in the current grid image, whether each grid in the area to be detected is located in the obstacle grid again until the current grid image is a bottom-layer grid image;
and the collision determining module is used for determining that the target vehicle collides if a first grid within the obstacle grid exists in the area to be detected of the bottommost grid image.
13. An in-vehicle terminal characterized by comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to implement the collision detection method according to any one of claims 1 to 11.
14. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out a collision detection method according to any one of claims 1 to 11.
CN202110332876.6A 2021-03-29 2021-03-29 Collision detection method, device, vehicle-mounted terminal and storage medium Active CN113077428B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110332876.6A CN113077428B (en) 2021-03-29 2021-03-29 Collision detection method, device, vehicle-mounted terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110332876.6A CN113077428B (en) 2021-03-29 2021-03-29 Collision detection method, device, vehicle-mounted terminal and storage medium

Publications (2)

Publication Number Publication Date
CN113077428A true CN113077428A (en) 2021-07-06
CN113077428B CN113077428B (en) 2023-12-08

Family

ID=76610950

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110332876.6A Active CN113077428B (en) 2021-03-29 2021-03-29 Collision detection method, device, vehicle-mounted terminal and storage medium

Country Status (1)

Country Link
CN (1) CN113077428B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105088995A (en) * 2015-08-31 2015-11-25 山东碧空环保科技股份有限公司 Road dust removal system
WO2018104191A1 (en) * 2016-12-06 2018-06-14 Siemens Aktiengesellschaft Automated open space identification by means of difference analysis for vehicles
CN110861639A (en) * 2019-11-28 2020-03-06 安徽江淮汽车集团股份有限公司 Parking information fusion method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHAN Qiang; WU Jia: "Monocular Vision Navigation Algorithm for Mobile Robots in Unknown Environments", Journal of Beijing University of Aeronautics and Astronautics, no. 06 *

Also Published As

Publication number Publication date
CN113077428B (en) 2023-12-08

Similar Documents

Publication Publication Date Title
CN110388931A (en) The two-dimentional bounding box of object is converted into the method for the three-dimensional position of automatic driving vehicle
Tsai et al. Real-time indoor scene understanding using bayesian filtering with motion cues
CN111133447A (en) Object detection and detection confidence suitable for autonomous driving
CN110462543A (en) The method based on emulation that perception for assessing automatic driving vehicle requires
CN110386142A (en) Pitch angle calibration method for automatic driving vehicle
CN111582189B (en) Traffic signal lamp identification method and device, vehicle-mounted control terminal and motor vehicle
CN110221616A (en) A kind of method, apparatus, equipment and medium that map generates
CN111489368B (en) Method and device for detecting pseudo 3D boundary based on CNN by using instance segmentation
WO2020082777A1 (en) Parking space detection method and apparatus
CN112710317A (en) Automatic driving map generation method, automatic driving method and related product
CN111062405A (en) Method and device for training image recognition model and image recognition method and device
EP2589933B1 (en) Navigation device, method of predicting a visibility of a triangular face in an electronic map view
KR102606629B1 (en) Method, apparatus and computer program for generating road network data to automatic driving vehicle
JP2022129175A (en) Vehicle evaluation method and vehicle evaluation device
CN115406457A (en) Driving region detection method, system, equipment and storage medium
CN112150538B (en) Method and device for determining vehicle pose in three-dimensional map construction process
CN115705060A (en) Behavior planning for autonomous vehicles in yield scenarios
JP2020119519A (en) Method for detecting pseudo-3d bounding box to be used for military purpose, smart phone, or virtual driving based-on cnn capable of switching modes according to conditions of objects, and device using the same
CN113076824A (en) Parking space acquisition method and device, vehicle-mounted terminal and storage medium
CN112649012A (en) Trajectory planning method, equipment, medium and unmanned equipment
CN113077428B (en) Collision detection method, device, vehicle-mounted terminal and storage medium
CN116523970A (en) Dynamic three-dimensional target tracking method and device based on secondary implicit matching
CN113460040B (en) Parking path determination method and device, vehicle and storage medium
CN116048067A (en) Parking path planning method, device, vehicle and storage medium
CN115061499A (en) Unmanned aerial vehicle control method and unmanned aerial vehicle control device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant