CN113486837A - Automatic driving control method for low-pass obstacle - Google Patents

Automatic driving control method for low-pass obstacle

Info

Publication number
CN113486837A
Authority
CN
China
Prior art keywords: image, obstacle, information, low, coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110815596.0A
Other languages
Chinese (zh)
Other versions
CN113486837B (en)
Inventor
黄秋生
王宏乾
金豆
杨潘
王吉宽
淳海晏
李二宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Jianghuai Automobile Group Corp
Original Assignee
Anhui Jianghuai Automobile Group Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Jianghuai Automobile Group Corp filed Critical Anhui Jianghuai Automobile Group Corp
Priority to CN202110815596.0A priority Critical patent/CN113486837B/en
Publication of CN113486837A publication Critical patent/CN113486837A/en
Application granted granted Critical
Publication of CN113486837B publication Critical patent/CN113486837B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001Planning or execution of driving tasks
    • B60W60/0011Planning or execution of driving tasks involving control alternatives for a single driving scenario, e.g. planning several paths to avoid obstacles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261Obstacle
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses an automatic driving control method for a low-pass obstacle. Two-stage visual recognition is performed on the image content acquired while the autonomous vehicle drives continuously to obtain accurate information about a target obstacle; an accurate measurement of the distance between the low-pass obstacle and the vehicle is obtained with a pre-constructed image ranging engine; and finally, an adaptive obstacle-avoidance decision guided by the obstacle type is realized using the recognized target-obstacle information and the accurate ranging result. The invention achieves accurate detection of low-pass obstacles and can flexibly adjust driving safety measures based on the detection result, so that low-pass obstacles are reasonably avoided or traversed in a targeted manner.

Description

Automatic driving control method for low-pass obstacle
Technical Field
The invention relates to the field of automatic driving, in particular to an automatic driving control method for a low-pass obstacle.
Background
An autonomous vehicle perceives its environment through a high-precision map plus a positioning system, a radar detection system, and high-definition cameras. The radar detection system measures the distance to obstacles, the high-definition cameras collect images, and machine vision recognition judges whether an obstacle is a person, a vehicle, or something else. Radar is suited to detecting objects of a certain height within its sensing range, but the road conditions encountered by a moving vehicle are complex, and radar has difficulty detecting low-pass objects on the ground such as deceleration strips, potholes, occasional masonry, and surface water. Here, a low-pass obstacle refers to an obstacle that occupies only a small extent in the Z direction of the vehicle coordinate system (which can be simply understood as a low obstacle). A lidar installed on the autonomous vehicle can detect low-pass obstacles and render point-cloud data, but it is likewise limited by the capture angle toward objects in the target scene: the imaging resolution for low-pass obstacles is not very high, and the generated three-dimensional point-cloud data have difficulty accurately describing the distance to a low-pass obstacle.
Thus, the ranging performance of the prior art for low-pass obstacles is not ideal. Because accurate distance detection is lacking, the trajectory planning of the domain controller of current autonomous vehicles when facing a low-pass obstacle is, predictably, oversimplified.
Disclosure of Invention
In view of the above, the present invention aims to provide an automatic driving control method for a low-pass obstacle, so as to obtain a more accurate low-pass obstacle detection result, and further achieve accurate avoidance and passing control.
The technical scheme adopted by the invention is as follows:
an automatic driving control method for a low-pass obstacle, comprising:
continuously receiving images of a front road collected by a camera arranged at the front part of a vehicle in the driving process;
performing primary identification on an object in each frame of the image, and judging whether a suspected low-pass obstacle exists in the image;
if yes, recording identification information of the suspected low-pass obstacle;
based on the identification information, performing fine identification on the suspected low-pass obstacle in the subsequent acquired image;
when the suspected low-pass obstacle is identified as a target obstacle, acquiring type information of the target obstacle, and combining a pre-constructed image ranging engine to obtain ranging information;
determining an avoidance or passing strategy targeted at the type information of the current target obstacle according to the current vehicle driving information, the ranging information, and one or more of the following items of target obstacle information: position information and size information.
In at least one possible implementation manner, the obtaining of the ranging information by combining with the pre-constructed image ranging engine includes:
performing feature extraction on any one frame of image used to obtain the fine identification result and inputting it into the image ranging engine, the image ranging engine outputting predicted distance information between the vehicle and the target obstacle.
In at least one possible implementation manner, the training manner of the image ranging engine includes:
setting a marker for representing a target low-pass obstacle in a road in advance;
acquiring an image sample containing the marker through a camera on the vehicle;
labeling color information and distance information of an object in the image sample of each frame, wherein the distance information represents the distance between the object in the image and the vehicle;
constructing a pixel vector matrix of each frame of image sample based on the labeling result;
inputting the image samples and their corresponding pixel vector matrices into an image ranging engine, and enabling the image ranging engine, through iterative learning, to lock onto the markers and their distance information in each frame of image;
statistically analyzing the distance information obtained from training on all image samples to obtain stable distance information and unstable distance information, and constructing an initial coordinate vector matrix;
combining the unstable distance information to construct a coordinate offset vector matrix;
performing point multiplication on the coordinate offset vector matrix and the initial coordinate vector matrix to optimize corresponding unstable distance information;
and updating the initial coordinate vector matrix according to the optimized distance information to obtain a target coordinate vector matrix for outputting the ranging information.
In at least one possible implementation manner, the constructing a pixel vector matrix of each frame of image sample includes:
obtaining a pixel vector matrix of each frame of image according to the pixel value imaged by the camera;
each pixel point corresponds to one element in the pixel vector matrix, and each element at least comprises a color vector and a coordinate vector; the color vector is a three-dimensional vector constructed from three color components, and the coordinate vector is a three-dimensional vector constructed from three coordinate components with respect to the origin of the vehicle coordinate system.
In at least one possible implementation manner, the statistically obtaining stable distance information and unstable distance information from training on all image samples and constructing an initial coordinate vector matrix includes:
determining pixel points in the image samples whose distance-information statistical variance is below a set threshold as trusted pixel points, and determining pixel points whose distance-information statistical variance is above the set threshold as untrusted pixel points;
obtaining, based on all the image samples, the mean of the coordinate vectors corresponding to the trusted pixel points;
performing, based on all the image samples, distance-information fitting for the untrusted pixel points from the coordinate vectors of their adjacent trusted pixel points;
and constructing an initial coordinate vector matrix by using the mean value and the fitted distance information.
In at least one possible implementation manner, a coordinate offset vector with a value smaller than 1 is set for each untrusted pixel point, and the value of the corresponding offset vector is determined by the rule that the larger the statistical variance, the smaller the offset vector;
and constructing a coordinate offset vector matrix by using the offset vectors after the values are obtained.
In at least one possible implementation manner, the method further includes: and if the suspected low-pass obstacle exists in the image, suspending the acceleration mode of the vehicle and entering a pre-deceleration mode until a fine identification result is obtained.
In at least one possible implementation manner, the method further includes: before distance measurement is carried out, whether distance measurement is carried out or not is decided according to current driving information and road condition information of the vehicle.
The design concept of the invention is to perform two-stage visual recognition on the image content acquired while the autonomous vehicle drives continuously, so as to obtain accurate information about the target obstacle; to obtain an accurate measurement of the distance between the low-pass obstacle and the vehicle with a pre-constructed image ranging engine; and finally, using the recognized target-obstacle information and the accurate ranging result, to realize an adaptive obstacle-avoidance decision guided by the obstacle type. The invention achieves accurate detection of low-pass obstacles and can flexibly adjust driving safety measures based on the detection result, so that low-pass obstacles are reasonably avoided or traversed in a targeted manner.
Drawings
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described with reference to the accompanying drawings, in which:
fig. 1 is a flowchart of an automatic driving control method for a low-pass obstacle according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative only and should not be construed as limiting the invention.
The invention provides an embodiment of an automatic driving control method for a low-pass obstacle, and specifically, as shown in fig. 1, the method may include the following steps:
step S1, continuously receiving the images of the front road collected by the camera arranged at the front part of the vehicle in the driving process;
step S2, carrying out primary identification on the object in each frame of image, and judging whether a suspected low-pass obstacle exists in the image;
if yes, executing step S3, recording identification information of the suspected low-pass obstacle; the identification information may include, but is not limited to, one or more of the following: relative position information, size information, and visual characteristic information.
Step S4, based on the identification information, finely identifying the suspected low-pass obstacle in the subsequent collected image;
step S5, when the suspected low-pass obstacle is identified as a target obstacle, acquiring type information of the target obstacle, and obtaining ranging information by combining a pre-constructed image ranging engine;
step S6, according to the current vehicle running information, the distance measuring information and the information of one or more of the following target obstacles: and determining an avoidance passing strategy aiming at the type information of the current target obstacle by using the position information and the size information.
Further, the obtaining of the ranging information in combination with the pre-constructed image ranging engine includes:
performing feature extraction on any one frame of image used to obtain the fine identification result and inputting it into the image ranging engine, the image ranging engine outputting predicted distance information between the vehicle and the target obstacle.
Further, the training mode of the image ranging engine comprises:
setting a marker for representing a target low-pass obstacle in a road in advance;
acquiring an image sample containing the marker through a camera on the vehicle;
labeling color information and distance information of an object in the image sample of each frame, wherein the distance information represents the distance between the object in the image and the vehicle;
constructing a pixel vector matrix of each frame of image sample based on the labeling result;
inputting the image samples and their corresponding pixel vector matrices into an image ranging engine, and enabling the image ranging engine, through iterative learning, to lock onto the markers and their distance information in each frame of image;
statistically analyzing the distance information obtained from training on all image samples to obtain stable distance information and unstable distance information, and constructing an initial coordinate vector matrix;
combining the unstable distance information to construct a coordinate offset vector matrix;
performing point multiplication on the coordinate offset vector matrix and the initial coordinate vector matrix to optimize corresponding unstable distance information;
and updating the initial coordinate vector matrix according to the optimized distance information to obtain a target coordinate vector matrix for outputting the ranging information.
Further, the constructing a pixel vector matrix of each frame of image sample includes:
obtaining a pixel vector matrix of each frame of image according to the pixel value imaged by the camera;
each pixel point corresponds to one element in the pixel vector matrix, and each element at least comprises a color vector and a coordinate vector; the color vector is a three-dimensional vector constructed from three color components, and the coordinate vector is a three-dimensional vector constructed from three coordinate components with respect to the origin of the vehicle coordinate system.
Further, the statistically obtaining stable distance information and unstable distance information from training on all image samples and the constructing of the initial coordinate vector matrix include:
determining pixel points in the image samples whose distance-information statistical variance is below a set threshold as trusted pixel points, and determining pixel points whose distance-information statistical variance is above the set threshold as untrusted pixel points;
obtaining, based on all the image samples, the mean of the coordinate vectors corresponding to the trusted pixel points;
performing, based on all the image samples, distance-information fitting for the untrusted pixel points from the coordinate vectors of their adjacent trusted pixel points;
and constructing an initial coordinate vector matrix by using the mean value and the fitted distance information.
Further, coordinate offset vectors with values smaller than 1 are set for the untrusted pixel points, and the values of the corresponding offset vectors are determined by the rule that the larger the statistical variance, the smaller the offset vector;
and constructing a coordinate offset vector matrix by using the offset vectors after the values are obtained.
Further, the method further comprises: and if the suspected low-pass obstacle exists in the image, suspending the acceleration mode of the vehicle and entering a pre-deceleration mode until a fine identification result is obtained.
Further, the method further comprises: before distance measurement is carried out, whether distance measurement is carried out or not is decided according to current driving information and road condition information of the vehicle.
To facilitate an understanding of the above embodiments and their preferred versions, reference is made to the following detailed description:
While the autonomous vehicle is driving, the camera continuously captures images of the road ahead, forming planar images frame by frame (called frame images). The camera transmits the planar image information to the domain controller of the autonomous vehicle. The domain controller performs a rough judgment on the received planar image information, that is, it judges whether a low-pass obstacle may exist in each frame image (the judgment criterion may be whether a concentrated region of abrupt color change exists in the planar image, or another fast screening logic may be adopted). If it judges that a low-pass obstacle may be present, it locks information on the suspected obstacle (such as, but not limited to, relative position and visual features), invokes a preset low-pass obstacle identification module, and performs fine identification of the suspected low-pass obstacle in the planar images subsequently acquired during continued driving (the locked suspected-obstacle information is used to locate the obstacle in the subsequent images). At the same time, the domain controller instructs the autonomous driving system to refrain from further throttle actions until the image recognition module completes the final recognition analysis; in this stage the vehicle enters a pre-deceleration mode, such as coasting, light braking, or an energy-recovery state.
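By way of illustration only, the rough judgment described above can be sketched as follows; this sketch is not part of the patent, and the function name, the use of scipy for connected-component labeling, and the threshold values are assumptions:

```python
import numpy as np
from scipy import ndimage  # assumed choice for connected-component labeling

def has_suspected_low_pass_obstacle(frame_rgb, grad_thresh=40.0, min_region_px=200):
    """Rough judgment: does the frame contain a concentrated region of abrupt color change?

    frame_rgb: H x W x 3 uint8 image from the front camera.
    grad_thresh, min_region_px: assumed tuning values, not specified in the patent.
    """
    gray = frame_rgb.astype(np.float32).mean(axis=2)
    # Gradient magnitude as a cheap proxy for "abrupt color change".
    gy, gx = np.gradient(gray)
    grad = np.hypot(gx, gy)
    mask = grad > grad_thresh
    # A "concentrated region" = a sufficiently large connected component of high-gradient pixels.
    labels, n = ndimage.label(mask)
    if n == 0:
        return False, None
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    best = int(np.argmax(sizes)) + 1
    if sizes[best - 1] < min_region_px:
        return False, None
    ys, xs = np.nonzero(labels == best)
    bbox = (xs.min(), ys.min(), xs.max(), ys.max())  # suspected-obstacle locking information
    return True, bbox
```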
The foregoing fine identification of low-pass obstacles can be implemented with a convolutional neural network: for example, each type of low-pass obstacle corresponds to multiple convolution kernels (capturing features such as color, shape, and water-surface ripples), and these kernels can be continuously optimized through deep learning of the network itself. In actual recognition, the convolution kernels are applied to the suspected obstacle in the subsequent planar images across multiple levels of convolution, pooling, and activation, and the recognition of the image is finally completed. In practice, techniques well known in the machine vision field may be used; the recognition technique itself is not limiting.
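A minimal sketch of such a fine-identification network is given below, assuming PyTorch is used; the layer sizes, the class list, and all names are illustrative rather than taken from the patent:

```python
import torch
import torch.nn as nn

# Assumed class list for illustration; the patent only gives examples of obstacle types.
CLASSES = ["not_obstacle", "deceleration_strip", "pothole", "masonry", "standing_water"]

class LowPassObstacleNet(nn.Module):
    """Stacked convolution / pooling / activation stages, as the description suggests."""
    def __init__(self, num_classes=len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):                 # x: N x 3 x H x W crop around the suspect region
        f = self.features(x).flatten(1)
        return self.classifier(f)         # class logits; argmax gives the obstacle type

# Usage sketch: classify a 64x64 crop of the locked suspect region.
logits = LowPassObstacleNet()(torch.zeros(1, 3, 64, 64))
obstacle_type = CLASSES[int(logits.argmax(dim=1))]
```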
If the obstacle is not a low-pass obstacle, the recognition result is returned to the domain controller, and the domain controller resumes normal automatic driving control. If a target obstacle is confirmed, its type is determined, the recognition result is returned to the domain controller, and the domain controller invokes a pre-trained image ranging engine (which may also be built on a neural network or another image-processing model and trained by machine learning) to measure the distance to the target obstacle while driving continues (only one frame of image carrying the fine recognition result is used as input). It should be noted that the precondition for triggering the ranging algorithm of the present invention may be that the vehicle is traveling at a relatively high speed on a relatively wide and flat road surface, rather than on a narrow road surface with dense obstacles: generally speaking, when the vehicle travels under special road conditions such as a narrow road with dense obstacles, the preset speed strategy of the autonomous vehicle will already adopt a relatively low speed under a safety-first mechanism, so it is unnecessary to execute the low-pass obstacle ranging algorithm of the present invention or to reduce the vehicle speed or change the driving direction again.
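The triggering precondition can be expressed as a simple gate; the sketch below uses assumed speed and road-width thresholds that are not specified by the patent:

```python
def should_run_low_pass_ranging(speed_mps, road_width_m, dense_obstacles,
                                min_speed_mps=8.0, min_width_m=5.0):
    """Run the ranging algorithm only on a wide, flat road at relatively high speed.

    On narrow roads with dense obstacles the safety-first speed strategy already keeps
    the vehicle slow, so the extra ranging step is skipped. Threshold values are assumed.
    """
    return speed_mps >= min_speed_mps and road_width_m >= min_width_m and not dense_obstacles
```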
It will be understood by those skilled in the art that obtaining the ranging information in conjunction with the image ranging engine may, in practical operation, mean performing feature extraction on any one frame of image used to obtain the fine recognition result, inputting the extracted features into the image ranging engine, and having the image ranging engine output the predicted distance between the vehicle and the target obstacle.
Specifically, the following example can be referred to for the training mode of the image ranging engine:
the method comprises the steps of arranging markers (corresponding to target low-pass degree obstacles) with remarkable color marks (other special mark forms can be adopted in other embodiments) on a wider road surface, capturing a large number of front road images containing the markers through a camera on a running vehicle, labeling each frame of image (color information can be labeled), and recording the coordinate positions (distance information) of objects contained in each frame of image relative to a vehicle coordinate system.
Then, the pixel information obtained on each frame of image sample is converted into an M × N vector matrix, where M × N is preferably the pixel resolution of the camera image, i.e., each pixel corresponds to one element of the vector matrix, and each element consists of a group of vectors. The vector group in a single element contains at least two types of vectors, a color vector α and a coordinate vector β. In practical operation, the color vector α may be a three-dimensional vector of the three components R (red), G (green), and B (blue); the coordinate vector β may likewise be a three-dimensional vector of the X, Y, Z coordinate components relative to the origin of the vehicle coordinate system. A 3 × 3 pixel vector matrix can thus be exemplified as follows:
$$\begin{bmatrix}
(\alpha_{11},\beta_{11}) & (\alpha_{12},\beta_{12}) & (\alpha_{13},\beta_{13})\\
(\alpha_{21},\beta_{21}) & (\alpha_{22},\beta_{22}) & (\alpha_{23},\beta_{23})\\
(\alpha_{31},\beta_{31}) & (\alpha_{32},\beta_{32}) & (\alpha_{33},\beta_{33})
\end{bmatrix},\qquad
\alpha_{ij}=(R_{ij},G_{ij},B_{ij}),\quad \beta_{ij}=(X_{ij},Y_{ij},Z_{ij})$$
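The construction of such a pixel vector matrix can be sketched as follows; the array layout and function name are assumptions, and the coordinate labels are taken to come from the annotation step described above:

```python
import numpy as np

def build_pixel_vector_matrix(frame_rgb, coord_labels):
    """Build the M x N pixel vector matrix: each element holds a color vector
    alpha = (R, G, B) and a coordinate vector beta = (X, Y, Z).

    frame_rgb:    M x N x 3 camera image.
    coord_labels: M x N x 3 annotated coordinates relative to the vehicle-frame origin
                  (produced by the labeling step; assumed to be supplied here).
    """
    m, n, _ = frame_rgb.shape
    matrix = np.zeros((m, n, 6), dtype=np.float32)
    matrix[..., :3] = frame_rgb          # alpha_ij = (R_ij, G_ij, B_ij)
    matrix[..., 3:] = coord_labels       # beta_ij  = (X_ij, Y_ij, Z_ij)
    return matrix
```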
In the training stage, a large number of image samples and their corresponding pixel vector matrices are input to the image ranging engine, so that the image ranging engine is trained to lock onto the marker in each frame of image according to the color information and to record the row and column indices of the marker's pixels in each frame (i.e., which rows and columns of elements of the M × N matrix make up the marker). The coordinate vectors (the distances between the marker and the vehicle) corresponding to the pixels where the marker is located are thereby obtained, and the coordinate vectors corresponding to other pixel positions in the image can be obtained in the same way.
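Locking the marker pixels by color can be sketched as follows; the marker color and tolerance are illustrative assumptions:

```python
import numpy as np

def lock_marker_pixels(pixel_matrix, marker_rgb=(255, 0, 0), tol=30.0):
    """Return the row/column indices and coordinate vectors of marker pixels.

    The marker is located by its distinctive color (marker_rgb and tol are assumed
    values); the returned beta vectors are the marker-to-vehicle distances recorded
    for those pixels in the pixel vector matrix.
    """
    color = pixel_matrix[..., :3]
    dist = np.linalg.norm(color - np.asarray(marker_rgb, dtype=np.float32), axis=-1)
    rows, cols = np.nonzero(dist < tol)
    betas = pixel_matrix[rows, cols, 3:]   # coordinate vectors at the marker pixels
    return rows, cols, betas
```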
Then, pixels whose coordinate-vector data are relatively stable across samples (for example, whose statistical variance is below a set threshold) can be determined to be trusted pixels; conversely, pixels whose coordinate-vector data fluctuate strongly (for example, whose statistical variance exceeds the set threshold) are determined to be untrusted pixels.
Next, based on the large amount of sample data, the mean coordinate vector of each trusted pixel is obtained, and each untrusted pixel is fitted from the coordinate vectors of its adjacent trusted pixels. Through this process an M × N initial coordinate vector matrix can be formed, each element of which is a coordinate vector. An example of a 3 × 3 initial coordinate vector matrix is as follows:
$$\begin{bmatrix}
\beta_{11} & \beta_{12} & \beta_{13}\\
\beta_{21} & \beta_{22} & \beta_{23}\\
\beta_{31} & \beta_{32} & \beta_{33}
\end{bmatrix},\qquad
\beta_{ij}=(X_{ij},Y_{ij},Z_{ij})$$
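The classification into trusted and untrusted pixels and the construction of the initial coordinate vector matrix can be sketched as follows; the variance threshold and the neighbor-averaging used as a simple stand-in for the fitting step are assumptions:

```python
import numpy as np

def build_initial_coordinate_matrix(coord_stack, var_thresh=0.5):
    """coord_stack: S x M x N x 3 coordinate vectors collected from S training samples.

    Pixels whose per-component variance stays below var_thresh are trusted and take
    their mean coordinate; untrusted pixels are filled from the mean of neighboring
    trusted pixels (a simple stand-in for the fitting described in the text).
    """
    mean = coord_stack.mean(axis=0)                 # M x N x 3 mean coordinate vectors
    var = coord_stack.var(axis=0).max(axis=-1)      # M x N, worst-case component variance
    trusted = var < var_thresh
    init = mean.copy()
    m, n = trusted.shape
    for i, j in zip(*np.nonzero(~trusted)):
        i0, i1 = max(i - 1, 0), min(i + 2, m)
        j0, j1 = max(j - 1, 0), min(j + 2, n)
        nb_mask = trusted[i0:i1, j0:j1]
        if nb_mask.any():                           # fit from adjacent trusted pixels
            init[i, j] = mean[i0:i1, j0:j1][nb_mask].mean(axis=0)
    return init, trusted, var
```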
Furthermore, a coordinate offset vector matrix may be constructed. In the foregoing process, pixels with strongly fluctuating coordinate-vector data appear, and the distance values identified by the image ranging engine at those pixels can be considered to fluctuate strongly; the offset vector at such a pixel can take values below 1 (the offset value corresponding to a stable coordinate can be 1), and the offset vector values can be determined by the rule that the larger the variance, the smaller the offset vector. A coordinate offset vector matrix can thus be constructed; it can likewise be an M × N matrix, each element of which is one offset vector, for example represented as Δ = (δx, δy, δz), where each of the three components can take a value less than 1. An example 3 × 3 coordinate offset vector matrix is as follows:
$$\begin{bmatrix}
\Delta_{11} & \Delta_{12} & \Delta_{13}\\
\Delta_{21} & \Delta_{22} & \Delta_{23}\\
\Delta_{31} & \Delta_{32} & \Delta_{33}
\end{bmatrix},\qquad
\Delta_{ij}=(\delta x_{ij},\,\delta y_{ij},\,\delta z_{ij}),\quad
\delta x_{ij},\delta y_{ij},\delta z_{ij}\in(0,1]$$
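Constructing the coordinate offset vector matrix, and the element-wise application described in the next paragraph, can be sketched as follows; the particular variance-to-offset mapping and the use of one common offset for all three components are assumptions, since the text only requires values in (0, 1] that shrink as the variance grows:

```python
import numpy as np

def build_offset_matrix(var, trusted, scale=1.0):
    """Map per-pixel variance to a coordinate offset vector Delta = (dx, dy, dz) in (0, 1].

    Trusted pixels keep an offset of 1; for untrusted pixels the offset shrinks as the
    variance grows. The 1 / (1 + scale * var) mapping is an assumption, not from the patent.
    """
    offset_scalar = np.where(trusted, 1.0, 1.0 / (1.0 + scale * var))
    return np.repeat(offset_scalar[..., None], 3, axis=-1)   # same offset used for X, Y, Z here

def optimize_coordinates(init_coords, offsets):
    """Element-wise product of the offset matrix and the initial coordinate vector matrix."""
    return init_coords * offsets
```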
The offset vector matrix and the corresponding elements of the initial coordinate vector matrix are multiplied element by element to form the optimized coordinate vector matrix. Specifically, for a pixel whose coordinates fluctuate strongly, its coordinate vector can be regarded as untrusted, so the offset vector shrinks that coordinate vector to a certain degree, i.e., the coordinate value at that pixel is moved closer to the origin of the vehicle coordinate system. The purpose of this design is that, in a real scene, for the position of a pixel carrying an untrusted coordinate vector, the image ranging engine is expected to judge that point as closer to the vehicle than the currently acquired frame suggests, so that a certain safety margin is reserved for the subsequent measures taken by the autonomous vehicle.
For example, if during training the y coordinate in the coordinate vector $\beta_{23}$ at that pixel position shows a large fluctuation (its statistical variance exceeds a certain threshold), $\beta_{23}$ is identified as an untrusted vector; accordingly, $\Delta_{23}$ in the coordinate offset vector matrix can be set to (1, 0.95, 1). The two vectors $\beta_{23}$ and $\Delta_{23}$ are multiplied component by component to produce the optimized result $\beta'_{23}$, which replaces the corresponding element in the initial coordinate vector matrix, thereby optimizing the ranging result.
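Carrying this example through numerically (the coordinate values are illustrative only):

```python
import numpy as np

beta_23 = np.array([1.8, 12.0, 0.1])      # illustrative (X, Y, Z) coordinates in metres
delta_23 = np.array([1.0, 0.95, 1.0])     # y component shrunk because its variance was large
beta_23_opt = beta_23 * delta_23          # component-wise product -> [1.8, 11.4, 0.1]
```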
In this way, the image ranging engine can directly output accurate distance information between the vehicle and the target obstacle using as input only the single frame of image in which the target obstacle has been finely identified.
After obtaining the position, type, distance to the host vehicle, and other information about the target obstacle as described above, the domain controller can plan the driving mode of the vehicle for the next moment, so that the autonomous vehicle makes a targeted response decision.
For example, when the image recognition result indicates that the target obstacle is a type of low-pass obstacle that cannot be driven around, such as a deceleration strip, the vehicle can be controlled to enter an active deceleration mode; specifically, the deceleration for passing is calculated from the default passing speed V0, the current vehicle speed V, and the distance S between this low-pass obstacle and the vehicle.
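One natural way to compute this deceleration, not spelled out in the patent, is the kinematic relation V0^2 = V^2 - 2*a*S, i.e. a = (V^2 - V0^2) / (2*S); a minimal sketch with illustrative variable names:

```python
def required_deceleration(v_current, v0_pass, distance_s):
    """Constant deceleration needed to slow from v_current to v0_pass over distance_s.

    From v0_pass**2 = v_current**2 - 2*a*distance_s, so
    a = (v_current**2 - v0_pass**2) / (2 * distance_s).
    Speeds in m/s, distance in m; returns m/s^2 (0 if already slow enough).
    """
    if v_current <= v0_pass or distance_s <= 0:
        return 0.0
    return (v_current ** 2 - v0_pass ** 2) / (2.0 * distance_s)
```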
When the image recognition result indicates that the target obstacle is a low-pass obstacle type such as relatively small masonry, objects dropped from other vehicles, manhole covers, or shallow pits, the vehicle can be controlled, according to the current vehicle speed V, the distance S between this low-pass obstacle and the vehicle, and their relative position, either to steer around the obstacle or to fine-tune the driving direction so that the low-pass obstacle is brought close to the vehicle centerline, i.e., so that the low-pass obstacle passes between the left and right wheels of the vehicle.
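The choice between straddling and steering around a small obstacle can be sketched as follows; the track width and safety margin are assumed values, not taken from the patent:

```python
def plan_small_obstacle_maneuver(obstacle_width_m, lateral_offset_m,
                                 track_width_m=1.6, margin_m=0.15):
    """Decide between straddling the obstacle and steering around it.

    Straddle if the obstacle (plus a safety margin on each side) fits between the left
    and right wheels after a small heading correction toward the vehicle centerline;
    otherwise steer around it. Track width and margin are illustrative assumptions.
    """
    fits_between_wheels = obstacle_width_m + 2 * margin_m < track_width_m
    if fits_between_wheels:
        # fine-tune heading so the obstacle sits near the centerline, then pass over it
        return "straddle", -lateral_offset_m
    return "steer_around", None
```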
When the image recognition result indicates that the target obstacle is standing water of small area, the aforementioned deceleration-passing strategy or detour strategy can be applied, which is not repeated here. When the target obstacle is standing water of large area or a similar low-pass obstacle, an emergency braking strategy or a gentle braking strategy can be executed according to the ranging information, so that the autonomous vehicle brakes to a stop before entering the risk area, and a new driving route can then be re-planned.
In summary, the design concept of the invention is to perform two-stage visual recognition on the image content acquired while the autonomous vehicle drives continuously, so as to obtain accurate information about the target obstacle; to obtain an accurate measurement of the distance between the low-pass obstacle and the vehicle with a pre-constructed image ranging engine; and finally, using the recognized target-obstacle information and the accurate ranging result, to realize an adaptive obstacle-avoidance decision guided by the obstacle type. The invention achieves accurate detection of low-pass obstacles and can flexibly adjust driving safety measures based on the detection result, so that low-pass obstacles are reasonably avoided or traversed in a targeted manner.
In the embodiments of the present invention, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association between associated objects and indicates that three relationships are possible; for example, "A and/or B" can mean that A exists alone, that A and B exist simultaneously, or that B exists alone, where A and B may each be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects before and after it. "At least one of the following" and similar expressions refer to any combination of the listed items, including any combination of single or plural items. For example, at least one of a, b, and c may represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where each of a, b, and c may be single or multiple.
The structure, features, and effects of the present invention have been described in detail above with reference to the embodiments shown in the drawings, but the above embodiments are merely preferred embodiments of the present invention. Technical features of the above embodiments and their preferred modes can be reasonably combined and configured into various equivalent schemes by those skilled in the art without departing from or altering the design idea and technical effects of the present invention; therefore, the invention is not limited to the embodiments shown in the drawings, and all modifications and equivalent embodiments made according to the idea of the invention fall within the scope of the invention, as long as they do not go beyond the spirit of the description and the drawings.

Claims (8)

1. An automatic driving control method for a low-pass obstacle, characterized by comprising:
continuously receiving images of a front road collected by a camera arranged at the front part of a vehicle in the driving process;
performing primary identification on an object in each frame of the image, and judging whether a suspected low-pass obstacle exists in the image;
if yes, recording identification information of the suspected low-pass obstacle;
based on the identification information, performing fine identification on the suspected low-pass obstacle in the subsequent acquired image;
when the suspected low-pass obstacle is identified as a target obstacle, acquiring type information of the target obstacle, and combining a pre-constructed image ranging engine to obtain ranging information;
determining an avoidance or passing strategy targeted at the type information of the current target obstacle according to the current vehicle driving information, the ranging information, and one or more of the following items of target obstacle information: position information and size information.
2. The automatic driving control method for a low-pass obstacle according to claim 1, wherein the obtaining ranging information in combination with a pre-constructed image ranging engine comprises:
performing feature extraction on any one frame of image used to obtain the fine identification result and inputting it into the image ranging engine, the image ranging engine outputting predicted distance information between the vehicle and the target obstacle.
3. The automatic driving control method for low-pass obstacles according to claim 2, wherein the training mode of the image ranging engine comprises:
setting a marker for representing a target low-pass obstacle in a road in advance;
acquiring an image sample containing the marker through a camera on the vehicle;
labeling color information and distance information of an object in the image sample of each frame, wherein the distance information represents the distance between the object in the image and the vehicle;
constructing a pixel vector matrix of each frame of image sample based on the labeling result;
inputting the image samples and their corresponding pixel vector matrices into an image ranging engine, and enabling the image ranging engine, through iterative learning, to lock onto the markers and their distance information in each frame of image;
statistically analyzing the distance information obtained from training on all image samples to obtain stable distance information and unstable distance information, and constructing an initial coordinate vector matrix;
combining the unstable distance information to construct a coordinate offset vector matrix;
performing point multiplication on the coordinate offset vector matrix and the initial coordinate vector matrix to optimize corresponding unstable distance information;
and updating the initial coordinate vector matrix according to the optimized distance information to obtain a target coordinate vector matrix for outputting the ranging information.
4. The method of claim 3, wherein constructing the pixel vector matrix for each frame of image samples comprises:
obtaining a pixel vector matrix of each frame of image according to the pixel value imaged by the camera;
each pixel point corresponds to one element in the pixel vector matrix, and each element at least comprises a color vector and a coordinate vector; the color vector is a three-dimensional vector constructed from three color components, and the coordinate vector is a three-dimensional vector constructed from three coordinate components with respect to the origin of the vehicle coordinate system.
5. The method of claim 4, wherein the statistically obtaining stable distance information and unstable distance information from training on all image samples and constructing an initial coordinate vector matrix comprises:
determining pixel points in the image samples whose distance-information statistical variance is below a set threshold as trusted pixel points, and determining pixel points whose distance-information statistical variance is above the set threshold as untrusted pixel points;
obtaining, based on all the image samples, the mean of the coordinate vectors corresponding to the trusted pixel points;
performing, based on all the image samples, distance-information fitting for the untrusted pixel points from the coordinate vectors of their adjacent trusted pixel points;
and constructing an initial coordinate vector matrix by using the mean value and the fitted distance information.
6. The automatic driving control method for the low-pass obstacle according to claim 5, wherein a coordinate offset vector with a value less than 1 is set for each untrusted pixel point, and the value of the corresponding offset vector is determined by the rule that the larger the statistical variance, the smaller the offset vector;
and constructing a coordinate offset vector matrix by using the offset vectors after the values are obtained.
7. The automatic driving control method for a low-pass obstacle according to any one of claims 1 to 6, characterized by further comprising: if the suspected low-pass obstacle exists in the image, suspending the acceleration mode of the vehicle and entering a pre-deceleration mode until a fine identification result is obtained.
8. The automatic driving control method for a low-pass obstacle according to any one of claims 1 to 6, characterized by further comprising: before ranging is performed, deciding whether to perform ranging according to the current driving information and road condition information of the vehicle.
CN202110815596.0A 2021-07-19 2021-07-19 Automatic driving control method for low-pass obstacle Active CN113486837B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110815596.0A CN113486837B (en) 2021-07-19 2021-07-19 Automatic driving control method for low-pass obstacle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110815596.0A CN113486837B (en) 2021-07-19 2021-07-19 Automatic driving control method for low-pass obstacle

Publications (2)

Publication Number Publication Date
CN113486837A true CN113486837A (en) 2021-10-08
CN113486837B CN113486837B (en) 2023-07-18

Family

ID=77941448

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110815596.0A Active CN113486837B (en) 2021-07-19 2021-07-19 Automatic driving control method for low-pass obstacle

Country Status (1)

Country Link
CN (1) CN113486837B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114407901A (en) * 2022-02-18 2022-04-29 北京小马易行科技有限公司 Control method and device for automatic driving vehicle and automatic driving system

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005329779A (en) * 2004-05-19 2005-12-02 Daihatsu Motor Co Ltd Method and device for recognizing obstacle
JP2005332120A (en) * 2004-05-19 2005-12-02 Daihatsu Motor Co Ltd Obstruction recognition means and obstruction recognition device
JP2008298533A (en) * 2007-05-30 2008-12-11 Konica Minolta Holdings Inc Obstruction measurement method, device, and system
JP2015194798A (en) * 2014-03-31 2015-11-05 日産自動車株式会社 Driving assistance control device
CN108569286A (en) * 2017-03-13 2018-09-25 丰田自动车株式会社 Collision elimination control device
CN108596058A (en) * 2018-04-11 2018-09-28 西安电子科技大学 Running disorder object distance measuring method based on computer vision
CN109116374A (en) * 2017-06-23 2019-01-01 百度在线网络技术(北京)有限公司 Determine the method, apparatus, equipment and storage medium of obstacle distance
CN109829403A (en) * 2019-01-22 2019-05-31 淮阴工学院 A kind of vehicle collision avoidance method for early warning and system based on deep learning
CN110147706A (en) * 2018-10-24 2019-08-20 腾讯科技(深圳)有限公司 The recognition methods of barrier and device, storage medium, electronic device
CN110688903A (en) * 2019-08-30 2020-01-14 陕西九域通创轨道系统技术有限责任公司 Obstacle extraction method based on camera data of train AEB system
CN110825093A (en) * 2019-11-28 2020-02-21 安徽江淮汽车集团股份有限公司 Automatic driving strategy generation method, device, equipment and storage medium
CN110909569A (en) * 2018-09-17 2020-03-24 深圳市优必选科技有限公司 Road condition information identification method and terminal equipment
CN111046843A (en) * 2019-12-27 2020-04-21 华南理工大学 Monocular distance measurement method under intelligent driving environment
CN111832418A (en) * 2020-06-16 2020-10-27 北京汽车研究总院有限公司 Vehicle control method, device, vehicle and storage medium
CN112014845A (en) * 2020-08-28 2020-12-01 安徽江淮汽车集团股份有限公司 Vehicle obstacle positioning method, device, equipment and storage medium
CN112180951A (en) * 2020-11-10 2021-01-05 桃江县缘湘聚文化传媒有限责任公司 Intelligent obstacle avoidance method for unmanned vehicle and computer readable storage medium
CN112373467A (en) * 2020-11-10 2021-02-19 桃江县缘湘聚文化传媒有限责任公司 Intelligent obstacle avoidance system of unmanned automobile
CN113228042A (en) * 2018-12-28 2021-08-06 辉达公司 Distance of obstacle detection in autonomous machine applications

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005329779A (en) * 2004-05-19 2005-12-02 Daihatsu Motor Co Ltd Method and device for recognizing obstacle
JP2005332120A (en) * 2004-05-19 2005-12-02 Daihatsu Motor Co Ltd Obstruction recognition means and obstruction recognition device
JP2008298533A (en) * 2007-05-30 2008-12-11 Konica Minolta Holdings Inc Obstruction measurement method, device, and system
JP2015194798A (en) * 2014-03-31 2015-11-05 日産自動車株式会社 Driving assistance control device
CN108569286A (en) * 2017-03-13 2018-09-25 丰田自动车株式会社 Collision elimination control device
CN109116374A (en) * 2017-06-23 2019-01-01 百度在线网络技术(北京)有限公司 Determine the method, apparatus, equipment and storage medium of obstacle distance
CN108596058A (en) * 2018-04-11 2018-09-28 西安电子科技大学 Running disorder object distance measuring method based on computer vision
CN110909569A (en) * 2018-09-17 2020-03-24 深圳市优必选科技有限公司 Road condition information identification method and terminal equipment
CN110147706A (en) * 2018-10-24 2019-08-20 腾讯科技(深圳)有限公司 The recognition methods of barrier and device, storage medium, electronic device
CN113228042A (en) * 2018-12-28 2021-08-06 辉达公司 Distance of obstacle detection in autonomous machine applications
CN109829403A (en) * 2019-01-22 2019-05-31 淮阴工学院 A kind of vehicle collision avoidance method for early warning and system based on deep learning
CN110688903A (en) * 2019-08-30 2020-01-14 陕西九域通创轨道系统技术有限责任公司 Obstacle extraction method based on camera data of train AEB system
CN110825093A (en) * 2019-11-28 2020-02-21 安徽江淮汽车集团股份有限公司 Automatic driving strategy generation method, device, equipment and storage medium
CN111046843A (en) * 2019-12-27 2020-04-21 华南理工大学 Monocular distance measurement method under intelligent driving environment
CN111832418A (en) * 2020-06-16 2020-10-27 北京汽车研究总院有限公司 Vehicle control method, device, vehicle and storage medium
CN112014845A (en) * 2020-08-28 2020-12-01 安徽江淮汽车集团股份有限公司 Vehicle obstacle positioning method, device, equipment and storage medium
CN112180951A (en) * 2020-11-10 2021-01-05 桃江县缘湘聚文化传媒有限责任公司 Intelligent obstacle avoidance method for unmanned vehicle and computer readable storage medium
CN112373467A (en) * 2020-11-10 2021-02-19 桃江县缘湘聚文化传媒有限责任公司 Intelligent obstacle avoidance system of unmanned automobile

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MINSOO SONG ET AL.: "Monocular Depth Estimation Using Laplacian Pyramid-Based Depth Residuals", IEEE Transactions on Circuits and Systems for Video Technology, pages 1-13 *
BI Tianteng et al.: "A Survey of Monocular Image Depth Estimation Based on Supervised Learning" (《基于监督学习的单幅图像深度估计综述》), Journal of Computer-Aided Design & Computer Graphics (《计算机辅助设计与图形学学报》) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114407901A (en) * 2022-02-18 2022-04-29 北京小马易行科技有限公司 Control method and device for automatic driving vehicle and automatic driving system
CN114407901B (en) * 2022-02-18 2023-12-19 北京小马易行科技有限公司 Control method and device for automatic driving vehicle and automatic driving system

Also Published As

Publication number Publication date
CN113486837B (en) 2023-07-18

Similar Documents

Publication Publication Date Title
US11093801B2 (en) Object detection device and object detection method
CN110942449B (en) Vehicle detection method based on laser and vision fusion
CN109460709B (en) RTG visual barrier detection method based on RGB and D information fusion
CN110765922B (en) Binocular vision object detection obstacle system for AGV
CN104508722B (en) Vehicle-mounted surrounding identification device
US11308717B2 (en) Object detection device and object detection method
US11460851B2 (en) Eccentricity image fusion
US10699567B2 (en) Method of controlling a traffic surveillance system
US11170272B2 (en) Object detection device, object detection method, and computer program for object detection
CN108596058A (en) Running disorder object distance measuring method based on computer vision
EP2960858B1 (en) Sensor system for determining distance information based on stereoscopic images
US20160305785A1 (en) Road surface detection device and road surface detection system
US11335100B2 (en) Traffic light recognition system and method thereof
CN111891061B (en) Vehicle collision detection method and device and computer equipment
CN110717445B (en) Front vehicle distance tracking system and method for automatic driving
CN111753623B (en) Method, device, equipment and storage medium for detecting moving object
US10984264B2 (en) Detection and validation of objects from sequential images of a camera
CN104951758A (en) Vehicle-mounted method and vehicle-mounted system for detecting and tracking pedestrians based on vision under urban environment
CN115331191B (en) Vehicle type recognition method, device, system and storage medium
CN113486837B (en) Automatic driving control method for low-pass obstacle
CN113221739B (en) Monocular vision-based vehicle distance measuring method
US11120292B2 (en) Distance estimation device, distance estimation method, and distance estimation computer program
US11069049B2 (en) Division line detection device and division line detection method
Muril et al. A review on deep learning and nondeep learning approach for lane detection system
CN115240471B (en) Intelligent factory collision avoidance early warning method and system based on image acquisition

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No.669 Shixin Road, economic development zone, Feixi County, Hefei City, Anhui Province

Applicant after: ANHUI JIANGHUAI AUTOMOBILE GROUP Corp.,Ltd.

Address before: 230601 No. 669 Shixin Road, Taohua Industrial Park, Hefei City, Anhui Province

Applicant before: ANHUI JIANGHUAI AUTOMOBILE GROUP Corp.,Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant