CN115187963A - Vehicle obstacle detection method, system, device, medium, and program - Google Patents


Info

Publication number
CN115187963A
CN115187963A (application CN202210876881.8A)
Authority
CN
China
Prior art keywords
obstacle
vehicle
projected image
projection
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210876881.8A
Other languages
Chinese (zh)
Inventor
邵俊宇
任凡
陆波
殷炎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd filed Critical Chongqing Changan Automobile Co Ltd
Priority to CN202210876881.8A priority Critical patent/CN115187963A/en
Publication of CN115187963A publication Critical patent/CN115187963A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/862Combination of radar systems with sonar systems
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/865Combination of radar systems with lidar systems
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/93Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S15/00Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/86Combinations of sonar systems with lidar systems; Combinations of sonar systems with systems not using wave reflection
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S15/00Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/88Sonar systems specially adapted for specific applications
    • G01S15/93Sonar systems specially adapted for specific applications for anti-collision purposes
    • G01S15/931Sonar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/088Non-supervised learning, e.g. competitive learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/93Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G01S2013/9314Parking operations
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S15/00Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/88Sonar systems specially adapted for specific applications
    • G01S15/93Sonar systems specially adapted for specific applications for anti-collision purposes
    • G01S15/931Sonar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G01S2015/932Sonar systems specially adapted for specific applications for anti-collision purposes of land vehicles for parking operations

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Electromagnetism (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Acoustics & Sound (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present application provides a vehicle obstacle detection method, system, device, medium, and program. The method includes: acquiring a projected image formed by projected light on the ground in the current state of the vehicle; comparing the projected image with a preset projected image to determine an obstacle and the position area of the obstacle in the projected image; and determining the type of the obstacle, which includes a bulge and a fault, according to the line changes produced by the obstacle in its position area in the projected image, thereby implementing obstacle detection. The present application is not affected by ambient brightness: even in a dimly lit parking lot, obstacles can be detected by means of the projected light, which not only improves the detection accuracy but also requires no additional sensors, greatly reducing the cost of obstacle detection.

Description

Vehicle obstacle detection method, system, device, medium, and program
Technical Field
The present application relates to the field of vehicle safety detection, and in particular, to a vehicle obstacle detection method, system, device, medium, and program.
Background
In the fully automatic parking process, the driver leaves the vehicle, and the vehicle must find a parking space in the parking lot and park in it by itself. To ensure that the vehicle can park automatically and safely, obstacles in the vehicle's current scene must be detected accurately.
In the related art, the camera is a sensor widely used in automatic parking: it receives light from a target, forms an image, and detects whether targets in the image are obstacles through an associated algorithm. However, this is an image-based 2D detection method whose detection capability is weak for untrained or texture-poor targets, so the accuracy of existing vision-based obstacle detection is low and cannot meet the requirements of vehicle obstacle detection.
Summary of the application
In view of the above drawbacks of the prior art, the present application provides a vehicle obstacle detection method, system, device, medium, and program to solve the above-described problem of low detection accuracy when obstacles are detected with a camera.
In a first aspect, the present application provides a vehicle obstacle detection method, including:
acquiring a projected image formed by projected light on the ground in the current state of the vehicle, wherein the current state comprises a driving state or a stationary state;
comparing the projected image with a preset projected image, and determining an obstacle and a position area of the obstacle in the projected image;
and determining the type of the obstacle according to the line changes produced by the obstacle in its position area in the projected image, thereby implementing obstacle detection, wherein the type of the obstacle includes a bulge (a protrusion) and a fault (a drop-off).
In an embodiment of the present application, after determining the type of the obstacle to implement obstacle detection, the method further includes:
if the type of the obstacle is a bulge, calculating the height of the bulge according to the projection angle of the vehicle's projected light at the farthest point and the distance driven, where the distance driven and the projection angle are determined from the times at which the deformation of the projected image appears and disappears;
and if the type of the obstacle is a fault, calculating the height of the fault according to the similar-triangle principle, using the horizontal and vertical distances between the point at which the vehicle generates the projection and the acquired projected image.
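A minimal numeric sketch of the similar-triangle computation for the fault case, under assumed geometry that the patent does not spell out here: the projector sits at a known height h_p above flat ground, and its ray would reach flat ground at horizontal distance d_flat; when the ground drops away, the same ray lands farther out at d_observed, and the constant slope of the ray gives h_p / d_flat = (h_p + h) / d_observed. All three parameter names are illustrative assumptions.

```python
def fault_depth(h_p, d_flat, d_observed):
    """Depth of a fault (drop-off) from similar triangles.

    h_p        -- projector mounting height above flat ground (assumed known)
    d_flat     -- horizontal distance at which the ray hits flat ground
    d_observed -- horizontal distance at which the ray actually lands
    The ray descends at a constant slope, so h_p/d_flat = (h_p + h)/d_observed,
    which rearranges to h = h_p * (d_observed - d_flat) / d_flat.
    """
    return h_p * (d_observed - d_flat) / d_flat
```

With a projector 1 m above the ground whose ray nominally lands 2 m out, a ray observed landing 3 m out implies a 0.5 m drop.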
In an embodiment of the present application, comparing the projection image with a preset projection image to determine an obstacle and a position area of the obstacle in the projection image includes:
comparing the projected image with a preset projected image;
if the projected image is the same as the preset projected image, determining that no obstacle exists in the projected image;
and if the projected image is different from the preset projected image, determining the obstacle and the position area of the obstacle according to the graphic offset between the projected image and the preset projected image.
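The comparison in this embodiment can be sketched as a pixel-level difference between the two projected images. This is an illustrative reconstruction, not the patent's implementation; the grayscale input and the `diff_threshold` value are assumptions.

```python
import numpy as np

def locate_obstacle_region(projected, preset, diff_threshold=30):
    """Compare a projected image with the preset (obstacle-free) image.

    Returns None when the images match (no obstacle); otherwise the
    bounding box (row_min, row_max, col_min, col_max) of the region
    where the projected pattern is offset from the preset one.
    """
    # Pixel-wise offset between the two grayscale projection images.
    diff = np.abs(projected.astype(np.int32) - preset.astype(np.int32))
    mask = diff > diff_threshold
    if not mask.any():
        return None  # images are the same: no obstacle in the projected image
    rows, cols = np.nonzero(mask)
    return rows.min(), rows.max(), cols.min(), cols.max()
```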
In an embodiment of the present application, determining the type of the obstacle according to the line changes produced in the position area of the obstacle in the projected image to implement obstacle detection includes:
if the farthest distance reached by the line in the projected image is farther than the farthest distance of the line in the preset projected image, determining that the type of the obstacle is a fault;
and if the farthest distance reached by the line in the projected image is closer than the farthest distance of the line in the preset projected image, determining that the type of the obstacle is a bulge.
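The two rules above reduce to comparing the farthest reach of the projected line against its reach in the obstacle-free baseline. A minimal sketch, where the tolerance `tol` is an added assumption to absorb measurement noise:

```python
def classify_obstacle(farthest_line_dist, preset_farthest_dist, tol=0.01):
    """Classify an obstacle from the farthest distance reached by the
    projected line, per the rules above:
      - line reaches farther than in the preset image -> "fault" (drop-off)
      - line stops closer than in the preset image    -> "bulge" (protrusion)
      - within tolerance -> no obstacle detected
    """
    delta = farthest_line_dist - preset_farthest_dist
    if delta > tol:
        return "fault"
    if delta < -tol:
        return "bulge"
    return "none"
```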
In an embodiment of the present application, after determining the height of the obstacle, the method further includes:
comparing the height of the obstacle with a preset height;
if the height of the obstacle is lower than the preset height, determining that the vehicle can pass over the obstacle at that height;
and if the height of the obstacle is not lower than the preset height, determining that the vehicle cannot pass over the obstacle at that height, and issuing an alarm.
In an embodiment of the present application, after determining that the vehicle cannot pass through the obstacle at the height, the method further includes:
acquiring the closest distance between the vehicle and the obstacle;
comparing the closest distance with a preset safe-distance threshold, and issuing an obstacle alarm if the closest distance is less than or equal to the preset safe-distance threshold; if the closest distance is greater than the preset safe-distance threshold, taking no action.
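The pass/alarm logic of the last two embodiments can be collected into one hedged sketch; the returned labels are illustrative, not taken from the patent:

```python
def obstacle_decision(height, closest_dist, max_passable_height, safe_dist):
    """Decide how to react to a detected obstacle.

    height              -- estimated obstacle height
    closest_dist        -- closest distance between the vehicle and the obstacle
    max_passable_height -- preset height the vehicle can drive over
    safe_dist           -- preset safe-distance threshold
    """
    if height < max_passable_height:
        return "pass"      # vehicle can drive over the obstacle
    if closest_dist <= safe_dist:
        return "alarm"     # impassable and too close: raise an obstacle alarm
    return "monitor"       # impassable but still beyond the safe distance
```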
In an embodiment of the present application, after determining the type of the obstacle to implement obstacle detection, the method further includes:
acquiring ultrasonic radar data, millimeter wave radar data and laser radar data of the vehicle in the current state;
fusing the laser radar data, the ultrasonic radar data and the millimeter wave radar data to determine environment perception information;
labeling according to the environment perception information and the obstacle detection result to generate a training set;
training a pre-constructed convolutional neural network based on the training set to obtain an intelligent decision model;
and inputting the environment perception information and the obstacle detection result into the intelligent decision model to output a corresponding control strategy.
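The labeling step above, pairing each fused environment-perception frame with the corresponding obstacle-detection result, can be sketched as follows; the record layout is an assumption, and the CNN training itself is omitted:

```python
def build_training_set(perception_frames, obstacle_results):
    """Pair each fused environment-perception frame with the matching
    obstacle-detection result to form labelled training samples."""
    return [{"features": frame, "label": result}
            for frame, result in zip(perception_frames, obstacle_results)]
```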
In an embodiment of the present application, after determining the type of the obstacle to complete the obstacle detection, the method further includes:
carrying out parking space recognition on visual data of a current scene of a vehicle based on a visual deep learning algorithm, determining visual parking space information and marking a first time point;
carrying out parking space identification on the ultrasonic radar data of the current scene of the vehicle based on a machine learning algorithm, determining space parking space information and marking a second time point;
fusing the spatial parking-space information and the visual parking-space information whose first time point and second time point are associated with the same moment, to obtain target parking-space information;
and assisting automatic parking of the vehicle according to the target parking-space information and the result of detecting obstacles between the target parking space and the vehicle.
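The time-point association above can be sketched as pairing visual and spatial detections that refer to the same moment; the dict layout with a `t` key and the `max_dt` window are illustrative assumptions:

```python
def fuse_parking_spaces(visual, spatial, max_dt=0.05):
    """Pair visual and spatial parking-space detections whose time stamps
    (the first and second time points) refer to the same moment.

    visual, spatial -- lists of dicts like {"t": <time>, "space": <info>}
    max_dt          -- maximum time difference treated as "the same moment"
    """
    fused = []
    for v in visual:
        for s in spatial:
            if abs(v["t"] - s["t"]) <= max_dt:
                # merge the two detections into one target parking space
                fused.append({"t": v["t"], "visual": v["space"],
                              "spatial": s["space"]})
                break
    return fused
```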
In a second aspect, the present application provides a vehicle obstacle detection system comprising:
the image acquisition module, which is used for acquiring a projected image formed by projected light on the ground in the current state of the vehicle, wherein the current state comprises a driving state or a stationary state;
the image comparison module is used for comparing the projected image with a preset projected image and determining an obstacle in the projected image and a position area of the obstacle;
and the obstacle detection module is used for determining the type of the obstacle according to the line change generated by the obstacle in the position area in the projection graph to realize obstacle detection, wherein the type of the obstacle comprises a bulge and a fault.
In a third aspect, the present application provides an electronic device comprising:
one or more processors;
a storage device for storing one or more programs that, when executed by the one or more processors, cause the electronic device to implement the vehicle obstacle detection method described above.
In a fourth aspect, the present application provides a vehicle device including the electronic device described above.
In a fifth aspect, the present application provides a computer-readable storage medium having stored thereon computer-readable instructions which, when executed by a processor of a computer, cause the computer to execute the above-mentioned vehicle obstacle detection method.
In a sixth aspect, the present application provides a computer program product or a computer program, the computer program product or the computer program comprising computer instructions stored in a computer-readable storage medium, a processor of an electronic device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the electronic device executes the vehicle obstacle detection method.
Beneficial effects of the present application: a projected image formed by projected light on the ground in the current state of the vehicle is acquired and compared with a preset projected image to determine an obstacle and the position area of the obstacle in the projected image; the type of the obstacle is then determined from the line changes produced by the obstacle in its position area in the projected image, completing the obstacle detection. Compared with other visual obstacle detection, the present application is not affected by ambient brightness: even in a dimly lit parking lot, obstacles can be detected by means of the projected light, which not only improves the detection accuracy but also requires no additional sensors, greatly reducing the cost of obstacle detection.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and, together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
fig. 1 is a schematic view of an implementation environment of a vehicle obstacle detection method according to an exemplary embodiment of the present application;
FIG. 2 is a flow chart of a vehicle obstacle detection method shown in an exemplary embodiment of the present application;
fig. 3 is a flow chart illustrating obstacle height determination in a vehicle obstacle detection method according to an exemplary embodiment of the present application;
fig. 4 is a flowchart illustrating a parking garage entry in a vehicle obstacle detection method according to an exemplary embodiment of the present application;
FIG. 5 is a flow chart illustrating low speed driving in a vehicle obstacle detection method according to an exemplary embodiment of the present application;
FIG. 6 is a flow chart of fault height determination in a vehicle obstacle detection method according to an exemplary embodiment of the present application;
FIG. 7 is a flow chart illustrating a bump height in a vehicle obstacle detection method according to an exemplary embodiment of the present application;
fig. 8 is a block diagram showing the structure of a vehicle obstacle detection system according to an exemplary embodiment of the present application;
FIG. 9 is a schematic structural diagram of a vehicle obstacle detection system shown in an exemplary embodiment of the present application;
FIG. 10 is a schematic structural diagram of a computer system suitable for implementing the electronic device of the embodiments of the present application.
Detailed Description
Other advantages and effects of the present application will become apparent to those skilled in the art from the disclosure herein, wherein the embodiments of the present application will be described in detail with reference to the accompanying drawings and preferred embodiments. The application is capable of other and different embodiments and its several details are capable of modifications and various changes in detail without departing from the spirit of the application. It should be understood that the preferred embodiments are for purposes of illustration only and are not intended to limit the scope of the present disclosure.
It should be noted that the drawings provided in the following embodiments only illustrate the basic idea of the present application and show only the components related to it. They are not drawn according to the number, shape and size of the components in an actual implementation, in which the type, quantity and proportion of each component may change freely and the layout may be more complicated.
In the following description, numerous details are set forth to provide a more thorough explanation of the embodiments of the present application, however, it will be apparent to one skilled in the art that the embodiments of the present application may be practiced without these specific details, and in other embodiments, well-known structures and devices are shown in block diagram form rather than in detail in order to avoid obscuring the embodiments of the present application.
Please refer to fig. 1, a schematic diagram of an implementation environment of a vehicle obstacle detection method according to an embodiment of the present application. As shown in fig. 1, the network architecture of the implementation environment may include a device side 11 and vehicles 10 (a vehicle-terminal cluster). The vehicle-terminal cluster may comprise one or more vehicle terminals, the number of which is not limited here; specifically, it may include vehicle terminal a, vehicle terminal b, vehicle terminal c, …, and vehicle terminal n. Each of these vehicle terminals can be connected to the device side through a network, so that it can exchange data with the device side 11 over the network connection. The network connection is not limited to a specific manner and may, for example, be made directly or indirectly through wireless communication.
Note that in one processing mode, the device side 11 can be integrated inside a vehicle terminal, which then acquires vehicle data, performs obstacle detection on the acquired data, and determines the obstacles around the vehicle. In the other processing mode, the vehicle terminal is responsible for collecting vehicle data and synchronizes the collected data to the device side, i.e., the cloud, which performs the obstacle detection and thereby completes the detection of obstacles around the vehicle.
The vehicle 10 in the embodiment of the present application may be a power-driven automobile, such as a truck, dump truck, off-road vehicle, car, bus, tractor or semi-trailer tractor, or special-purpose vehicle. A truck is mainly used for transporting goods, and some trucks can also pull a full trailer. A dump truck is a truck mainly for transporting goods that is fitted with a tipping cargo box; it is mainly suitable for running on bad roads or in roadless areas and is mainly used in national defense, forest areas and mines. An off-road vehicle is an all-wheel-drive vehicle with high trafficability on bad roads or in roadless areas, suitable for driving there and mainly used in national defense, forest areas and mines. Cars carry people and their personal belongings; the seats of these four-wheeled vehicles lie between the two axles, and by engine displacement cars can be divided into minicars (below 1 L), ordinary cars (1-1.6 L), medium-grade cars (1.6-2.5 L), upper-medium-grade cars (2.5-4 L) and high-grade cars (above 4 L). A bus has a rectangular passenger compartment and is mainly used for carrying people and their luggage; by purpose, buses can be divided into coaches, group buses, city buses, tourist buses and the like. Tractors and semi-trailer tractors are mainly used for towing trailers or semi-trailers and, according to what they tow, can be divided into semi-trailer tractors and full-trailer tractors. A special-purpose vehicle is equipped with special equipment and has special functions for special transportation tasks or operations, such as fire trucks, ambulances, tank trucks, bullet-proof vehicles and engineering vehicles, without limitation.
As shown in fig. 1, the device side 11 in the embodiment of the present application may be a server corresponding to the application client. The device side 11 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing cloud computing services.
Referring to fig. 2, a flowchart of a vehicle obstacle detection method according to an exemplary embodiment of the present application is shown, including:
step S210, acquiring a projection image formed by projection light on the ground under the current state of the vehicle, wherein the current state comprises a driving state or a static state;
the current state of the vehicle includes a stationary state and a driving state, and the driving state mainly refers to a low-speed driving state, for example, low-speed driving at a driving speed of less than 40 kilometers per hour. And the projected light is emitted by a vehicle lamp, a welcome lamp or an additional projection vehicle lamp (projection device). The projection light can be visible light, namely, light with the frequency of 380-750 THz and the wavelength of 780-400 nm which can be sensed by human eyes. For example, in the open air with strong sunlight, light waves with less frequency components in the solar spectrum can be projected to prevent the influence of sunlight, and the light waves with corresponding frequencies are received by the sensor to obtain a projected image on the ground.
The projected light may also be invisible light, such as ultraviolet light, infrared light, or far-infrared light, in which case a light-receiving sensor matched to the projected light generated by the projection device is used to obtain the projected image.
Step S220, comparing the projected image with a preset projected image, and determining an obstacle in the projected image and a position area of the obstacle;
Specifically, the preset projected image is a projected image of the ground of the current scene without obstacles. Comparing the projected image with the preset projected image mainly means comparing the patterns in the two images, the patterns being formed by the projected light of the projection pattern emitted by the projection device.
Step S230, determining the type of the obstacle according to the line changes produced by the obstacle in its position area in the projected image, thereby implementing obstacle detection, wherein the type of the obstacle includes a bulge and a fault.
Specifically, an obstacle in the projected image is judged from the line changes produced in the position area of the obstacle in the projected image.
For example, if the farthest distance reached by the line in the projected image is farther than the farthest distance of the line in the preset projected image, the type of the obstacle is determined to be a fault;
and if the farthest distance reached by the line in the projected image is closer than the farthest distance of the line in the preset projected image, the type of the obstacle is determined to be a bulge.
In this way, the type of obstacle in the projected image can be judged effectively, making it convenient to learn the specific condition of the road surface.
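The bulge/fault rule above can be sketched as a small classifier. The function name, the tolerance band, and the string labels are illustrative assumptions, not part of the patent:

```python
def classify_obstacle(observed_far_dist, expected_far_dist, tol=0.05):
    """Classify an obstacle from the farthest projected-line distance.

    Per the rule in the text: if the line reaches farther than in the
    flat-ground reference image, the road drops away (a fault); if it
    falls short, something protrudes above the road (a bulge). The
    relative tolerance `tol` absorbs small measurement noise and is an
    illustrative assumption.
    """
    if observed_far_dist > expected_far_dist * (1 + tol):
        return "fault"   # road surface drops below grade
    if observed_far_dist < expected_far_dist * (1 - tol):
        return "bulge"   # obstacle rises above grade
    return "none"        # within tolerance: no obstacle detected
```

The tolerance keeps sensor jitter from being flagged as an obstacle; its value would be calibrated per vehicle.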
In this embodiment, a projected image formed on the ground by the projection light in the current state of the vehicle is obtained and compared with a preset projected image to determine the obstacle and its position area in the projected image; the type of the obstacle is then determined from the line change produced by the obstacle in that position area, completing obstacle detection. Compared with other visual obstacle detection, this application is not affected by ambient brightness: even in a dimly lit parking lot, obstacle detection can still be carried out by means of the projection light. This not only improves the detection accuracy of obstacles without requiring additional assisting sensors, but also greatly reduces the cost of obstacle detection.
Referring to fig. 3, a flow chart of obstacle height determination in a vehicle obstacle detection method according to an exemplary embodiment of the present application is shown, where after determining the type of the obstacle to implement obstacle detection, the method further includes:
step S320, if the type of the obstacle is a bulge, calculating the height of the bulge obstacle from the projection angle of the vehicle's projection light at the farthest point and the driving distance, where the driving distance and the projection angle are determined from the times at which the deformation of the projected image appears and disappears;
for example, the time at which the deformation of the projected image appears and the time at which it disappears are determined from the projected image, giving a time period. The distance L traveled by the vehicle in the current scene is determined from the current vehicle speed and this time period. Using the angle α between the ground and the projection ray at the farthest projection point on the ground, the obstacle height for the current vehicle is H = L·tan(α).
In detail, referring to fig. 7, the height of the obstacle in the current projection image can be determined by the included angle formed by the projection light generated by the projection device and the obstacle.
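The height computation of step S320 can be sketched as follows. Constant vehicle speed over the deformation interval and the function/parameter names are assumptions for illustration:

```python
import math

def bulge_height(speed_mps, t_deform_start, t_deform_end, alpha_deg):
    """Bulge height per the text's formula H = L * tan(alpha).

    L is the distance driven between the moment the projected pattern
    first deforms and the moment the deformation disappears; alpha is
    the angle the projection ray makes with the ground at the farthest
    projection point (Fig. 7). Constant speed over the interval is
    assumed.
    """
    L = speed_mps * (t_deform_end - t_deform_start)  # distance driven
    return L * math.tan(math.radians(alpha_deg))
```

For example, at 2 m/s with a 0.5 s deformation interval and a 45° projection angle, the computed height is about 1 m.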
And step S330, if the type of the obstacle is a fault, calculating the height of the fault obstacle by the similar-triangle principle, using the horizontal and vertical distances between the point where the vehicle generates the projection and the point where the projected image is collected.
Referring to fig. 6, h is the fault height to be calculated, and L2 is the distance from the projection point on the fault to the camera. H3' is a point on a similar triangle introduced for computational convenience, which gives: H3/H3' = L3/(L1 + L4 - L3).
For example, a projection device generates the projection (light) and a camera collects the image; it should be noted that the camera's photosensitive sensor is located inside the camera, at a certain distance from the camera lens.
L1: the theoretical distance from the projection device to the farthest ground irradiation point when the ground is flat;
L3: the distance from the camera lens to the camera's photosensitive sensor;
L4: the distance from the projection device to the camera's photosensitive sensor;
H1: the height of the camera;
H2: the height of the projection device;
H3: the distance on the camera sensor between the imaging point of the theoretical farthest ground irradiation point (flat ground) and the imaging point of the actual farthest ground irradiation point.
According to the similar-triangle principle, one can obtain:
h/(h+H2)=(L2-L1-L4)/(L2-L4)
(h+H3’)/(h+H1)=(L2-L1-L4)/L2
From these relations, the height h of the fault can be calculated.
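Solving the first similar-triangle relation for h gives a closed form; the sketch below uses the parameter names of Fig. 6, while the rearrangement itself is ours and should be checked against the intended geometry:

```python
def fault_height(L1, L2, L4, H2):
    """Fault (drop-off) height from the relation
    h / (h + H2) = (L2 - L1 - L4) / (L2 - L4), solved for h.

    L1: flat-ground distance from projector to farthest irradiation point,
    L2: distance from the projection point on the fault to the camera,
    L4: distance from projector to camera sensor,
    H2: projector height.
    """
    r = (L2 - L1 - L4) / (L2 - L4)  # the common ratio of the triangles
    return r * H2 / (1.0 - r)       # h = r*(h + H2)  =>  h(1 - r) = r*H2
```

With H2 = 1.0 m, L1 = 3.0 m, L4 = 0.2 m, a measured L2 of 4.7 m yields a drop of 0.5 m, consistent with substituting h = 0.5 back into the original relation.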
In this embodiment, different calculation methods are used for different obstacle types to compute their respective heights. In this way, the obstacle height can be calculated accurately without complex algorithms, additional vehicle modifications, or assistance from ultrasonic radar, laser radar, and the like; moreover, compared with other camera-based obstacle detection methods, the detection accuracy is higher.
In other embodiments, comparing the projection image with a preset projection image to determine an obstacle and a position area of the obstacle in the projection image includes:
comparing the projected image with a preset projected image;
if the projected image is the same as the preset projected image, determining that no obstacle exists in the projected image;
and if the projected image is different from the preset projected image, determining the obstacle and the position area of the obstacle according to the pattern offset between the projected image and the preset projected image.
For example, when the pattern deviation exceeds a preset threshold, a pattern offset is determined; once the projected image is determined to have a pattern different from that of the preset projected image, the obstacle and the position area where it lies are determined according to the degree of the image offset.
In this way, the obstacle and its position area in the projected image can be determined quickly, making it convenient to locate the obstacle accurately.
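The compare-and-locate step can be sketched as a per-pixel difference followed by a bounding box. Representing images as 2-D lists of grayscale values, the threshold value, and the box output format are all illustrative assumptions:

```python
def locate_obstacle(observed, reference, threshold=30):
    """Locate the obstacle region by differencing the captured projected
    image against the flat-ground reference, as the comparison step
    describes. Returns (row_min, row_max, col_min, col_max) bounding
    the pixels whose deviation exceeds the threshold, or None if the
    images match (no obstacle).
    """
    hits = [(r, c)
            for r, (row_o, row_r) in enumerate(zip(observed, reference))
            for c, (o, p) in enumerate(zip(row_o, row_r))
            if abs(o - p) > threshold]
    if not hits:
        return None  # images match: no obstacle in view
    rows = [r for r, _ in hits]
    cols = [c for _, c in hits]
    return (min(rows), max(rows), min(cols), max(cols))
```

A production system would more likely use an image library's differencing and contour routines, but the logic is the same: threshold the deviation, then bound the deviating region.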
On the basis of the above embodiment, image recognition can be performed on the pattern deviation and the outline of the obstacle in the projected image to determine the obstacle type, such as stones, debris, or a fault formed by a pit.
Referring to fig. 4, a parking-warehousing flow chart of a vehicle obstacle detection method according to an exemplary embodiment of the present application is described in detail as follows:
step S401, when parking-in is started, the controller controls the projection device to project around the vehicle body according to a preset pattern and position, and calculates the theoretical pattern that would be captured on flat ground from the projected pattern and position;
step S402, the surround-view camera captures the projected image, and the controller compares the captured pattern with the theoretical pattern; if they are consistent, it is determined that no target exists within the range of the projected pattern; if they are inconsistent, proceed to the next step;
step S403, comparing the peripheral lines of the theoretical image with the actually shot image;
step S404, when the deviation exceeds a threshold, an obstacle is considered present, and the position and range of the impassable area are calculated from the position of the line deviation;
step S405, during parking-in, the vehicle avoids entering the range of the impassable area.
In this embodiment, the controller controls the projection device to project light surrounding the vehicle body onto the ground; the distance between the light and the vehicle body can be calibrated, and in general, the higher the vehicle speed, the farther the light is projected from the body. When there is no obstacle, the light falls on flat ground, and its shape as seen by the surround-view camera is unchanged whether or not the vehicle moves. When the light meets an obstacle, its shape changes; if the change exceeds the threshold, the controller judges from the change whether the obstacle is a bulge above the ground or a fault below it. The position where the light changes, that is, where the vehicle cannot pass, is then recorded and avoided during parking. Furthermore, the system can be fused with sensors such as ultrasonic radar and the surround-view camera to acquire the details of the obstacle more accurately and detect it more precisely.
Referring to fig. 5, a flow chart of low-speed driving in a vehicle obstacle detection method according to an exemplary embodiment of the present application is detailed as follows:
step S501, when low-speed driving in the parking lot begins, the controller controls the projection device to project around the vehicle body according to a preset pattern and position; because the driving speed is higher than when parking, the projection extends farther from the vehicle. The controller calculates the theoretical pattern that would be captured on flat ground from the projected pattern and position;
step S502, the front camera or the surround-view camera captures the projected image, and the controller compares the captured pattern with the theoretical pattern; if they are consistent, no target exists within the range of the projected pattern; if they are inconsistent, proceed to the next step;
step S503, comparing the peripheral lines of the theoretical image with the actually shot image;
step S504, when the deviation exceeds a threshold value, the obstacle is considered to exist, and the position and the range of the obstacle are calculated according to the position of the line deviation;
step S505, determining whether the obstacle is a fault.
If the capturing camera is higher than the projection device and the farthest extent of the captured image is farther than in the theoretical image, a fault below the road surface lies ahead, and its drop height is calculated; if the capturing camera is lower than the projection device and the farthest extent of the captured image is closer than in the theoretical image, a fault below the road surface likewise lies ahead; the drop height is calculated and whether the vehicle can pass is judged from the drop height.
Step S506, calculating the height of the obstacle.
If the obstacle is not a fault, record the time point at which the image deforms and the time point at which the deformation at the corresponding image position disappears, calculate the obstacle height from the distance driven by the vehicle in that period, and judge whether the vehicle can pass the obstacle.
In this embodiment, during actual driving the projection light is cast far from the vehicle, and the light projected onto the ground can be monitored with the vehicle's front-view, rear-view, and surround-view cameras.
The obstacle height can then be calculated from the duration of the light change and the vehicle speed. When there is no obstacle, the light falls on flat ground and its shape, as seen by the camera, is unchanged whether or not the vehicle moves. When the light meets an obstacle, its shape changes; if the change exceeds the threshold, the controller judges from the change whether the obstacle is a bulge above the ground or a fault below it. If it is a fault, the fault height is judged from the light change; if it is a bulge, its height is calculated from the start and end times of the light change and the distance the vehicle travels in that time, and it is judged whether the obstacle is a small obstacle that can be passed over.
In other embodiments, after determining the height of the obstacle, further comprising:
comparing the height of the obstacle with a preset height;
if the height of the obstacle is lower than the preset height, determining that the vehicle can pass over the obstacle;
and if the height of the obstacle is not lower than the preset height, determining that the vehicle cannot pass over the obstacle, and issuing an alarm notification.
In this embodiment, whether the vehicle can pass an obstacle of a given height can be determined accurately by means of a threshold, preventing safety accidents during automated driving and improving the safety of automatic parking.
On the basis of the above embodiment, after determining that the vehicle cannot pass through the obstacle at the height, the method further includes:
acquiring the nearest distance between the vehicle and the obstacle;
specifically, the center point of the obstacle is located, the positions between the center point and the front end, the rear end, the left side and the right side of the obstacle are determined, and the closest distance between the vehicle and the obstacle is determined.
Comparing the closest distance with a preset safe-distance threshold; if the closest distance is less than or equal to the threshold, an obstacle alarm is issued; if it is greater than the threshold, no action is taken.
In this way, the distance between the vehicle and the obstacle can be controlled accurately while the vehicle is driving, avoiding unnecessary losses from safety accidents caused by driving factors (human or automated) in situations where the vehicle cannot cross the obstacle.
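The safe-distance rule reduces to a one-line check; the 0.5 m default threshold and the function name are illustrative assumptions:

```python
def proximity_alarm(closest_distance_m, safe_distance_m=0.5):
    """Decide whether to raise an obstacle alarm, per the rule above:
    alarm when the closest vehicle-to-obstacle distance is at or below
    the preset safe-distance threshold; otherwise take no action.
    The 0.5 m default is an illustrative assumption, not from the patent.
    """
    return closest_distance_m <= safe_distance_m
```

Note the boundary behavior: a distance exactly equal to the threshold triggers the alarm, matching the text's "less than or equal to".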
In some embodiments, after determining the type of the obstacle to detect the obstacle, the method further includes:
acquiring ultrasonic radar data, millimeter wave radar data and laser radar data of the vehicle in the current state;
specifically, environment data are acquired with these several radar sensors and determined across multiple dimensions, ensuring the accuracy of the data about the vehicle's surroundings.
Fusing the laser radar data, the ultrasonic radar data and the millimeter wave radar data to determine environment perception information;
specifically, the ultrasonic radar sensor can, together with the vision sensor, construct a parking map, dynamically plan the parking path in real time, and guide the target vehicle to steer automatically into the parking space; the preset millimeter-wave radar can detect long distances in all weather and cover the rear-view mirror blind zone; the laser radar can detect even longer distances and also measure lateral position; and the vision sensor is low in cost. Environment data are thus gathered through the vision sensor, the millimeter-wave radar sensor, and the laser radar sensor.
Labeling according to the environment perception information and the obstacle detection result to generate a training set;
specifically, the environment perception information and the labels of the obstacle detection results can be determined through manual labeling, and a training set is formed.
Training a pre-constructed convolutional neural network based on the training set to obtain an intelligent decision model;
specifically, the convolutional neural network is trained on the labeled training set in a supervised manner to obtain an intelligent decision model, through which it can be determined whether the vehicle can pass the obstacle.
And inputting the environment perception information and the obstacle detection result into the intelligent decision-making model so as to output a corresponding control strategy.
Specifically, it is determined whether the vehicle can pass through the obstacle and whether the obstacle needs to be bypassed.
In this embodiment, based on the fusion of multiple kinds of preset radar data, the driving environment of the target vehicle is detected and perceived, environment perception information is determined, environmental parameters are collected, and the capability to model the environment is improved. The intelligent decision module receives and processes the environment perception information, formulates a corresponding adaptive control strategy, and transmits it to a preset human-machine interaction system for intelligent processing, generating corresponding intelligent decision instructions for flexible, intelligent control of the vehicle.
In other embodiments, after determining the type of the obstacle and completing the obstacle detection, the method further includes:
performing parking space recognition on visual data of the current scene of the vehicle based on a visual deep-learning algorithm (such as a convolutional neural network or the YOLO (You Only Look Once) algorithm), determining visual parking space information, and marking a first time point;
specifically, the first time point is the capture time of the image of the vehicle scene to be identified. The visual parking space information includes any one or a combination of visual parking-space corner information, visual parking-space parkability, visual parking-space type, and visual parking-space size, where parkability indicates whether the space can be parked in. The corner information includes corner coordinates and corner angles. The types include vertical, horizontal, and inclined parking spaces. The size includes the visual parking-space length and depth: the length is that of the first space edge, which runs along the road direction in the space marking lines, and the depth is that of the second space edges, which intersect the first; there are two second edges, and their lengths may or may not be equal.
Carrying out parking space identification on the ultrasonic radar data of the current scene of the vehicle based on a machine-learning algorithm (such as a support vector machine, a feedforward neural network, or a recurrent neural network), determining spatial parking space information, and marking a second time point;
specifically, the second time point is the time at which the ultrasonic radar receives the echo signal in the ultrasonic radar data to be identified. The spatial parking space information includes any one or a combination of spatial parking-space corner information, spatial parking-space parkability, spatial parking-space type, and spatial parking-space size, where parkability indicates whether the space can be parked in. The corner information includes corner coordinates and corner angles, and the types include vertical, horizontal, and inclined parking spaces. The spatial parking-space size includes the length and the depth of the space.
The visual parking space information and the spatial parking space information are of the same nature; the different names merely distinguish their sources.
Fusing the spatial parking space information and the visual parking space information whose associated first and second time points coincide, to obtain target parking space information;
specifically, simultaneous collection of the spatial and visual parking space information is ensured by the time points. For example, when the visual parking space information is blurred or incomplete at some moment, the spatial parking space information is consulted during fusion; or, if the spatial and visual information disagree, the spatial parking space information takes priority, and the two are fused to obtain more accurate target parking space information.
It should be noted that the fusion method includes, but is not limited to, a weighted average method, an image pyramid fusion algorithm, a gaussian pyramid fusion algorithm, a laplacian pyramid fusion algorithm, and the like.
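Of the fusion methods listed, the weighted average is the simplest to sketch for corner coordinates. The 0.7 weight favoring the spatial result (which the text says takes priority on disagreement) and all names are illustrative assumptions:

```python
def fuse_corners(spatial, visual, w_spatial=0.7):
    """Fuse spatial (ultrasonic) and visual parking-slot corner points
    with a weighted average, one of the fusion methods the text lists.
    Each input is a list of (x, y) corner coordinates in a common
    frame; corresponding entries are assumed to describe the same
    physical corner.
    """
    w_visual = 1.0 - w_spatial
    return [(w_spatial * xs + w_visual * xv,
             w_spatial * ys + w_visual * yv)
            for (xs, ys), (xv, yv) in zip(spatial, visual)]
```

With equal weights, a spatial corner at (0, 0) and a visual corner at (1, 1) fuse to (0.5, 0.5); raising `w_spatial` pulls the result toward the ultrasonic estimate.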
And assisting automatic parking of the vehicle according to the target parking space information and the detection result of the obstacle between the target parking space information and the vehicle.
Specifically, after the target parking space information is acquired, automatic parking is assisted in combination with the detection result for obstacles between the target parking space and the vehicle; for example, when reversing into the space, it can be determined whether the vehicle needs to change its trajectory to bypass an obstacle, further ensuring the safety of entering the space.
In this embodiment, visual parking space recognition is realized by machine-vision methods and spatial parking space recognition by ultrasonic radar, and the two recognition results can further be fused to obtain a more accurate result. Compared with manual identification, parking spaces can thus be identified automatically, quickly, and accurately.
Fig. 8 is a block diagram showing the configuration of a vehicle obstacle detection system according to an exemplary embodiment of the present application. The system can be applied to the implementation environment shown in fig. 1, and is specifically configured in a vehicle terminal and a cloud server. The system may also be applicable to other exemplary implementation environments, and is specifically configured in a vehicle terminal or a cloud server, and this embodiment does not limit the implementation environment to which the system is applicable.
As shown in fig. 8, the exemplary vehicle obstacle detection system includes:
the image acquisition module 801 is configured to acquire a projection image formed on the ground by projection light in a current state of a vehicle, where the current state includes a driving state or a static state;
an image comparison module 802, configured to compare the projection image with a preset projection image, and determine an obstacle and a position area of the obstacle in the projection image;
and the obstacle detection module 803 is configured to determine the type of the obstacle according to the line change generated by the obstacle in the position area in the projection view, so as to implement obstacle detection, where the type of the obstacle includes a protrusion and a fault.
In detail, referring to fig. 9, a light projection device (i.e., a projection device) emits a specific pattern to form a projected image on the ground, and a light sensing device (i.e., a camera) acquires the projected image formed on the ground. The controller controls the light projection equipment to emit a projection image of a specific pattern, and the projection image is compared with the acquired projection image, so that the obstacle in the projection image and the position area of the obstacle are determined; and determining the type of the obstacle to finish obstacle detection according to the line change generated by the position area of the obstacle in the projection drawing.
The vehicle obstacle detection system obtains a projected image formed on the ground by the projection light in the current state of the vehicle, compares it with a preset projected image, and determines the obstacle and its position area in the projected image; the type of the obstacle is then determined from the line change produced by the obstacle in that position area, completing obstacle detection. Compared with other visual obstacle detection, this application is not affected by ambient brightness: even in a dimly lit parking lot, obstacle detection can still be carried out by means of the projection light. This not only improves the detection accuracy of obstacles without requiring additional assisting sensors, but also greatly reduces the cost of obstacle detection.
It should be noted that the vehicle obstacle detection system provided in the above embodiment and the vehicle obstacle detection method provided in the above embodiment belong to the same concept, and the specific manner in which each module and unit perform operations has been described in detail in the method embodiment, and is not described again here. In practical applications, the vehicle obstacle detection system provided in the above embodiment may distribute the functions to different functional modules according to needs, that is, divide the internal structure of the device into different functional modules to complete all or part of the functions described above, which is not limited herein.
An embodiment of the present application further provides an electronic device, including: one or more processors; a storage device for storing one or more programs that, when executed by the one or more processors, cause the electronic apparatus to implement the vehicle obstacle detection method provided in the above-described respective embodiments.
FIG. 10 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application. It should be noted that the computer system 1000 of the electronic device shown in fig. 10 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 10, the computer system 1000 includes a Central Processing Unit (CPU) 101, which can perform various appropriate actions and processes, such as executing the methods described in the above embodiments, according to a program stored in a Read-Only Memory (ROM) 102 or a program loaded from a storage section 108 into a Random Access Memory (RAM) 103. The RAM 103 also stores various programs and data necessary for system operation. The CPU 101, the ROM 102, and the RAM 103 are connected to one another via a bus 104. An Input/Output (I/O) interface 105 is also connected to the bus 104.
The following components are connected to the I/O interface 105: an input section 106 including a keyboard, a mouse, and the like; an output section 107 including a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD), a speaker, and the like; a storage section 108 including a hard disk and the like; and a communication section 109 including a network interface card such as a LAN (Local Area Network) card or a modem. The communication section 109 performs communication processing via a network such as the Internet. A drive 1010 is also connected to the I/O interface 105 as necessary. A removable medium 1011, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1010 as necessary, so that a computer program read from it is installed into the storage section 108 as needed.
In particular, according to embodiments of the present application, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising a computer program for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 109, and/or installed from the removable medium 1011. When the computer program is executed by a Central Processing Unit (CPU) 101, various functions defined in the system of the present application are executed.
It should be noted that the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. The computer readable storage medium may be, for example, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer-readable signal medium may comprise a propagated data signal with a computer-readable computer program embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. The computer program embodied on the computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or by hardware, and the described units may also be disposed in a processor. The names of these units do not in any case constitute a limitation on the units themselves.
Another aspect of the present application also provides a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the vehicle obstacle detection method as described above. The computer-readable storage medium may be included in the electronic device described in the above embodiment, or may exist separately without being incorporated in the electronic device.
Another aspect of the application also provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the vehicle obstacle detection method provided in the above-described embodiments.
The above-described embodiments merely illustrate the principles and effects of the present application and are not intended to limit it. Any person skilled in the art may modify or change the above-described embodiments without departing from the spirit and scope of the present application. Accordingly, all equivalent modifications or changes made by those skilled in the art without departing from the spirit and technical ideas disclosed herein shall be covered by the claims of the present application.

Claims (13)

1. A vehicle obstacle detection method, characterized by comprising:
acquiring a projection image formed by projection light on the ground under the current state of the vehicle, wherein the current state comprises a driving state or a static state;
comparing the projected image with a preset projected image, and determining an obstacle in the projected image and a position area of the obstacle;
and determining the type of the obstacle according to the line change produced in the position area of the obstacle in the projected image to realize obstacle detection, wherein the type of the obstacle comprises a bulge and a fault.
2. The vehicle obstacle detection method according to claim 1, further comprising, after determining the type of the obstacle to realize obstacle detection:
if the type of the obstacle is a bulge, calculating the height of the bulge obstacle according to the projection angle of the vehicle projection light at the farthest point and the driving distance, wherein the driving distance and the projection angle are determined from the time at which the deformation of the projected image appears and the time at which the deformation disappears;
and if the type of the obstacle is a fault, calculating the height of the fault obstacle according to the similar-triangle principle, using the horizontal and vertical distances between the point at which the vehicle generates the projection and the projected image.
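Read as an algorithm, the two height calculations of claim 2 can be sketched as follows. This is a hedged geometric reading: the claim fixes the inputs (projection angle, driving distance, horizontal and vertical projection distances) but not the exact formulas, so the beam geometry and all names below are assumptions, not the patented method.

```python
import math

def bulge_height(driving_distance_m, projection_angle_rad):
    """One plausible reading of the bulge case: while the vehicle advances
    s metres between the moments the projected image deforms and recovers,
    a beam inclined at the given angle sweeps the bulge, so h = s * tan(angle)."""
    return driving_distance_m * math.tan(projection_angle_rad)

def fault_height(horizontal_dist_m, vertical_dist_m, extra_horizontal_m):
    """Similar-triangle reading of the fault case: a projector at height V
    casting light H metres ahead has slope V/H, so a spot displaced a further
    dH horizontally (because the ground drops away) implies a drop
    h = V * dH / H."""
    return vertical_dist_m * extra_horizontal_m / horizontal_dist_m
```

For example, a beam at 45 degrees swept over a bulge during 0.1 m of travel would give a bulge height of 0.1 m under this reading.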
3. The vehicle obstacle detection method according to claim 1, wherein comparing the projection image with a preset projection image to determine an obstacle in the projection image and a position area of the obstacle includes:
comparing the projected image with a preset projected image;
if the projected image is the same as the preset projected image, determining that no obstacle exists in the projected image;
and if the projected image is different from the preset projected image, determining the obstacle and the position area of the obstacle according to the graphic offset between the projected image and the preset projected image.
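The comparison of claim 3 can be sketched as a per-pixel difference with a bounding box. The claim only specifies comparing against a preset projected image and using the graphic offset; the thresholded absolute difference below is one assumed realisation, and the threshold value is an illustrative noise margin.

```python
import numpy as np

def find_obstacle_region(projected, preset, threshold=30):
    """Compare the observed projected image with the preset one; return None
    when they match within tolerance (no obstacle), otherwise the bounding
    box (x_min, y_min, x_max, y_max) of the differing position area."""
    diff = np.abs(projected.astype(np.int32) - preset.astype(np.int32))
    mask = diff > threshold
    if not mask.any():
        return None  # images are the same: no obstacle in the projected image
    ys, xs = np.nonzero(mask)
    return (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
```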
4. The vehicle obstacle detection method according to claim 1, wherein determining the type of the obstacle according to the line change produced in the position area of the obstacle in the projected image to realize obstacle detection comprises:
if the farthest distance of the line in the projected image is farther than the farthest distance of the line in the preset projected image, determining that the type of the obstacle is a fault;
and if the farthest distance of the line in the projected image is closer than the farthest distance of the line in the preset projected image, determining that the type of the obstacle is a bulge.
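Claim 4's classification rule reduces to a single comparison of farthest line distances. The sketch below adds a tolerance band as an assumption (the claim itself has no explicit noise margin):

```python
def classify_obstacle(farthest_observed_m, farthest_preset_m, tolerance_m=0.02):
    """Per claim 4: projected lines reaching farther than in the preset image
    indicate a fault (the ground drops away); lines falling short indicate a
    bulge (the obstacle intercepts the light early). `tolerance_m` is an
    assumed noise margin, not part of the claim."""
    if farthest_observed_m > farthest_preset_m + tolerance_m:
        return "fault"
    if farthest_observed_m < farthest_preset_m - tolerance_m:
        return "bulge"
    return "none"
```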
5. The vehicle obstacle detection method according to claim 2, further comprising, after determining the height of the obstacle:
comparing the height of the obstacle with a preset height;
if the height of the obstacle is lower than the preset height, determining that the vehicle can pass through the obstacle of the height;
and if the height of the obstacle is not lower than the preset height, determining that the vehicle cannot pass through the obstacle of the height, and issuing an alarm notification.
6. The vehicle obstacle detection method according to claim 5, further comprising, after determining that the vehicle cannot pass through the obstacle of the height:
acquiring the closest distance between the vehicle and the obstacle;
comparing the closest distance with a preset safe distance threshold, and if the closest distance is less than or equal to the preset safe distance threshold, issuing an obstacle alarm; and if the closest distance is greater than the preset safe distance threshold, taking no action.
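Claims 5 and 6 together form a small decision ladder, sketched below. The numeric defaults are illustrative assumptions only; the claims leave the preset height and safe-distance threshold unspecified.

```python
def obstacle_decision(obstacle_height_m, closest_distance_m,
                      preset_height_m=0.15, safe_distance_m=0.5):
    """Combined reading of claims 5-6: an obstacle lower than the preset
    height is passable; an impassable obstacle triggers a height alarm,
    escalating to an obstacle alarm once the closest vehicle-to-obstacle
    distance reaches the preset safe-distance threshold."""
    if obstacle_height_m < preset_height_m:
        return "passable"
    if closest_distance_m <= safe_distance_m:
        return "obstacle_alarm"
    return "height_alarm"
```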
7. The vehicle obstacle detection method according to any one of claims 1 to 6, further comprising, after determining the type of the obstacle to realize obstacle detection:
acquiring ultrasonic radar data, millimeter wave radar data and laser radar data of the vehicle in the current state;
fusing the laser radar data, the ultrasonic radar data and the millimeter wave radar data to determine environment perception information;
labeling according to the environment perception information and the obstacle detection result to generate a training set;
training a pre-constructed convolutional neural network based on the training set to obtain an intelligent decision model;
and inputting the environment perception information and the obstacle detection result into the intelligent decision-making model so as to output a corresponding control strategy.
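The first fusion step of claim 7 can be sketched as follows. The claim fuses three radar modalities into environment perception information without fixing the fusion method; plain feature concatenation is shown here purely as one assumed strategy (the labelling, CNN training, and decision-model steps are omitted).

```python
import numpy as np

def fuse_sensor_data(ultrasonic, millimeter_wave, lidar):
    """Fuse ultrasonic, millimeter-wave and laser radar readings into one
    environment-perception feature vector by concatenation (an assumed,
    minimal fusion strategy; the claim does not prescribe one)."""
    return np.concatenate([np.asarray(ultrasonic, dtype=float).ravel(),
                           np.asarray(millimeter_wave, dtype=float).ravel(),
                           np.asarray(lidar, dtype=float).ravel()])
```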
8. The vehicle obstacle detection method according to any one of claims 1 to 6, further comprising, after determining the type of the obstacle to realize obstacle detection:
carrying out parking space recognition on visual data of a current scene of a vehicle based on a visual deep learning algorithm, determining visual parking space information and marking a first time point;
performing parking space identification on ultrasonic radar data of the current scene of the vehicle based on a machine learning algorithm, determining spatial parking space information and marking a second time point;
fusing the spatial parking space information and the visual parking space information for which the first time point and the second time point coincide, to obtain target parking space information;
and assisting automatic parking of the vehicle according to the target parking space information and the detection result of the obstacle between the target parking space information and the vehicle.
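The time-point association in claim 8 amounts to matching detections whose timestamps coincide. In the sketch below, each detection is assumed to be a (timestamp, info) pair, and the tolerance `max_dt_s` is an assumption, since the claim requires the time points to coincide without defining a tolerance.

```python
def fuse_parking_spots(visual_spots, spatial_spots, max_dt_s=0.05):
    """Fuse visual and spatial (ultrasonic) parking-space detections whose
    first and second time points coincide, yielding target parking space
    records combining both modalities."""
    fused = []
    for t_v, v in visual_spots:
        for t_s, s in spatial_spots:
            if abs(t_v - t_s) <= max_dt_s:
                fused.append({"time": t_v, "visual": v, "spatial": s})
    return fused
```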
9. A vehicle obstacle detection system, comprising:
the system comprises an image acquisition module, a display module and a control module, wherein the image acquisition module is used for acquiring a projection image formed by projection light on the ground under the current state of a vehicle, and the current state comprises a driving state or a static state;
the image comparison module is used for comparing the projected image with a preset projected image and determining an obstacle in the projected image and a position area of the obstacle;
and the obstacle detection module is used for determining the type of the obstacle according to the line change produced in the position area of the obstacle in the projected image to realize obstacle detection, wherein the type of the obstacle comprises a bulge and a fault.
10. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs that, when executed by the one or more processors, cause the electronic device to implement the method of any of claims 1-8.
11. A vehicle, characterized by comprising the electronic device of claim 10.
12. A computer-readable storage medium having stored thereon computer-readable instructions which, when executed by a processor of a computer, cause the computer to perform the method of any one of claims 1 to 8.
13. A computer program product or a computer program, characterized in that the computer program product or the computer program comprises computer instructions stored in a computer-readable storage medium; a processor of an electronic device reads the computer instructions from the computer-readable storage medium and executes them, causing the electronic device to perform the method of any one of claims 1 to 8.
CN202210876881.8A 2022-07-25 2022-07-25 Vehicle obstacle detection method, system, device, medium, and program Pending CN115187963A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210876881.8A CN115187963A (en) 2022-07-25 2022-07-25 Vehicle obstacle detection method, system, device, medium, and program


Publications (1)

Publication Number Publication Date
CN115187963A true CN115187963A (en) 2022-10-14

Family

ID=83520733

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210876881.8A Pending CN115187963A (en) 2022-07-25 2022-07-25 Vehicle obstacle detection method, system, device, medium, and program

Country Status (1)

Country Link
CN (1) CN115187963A (en)


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115977496A (en) * 2023-02-24 2023-04-18 重庆长安汽车股份有限公司 Vehicle door control method, system, equipment and medium
CN116883478A (en) * 2023-07-28 2023-10-13 广州瀚臣电子科技有限公司 Obstacle distance confirmation system and method based on automobile camera
CN116883478B (en) * 2023-07-28 2024-01-23 广州瀚臣电子科技有限公司 Obstacle distance confirmation system and method based on automobile camera
CN117261877A (en) * 2023-08-18 2023-12-22 广州优保爱驾科技有限公司 Self-correction image acquisition system and method based on vehicle appearance change
CN117261877B (en) * 2023-08-18 2024-05-14 广州优保爱驾科技有限公司 Self-correction image acquisition system and method based on vehicle appearance change

Similar Documents

Publication Publication Date Title
US11663917B2 (en) Vehicular control system using influence mapping for conflict avoidance path determination
CN115187963A (en) Vehicle obstacle detection method, system, device, medium, and program
KR102098140B1 (en) Method for monotoring blind spot of vehicle and blind spot monitor using the same
CN110065494B (en) Vehicle anti-collision method based on wheel detection
CN110745140B (en) Vehicle lane change early warning method based on continuous image constraint pose estimation
CN108638999B (en) Anti-collision early warning system and method based on 360-degree look-around input
CN105678787A (en) Heavy-duty lorry driving barrier detection and tracking method based on binocular fisheye camera
CN112313095A (en) Apparatus and method for determining the center of a trailer hitch coupler
CN107972662A (en) To anti-collision warning method before a kind of vehicle based on deep learning
CN113329927A (en) Laser radar based trailer tracking
CN110033621B (en) Dangerous vehicle detection method, device and system
CN110909705A (en) Roadside parking space sensing method and system based on vehicle-mounted camera
CN104773177A (en) Aided driving method and aided driving device
EP2788968A1 (en) Method and vehicle assistance system for active warning and/or for navigation aid for preventing a collision of a vehicle body part and/or a vehicle wheel with an object
EP2741234B1 (en) Object localization using vertical symmetry
CN113631452A (en) Lane change area acquisition method and device
US10108866B2 (en) Method and system for robust curb and bump detection from front or rear monocular cameras
CN112249007A (en) Vehicle danger alarm method and related equipment
CN114862964A (en) Automatic calibration method for sensor, electronic device and storage medium
CN113432615B (en) Detection method and system based on multi-sensor fusion drivable area and vehicle
CN113870246A (en) Obstacle detection and identification method based on deep learning
CN113954822A (en) Method for automatically parking vehicle in side direction
CN114559960A (en) Collision early warning system based on fusion of forward-looking camera and rear millimeter wave radar
CN113727071A (en) Road condition display method and system
CN113688662A (en) Motor vehicle passing warning method and device, electronic device and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination