CN112793567A - Driving assistance method and system based on road condition detection - Google Patents


Info

Publication number
CN112793567A
CN112793567A (application CN202110049651.XA)
Authority
CN
China
Prior art keywords
obstacle
area
road condition
driven
driving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110049651.XA
Other languages
Chinese (zh)
Inventor
史鹏飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202110049651.XA
Publication of CN112793567A
Status: Withdrawn

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00: Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units, or advanced driver assistance systems for ensuring comfort, stability and safety or drive control systems for propelling or retarding the vehicle
    • B60W30/08: Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/09: Taking automatic action to avoid collision, e.g. braking and steering
    • B60W40/00: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02: Estimation or calculation of non-directly measurable driving parameters related to ambient conditions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads

Abstract

The invention provides a driving assistance method and system based on road condition detection, relating to the field of driving assistance. The driving assistance method comprises the following steps: acquiring video data information and road condition image information through a preset camera device; determining the number of obstacles and an area to be driven according to the video data information and the road condition image information; acquiring point cloud data of the obstacles and calculating the distance and angle between the area to be driven and each obstacle; and obtaining a drivable area from the calculation result and acquiring current vehicle driving data. The position and distance of obstacles in the surrounding environment can be accurately calculated through image analysis and the vehicle brake controlled automatically, with high reliability, high accuracy and good stability. The invention further provides a driving assistance system based on road condition detection, comprising a first acquisition module, a determination module, a calculation module, a second acquisition module and an output module.

Description

Driving assistance method and system based on road condition detection
Technical Field
The invention relates to the field of auxiliary driving, in particular to an auxiliary driving method and system based on road condition detection.
Background
With the development of technology and the improvement of living standards, driving assistance technology is widely applied in people's work and daily life, bringing great convenience.
Automatic braking and road condition recognition are important components of driving assistance technology, serving to stop the vehicle automatically. Automatic braking has therefore become one of the most important parts of driving assistance.
Current automatic braking technology generally adopts the following scheme: the vehicle detects nearby obstacles by millimeter-wave radar or ultrasonic radar. When an obstacle appears in the driving direction, the driver is prompted by voice or the vehicle brakes actively, preventing a collision and improving driving safety.
However, existing driving assistance schemes are only suitable for open roads; in complex environments they suffer from a high false alarm rate, poor interference resistance and other problems.
Disclosure of Invention
The invention aims to provide a driving assistance method based on road condition detection which can accurately calculate the position and distance of obstacles in the surrounding environment through image analysis and automatically control vehicle braking, with high reliability, high accuracy and good stability.
Another objective of the invention is to provide a driving assistance system based on road condition detection capable of running the above driving assistance method.
The embodiments of the invention are realized as follows:
In a first aspect, an embodiment of the present application provides a driving assistance method based on road condition detection, comprising: acquiring video data information and road condition image information through a preset camera device; determining the number of obstacles and an area to be driven according to the video data information and the road condition image information; acquiring point cloud data of the obstacles and calculating the distance and angle between the area to be driven and each obstacle; obtaining a drivable area from that distance and angle and acquiring current vehicle driving data; and calculating the braking force and/or acceleration of the current vehicle according to the drivable area, the current vehicle driving data, and the distance and angle between the area to be driven and the obstacle, thereby completing the assisted driving of the vehicle.
In some embodiments of the present invention, acquiring the video data information and the road condition image information through the preset camera device includes: detecting a preset portion in the road condition image information by an image recognition method and determining a region of interest (ROI) within the preset portion.
In some embodiments of the present invention, the method further includes obtaining, by the preset camera device, the ROI from the preset portion according to the current driving scene data: the pixel points of the preset portion are scanned row by row to determine the upper boundary of the ROI; the preset camera device determines the lower boundary of the ROI according to the current driving scene data; and the preset camera device scans the pixel points of the preset portion column by column to determine the left and/or right boundary of the ROI.
In some embodiments of the present invention, determining the number of obstacles and the area to be driven according to the video data information and the road condition image information includes: calculating with an artificial intelligence algorithm to determine the number of obstacles in the surrounding environment.
In some embodiments of the present invention, the method further includes taking the video data information as input, performing dynamic weight scoring on each input datum, summing the scores to obtain a total score, passing the total score through a set excitation function to obtain the output of one neuron, and combining a plurality of neurons to obtain the final obstacle number information.
In some embodiments of the present invention, the obtaining point cloud data of the obstacle and calculating the distance and the angle between the area to be driven and the obstacle includes: and performing data combination on the acquired obstacle information and the point cloud data according to a weighted scoring algorithm, and performing iterative computation for preset times to obtain the distance and the angle between the final area to be driven and the obstacle.
In some embodiments of the present invention, obtaining the drivable area and the current vehicle driving data according to the calculation result includes: performing data analysis on the distance and angle between the area to be driven and the obstacle, combined with the road condition image information, to obtain the drivable area.
In a second aspect, an embodiment of the present application provides a driving assistance system based on road condition detection, which includes a first obtaining module, configured to obtain video data information and road condition image information through a preset camera device;
the determining module is used for determining the number of obstacles and a to-be-driven area according to the video data information and the image information;
the computing module is used for acquiring point cloud data of the obstacle and computing to obtain the distance and the angle between the area to be driven and the obstacle;
the second acquisition module is used for acquiring a drivable area according to a calculation result and acquiring current vehicle driving data;
and the output module is used for calculating the braking force and/or acceleration of the current vehicle according to the drivable area, the current vehicle driving data and the distance and angle between the area to be driven and the obstacle, so as to complete the auxiliary driving of the vehicle.
In some embodiments of the present invention, the ROI determining module is configured to detect a preset portion in the road condition image information by using an image recognition method and determine the ROI in the preset portion.
In some embodiments of the invention, the system includes: at least one memory for storing computer instructions; and at least one processor in communication with the memory, wherein the at least one processor, when executing the computer instructions, causes the system to run the first acquisition module, the determination module, the calculation module, the second acquisition module and the output module.
Compared with the prior art, the embodiment of the invention has at least the following advantages or beneficial effects:
the position and the distance of the obstacle in the surrounding environment can be accurately calculated through image analysis, the vehicle brake is automatically controlled, and the vehicle brake control system is high in reliability, high in accuracy and good in stability. By image recognition of the road condition image, the detection range of obstacle detection is expanded, common obstacles can be detected, unusual obstacles can also be detected, the obstacle detection accuracy is further improved, and design of a follow-up obstacle avoidance control algorithm is facilitated. The detection of the obstacles in the road condition image is pixel-level detection, so that the detection range is wider compared with other obstacle detection methods.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic step diagram of an assisted driving method based on road condition detection according to an embodiment of the present invention;
fig. 2 is a detailed step diagram of an assisted driving method based on road condition detection according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a driving assistance system module based on road condition detection according to an embodiment of the present invention.
Icon: 10-a first acquisition module; 20-a determination module; 30-a calculation module; 40-a second acquisition module; 50-output module.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. The components of the embodiments of the present application, as generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between them. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the individual features of the embodiments can be combined with one another without conflict.
Example 1
Referring to fig. 1, fig. 1 is a schematic diagram illustrating steps of an assisted driving method based on road condition detection according to an embodiment of the present invention, which is shown as follows:
step S100, acquiring video data information and road condition image information through a preset camera device;
in some embodiments, the preset camera device may be a visual sensor, and is configured to collect road condition image information of a road in front of the vehicle body, where the road condition image may include objects such as pedestrians, vehicles, road surfaces, barriers, sidewalks, street trees, traffic lights, traffic signs, and the like.
In some embodiments, the preset camera device may be a monocular camera, which captures one road condition image to be processed at a time. The camera device may also comprise a multi-view camera; the individual cameras may be physically combined in one camera device or physically distributed across several camera devices. Multiple road condition images captured at the same moment by the multi-view camera can then be processed to obtain a single road condition image to be processed.
In some embodiments, the video information around the vehicle body can be acquired through a preset camera device, and the acquired video is subjected to framing to extract image information, so that single-frame road condition image information is acquired.
Step S110, determining the number of obstacles and a to-be-driven area according to the video data information and the image information;
In some embodiments, the obtained video data information is framed and a single-frame image extracted, or the image information is used directly. Image recognition is then performed: features are extracted from each frame and a classifier is run to determine the number of obstacles. A region-based convolutional neural network (R-CNN) detection network may be adopted; through R-CNN both the classification and the coordinates of target objects are obtained, and even if several target objects appear in the picture they can be found one by one, yielding an obstacle count of 0, 1, 2, 3 and so on. The area to be driven is determined from the location of the vehicle, e.g. on a highway, a street road, a residential lane, etc.
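As a minimal illustration of this counting step (not the patent's actual network), per-frame detections can be filtered by a confidence threshold and counted. The detection tuples, labels and the 0.5 threshold below are assumed values for the sketch:

```python
def count_obstacles(detections, conf_threshold=0.5):
    """Count detections whose confidence clears the threshold.

    Each detection is (x, y, w, h, confidence, label), the shape an
    R-CNN-style detector would typically return per frame.
    """
    return sum(1 for *_box, conf, _label in detections if conf >= conf_threshold)

# One frame's hypothetical detector output:
frame_detections = [
    (120, 80, 40, 90, 0.92, "pedestrian"),
    (300, 60, 160, 120, 0.88, "vehicle"),
    (500, 70, 30, 30, 0.31, "unknown"),   # below threshold, ignored
]
print(count_obstacles(frame_detections))  # 2
```

In a full system the same count would be accumulated or smoothed across consecutive frames rather than taken from a single frame.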
Step S120, point cloud data of the obstacle are obtained, and the distance and the angle between the area to be driven and the obstacle are calculated;
In some embodiments, the point cloud data is a set of points in a certain coordinate system. Each point carries rich information, including three-dimensional X, Y, Z coordinates, color, classification value, intensity value, time and the like; the distance and angle between the area to be driven and the obstacles are calculated from the three-dimensional coordinates.
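The distance-and-angle computation from three-dimensional coordinates can be sketched as follows; the vehicle-centred origin and the bearing measured in the X-Y plane are assumptions for illustration, since the patent does not fix the coordinate convention:

```python
import math

def distance_and_angle(point, origin=(0.0, 0.0, 0.0)):
    """Euclidean distance and horizontal bearing (degrees) from the
    vehicle origin to one obstacle point (X, Y, Z)."""
    dx, dy, dz = (p - o for p, o in zip(point, origin))
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    angle = math.degrees(math.atan2(dy, dx))  # bearing in the X-Y plane
    return dist, angle

# Nearest point of an obstacle cluster, coordinates in metres (made-up values):
d, a = distance_and_angle((3.0, 4.0, 0.0))
print(round(d, 2), round(a, 2))  # 5.0 53.13
```

In practice this would run over the whole obstacle cluster, e.g. taking the minimum distance as the reported obstacle distance.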
Step S130, obtaining a drivable area according to the distance and the angle between the area to be driven and the obstacle and obtaining the current vehicle driving data;
in some embodiments, obstacles are removed from the image of the area to be driven by an image recognition method, and the remaining image part is the drivable area.
And step S140, calculating the braking force and/or acceleration of the current vehicle according to the drivable area, the current vehicle driving data and the distance and angle between the area to be driven and the obstacle, thereby completing the auxiliary driving of the vehicle.
In some embodiments, the braking force f of the current vehicle is calculated from the obtained driving data of the current vehicle and the obstacle distance and angle of the surrounding environment, completing automatic braking and assisted driving; the braking force is calculated by the following formula:
f=k[(D/V)-(d/a)]
where D is the distance to the obstacle, d the remaining distance, V the current speed, a the theoretical acceleration, and k a preset proportionality coefficient.
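The relation can be written out directly as stated; the example inputs below are illustrative values, not figures from the patent:

```python
def braking_force(D, V, d, a, k):
    """f = k * [(D / V) - (d / a)], the patent's stated relation, with
    D the obstacle distance, V the current speed, d the remaining
    (stopping) distance, a the theoretical acceleration, and k a
    preset proportionality coefficient."""
    return k * ((D / V) - (d / a))

# Illustrative numbers only: 30 m to the obstacle at 10 m/s,
# 8 m stopping distance at 4 m/s^2, k = 100.
print(braking_force(D=30.0, V=10.0, d=8.0, a=4.0, k=100.0))  # 100.0
```

Note that both bracketed terms have units of time, so k must absorb the conversion to a force; the sketch reproduces the formula exactly as given rather than attempting a dimensional correction.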
In some embodiments, the acceleration a of the current vehicle is calculated according to the obtained driving data of the current vehicle and the obtained obstacle distance and angle of the surrounding environment, so as to complete the automatic braking and the driving assistance of the vehicle.
Example 2
Referring to fig. 2, fig. 2 is a detailed step diagram of an assisted driving method based on road condition detection according to an embodiment of the present invention, which is shown as follows:
step S200, detecting a preset part in the road condition image information by an image recognition method and determining an ROI in the preset part.
Step S210, the preset camera device acquires the ROI from the preset portion according to the current driving scene data.
Step S220, pixel points of the preset portion are scanned row by row to determine the upper boundary of the ROI; the preset camera device determines the lower boundary of the ROI according to the current driving scene data; and the preset camera device scans the pixel points of the preset portion column by column to determine the left and/or right boundary of the ROI.
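The row and column scan of step S220 can be sketched on a tiny binary mask, where 1 is assumed to mark pixels belonging to the preset portion (the encoding is an assumption for illustration):

```python
def roi_bounds(mask):
    """Scan rows top-down and columns left-right for the first and last
    row/column that contain any preset-part pixel."""
    rows = [i for i, row in enumerate(mask) if any(row)]
    cols = [j for j in range(len(mask[0])) if any(row[j] for row in mask)]
    return rows[0], rows[-1], cols[0], cols[-1]  # top, bottom, left, right

mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
print(roi_bounds(mask))  # (1, 2, 1, 3)
```

In the patent's scheme the lower boundary would come from the driving scene data instead of this scan; the sketch derives all four bounds from the mask for simplicity.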
And step S230, calculating through an artificial intelligence algorithm, and determining the number information of the obstacles in the surrounding environment.
Step S240, the video data information is taken as input; each input datum is given a dynamic weight score and the scores are summed to obtain a total score; the total score is passed through a set excitation function to obtain the output of one neuron; and a plurality of neurons are combined to obtain the final obstacle number information.
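A single neuron of the scoring network in step S240 might look as follows; the sigmoid excitation function and the example weights are assumptions, since the patent does not fix either:

```python
import math

def neuron(inputs, weights, bias=0.0):
    """Weighted sum of the inputs followed by a sigmoid excitation
    function: one neuron of the scoring network."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid excitation

# Toy per-camera scores and dynamic weights (illustrative values only):
scores = [0.9, 0.4, 0.7]
weights = [0.5, 0.3, 0.2]
out = neuron(scores, weights)
print(round(out, 3))  # ~0.67
```

Several such neuron outputs would then be combined, e.g. by a further layer, to produce the final obstacle count.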
And S250, performing data combination on the acquired obstacle information and the point cloud data according to a weighted scoring algorithm, and performing iterative computation for preset times to obtain the distance and the angle between the final area to be driven and the obstacle.
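One possible reading of the weighted, iterative fusion in step S250 is repeatedly blending the camera-derived distance estimate with the lidar point-cloud estimate; the 0.3/0.7 weights and the three iterations are assumptions for illustration:

```python
def fuse_distance(camera_est, lidar_ests, w_cam=0.3, w_lidar=0.7, iters=3):
    """Blend the running estimate with the lidar average for a preset
    number of iterations (a simple weighted-scoring fusion sketch)."""
    lidar_avg = sum(lidar_ests) / len(lidar_ests)
    est = camera_est
    for _ in range(iters):
        est = w_cam * est + w_lidar * lidar_avg
    return est

# Camera says 12.0 m; three lidar returns cluster around 10 m (made-up data):
print(round(fuse_distance(12.0, [10.0, 10.4, 9.6]), 3))  # 10.054
```

Each iteration pulls the estimate further toward the lidar consensus, which matches the intent of iterating the weighted combination a preset number of times.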
And step S260, performing data analysis by combining road condition image information according to the distance and the angle between the area to be driven and the barrier to obtain a drivable area.
In some embodiments, video data information of the surrounding environment is acquired through several network cameras, and whether obstacles are present in the surrounding environment is judged from the acquired video data. Specifically, an artificial intelligence algorithm is used: the video data from the cameras is taken as input; each input is given a dynamic weight score and the scores are summed to obtain a total score; the total score is passed through a set excitation function to obtain the output of one neuron; and a plurality of neurons are combined to obtain the final judgment of whether obstacles are present.
According to the judged obstacle information, the obstacle distance and angle of the surrounding environment are calculated in combination with point cloud data acquired by several lidar sensors: the obstacle information and the point cloud data are combined by a weighted scoring algorithm and iterated over several rounds to obtain the final distance and angle of the obstacle, and the driving data of the current vehicle is acquired. The braking force of the current vehicle is then calculated from the driving data and the obtained obstacle distance and angle, completing automatic braking.
In some embodiments, the obstacle distance and angle of the surrounding environment are calculated from the judged obstacle information combined with point cloud data from several lidar sensors; based on the result, the system predicts the steering direction in advance, controls the turn signal to flash beforehand, and then performs the steering assistance operation at the appropriate time.
In some embodiments, the image recognition method may be an image segmentation algorithm, such as a semantic segmentation algorithm or an instance segmentation algorithm. A semantic segmentation algorithm groups/segments the pixels in an image according to the semantic meaning they express. Classical semantic segmentation algorithms include: algorithms based on the fully convolutional network (FCN), on dilated convolution, or on the pyramid scene parsing network (PSPNet), among other possible semantic segmentation algorithms for the road condition image. An instance segmentation algorithm locates instances with a target detection algorithm and then labels the target objects inside the different bounding boxes with a semantic segmentation algorithm, thereby realizing instance segmentation. Classical instance segmentation algorithms include: algorithms based on multi-task network cascades (MNC), on InstanceFCN (ISFCN), on the mask region-based convolutional neural network (Mask R-CNN), or on the path aggregation network (PANet), though other possible instance segmentation algorithms may of course also be used.
Common obstacles in the road condition image, such as vehicles, pedestrians, road surfaces and barriers, can be detected by an image segmentation algorithm, and the drivable area in the road condition image can likewise be extracted. Image segmentation is a computer vision task that marks specified regions based on image content: put simply, it identifies what is in the image and where it is located. Its goal is to label every pixel in the image and associate each pixel with the class it represents, yielding a pixel-level description of the obstacles.
Example 3
Referring to fig. 3, fig. 3 is a schematic diagram of a driving assistance system module based on road condition detection according to an embodiment of the present invention, which is shown as follows:
the first obtaining module 10 is configured to obtain video data information and road condition image information through a preset camera device;
the determining module 20 is configured to determine the number of obstacles and the area to be driven according to the video data information and the image information;
the calculation module 30 is used for acquiring point cloud data of the obstacle, and calculating to obtain the distance and the angle between the area to be driven and the obstacle;
the second obtaining module 40 is configured to obtain a drivable area according to a calculation result and obtain current vehicle driving data;
and the output module 50 is configured to calculate the braking force and/or acceleration of the current vehicle according to the drivable area, the current vehicle driving data, and the distance and angle between the to-be-driven area and the obstacle, so as to complete the auxiliary driving of the vehicle.
Also included are a memory, a processor, and a communication interface, which are electrically connected, directly or indirectly, to each other to enable transmission or interaction of data. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The memory may be used to store software programs and modules, and the processor may execute various functional applications and data processing by executing the software programs and modules stored in the memory. The communication interface may be used for communicating signaling or data with other node devices.
The Memory may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor may be an integrated circuit chip having signal processing capabilities. The Processor may be a general-purpose Processor including a Central Processing Unit (CPU), a Network Processor (NP), etc.; but also Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components.
It will be appreciated that the configuration shown in fig. 3 is merely illustrative and may include more or fewer components than shown in fig. 3, or have a different configuration than shown in fig. 3. The components shown in fig. 3 may be implemented in hardware, software, or a combination thereof.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative. The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures; for example, two blocks shown in succession may in fact be executed substantially concurrently, or sometimes in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of such blocks, can be implemented by special-purpose hardware-based systems which perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented as software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, or the portion thereof that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random-access memory (RAM), a magnetic disk, or an optical disk.
In summary, the driving assistance method and system based on road condition detection provided by the embodiments of the application can accurately calculate the positions of and distances to obstacles in the surrounding environment through image analysis and automatically control vehicle braking, with high reliability, high accuracy, and good stability. Image recognition of the road condition image expands the detection range: both common and unusual obstacles can be detected, which further improves obstacle detection accuracy and facilitates the design of a subsequent obstacle avoidance control algorithm. Because obstacles in the road condition image are detected at the pixel level, the detection range is wider than that of other obstacle detection methods.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (10)

1. A driving assistance method based on road condition detection is characterized by comprising the following steps:
acquiring video data information and road condition image information through a preset camera device;
determining the number of obstacles and a to-be-driven area according to the video data information and the road condition image information;
acquiring point cloud data of the obstacle, and calculating to obtain the distance and the angle between the area to be driven and the obstacle;
obtaining a drivable area according to the distance and the angle between the area to be driven and the obstacle, and acquiring current vehicle driving data;
and calculating the braking force and/or acceleration of the current vehicle according to the drivable area, the current vehicle driving data and the distance and angle between the area to be driven and the obstacle, thereby completing the auxiliary driving of the vehicle.
2. The driving assistance method according to claim 1, wherein the acquiring of the video data information and the traffic image information by the preset camera device comprises:
and detecting a preset part in the road condition image information by an image recognition method and determining an ROI in the preset part.
3. The driving assistance method according to claim 2, further comprising:
according to the current driving scene data, the preset camera device obtains the ROI from a preset part;
the method comprises the steps of scanning pixel points of a preset part line by line to determine an upper boundary of an ROI, determining a lower boundary of the ROI by a preset camera device according to current driving scene data, scanning the pixel points of the preset part line by the preset camera device, and determining a left boundary and/or a right boundary of the ROI.
4. The driving assistance method according to claim 1, wherein the determining the number of obstacles and the area to be driven according to the video data information and the road condition image information comprises:
and calculating by an artificial intelligence algorithm to determine the obstacle quantity information of the surrounding environment.
5. The driving assistance method according to claim 4, further comprising:
the method comprises the steps of taking video data information as input, carrying out dynamic weight scoring on each input data, summing to obtain a total score, obtaining the output of one neuron through a set excitation function according to the total score, and combining through a plurality of neurons to obtain final obstacle quantity information.
6. The driving assistance method according to claim 1, wherein the obtaining point cloud data of the obstacle and the calculating the distance and the angle between the area to be driven and the obstacle comprise:
and performing data combination on the acquired obstacle information and the point cloud data according to a weighted scoring algorithm, and performing iterative computation for preset times to obtain the distance and the angle between the final area to be driven and the obstacle.
7. The driving assistance method according to claim 1, wherein the obtaining a drivable area according to the distance and the angle between the area to be driven and the obstacle comprises:
performing data analysis according to the distance and the angle between the area to be driven and the obstacle, in combination with the road condition image information, to obtain the drivable area.
8. A driving assistance system based on road condition detection is characterized by comprising:
the first acquisition module is used for acquiring video data information and road condition image information through a preset camera device;
the determining module is used for determining the number of obstacles and a to-be-driven area according to the video data information and the road condition image information;
the computing module is used for acquiring point cloud data of the obstacle and computing to obtain the distance and the angle between the area to be driven and the obstacle;
the second acquisition module is used for acquiring a drivable area according to a calculation result and acquiring current vehicle driving data;
and the output module is used for calculating the braking force and/or acceleration of the current vehicle according to the drivable area, the current vehicle driving data and the distance and angle between the area to be driven and the obstacle, so as to complete the auxiliary driving of the vehicle.
9. The driving assistance system according to claim 8, further comprising:
and the ROI determining module is used for detecting a preset part in the road condition image information by an image recognition method and determining the ROI in the preset part.
10. The driving assistance system according to claim 8, comprising:
at least one memory for storing computer instructions;
at least one processor in communication with the memory, wherein the at least one processor, when executing the computer instructions, causes the system to implement the first acquisition module, the determining module, the computing module, the second acquisition module, and the output module.
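The boundary scan described in claims 2 and 3 can be sketched as a row and column scan over a binary mask of the preset part. The following is a minimal illustration only: the 0/1 mask format, and deriving all four boundaries from the same scan, are assumptions; the patent does not specify the pixel representation or how the driving scene data fixes the lower boundary.

```python
# Hypothetical sketch of the ROI boundary scan in claims 2-3: scan a
# binary mask row by row for the upper/lower boundaries and column by
# column for the left/right boundaries. The 0/1 mask format is an
# assumption, not taken from the patent.

def find_roi(mask):
    """Return (top, bottom, left, right) of the smallest box enclosing
    all nonzero pixels of a 2D 0/1 mask, or None if the mask is empty."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    if not rows:
        return None
    cols = [c for c in range(len(mask[0])) if any(row[c] for row in mask)]
    return rows[0], rows[-1], cols[0], cols[-1]
```

In practice the scan would stop at the first qualifying row or column rather than enumerating all of them, but the bounding result is the same.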
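The neuron model in claim 5 (a dynamic weight score per input, summation, an excitation function, and a combination of several neurons) matches a plain feed-forward layer. A minimal sketch follows; the sigmoid excitation function, and the `neuron`/`obstacle_count_layer` names, are assumptions, since the claim names neither the function nor the architecture.

```python
import math

def neuron(inputs, weights, bias):
    """One neuron: a weighted score per input, summed with a bias, then
    passed through a sigmoid excitation function (an assumed choice;
    claim 5 does not name the function)."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def obstacle_count_layer(inputs, weight_rows, biases):
    """Combine several neurons over the same inputs, as in claim 5."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]
```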
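Claim 6's weighted-scoring combination of obstacle information with point cloud data, repeated for a preset number of iterations, might look like the toy loop below. The weights, the iteration count, and the choice of a simple convex combination of (distance, angle) estimates are all illustrative assumptions; the patent does not give the algorithm's details.

```python
def fuse_estimates(camera_est, lidar_est, w_cam=0.4, w_lidar=0.6, iterations=3):
    """Blend a camera-derived (distance, angle) estimate with a point-cloud
    estimate by a weighted score, repeated for a preset number of
    iterations (claim 6). Weights and iteration count are illustrative."""
    est = tuple(camera_est)
    for _ in range(iterations):
        est = tuple(w_cam * e + w_lidar * l for e, l in zip(est, lidar_est))
    return est
```

Each iteration pulls the fused estimate further toward the point-cloud measurement, which is one plausible reading of "iterative computation for preset times."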
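Claims 1 and 8 end with calculating a braking force from the drivable area, the driving data, and the obstacle distance and angle, but give no concrete formula. One standard kinematic rule, shown purely as an assumed stand-in, is the constant-deceleration stopping relation a = v² / (2d): brake harder as the required deceleration approaches a maximum. The `max_decel` value is a made-up parameter.

```python
def brake_command(speed_mps, distance_m, max_decel=8.0):
    """Normalized braking command in [0, 1] from the constant-deceleration
    stopping relation a = v**2 / (2 * d). This kinematic rule and the
    max_decel limit are assumptions, not the patent's formula."""
    if distance_m <= 0:
        return 1.0  # obstacle at or behind the reference point: full brake
    needed = speed_mps ** 2 / (2.0 * distance_m)  # deceleration to stop in distance_m
    return min(needed / max_decel, 1.0)
```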
CN202110049651.XA 2021-01-14 2021-01-14 Driving assistance method and system based on road condition detection Withdrawn CN112793567A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110049651.XA CN112793567A (en) 2021-01-14 2021-01-14 Driving assistance method and system based on road condition detection

Publications (1)

Publication Number Publication Date
CN112793567A true 2021-05-14

Family

ID=75810902

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110049651.XA Withdrawn CN112793567A (en) 2021-01-14 2021-01-14 Driving assistance method and system based on road condition detection

Country Status (1)

Country Link
CN (1) CN112793567A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114708567A (en) * 2022-06-06 2022-07-05 济南融瓴科技发展有限公司 Road surface depression detection and avoidance method and system based on binocular camera
CN114864066A (en) * 2022-07-06 2022-08-05 深圳壹家智能锁有限公司 Management method, device, equipment and storage medium of shared accompanying bed
CN115179920A (en) * 2022-09-07 2022-10-14 北京中科慧眼科技有限公司 Vehicle running system adjusting method and system based on cross-country scene
CN116052122A (en) * 2023-01-28 2023-05-02 广汽埃安新能源汽车股份有限公司 Method and device for detecting drivable space, electronic equipment and storage medium
CN116883478A (en) * 2023-07-28 2023-10-13 广州瀚臣电子科技有限公司 Obstacle distance confirmation system and method based on automobile camera

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109977845A (en) * 2019-03-21 2019-07-05 百度在线网络技术(北京)有限公司 A kind of drivable region detection method and car-mounted terminal
CN110803144A (en) * 2019-12-02 2020-02-18 长沙万为机器人有限公司 Automatic braking method, automatic braking device and automatic driving vehicle
CN112149460A (en) * 2019-06-27 2020-12-29 华为技术有限公司 Obstacle detection method and device


Similar Documents

Publication Publication Date Title
Song et al. Real-time obstacles detection and status classification for collision warning in a vehicle active safety system
Rezaei et al. Robust vehicle detection and distance estimation under challenging lighting conditions
CN112793567A (en) Driving assistance method and system based on road condition detection
CN110843794B (en) Driving scene understanding method and device and trajectory planning method and device
Wu et al. Applying a functional neurofuzzy network to real-time lane detection and front-vehicle distance measurement
Nieto et al. Road environment modeling using robust perspective analysis and recursive Bayesian segmentation
Gavrila et al. Real time vision for intelligent vehicles
CN111033510A (en) Method and device for operating a driver assistance system, driver assistance system and motor vehicle
Arenado et al. Monovision‐based vehicle detection, distance and relative speed measurement in urban traffic
Garcia et al. PHD filter for vehicle tracking based on a monocular camera
Prakash et al. Robust obstacle detection for advanced driver assistance systems using distortions of inverse perspective mapping of a monocular camera
Liu et al. Vehicle detection and ranging using two different focal length cameras
Zhang et al. A novel vehicle reversing speed control based on obstacle detection and sparse representation
Alpar et al. Intelligent collision warning using license plate segmentation
Qing et al. A novel particle filter implementation for a multiple-vehicle detection and tracking system using tail light segmentation
CN114296095A (en) Method, device, vehicle and medium for extracting effective target of automatic driving vehicle
Arthi et al. Object detection of autonomous vehicles under adverse weather conditions
Riera et al. Driver behavior analysis using lane departure detection under challenging conditions
CN114488072A (en) Obstacle detection method, obstacle detection device and storage medium
Jung et al. Intelligent Hybrid Fusion Algorithm with Vision Patterns for Generation of Precise Digital Road Maps in Self-driving Vehicles.
CN111612818A (en) Novel binocular vision multi-target tracking method and system
Dai et al. A driving assistance system with vision based vehicle detection techniques
Lu et al. A vision-based system for the prevention of car collisions at night
Beresnev et al. Automated Driving System based on Roadway and Traffic Conditions Monitoring.
Riera et al. Detecting and tracking unsafe lane departure events for predicting driver safety in challenging naturalistic driving data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210514