CN112711967A - Rugged road detection method, apparatus, storage medium, electronic device, and vehicle - Google Patents


Info

Publication number
CN112711967A
CN112711967A
Authority
CN
China
Prior art keywords
road surface
rugged
vehicle
determining
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911019061.1A
Other languages
Chinese (zh)
Inventor
马锋 (Ma Feng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BYD Co Ltd
Original Assignee
BYD Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BYD Co Ltd
Priority to CN201911019061.1A
Publication of CN112711967A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features

Abstract

The present disclosure relates to a rugged road detection method, apparatus, storage medium, electronic device, and vehicle, which belongs to the field of vehicles and can accurately determine a rugged area on a road surface. The rugged road detection method comprises: acquiring an image of the environment in front of a vehicle; extracting features from the image of the environment in front of the vehicle; matching the extracted features with a pre-trained feature model to determine a road surface boundary and a rugged region on the road surface; and determining a target road surface according to the road surface boundary and determining ruggedness information of the rugged region on the target road surface by a mathematical model comparison method.

Description

Rugged road detection method, apparatus, storage medium, electronic device, and vehicle
Technical Field
The present disclosure relates to the field of vehicles, and in particular, to a rough road detection method, apparatus, storage medium, electronic device, and vehicle.
Background
The existing rough road detection method comprises the following steps: collecting sampled values from vibration-sensitive signals; filtering out periodic anomalies and random anomalies with a statistical module and calculating statistical signals; calculating first and second derivatives with respect to speed or time with a differential module; and finally determining, with a comparison module, whether rugged road conditions exist based on the statistical signals or the first and second derivatives.
The existing detection method mainly samples vibration-sensitive signals and obtains rough road information through a series of calculations and comparisons, so the computation is difficult and the precision is low.
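The prior-art vibration pipeline described above (statistical filtering, first and second derivatives, threshold comparison) can be sketched as follows. This is an illustrative reconstruction, not the cited system's actual code; the moving-median filter, sampling interval, and threshold value are all assumptions.

```python
import numpy as np

def detect_rough_road(samples, dt, threshold=5.0):
    """Toy sketch of the prior-art pipeline: filter a vibration signal,
    take first and second time derivatives, and compare to a threshold.
    The filter choice and threshold here are illustrative assumptions."""
    # Statistical module: suppress isolated anomalies with a moving median.
    kernel = 5
    padded = np.pad(samples, kernel // 2, mode="edge")
    filtered = np.array([np.median(padded[i:i + kernel])
                         for i in range(len(samples))])
    # Differential module: first and second derivatives w.r.t. time.
    d1 = np.gradient(filtered, dt)
    d2 = np.gradient(d1, dt)
    # Comparison module: rugged road if either derivative is large.
    return bool(np.max(np.abs(d1)) > threshold or
                np.max(np.abs(d2)) > threshold)
```

A flat (constant) signal yields zero derivatives and no detection, while a strongly oscillating signal trips the threshold, which is the essence of the comparison-module decision.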
Disclosure of Invention
An object of the present disclosure is to provide a rough road detection method, apparatus, storage medium, electronic device and vehicle capable of accurately determining a rough area on a road surface.
According to a first embodiment of the present disclosure, there is provided a rough road detection method including: acquiring an image of an environment in front of a vehicle; extracting features of the image of the environment in front of the vehicle; matching the extracted features with a pre-trained feature model to determine a road boundary and a rugged region on the road; determining a target road surface according to the road surface boundary, and determining roughness information of a rough area on the target road surface through a mathematical model comparison method.
Optionally, the performing feature extraction on the image of the vehicle front environment includes: and performing feature extraction on the image of the environment in front of the vehicle by using a deep learning algorithm.
Optionally, the matching the extracted features with a pre-trained feature model to determine a road boundary and a rough area on the road includes: matching the extracted features with the pre-trained feature model, marking the features of the road boundary and the features of the rugged regions after the matching is successful, and determining the information of the road boundary and the rugged range on the road; acquiring calibration parameters of an imaging device for acquiring an image of the environment in front of the vehicle; determining world coordinates of the road surface boundary and world coordinates of a rugged area on the road surface based on the calibration parameters; determining the rugged regions based on the rugged range information, the world coordinates of the roadway boundary, and the world coordinates of rugged regions on the roadway.
Optionally, the determining of the rugged information of the rugged area on the road surface by a mathematical model comparison method comprises: establishing an ideal flat road model based on the determined road surface boundary, and establishing an actual road surface model based on the determined road surface boundary and a rugged area on the road surface; comparing the ideal flat road surface model with the actual road surface model to obtain a difference function between the ideal flat road surface model and the actual road surface model; determining a rugosity and a width of the rugged area using the difference function.
Optionally, the method further comprises: extracting barrier features based on the image of the environment in front of the vehicle, matching the extracted barrier features with a pre-trained barrier feature model, and determining barriers on the road surface; determining the world coordinates of the obstacles on the road surface according to the calibration parameters; determining obstacle information of an obstacle on the road surface based on the world coordinates of the obstacle on the road surface.
Optionally, after the determining the rugged information of the rugged region on the target road surface by the mathematical model comparison method, the method further comprises: based on the determined road surface boundary information, obstacle information on the road surface, and the rugged information, a driving strategy of the vehicle is determined.
Optionally, the determining a driving strategy of the vehicle based on the determined road surface boundary information, obstacle information on the road surface, and ruggedness information includes: determining a travelable region based on the determined road surface boundary information, obstacle information on the road surface, and ruggedness information; and, when the travelable region includes a rugged region and the vehicle is located within the lateral extent of the rugged region, determining an acceleration of the vehicle as a = (V0 - Vt)/T, where Vt = V0 - V0 × R × (1 - F) and T = D/(V0 - Vt); V0 represents a current vehicle speed of the vehicle; Vt represents a recommended vehicle speed of the vehicle over the travelable region; T represents the shortest time from the current vehicle speed to the recommended vehicle speed; R represents a positional relationship of the vehicle to the rugged region, R being 0 when the vehicle is located outside the lateral extent of the rugged region and 1 when the vehicle is located within it; F represents a roughness of the rugged area; and D represents a longitudinal distance between the vehicle and the rugged area.
According to a second embodiment of the present disclosure, there is provided a rough road surface detecting device including: the acquisition module is used for acquiring an image of the environment in front of the vehicle; the characteristic extraction module is used for extracting the characteristics of the image of the environment in front of the vehicle; the matching module is used for matching the extracted features with a pre-trained feature model and determining a road boundary and a rugged region on the road; and the rugged information determining module is used for determining a target road surface according to the road surface boundary and determining rugged information of a rugged area on the target road surface by a mathematical model comparison method.
Optionally, the feature extraction module is further configured to perform feature extraction on the image of the vehicle front environment by using a deep learning algorithm.
Optionally, the matching module comprises: the matching and marking submodule is used for matching the extracted features with the pre-trained feature model, marking the features of the road boundary and the features of the rugged area after the matching is successful, and determining the information of the road boundary and the rugged range on the road; the calibration parameter acquisition sub-module is used for acquiring calibration parameters of an imaging device for acquiring images of the environment in front of the vehicle; a world coordinate determination submodule for determining world coordinates of the road surface boundary and world coordinates of a rugged region on the road surface based on the calibration parameters; a rough area determination sub-module to determine the rough area based on the rough range information, world coordinates of the road surface boundary, and world coordinates of a rough area on the road surface.
Optionally, the rugged information determination module comprises: a mathematical modeling submodule for establishing an ideal flat road model based on the determined road surface boundary, and establishing an actual road surface model based on the determined road surface boundary and a rugged region on the road surface; the comparison submodule is used for comparing the ideal flat road surface model with the actual road surface model to obtain a difference function between the ideal flat road surface model and the actual road surface model; a rugged information determination sub-module for determining a ruggedness and a width of the rugged region using the difference function.
Optionally, the feature extraction module is further configured to extract obstacle features based on the image of the vehicle front environment; the matching module is further used for matching the extracted obstacle features with a pre-trained obstacle feature model, determining obstacles on the road surface, determining the world coordinates of the obstacles on the road surface according to the calibration parameters, and determining the obstacle information of the obstacles on the road surface based on the world coordinates of the obstacles on the road surface.
Optionally, the apparatus further comprises: a driving strategy determination module for determining a driving strategy of the vehicle based on the determined road surface boundary information, obstacle information on the road surface, and rugged information.
Optionally, the driving strategy determination module comprises: a travelable region determination submodule for determining a travelable region based on the determined road surface boundary information, obstacle information on the road surface, and ruggedness information; and an acceleration determination submodule for determining an acceleration of the vehicle as a = (V0 - Vt)/T when a rugged region is included in the travelable region and the vehicle is located within the lateral extent of the rugged region, where Vt = V0 - V0 × R × (1 - F) and T = D/(V0 - Vt); V0 represents a current vehicle speed of the vehicle; Vt represents a recommended vehicle speed of the vehicle over the travelable region; T represents the shortest time from the current vehicle speed to the recommended vehicle speed; R represents a positional relationship of the vehicle to the rugged region, R being 0 when the vehicle is located outside the lateral extent of the rugged region and 1 when the vehicle is located within it; F represents a roughness of the rugged area; and D represents a longitudinal distance between the vehicle and the rugged area.
According to a third embodiment of the present disclosure, there is provided a computer readable storage medium having a computer program stored thereon, characterized in that the program, when executed by a processor, implements the steps of the method according to the first embodiment of the present disclosure.
According to a fourth embodiment of the present disclosure, there is provided an electronic apparatus including: a memory having a computer program stored thereon; a processor for executing the computer program in the memory to carry out the steps of the method according to the first embodiment of the disclosure.
According to a fifth embodiment of the present disclosure, there is provided a vehicle including: the rough road detection device according to the second embodiment of the present disclosure.
With this technical solution, the road surface features are first identified and matched, and the ruggedness information of the rugged area is then further determined by a mathematical model comparison method. Information such as the specific position, type, degree of ruggedness, and width of the rugged area can therefore be obtained accurately before the vehicle reaches it, so the extracted road surface information is accurate and reliable. Driving decision suggestions can be executed according to road surface information obtained in advance, which helps avoid driving risks early and greatly improves the utilization efficiency of the road surface information.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
FIG. 1 illustrates a flow chart of a rough road detection method according to an embodiment of the present disclosure.
Fig. 2 shows an example diagram of the road surface condition after feature extraction.
FIG. 3 illustrates yet another flowchart of a rough road detection method according to an embodiment of the present disclosure.
FIG. 4 illustrates a schematic block diagram of a rough road detection device according to an embodiment of the present disclosure.
FIG. 5 illustrates yet another schematic block diagram of a rough road detection device according to an embodiment of the present disclosure.
FIG. 6 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
The following detailed description of specific embodiments of the present disclosure is provided in connection with the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present disclosure, are given by way of illustration and explanation only, not limitation.
FIG. 1 illustrates a flow chart of a rough road detection method according to an embodiment of the present disclosure. As shown in fig. 1, the method includes the following steps S11 to S14.
In step S11, a vehicle front environment image is acquired.
The image of the environment in front of the vehicle may be acquired using at least one of a monocular camera, a binocular camera, a multi-view camera, a color camera, a grayscale camera, and the like. The camera is preferably a high-definition camera so that the captured image is clearer. The visual range of the camera may exceed 1 km, and the camera may simultaneously cover the vehicle's current lane and the four lanes adjacent to it on the left and right, so that it captures images over a longer distance and a wider area in front of the vehicle, which is more useful as a reference for the driving strategy. The camera may be mounted behind the windshield of the vehicle.
The present disclosure does not limit the format of the vehicle front environment image; it may be, for example, a grayscale image or a color image. For convenience in the next step, if the acquired image is a color image, it may be converted into a grayscale image, although skipping the conversion is also feasible.
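The optional color-to-grayscale conversion mentioned above can be done with a standard luminance weighting. The minimal numpy sketch below uses the common ITU-R BT.601 coefficients; the disclosure does not fix a particular conversion, so the choice of weights is an assumption.

```python
import numpy as np

def to_grayscale(rgb_image):
    """Convert an H x W x 3 RGB image to a grayscale H x W image
    using ITU-R BT.601 luminance weights (0.299, 0.587, 0.114)."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb_image[..., :3] @ weights
```

For a pure-white image every weighted sum is 0.299 + 0.587 + 0.114 = 1.0, so the grayscale output is all ones.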
After the camera shoots the image of the environment in front of the vehicle, the shot image of the environment in front of the vehicle can be acquired from the camera.
In step S12, feature extraction is performed on the vehicle front environment image.
In this step, various deep learning frameworks can be used to perform feature extraction on the image of the vehicle front environment, for example Caffe, TensorFlow, and so on; the method of feature extraction is likewise varied, and the present disclosure is not limited in this respect. For example, feature extraction may be performed with a deep learning model based on the SSD network in the Caffe framework, which is one of the frameworks that can satisfy the deep feature extraction requirements of the present disclosure and which supports a large number of features. Of course, if some features required by the present disclosure are not available in the Caffe framework, the required features may also be sample-trained and added to it. The present disclosure does not limit the method of sample training.
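As a stand-in for the deep features discussed above, the following numpy sketch extracts a simple gradient-orientation histogram from a grayscale patch. It only illustrates the general idea of turning an image region into a feature vector that can later be matched; it is not the Caffe SSD model of the disclosure, and the bin count is an arbitrary assumption.

```python
import numpy as np

def gradient_histogram_feature(gray_patch, bins=8):
    """Illustrative hand-crafted feature: a histogram of gradient
    orientations weighted by gradient magnitude. In practice a deep
    network (e.g. an SSD detector) would replace this step."""
    gy, gx = np.gradient(gray_patch.astype(float))
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)  # angles in [-pi, pi]
    hist, _ = np.histogram(orientation, bins=bins,
                           range=(-np.pi, np.pi), weights=magnitude)
    total = hist.sum()
    return hist / total if total > 0 else hist  # normalized feature vector
```

A patch with a purely horizontal intensity ramp concentrates all its gradient energy in the orientation bin containing angle 0, and the normalized vector sums to 1.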
In this step, the features to be extracted mainly include two major classes. The first type is a road surface boundary; the second type is a rough area of the road surface.
The road surface boundary characteristic may be at least one of: curbs, isolation belts, green belts, guardrails, lane lines, edges of other vehicles, and the like.
The rugged area characteristic may be at least one of: speed bumps, well covers, pits, rough roads, hills with height differences, and other obstacles that may cause bumps in the vehicle.
In step S13, the extracted features are matched with a feature model trained in advance, and a road surface boundary and a rough area on the road surface are determined.
The implementation of feature matching is diverse. One implementation manner includes: matching the extracted features with a pre-trained feature model, marking the features of the road boundary and the features of the rugged area after the matching is successful, and determining the information of the road boundary and the rugged range on the road; acquiring calibration parameters of an imaging device for acquiring an image of an environment in front of a vehicle; determining world coordinates of a boundary of the road surface and world coordinates of a rugged region on the road surface based on the calibration parameters; and determining a rugged region based on the rugged range information, the world coordinates of the road boundary, and the world coordinates of the rugged region on the road.
In a specific implementation, during feature matching and labeling, if an extracted feature matches a pre-trained feature model such as a lane line, a label is applied at the extracted feature. The label may identify the specific subclass of each feature class: for example, if the major class is the road surface boundary and the lane line is a subclass under it, the extracted feature is labeled as a lane line rather than merely as a road surface boundary.
In addition, since the number of cameras and their installation position, installation angle, installation error, and so on all affect the finally determined world coordinates, the camera parameters need to be calibrated before determining the world coordinates of the road surface boundary and of the rugged region on the road surface, so as to obtain the camera's calibration parameters; the calibration method may include, for example, manual calibration or chessboard calibration, so that a world coordinate system of the camera relative to the environment in front of the vehicle can be established. Moreover, during calibration the offsets of the camera's sensor relative to the lens in the x, y, and z directions can be determined and added into the reconstruction of the camera world coordinate system. The world coordinate system is thereby accurately reflected through the camera coordinate system, and the distance, angle, width, and so on of the road surface boundary and the rugged region relative to the host vehicle are accurately reflected in the reconstructed world coordinate system, yielding the world coordinates of the road surface boundary, the rugged region, and so on.
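Under the flat-road assumption implicit in the passage above, image pixels can be mapped to road-plane world coordinates with a planar homography derived from the camera calibration. The sketch below is a minimal numpy version of that mapping; the homography matrices used to exercise it are made up for illustration, not taken from any real calibration.

```python
import numpy as np

def pixel_to_world(H, pixel):
    """Map an image pixel (u, v) to road-plane world coordinates (X, Y)
    using a 3x3 homography H obtained from camera calibration.
    Assumes all road points lie on a single plane (Z = 0)."""
    u, v = pixel
    X, Y, w = H @ np.array([u, v, 1.0])
    return X / w, Y / w  # dehomogenize
```

With the identity homography a pixel maps to itself, and a diagonal scaling homography scales the coordinates, which is a quick sanity check on the dehomogenization step.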
Take the road surface condition shown in fig. 2 as an example. After feature matching, it may be determined that a green belt, a first lane line, a second lane line, a third lane line, a fourth lane line, a first obstacle, a first rugged area, and the vehicle 2 are present in the environment in front of the vehicle. The road surface boundary can be determined from the green belt, the first lane line, and the fourth lane line. Because solid lane lines exist in this example, the solid first and fourth lane lines are selected as the road surface boundary according to the principle of giving priority to solid lane lines; if solid lane lines were absent, the edge of the green belt could be used as the road surface boundary. The road surface position can thus be preliminarily screened out. After the location of the road surface has been identified, obstacles, rugged areas, and so on upon it are identified; in this example, a first obstacle, a first rugged area, and the vehicle 2 are present on the road surface, and the world coordinates of these obstacles and rugged areas can also be determined by the method described above.
In step S14, a target road surface is determined based on the road surface boundary, and unevenness information of an uneven area on the target road surface is determined by a mathematical model comparison method.
In step S13, only the world coordinates and types of the rugged regions can be obtained, such as speed bump, manhole cover, or pit; deep-learning feature matching alone, however, makes it difficult to obtain ruggedness information such as the height of a speed bump or the depth of a pit. Since the driving strategy of the vehicle differs with the degree of ruggedness, ruggedness information is important even for rugged areas of the same type. It is therefore necessary in step S14 to further precisely determine the roughness, width, and so on of the rugged region.
The manner in which the ruggedness information of the rugged area is determined is various. One embodiment includes: establishing an ideal flat road model based on the determined road surface boundary, and establishing an actual road surface model based on the determined road surface boundary, the obstacles on the road surface, and the rugged regions on the road surface; comparing the ideal flat road surface model with the actual road surface model to obtain a difference function between them, the difference function reflecting the difference between the two models; and determining the ruggedness and width of the rugged region using the difference function.
In a specific implementation, the world coordinates of the rugged region are obtained in step S13, so the coordinate range of the rugged region can be determined from the world coordinates of each of its edges. An ideal flat road model and an actual road surface model can then be established within that coordinate range, and information such as the ruggedness and width of the rugged region can be accurately determined by comparing the two models, which also minimizes the computational load and false detections.
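The model-comparison step above can be sketched in one dimension: the ideal flat model is a zero height profile over the region's coordinate range, the actual model is the measured height profile, and their difference function yields the ruggedness and width. The sampling grid, tolerance, and profile values below are illustrative assumptions, not the disclosure's actual models.

```python
import numpy as np

def ruggedness_and_width(x, actual_height, flat_height=0.0, tol=0.01):
    """Compare the actual road profile to an ideal flat model.
    Returns (ruggedness, width): ruggedness is the peak absolute
    deviation of the difference function; width is the extent over
    which the difference exceeds a small tolerance."""
    diff = np.asarray(actual_height) - flat_height  # difference function
    ruggedness = float(np.max(np.abs(diff)))
    rough = np.abs(diff) > tol
    if not rough.any():
        return ruggedness, 0.0
    idx = np.flatnonzero(rough)
    width = float(x[idx[-1]] - x[idx[0]])
    return ruggedness, width
```

For a 5 cm bump occupying 0.4 m of a 2 m coordinate range, the sketch recovers both the bump height and its width directly from the difference function.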
With this technical solution, the road surface features are first identified and matched, and the ruggedness information of the rugged area is then further determined by a mathematical model comparison method. Information such as the specific position, type, degree of ruggedness, and width of the rugged area can therefore be obtained accurately before the vehicle reaches it, so the extracted road surface information is accurate and reliable. Driving decision suggestions can be executed according to road surface information obtained in advance, which helps avoid driving risks early and greatly improves the utilization efficiency of the road surface information.
In one implementation, the method according to an embodiment of the present disclosure may further include: extracting obstacle features from the image of the environment in front of the vehicle, matching the extracted obstacle features with a pre-trained obstacle feature model, and determining obstacles on the road surface; determining the world coordinates of the obstacles on the road surface according to the calibration parameters; and determining obstacle information of an obstacle on the road surface based on its world coordinates. The obstacle feature may be at least one of: pedestrians, other vehicles, road warning signs, stone piers, animals, spilled items, and the like.
Through the above technical solution, obstacles on the road surface can also be accurately determined in advance, which helps avoid driving risks early and greatly improves the utilization efficiency of the road surface information.
FIG. 3 illustrates yet another flowchart of a rough road detection method according to an embodiment of the present disclosure. As shown in fig. 3, after step S14, the rough road detection method according to the embodiment of the present disclosure may further include step S15 of determining a driving strategy of the vehicle based on the determined road surface boundary information, obstacle information on the road surface, and rough road information.
The manner in which the vehicle driving strategy is determined is varied; one implementation is as follows. First, vehicle information is acquired, such as the height of the chassis, the maximum passable gradient H0, the current vehicle speed V0, and the body width K. Then, in conjunction with the ruggedness information obtained in step S14, the longitudinal distance D between the vehicle and the rugged region, the positional relationship R between the vehicle and the rugged region, the roughness F, and so on may be obtained. The value of R may be 0 or 1: R = 0 indicates that the body edge is outside the lateral extent of the rugged region, meaning the rugged region does not affect the normal driving of the vehicle (for example, the vehicle 2 in fig. 2 is outside the lateral extent of the first rugged area); R = 1 indicates that the body edge is within the lateral extent of the rugged region, meaning the rugged region may affect normal driving (for example, the host vehicle in fig. 2 is within the lateral extent of the first rugged area). The roughness F may, for example, be expressed as a percentage relative to the maximum passable gradient H0 of the host vehicle: when F = 0%, the road surface is flat, and when F exceeds 100%, the area is impassable.
After the above information is obtained, the driving strategy can be determined. For example, combining the positional relationship R, the roughness F, and so on, the recommended vehicle speed in the travelable region can be determined as Vt = V0 - V0 × R × (1 - F). The shortest time to reach the recommended speed from the current speed is then T = D/(V0 - Vt), and the recommended acceleration is a = (V0 - Vt)/T. The vehicle can then be driven at the acceleration a determined above, so that it is adjusted in good time to a vehicle speed that is more comfortable for the driver or passengers before reaching the rugged area, and passes through the rugged area at that comfortable speed.
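The formulas above can be checked with a short numeric sketch. Here roughness is taken as a fraction in [0, 1] rather than a percentage, and the speeds and distance in the example are invented values for illustration only.

```python
def recommended_deceleration(v0, r, f, d):
    """Applies the disclosure's driving-strategy formulas:
    Vt = V0 - V0*R*(1-F), T = D/(V0 - Vt), a = (V0 - Vt)/T.
    v0: current speed, r: positional relationship (0 or 1),
    f: roughness as a fraction in [0, 1],
    d: longitudinal distance to the rugged region."""
    vt = v0 - v0 * r * (1.0 - f)   # recommended speed over the region
    if vt >= v0:                   # R = 0 or F = 1: no slowdown needed
        return vt, 0.0, 0.0
    t = d / (v0 - vt)              # shortest time to reach vt
    a = (v0 - vt) / t              # magnitude of the required slowdown
    return vt, t, a
```

For example, at V0 = 20 m/s with R = 1, F = 0.5, and D = 100 m, the formulas give Vt = 10 m/s, T = 10 s, and a = 1 m/s²; with R = 0 the rugged region does not affect the vehicle and no slowdown is required.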
Another embodiment of determining the vehicle driving strategy is described next in conjunction with fig. 2. As shown in fig. 2, the travelable region clearly includes the two lanes bounded by the second, third, and fourth lane lines, and part of the lane bounded by the first and second lane lines. Since the positional relationship R between the host vehicle and the first rugged region ahead is 1 (i.e., the host vehicle is affected by the first rugged region), a lane change may be decided based on the longitudinal distance between the host vehicle and the vehicle 2. For example, assuming the detected longitudinal distance between the host vehicle and the vehicle 2 is D1, then after the host vehicle's traveling distance exceeds D1 (with the vehicle 2 stationary), the host vehicle may be controlled to change into the lane bounded by the third and fourth lane lines. The positional relationship between the host vehicle and the first rugged region then becomes R = 0, so the first rugged region can be passed without affecting vehicle speed or the driving experience.
FIG. 4 illustrates a schematic block diagram of a rough road detection device according to an embodiment of the present disclosure. As shown in fig. 4, the rough road detection device includes: an obtaining module 41, configured to obtain an image of an environment in front of a vehicle; the feature extraction module 42 is used for extracting features of the image of the environment in front of the vehicle; a matching module 43, configured to match the extracted features with a pre-trained feature model, and determine a road boundary and a rugged region on the road; and a rugged information determination module 44 for determining a target road surface according to the road surface boundary, and determining rugged information of a rugged region on the target road surface by a mathematical model comparison method.
With this technical solution, road surface features are first identified and matched, and the ruggedness information of the rugged region is then further determined by a mathematical model comparison method. The specific position, type, degree of ruggedness, width, and other information of the rugged region can therefore be obtained accurately before the vehicle reaches it. The extracted road surface information is thus accurate and reliable, driving decision suggestions can be executed according to road surface information obtained in advance, driving risks can be avoided early, and the utilization efficiency of the road surface information is greatly improved.
Optionally, the feature extraction module 42 is further configured to perform feature extraction on the vehicle front environment image by using a deep learning algorithm.
Optionally, the matching module 43 comprises: the matching and marking submodule is used for matching the extracted features with the pre-trained feature model, marking the features of the road boundary and the features of the rugged area after the matching is successful, and determining the information of the road boundary and the rugged range on the road; the calibration parameter acquisition sub-module is used for acquiring calibration parameters of an imaging device for acquiring images of the environment in front of the vehicle; the world coordinate determination submodule is used for determining the world coordinates of the road surface boundary and the world coordinates of the rugged region on the road surface based on the calibration parameters; and a rugged region determination sub-module for determining a rugged region based on the rugged range information, the world coordinates of the road boundary, and the world coordinates of the rugged region on the road.
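The patent does not specify the camera model behind the calibration parameters. One common realization, shown here purely as an assumed sketch, is a planar homography H (a 3x3 matrix from calibration) mapping image pixels to road-plane world coordinates.

```python
import numpy as np

# Illustrative only: H is assumed to be a 3x3 homography obtained from the
# imaging device's calibration parameters, mapping pixel (u, v) on the image
# plane to world coordinates (x, y) on the road plane.
def pixel_to_world(h: np.ndarray, u: float, v: float) -> tuple[float, float]:
    """Map an image pixel (u, v) to road-plane world coordinates (x, y)."""
    p = h @ np.array([u, v, 1.0])   # homogeneous projection
    return p[0] / p[2], p[1] / p[2]  # dehomogenize
```

Applying this mapping to the marked boundary and rugged-region pixels yields the world coordinates used by the rugged region determination sub-module.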
Optionally, the rugged information determination module 44 includes: a mathematical modeling submodule for establishing an ideal flat road model based on the determined road surface boundary, and establishing an actual road surface model based on the determined road surface boundary and a rugged region on the road surface; the comparison submodule is used for comparing the ideal flat road model with the actual road model to obtain a difference function between the ideal flat road model and the actual road model; a rugged information determination sub-module for determining a ruggedness and a width of the rugged region using a difference function.
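A minimal sketch of this model comparison, assuming both road models are sampled as height profiles and the ideal flat road has zero height everywhere; the specific ruggedness measure and threshold below are illustrative assumptions, not the patent's definitions.

```python
import numpy as np

def ruggedness_and_width(actual_height: np.ndarray,
                         ideal_height: np.ndarray,
                         sample_spacing_m: float,
                         threshold_m: float = 0.02):
    """Return (ruggedness, width_m) of the rugged region.

    diff is the difference function between the actual road surface model
    and the ideal flat road model. Here ruggedness is taken as the peak
    absolute deviation, and width as the extent over which |diff| exceeds
    a small threshold (assumed values for illustration).
    """
    diff = actual_height - ideal_height
    ruggedness = float(np.max(np.abs(diff)))
    rugged_samples = int(np.count_nonzero(np.abs(diff) > threshold_m))
    return ruggedness, rugged_samples * sample_spacing_m
```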
Optionally, the feature extraction module 42 is further configured to extract the obstacle feature based on the image of the vehicle front environment; the matching module 43 is further configured to match the extracted obstacle feature with a pre-trained obstacle feature model, determine an obstacle on the road surface, determine a world coordinate of the obstacle on the road surface according to the calibration parameter, and determine obstacle information of the obstacle on the road surface based on the world coordinate of the obstacle on the road surface.
FIG. 5 illustrates yet another schematic block diagram of a rough road detection device according to an embodiment of the present disclosure. As shown in fig. 5, the apparatus further includes: a driving strategy determination module 45 for determining a driving strategy of the vehicle based on the determined road surface boundary information, obstacle information on the road surface, and bumpiness information.
Alternatively, the driving strategy determination module 45 includes: a travelable region determination submodule for determining a travelable region based on the determined road surface boundary information, obstacle information on the road surface, and ruggedness information; and an acceleration determination submodule for determining, in a case where the travelable region includes a rugged region and the vehicle is located within the lateral range of the rugged region, the acceleration of the vehicle as a = (V0 - Vt)/T, where Vt = V0 - V0 × R × (1 - F) and T = D/(V0 - Vt), in which V0 represents the current vehicle speed of the vehicle; Vt represents the recommended vehicle speed of the vehicle over the travelable region; T represents the shortest time to go from the current vehicle speed to the recommended vehicle speed; R represents the positional relationship between the vehicle and the rugged region, R being 0 when the vehicle is located outside the lateral range of the rugged region and 1 when the vehicle is located within it; F represents the roughness of the rugged region; and D represents the longitudinal distance between the vehicle and the rugged region. In addition, the driving strategy determination module 45 may read vehicle information used in determining the driving strategy, such as the chassis height of the vehicle, the maximum passable gradient, the current vehicle speed, and the vehicle body width, through, for example, the CAN bus.
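The acceleration formulas above can be transcribed directly. Only the guard against division by zero (when R = 0 or F = 1, so Vt = V0 and no speed change is needed) is an added assumption; D is assumed positive.

```python
# Transcription of the embodiment's formulas:
#   Vt = V0 - V0 * R * (1 - F)
#   T  = D / (V0 - Vt)
#   a  = (V0 - Vt) / T
def recommended_acceleration(v0: float, r: int, f: float, d: float) -> float:
    """Acceleration needed to reach the recommended speed Vt over distance D.

    v0: current vehicle speed; r: 1 if the vehicle is within the lateral
    range of the rugged region, else 0; f: roughness of the region;
    d: longitudinal distance to the rugged region (assumed > 0).
    """
    vt = v0 - v0 * r * (1.0 - f)   # recommended vehicle speed
    if vt == v0:                   # r == 0 or f == 1: no speed change needed
        return 0.0
    t = d / (v0 - vt)              # shortest time to reach vt
    return (v0 - vt) / t           # required acceleration magnitude
```

For example, at V0 = 20 m/s with R = 1, F = 0.5, and D = 50 m, the recommended speed is 10 m/s and the required acceleration magnitude is 2 m/s².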
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
According to still another embodiment of the present disclosure, there is provided a vehicle including the rough road detection apparatus according to the embodiment of the present disclosure.
Fig. 6 is a block diagram illustrating an electronic device 700 according to an example embodiment. As shown in fig. 6, the electronic device 700 may include: a processor 701 and a memory 702. The electronic device 700 may also include one or more of a multimedia component 703, an input/output (I/O) interface 704, and a communication component 705.
The processor 701 is configured to control the overall operation of the electronic device 700 to complete all or part of the steps of the rough road detection method. The memory 702 is used to store various types of data to support operation at the electronic device 700, such as instructions for any application or method operating on the electronic device 700 and application-related data, such as contact data, transmitted and received messages, pictures, audio, video, and the like. The memory 702 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk. The multimedia component 703 may include a screen and an audio component. The screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals. A received audio signal may further be stored in the memory 702 or transmitted through the communication component 705. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 704 provides an interface between the processor 701 and other interface modules, such as a keyboard, a mouse, or buttons. These buttons may be virtual buttons or physical buttons. The communication component 705 is used for wired or wireless communication between the electronic device 700 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, near field communication (NFC), 2G, 3G, or 4G, or a combination of one or more of them, and the corresponding communication component 705 may accordingly include a Wi-Fi module, a Bluetooth module, and an NFC module.
In an exemplary embodiment, the electronic Device 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the above-described rough road surface detection method.
In another exemplary embodiment, a computer readable storage medium is also provided, which includes program instructions, which when executed by a processor, implement the steps of the rough road detection method described above. For example, the computer readable storage medium may be the memory 702 including program instructions executable by the processor 701 of the electronic device 700 to perform the rough road detection method described above.
The preferred embodiments of the present disclosure are described in detail with reference to the accompanying drawings, however, the present disclosure is not limited to the specific details of the above embodiments, and various simple modifications may be made to the technical solution of the present disclosure within the technical idea of the present disclosure, and these simple modifications all belong to the protection scope of the present disclosure.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner without departing from the scope of the invention. In order to avoid unnecessary repetition, various possible combinations will not be separately described in this disclosure.
In addition, any combination of various embodiments of the present disclosure may be made, and the same should be considered as the disclosure of the present disclosure, as long as it does not depart from the spirit of the present disclosure.

Claims (11)

1. A rough road surface detection method, comprising:
acquiring an image of an environment in front of a vehicle;
extracting features of the image of the environment in front of the vehicle;
matching the extracted features with a pre-trained feature model to determine a road boundary and a rugged region on the road;
determining a target road surface according to the road surface boundary, and determining roughness information of a rough area on the target road surface through a mathematical model comparison method.
2. The method of claim 1, wherein the feature extracting the image of the vehicle front environment comprises:
and performing feature extraction on the image of the environment in front of the vehicle by using a deep learning algorithm.
3. The method of claim 1, wherein matching the extracted features to a pre-trained feature model to determine road surface boundaries and rugged regions on the road surface comprises:
matching the extracted features with the pre-trained feature model, marking the features of the road boundary and the features of the rugged regions after the matching is successful, and determining the information of the road boundary and the rugged range on the road;
acquiring calibration parameters of an imaging device for acquiring an image of the environment in front of the vehicle;
determining world coordinates of the road surface boundary and world coordinates of a rugged area on the road surface based on the calibration parameters;
determining the rugged regions based on the rugged range information, the world coordinates of the roadway boundary, and the world coordinates of rugged regions on the roadway.
4. The method of claim 1, wherein the determining the roughness information of the rugged area on the target surface by mathematical model comparison comprises:
establishing an ideal flat road model based on the determined road surface boundary, and establishing an actual road surface model based on the determined road surface boundary and a rugged area on the road surface;
comparing the ideal flat road surface model with the actual road surface model to obtain a difference function between the ideal flat road surface model and the actual road surface model;
determining a rugosity and a width of the rugged area using the difference function.
5. The method of claim 3, further comprising:
extracting barrier features based on the image of the environment in front of the vehicle, matching the extracted barrier features with a pre-trained barrier feature model, and determining barriers on the road surface;
determining the world coordinates of the obstacles on the road surface according to the calibration parameters;
determining obstacle information of an obstacle on the road surface based on the world coordinates of the obstacle on the road surface.
6. The method of claim 5, wherein after the determining the roughness information of the rugged area on the target road surface by mathematical model comparison, the method further comprises:
based on the determined road surface boundary information, obstacle information on the road surface, and the rugged information, a driving strategy of the vehicle is determined.
7. The method of claim 6, wherein determining a driving strategy for the vehicle based on the determined road surface boundary information, obstacle information on the road surface, and roughness information comprises:
determining a travelable region based on the determined road surface boundary information, obstacle information on the road surface, and rugged information;
determining, in a case where the travelable region includes a rugged region and the vehicle is located within a lateral range of the rugged region, an acceleration of the vehicle as a = (V0 - Vt)/T, where Vt = V0 - V0 × R × (1 - F) and T = D/(V0 - Vt),
wherein V0 represents a current vehicle speed of the vehicle; Vt represents a recommended vehicle speed of the vehicle over the travelable region; T represents the shortest time from the current vehicle speed to the recommended vehicle speed; R represents a positional relationship of the vehicle to the rugged region, R being 0 when the vehicle is located outside the lateral extent of the rugged region and 1 when the vehicle is located within the lateral extent of the rugged region; F represents a roughness of the rugged area; and D represents a longitudinal distance between the vehicle and the rugged area.
8. A rough road surface detecting device, comprising:
the acquisition module is used for acquiring an image of the environment in front of the vehicle;
the characteristic extraction module is used for extracting the characteristics of the image of the environment in front of the vehicle;
the matching module is used for matching the extracted features with a pre-trained feature model and determining a road boundary and a rugged region on the road;
and the rugged information determining module is used for determining a target road surface according to the road surface boundary and determining rugged information of a rugged area on the target road surface by a mathematical model comparison method.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
10. An electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to carry out the steps of the method of any one of claims 1 to 7.
11. A vehicle, characterized in that the vehicle comprises: a rough road detection device as claimed in claim 8.
CN201911019061.1A 2019-10-24 2019-10-24 Rugged road detection method, apparatus, storage medium, electronic device, and vehicle Pending CN112711967A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911019061.1A CN112711967A (en) 2019-10-24 2019-10-24 Rugged road detection method, apparatus, storage medium, electronic device, and vehicle


Publications (1)

Publication Number Publication Date
CN112711967A true CN112711967A (en) 2021-04-27

Family

ID=75540344


Country Status (1)

Country Link
CN (1) CN112711967A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106485233A (en) * 2016-10-21 2017-03-08 深圳地平线机器人科技有限公司 Drivable region detection method, device and electronic equipment
CN106919915A (en) * 2017-02-22 2017-07-04 武汉极目智能技术有限公司 Map road mark and road quality harvester and method based on ADAS systems



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination