CN112329670B - Method for recognizing obstacle with irregular road condition of unmanned vehicle, computer program and storage medium - Google Patents


Info

Publication number
CN112329670B
CN112329670B (application CN202011256299.9A)
Authority
CN
China
Prior art keywords
result
recording
obstacle
recording result
data
Prior art date
Legal status
Active
Application number
CN202011256299.9A
Other languages
Chinese (zh)
Other versions
CN112329670A
Inventor
杨扬
Current Assignee
Shanghai Boonray Intelligent Technology Co Ltd
Original Assignee
Shanghai Boonray Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Boonray Intelligent Technology Co Ltd filed Critical Shanghai Boonray Intelligent Technology Co Ltd
Priority to CN202011256299.9A
Publication of CN112329670A
Application granted
Publication of CN112329670B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10048 Infrared image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T 2207/30261 Obstacle

Abstract

The invention relates to a method, a computer program, and a storage medium for identifying irregular road-condition obstacles for a driverless vehicle. The method comprises the following steps. Step 1: acquire infrared data of a target area with an infrared sensor and image data of the same area with an image acquisition device. Step 2: recognize obstacle information in the infrared data with a preset infrared recognition model and record the recognition result to obtain a first recording result. Step 3: recognize the image data with a preset obstacle feature recognition model. Step 4: compare the two results and make a judgment. After a plurality of feature sets of the image are obtained through data feature analysis, the method judges which of these sets, from the first to the third feature set, the obstacle features of the image data are closest to, so the obtained result is more accurate.

Description

Method for recognizing obstacle with irregular road condition of unmanned vehicle, computer program and storage medium
Technical Field
The invention belongs to the technical field of unmanned driving, and particularly relates to a method for identifying irregular road-condition obstacles of an unmanned vehicle, a computer program, and a storage medium.
Background
An unmanned automobile is an intelligent vehicle that senses the road environment through an on-board sensing system, automatically plans a driving route, and controls the vehicle to reach a preset destination.
It uses on-board sensors to perceive the vehicle's surroundings and controls the steering and speed of the vehicle according to the road, vehicle position, and obstacle information obtained by this perception, so that the vehicle travels on the road safely and reliably.
Unmanned driving integrates technologies such as automatic control, system architecture, artificial intelligence, and visual computing. It is a product of the advanced development of computer science, pattern recognition, and intelligent control; it is an important indicator of a nation's research strength and industrial level, and it has broad application prospects in national defense and the national economy.
Correct identification of obstacles is of great importance for unmanned vehicles and for autonomous driving modes of vehicles in general. To identify obstacles automatically, a lidar sensor, a millimeter-wave radar sensor, or an image acquisition device is usually installed on the vehicle to collect information about obstacles around it, yielding three-dimensional point cloud data or two-dimensional image data. A trained machine learning algorithm then recognizes the obstacles in the three-dimensional point cloud data or two-dimensional image data. The algorithm is typically trained on three-dimensional point cloud data or two-dimensional image data in which obstacles have been labeled.
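A minimal sketch of the conventional training-and-inference setup described above. The feature vectors, labels, classifier choice (scikit-learn's RandomForestClassifier), and random data are illustrative assumptions only, not taken from the patent.

# Sketch of the conventional approach: train a classifier on labelled obstacle
# data, then apply it to features extracted from new sensor frames.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training set: each row is a feature vector extracted from
# labelled point-cloud or image data, each label marks an obstacle class.
X_train = np.random.rand(200, 16)          # 200 labelled samples, 16 features
y_train = np.random.randint(0, 3, 200)     # 3 obstacle classes

clf = RandomForestClassifier(n_estimators=50).fit(X_train, y_train)

X_new = np.random.rand(5, 16)              # features from new sensor frames
print(clf.predict(X_new))                  # predicted obstacle classes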
Disclosure of Invention
The main purpose of the invention is to provide a method, a computer program, and a storage medium for identifying irregular road-condition obstacles of an unmanned vehicle. While the unmanned vehicle is driving, an infrared sensor and an image acquisition device acquire information about a target area; the two data streams are recognized separately, and the final result is judged from both recognition results, which improves the accuracy of the recognition result. In addition, during image recognition, data features are extracted after the image information has been converted into data, which improves recognition efficiency.
To achieve this purpose, the technical scheme of the invention is realized as follows:
the method for identifying irregular road-condition obstacles of an unmanned vehicle comprises the following steps:
Step 1: acquire infrared data of a target area with the infrared sensor and acquire image data of the target area with the image acquisition device;
Step 2: recognize obstacle information in the infrared data with a preset infrared recognition model, and record the recognition result to obtain a first recording result;
Step 3: recognize the image data with a preset obstacle feature recognition model to obtain an obstacle feature image of the image data, convert the obtained obstacle feature image into binary data, perform data feature recognition on the binary data to obtain a data feature recognition result, analyze the data feature recognition result to obtain the obstacle information recognition result in the image data corresponding to it, and record the obstacle information recognition result to obtain a second recording result;
Step 4: compare the first recording result with the second recording result and detect whether the difference between them exceeds a set threshold; if it does, judge that the two results are different, and if it does not, judge that they are the same; if they are the same, take a weighted average of the first and second recording results according to the set weight values to obtain the final result; if they are different, determine the correct one of the two recording results and output it as the final result.
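A minimal sketch of the step-4 decision logic, assuming the two recording results can be reduced to comparable numeric values (for example an estimated obstacle distance); the threshold, the weights, and the plausibility flags are assumptions for illustration, not values given in the patent.

# Step-4 fusion logic: average the two recording results when they agree
# within a threshold, otherwise keep whichever result passes its check.
def fuse_results(first_result: float, second_result: float,
                 first_is_plausible: bool, second_is_plausible: bool,
                 threshold: float = 0.5, w1: float = 0.6, w2: float = 0.4) -> float:
    if abs(first_result - second_result) <= threshold:
        # Results are judged "the same": weighted average with preset weights.
        return w1 * first_result + w2 * second_result
    # Results differ: output whichever recording result is judged correct.
    if first_is_plausible:
        return first_result
    if second_is_plausible:
        return second_result
    raise ValueError("neither recording result passed its plausibility check")

print(fuse_results(12.3, 12.6, True, True))   # within threshold -> weighted average
print(fuse_results(12.3, 18.0, False, True))  # differ -> second result kept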
Further, in step 4, determining the correct recording result of the first recording result and the second recording result comprises: determining that the first recording result differs from the second recording result; for the same difference, determining a first obstacle indicated in the first recording result and a second obstacle indicated in the second recording result; determining, in the first recording result, a recorded volume defined by the three-dimensional coordinate set of the first obstacle and a first distance between the first obstacle and the infrared sensor; determining an expected volume range of the first obstacle in the point cloud data from the first recording result and the first distance; and determining that the first recording result is erroneous in response to the recorded volume being outside the expected volume range.
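A sketch of this first-result check. The axis-aligned bounding-box volume and the distance-dependent expected-range model are assumptions; the patent only states that a recorded volume is compared against an expected volume range.

# Flag the first recording result as erroneous if the volume computed from the
# obstacle's 3-D coordinate set falls outside a distance-dependent range.
import numpy as np

def first_result_is_erroneous(points_xyz: np.ndarray, first_distance: float) -> bool:
    extents = points_xyz.max(axis=0) - points_xyz.min(axis=0)   # dx, dy, dz
    recorded_volume = float(np.prod(extents))
    # Hypothetical expected range: a nominal obstacle volume with a tolerance
    # that widens with distance (farther obstacles are measured more coarsely).
    nominal, tolerance = 2.0, 0.1 * first_distance
    low, high = max(nominal - tolerance, 0.0), nominal + tolerance
    return not (low <= recorded_volume <= high)

pts = np.array([[0, 0, 0], [1.2, 0.9, 1.8], [0.5, 0.4, 1.0]], dtype=float)
print(first_result_is_erroneous(pts, first_distance=15.0))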
Further, in step 4, determining the correct recording result of the first recording result and the second recording result further comprises: determining the recorded pixel area covered by the second obstacle in the second recording result; determining a second distance between the second obstacle and the image acquisition device from the first distance and the calibration parameters between the infrared sensor and the image acquisition device; determining an expected pixel-area range of the second obstacle in the image data from the second recording result and the second distance; and determining that the second recording result is erroneous in response to the recorded pixel area being outside the expected pixel-area range.
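A sketch of the corresponding second-result check. The additive calibration offset, the pinhole-style 1/d^2 area model, the reference area, and the tolerance are all assumptions standing in for the unspecified calibration parameters and expected-range model.

# Flag the second recording result if the recorded pixel area falls outside
# an expected range derived from the camera-frame distance.
def second_result_is_erroneous(recorded_pixel_area: float,
                               first_distance: float,
                               calib_offset: float = 0.2,
                               reference_area_at_1m: float = 50000.0,
                               tolerance: float = 0.3) -> bool:
    second_distance = first_distance + calib_offset          # camera-frame distance
    expected_area = reference_area_at_1m / (second_distance ** 2)
    low, high = expected_area * (1 - tolerance), expected_area * (1 + tolerance)
    return not (low <= recorded_pixel_area <= high)

print(second_result_is_erroneous(recorded_pixel_area=220.0, first_distance=15.0))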
Further, in step 3, recognizing the image data with the preset obstacle feature recognition model to obtain the obstacle feature image of the image data comprises: performing image enhancement on the image data followed by image binarization; setting three preset feature distribution sets, namely a first feature set, a second feature set, and a third feature set, whose corresponding feature probability values are P, X, and M respectively; setting sample points, using the sample points to detect the binarized image, and, after detection is finished, calculating the classification coincidence probability between the sample points of the binarized image and the set sample points with the following formula:
(equations reproduced only as images BDA0002773208940000021 and BDA0002773208940000022 in the original publication)
where k is the number of sample points and j is the number of coincident sample points; then calculating a loss function:
(equation reproduced only as image BDA0002773208940000031 in the original publication)
The classification coincidence probability and the loss function are then combined with the following operation to obtain the final classification coincidence probability:
(equation reproduced only as image BDA0002773208940000032 in the original publication)
Judge which of P, X, and M the final probability p_N is closest to, i.e. take the absolute difference between p_N and each of P, X, and M and find which result is closest to 0. If p_N is closest to P, the obstacle features of the image data are judged to be closest to the first feature set; if p_N is closest to X, to the second feature set; and if p_N is closest to M, to the third feature set.
Further, in step 3, converting the obtained obstacle feature image into binary data and performing data feature recognition on the binary data to obtain a data feature recognition result comprises: acquiring the set corresponding to the obstacle features of the image data. If the acquired set is the first feature set or the second feature set, the data in the set are converted to obtain raw data of multiple dimensions, and the raw data are preprocessed to obtain a pixel matrix for each dimension; each pixel matrix is analyzed to obtain the association relations between the dimensions, and a width-learning neural network model is built from these relations; high-dimensional features of each dimension are extracted from the pixel matrices with this neural network model and fused into dimension information; and dimensionality reduction is applied to the dimension information to obtain the data feature extraction result. If the acquired set is the third feature set, the same procedure is followed except that a depth-learning neural network model is built from the association relations: the data in the set are converted to raw data of multiple dimensions and preprocessed into a pixel matrix for each dimension, the association relations between the dimensions are obtained from the pixel matrices, high-dimensional features of each dimension are extracted with the neural network model and fused into dimension information, and dimensionality reduction is applied to obtain the data feature extraction result.
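A sketch of this data-feature extraction flow: one pixel matrix per dimension, a high-dimensional feature per matrix, fusion, then dimensionality reduction. The per-row statistics stand in for the width-/depth-learning network (no architecture is given in the text), and PCA is an assumed choice of dimensionality reduction.

# Extract and fuse per-dimension features, then reduce dimensionality.
import numpy as np
from sklearn.decomposition import PCA

def extract_data_features(pixel_matrices: list, out_dim: int = 4) -> np.ndarray:
    high_dim_features = []
    for m in pixel_matrices:
        # Placeholder high-dimensional feature: per-row statistics of the matrix.
        high_dim_features.append(np.concatenate([m.mean(axis=1), m.std(axis=1)]))
    fused = np.concatenate(high_dim_features)          # fuse into one dimension vector
    # Dimensionality reduction normally uses many samples; a single fused vector
    # is tiled with small noise here purely so the demo runs end to end.
    samples = np.tile(fused, (8, 1)) + 0.01 * np.random.randn(8, fused.size)
    return PCA(n_components=out_dim).fit_transform(samples)[0]

mats = [np.random.rand(16, 16) for _ in range(3)]      # one matrix per dimension
print(extract_data_features(mats))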
The method for identifying irregular road-condition obstacles of an unmanned vehicle, the computer program, and the storage medium have the following beneficial effects. The infrared sensor and the image acquisition device acquire information about the target area while the unmanned vehicle is driving; the two data streams are recognized separately, and the final result is judged from both recognition results, which improves the accuracy of the recognition result. During image recognition, data features are extracted after the image information has been converted into data, which improves recognition efficiency. This is achieved mainly as follows.
1. Image-data recognition: the image data is recognized with the preset obstacle feature recognition model to obtain the obstacle feature image of the image data, the obtained obstacle feature image is converted into binary data, data feature recognition is performed on the binary data to obtain a data feature recognition result, and this result is analyzed to obtain the obstacle information recognition result in the image data corresponding to it; because the image data is converted into binary data for data feature recognition, recognition is more efficient than recognizing the image directly.
2. Judgment of the recognition results: the first recording result is compared with the second recording result, and whether their difference exceeds a set threshold is detected; the two results are judged to be different if it does and the same if it does not; if they are the same, a weighted average of the two recording results is taken according to the set weight values to obtain the final result; if they are different, the correct one of the two recording results is determined and output as the final result; the recognition results are thus not used directly but are judged and analyzed to find the accurate recognition result, which improves recognition accuracy.
3. Image recognition based on data features: after a plurality of feature sets of the image are obtained through data feature analysis, the method judges which of the sets, from the first to the third feature set, the obstacle features of the image data are closest to, so the obtained result is more accurate.
Drawings
Fig. 1 is a schematic flow chart of a method for identifying an obstacle with irregular road condition of an unmanned vehicle according to an embodiment of the present invention;
fig. 2 is a schematic connection diagram of a recognition device for an obstacle with irregular road condition of an unmanned vehicle according to an embodiment of the present invention;
fig. 3 is a graph comparing how the recognition accuracy of the method for identifying irregular road-condition obstacles of an unmanned vehicle according to the embodiment of the present invention changes with the number of experiments against the prior art.
Detailed Description
The technical solution of the present invention is further described in detail below with reference to the following detailed description and the accompanying drawings:
example 1
As shown in fig. 1, the method for identifying irregular road-condition obstacles of an unmanned vehicle comprises the following steps:
Step 1: acquire infrared data of a target area with the infrared sensor and acquire image data of the target area with the image acquisition device;
Step 2: recognize obstacle information in the infrared data with a preset infrared recognition model, and record the recognition result to obtain a first recording result;
Step 3: recognize the image data with a preset obstacle feature recognition model to obtain an obstacle feature image of the image data, convert the obtained obstacle feature image into binary data, perform data feature recognition on the binary data to obtain a data feature recognition result, analyze the data feature recognition result to obtain the obstacle information recognition result in the image data corresponding to it, and record the obstacle information recognition result to obtain a second recording result;
Step 4: compare the first recording result with the second recording result and detect whether the difference between them exceeds a set threshold; if it does, judge that the two results are different, and if it does not, judge that they are the same; if they are the same, take a weighted average of the first and second recording results according to the set weight values to obtain the final result; if they are different, determine the correct one of the two recording results and output it as the final result.
With this technical scheme, the infrared sensor and the image acquisition device acquire information about the target area while the unmanned vehicle is driving; the two data streams are recognized separately, and the final result is judged from both recognition results, which improves the accuracy of the recognition result. During image recognition, data features are extracted after the image information has been converted into data, which improves recognition efficiency. This is achieved mainly as follows.
1. Image-data recognition: the image data is recognized with the preset obstacle feature recognition model to obtain the obstacle feature image of the image data, the obtained obstacle feature image is converted into binary data, data feature recognition is performed on the binary data to obtain a data feature recognition result, and this result is analyzed to obtain the obstacle information recognition result in the image data corresponding to it; because the image data is converted into binary data for data feature recognition, recognition is more efficient than recognizing the image directly.
2. Judgment of the recognition results: the first recording result is compared with the second recording result, and whether their difference exceeds a set threshold is detected; the two results are judged to be different if it does and the same if it does not; if they are the same, a weighted average of the two recording results is taken according to the set weight values to obtain the final result; if they are different, the correct one of the two recording results is determined and output as the final result; the recognition results are thus not used directly but are judged and analyzed to find the accurate recognition result, which improves recognition accuracy.
3. Image recognition based on data features: after a plurality of feature sets of the image are obtained through data feature analysis, the method judges which of the sets, from the first to the third feature set, the obstacle features of the image data are closest to, so the obtained result is more accurate.
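A sketch of step 2 of this example: run a stand-in "infrared recognition model" over an infrared frame and record the result. The patent does not specify the model, so simple intensity thresholding plus a bounding box is used here purely as an assumed placeholder.

# Produce a first recording result from an infrared frame.
import numpy as np

def first_recording_result(ir_frame: np.ndarray, hot_threshold: float = 0.8) -> dict:
    ys, xs = np.nonzero(ir_frame > hot_threshold)       # pixels warmer than background
    if xs.size == 0:
        return {"obstacle_found": False}
    return {
        "obstacle_found": True,
        "bbox": (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())),
        "mean_intensity": float(ir_frame[ys, xs].mean()),
    }

frame = np.random.rand(120, 160)
frame[40:60, 70:90] += 0.5                              # synthetic hot region
print(first_recording_result(np.clip(frame, 0, 1)))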
Example 2
On the basis of example 1, in step 4, determining the correct recording result of the first recording result and the second recording result comprises: determining that the first recording result differs from the second recording result; for the same difference, determining a first obstacle indicated in the first recording result and a second obstacle indicated in the second recording result; determining, in the first recording result, a recorded volume defined by the three-dimensional coordinate set of the first obstacle and a first distance between the first obstacle and the infrared sensor; determining an expected volume range of the first obstacle in the point cloud data from the first recording result and the first distance; and determining that the first recording result is erroneous in response to the recorded volume being outside the expected volume range.
Specifically, in the prior art a camera or a lidar is generally used to identify the obstacle to be identified. A camera-based scheme works in scenes with sufficient illumination and a relatively stable environment, but in bad weather or cluttered road environments the camera's view becomes unstable, so the acquired information about the obstacle to be identified is inaccurate. Lidar is very expensive, but lidar-based schemes are very stable and safe for identifying obstacles. In the prior art, when a lidar is used, the type of the obstacle is judged from the size of the point cloud obtained by scanning it and from its local features. For example, whether the obstacle is a person may be judged by whether a local feature of its point cloud is a human head contour, and whether it is a bicycle by whether a local feature of its point cloud is a handlebar feature.
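A toy sketch of the prior-art idea just described: decide the obstacle type from which local features are present in its point cloud. The feature names and rules are illustrative assumptions, not an actual point-cloud detector.

# Rule-based type judgment from detected local point-cloud features.
def judge_obstacle_type(local_features: set) -> str:
    if "head_contour" in local_features:
        return "person"
    if "handlebar" in local_features:
        return "bicycle"
    return "unknown"

print(judge_obstacle_type({"head_contour", "torso"}))   # -> "person"
print(judge_obstacle_type({"handlebar", "wheel"}))      # -> "bicycle"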
Example 3
On the basis of example 2, in step 4, determining the correct recording result of the first recording result and the second recording result further comprises: determining the recorded pixel area covered by the second obstacle in the second recording result; determining a second distance between the second obstacle and the image acquisition device from the first distance and the calibration parameters between the infrared sensor and the image acquisition device; determining an expected pixel-area range of the second obstacle in the image data from the second recording result and the second distance; and determining that the second recording result is erroneous in response to the recorded pixel area being outside the expected pixel-area range.
Example 4
On the basis of the above embodiments, in step 3, recognizing the image data with the preset obstacle feature recognition model to obtain the obstacle feature image of the image data comprises: performing image enhancement on the image data followed by image binarization; setting three preset feature distribution sets, namely a first feature set, a second feature set, and a third feature set, whose corresponding feature probability values are P, X, and M respectively; setting sample points, using the sample points to detect the binarized image, and, after detection is finished, calculating the classification coincidence probability between the sample points of the binarized image and the set sample points with the following formula:
(equation reproduced only as image BDA0002773208940000061 in the original publication)
where k is the number of sample points and j is the number of coincident sample points; then calculating a loss function:
(equation reproduced only as image BDA0002773208940000062 in the original publication)
The classification coincidence probability and the loss function are then combined with the following operation to obtain the final classification coincidence probability:
(equations reproduced only as images BDA0002773208940000063 and BDA0002773208940000064 in the original publication)
Judge which of P, X, and M the final probability p_N is closest to, i.e. take the absolute difference between p_N and each of P, X, and M and find which result is closest to 0. If p_N is closest to P, the obstacle features of the image data are judged to be closest to the first feature set; if p_N is closest to X, to the second feature set; and if p_N is closest to M, to the third feature set.
Specifically, a loss function (or cost function) maps a random event, or the value of a random variable associated with it, to a non-negative real number representing the "risk" or "loss" of that event. In applications the loss function is usually tied to an optimization problem as the learning criterion, i.e. the model is solved and evaluated by minimizing the loss function. Examples include parameter estimation for models in statistics and machine learning, risk management and decision making in macroeconomics, and optimal control theory in control engineering.
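A small illustration of this idea under assumed data: a squared loss maps each prediction error to a non-negative number, and the model parameter is chosen by minimizing the average loss over the data.

# Choose a constant predictor by minimizing the mean squared loss.
import numpy as np

y_true = np.array([1.0, 2.0, 3.0, 4.0])
candidates = np.linspace(0.0, 5.0, 501)                 # candidate constant predictors
losses = [np.mean((y_true - c) ** 2) for c in candidates]
best = candidates[int(np.argmin(losses))]
print(best)                                             # ~2.5, the mean of y_true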
Example 5
On the basis of example 4, in step 3, converting the obtained obstacle feature image into binary data and performing data feature recognition on the binary data to obtain a data feature recognition result comprises: acquiring the set corresponding to the obstacle features of the image data. If the acquired set is the first feature set or the second feature set, the data in the set are converted to obtain raw data of multiple dimensions, and the raw data are preprocessed to obtain a pixel matrix for each dimension; each pixel matrix is analyzed to obtain the association relations between the dimensions, and a width-learning neural network model is built from these relations; high-dimensional features of each dimension are extracted from the pixel matrices with this neural network model and fused into dimension information; and dimensionality reduction is applied to the dimension information to obtain the data feature extraction result. If the acquired set is the third feature set, the same procedure is followed except that a depth-learning neural network model is built from the association relations before feature extraction, fusion, and dimensionality reduction.
In particular, with the development of lightweight, ultra-thin display devices, such devices are being applied to an increasingly wide range of electronic products to present information to the user. To suit increasingly diverse applications, such as wearable devices, touch devices, and the displays of household appliances, the light-emitting type, gray-scale display, and power consumption of display devices keep changing, whereas the pixel matrix they contain changes far less. Under the constraints imposed by the differently shaped outer frames of different electronic products, most display devices still use square or rectangular pixels as the basic units of the pixel matrix; these pixels are combined to cover the display area within the frame so as to cover as much pixel area as possible, while a relatively simple control circuit controls the display state of the pixels.
Example 6
A computer program implementing the method of the invention is recorded on a corresponding storage medium, and the storage medium is connected to a recognition device, equipped with a central processing unit, for irregular road-condition obstacles of an unmanned vehicle.
The recognition device further comprises:
an acquisition device configured to acquire infrared data of a target area with the infrared sensor and to acquire image data of the target area with the image acquisition device;
an infrared recognition device configured to recognize obstacle information in the infrared data with a preset infrared recognition model and to record the recognition result to obtain a first recording result;
an image recognition device configured to recognize the image data with a preset obstacle feature recognition model to obtain an obstacle feature image of the image data, convert the obtained obstacle feature image into binary data, perform data feature recognition on the binary data to obtain a data feature recognition result, analyze the data feature recognition result to obtain the obstacle information recognition result in the image data corresponding to it, and record the obstacle information recognition result to obtain a second recording result;
and a result generation device configured to compare the first recording result with the second recording result, detect whether the difference between them exceeds a set threshold, judge that the two results are different if it does and the same if it does not, take a weighted average of the two recording results according to the set weight values to obtain the final result when they are the same, and determine the correct one of the two recording results and output it as the final result when they are different.
Specifically, the infrared sensor and the image acquisition device acquire information about the target area while the unmanned vehicle is driving; the two data streams are recognized separately, and result judgment is performed on both recognition results, which improves the accuracy of the recognition result. During image recognition, data features are extracted after the image information has been converted into data, which improves recognition efficiency.
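A structural sketch of the module arrangement listed above: four cooperating devices wired together inside one recognition device. Only the arrangement follows the text; the class names and placeholder method bodies are assumptions.

# Wire the acquisition, infrared-recognition, image-recognition and
# result-generation modules into one recognition device.
class AcquisitionModule:
    def acquire(self):
        return {"infrared": ..., "image": ...}          # infrared + image data

class InfraredRecognitionModule:
    def recognize(self, infrared_data):
        return {"first_recording_result": ...}

class ImageRecognitionModule:
    def recognize(self, image_data):
        return {"second_recording_result": ...}

class ResultGenerationModule:
    def generate(self, first, second):
        return first if first == second else {"final": ...}

class ObstacleRecognitionDevice:
    def __init__(self):
        self.acq = AcquisitionModule()
        self.ir = InfraredRecognitionModule()
        self.img = ImageRecognitionModule()
        self.res = ResultGenerationModule()

    def run(self):
        data = self.acq.acquire()
        first = self.ir.recognize(data["infrared"])
        second = self.img.recognize(data["image"])
        return self.res.generate(first, second)

print(ObstacleRecognitionDevice().run())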
Example 7
On the basis of embodiment 6, the determining, by the result generation device, of the correct recording result of the first recording result and the second recording result comprises: determining that the first recording result differs from the second recording result; for the same difference, determining a first obstacle indicated in the first recording result and a second obstacle indicated in the second recording result; determining, in the first recording result, a recorded volume defined by the three-dimensional coordinate set of the first obstacle and a first distance between the first obstacle and the infrared sensor; determining an expected volume range of the first obstacle in the point cloud data from the first recording result and the first distance; and determining that the first recording result is erroneous in response to the recorded volume being outside the expected volume range.
Specifically, the image-data recognition proceeds as follows: the image data is recognized with the preset obstacle feature recognition model to obtain the obstacle feature image of the image data; the obtained obstacle feature image is converted into binary data; data feature recognition is performed on the binary data to obtain a data feature recognition result; and the data feature recognition result is analyzed to obtain the obstacle information recognition result in the image data corresponding to it.
Example 8
On the basis of embodiment 7, the determining, by the result generation device, of the correct recording result of the first recording result and the second recording result further comprises: determining the recorded pixel area covered by the second obstacle in the second recording result; determining a second distance between the second obstacle and the image acquisition device from the first distance and the calibration parameters between the infrared sensor and the image acquisition device; determining an expected pixel-area range of the second obstacle in the image data from the second recording result and the second distance; and determining that the second recording result is erroneous in response to the recorded pixel area being outside the expected pixel-area range.
Specifically, the first recording result is compared with the second recording result, and whether their difference exceeds a set threshold is detected; the two results are judged to be different if it does and the same if it does not. If they are the same, a weighted average of the two recording results is taken according to the set weight values to obtain the final result; if they are different, the correct one of the two recording results is determined and output as the final result. The recognition results are therefore not used directly; they are judged and analyzed to find the accurate recognition result, which improves recognition accuracy.
Example 9
On the basis of embodiment 8, the image recognition device recognizing the image data with the preset obstacle feature recognition model to obtain the obstacle feature image of the image data comprises: performing image enhancement on the image data followed by image binarization; setting three preset feature distribution sets, namely a first feature set, a second feature set, and a third feature set, whose corresponding feature probability values are P, X, and M respectively; setting sample points, using the sample points to detect the binarized image, and, after detection is finished, calculating the classification coincidence probability between the sample points of the binarized image and the set sample points with the following formula:
(equation reproduced only as image BDA0002773208940000091 in the original publication)
where k is the number of sample points and j is the number of coincident sample points; then calculating a loss function:
(equation reproduced only as image BDA0002773208940000092 in the original publication)
The classification coincidence probability and the loss function are then combined with the following operation to obtain the final classification coincidence probability:
(equations reproduced only as images BDA0002773208940000093 and BDA0002773208940000094 in the original publication)
Judge which of P, X, and M the final probability p_N is closest to, i.e. take the absolute difference between p_N and each of P, X, and M and find which result is closest to 0. If p_N is closest to P, the obstacle features of the image data are judged to be closest to the first feature set; if p_N is closest to X, to the second feature set; and if p_N is closest to M, to the third feature set.
Specifically, after a plurality of feature sets of the image are obtained through data feature analysis, the method judges which of the sets, from the first to the third feature set, the obstacle features of the image data are closest to, so the obtained result is more accurate.
Example 10
On the basis of embodiment 9, the image recognition device converting the obtained obstacle feature image into binary data and performing data feature recognition on the binary data to obtain a data feature recognition result comprises: acquiring the set corresponding to the obstacle features of the image data. If the acquired set is the first feature set or the second feature set, the data in the set are converted to obtain raw data of multiple dimensions, and the raw data are preprocessed to obtain a pixel matrix for each dimension; each pixel matrix is analyzed to obtain the association relations between the dimensions, and a width-learning neural network model is built from these relations; high-dimensional features of each dimension are extracted from the pixel matrices with this neural network model and fused into dimension information; and dimensionality reduction is applied to the dimension information to obtain the data feature extraction result. If the acquired set is the third feature set, the same procedure is followed except that a depth-learning neural network model is built from the association relations before feature extraction, fusion, and dimensionality reduction.
In the prior art, an intelligent vehicle is a comprehensive intelligent system integrating environment perception, path-planning and decision making, and control; it can greatly improve traffic safety, increase the traffic efficiency of existing roads, and reduce pollution. The environment-perception system is the foundation and core of an intelligent-vehicle architecture and provides the basic information needed for planning decisions and control execution. Its main functions are to acquire vehicle and environment information through sensors: specifically, to obtain the pose and state of the vehicle, to recognize and track lane lines and lane edges on structured roads, to recognize and track traffic signs and traffic signals, and to recognize and track obstacles around the vehicle.
Sensors commonly used for environment perception include cameras, lidar, millimeter-wave radar, GPS, and inertial navigation. Camera data cannot provide accurate obstacle distance information, or can do so only at a computational cost so large that the real-time requirements of an intelligent vehicle are hard to meet. Lidar, by contrast, offers high ranging accuracy, high scanning frequency, and rich data; it is not affected by weather or illumination, does not rely on texture or color for discrimination, and is insensitive to shadow noise, so it has attracted great attention for intelligent-vehicle environment perception in recent years.
Compared with the prior art, the identification method has higher identification accuracy and identification efficiency.
The above description is only an embodiment of the present invention and is not intended to limit its scope; any structural change made according to the present invention without departing from its spirit shall be considered to fall within the scope of the present invention.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process and related description of the system described above may refer to the corresponding process in the foregoing method embodiments, and will not be described herein again.
It should be noted that, the system provided in the foregoing embodiment is only illustrated by dividing the functional modules, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the modules or steps in the embodiment of the present invention are further decomposed or combined, for example, the modules in the foregoing embodiment may be combined into one module, or may be further split into multiple sub-modules, so as to complete all or part of the functions described above. The names of the modules and steps involved in the embodiments of the present invention are only for distinguishing the modules or steps, and are not to be construed as unduly limiting the present invention.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes and related descriptions of the storage device and the processing device described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Those of skill in the art will appreciate that the modules and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both, and that programs corresponding to the software modules and method steps may be located in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. To clearly illustrate this interchangeability of electronic hardware and software, various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as electronic hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The terms "first," "second," and the like are used for distinguishing between similar elements and not necessarily for describing or implying a particular order or sequence.
The terms "comprises," "comprising," or any other similar term are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can fall into the protection scope of the invention.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.

Claims (2)

1. A method for identifying irregular road-condition obstacles of an unmanned vehicle, characterized by comprising the following steps:
Step 1: acquiring infrared data of a target area with an infrared sensor and acquiring image data of the target area with an image acquisition device;
Step 2: recognizing obstacle information in the infrared data with a preset infrared recognition model, and recording the recognition result to obtain a first recording result;
Step 3: recognizing the image data with a preset obstacle feature recognition model to obtain an obstacle feature image of the image data, converting the obtained obstacle feature image into binary stored data, performing data feature recognition on the binary stored data to obtain a data feature recognition result, analyzing the data feature recognition result to obtain an obstacle information recognition result in the image data corresponding to the data feature recognition result, and recording the obstacle information recognition result to obtain a second recording result;
Step 4: comparing the first recording result with the second recording result and detecting whether the difference between them exceeds a set threshold; if it does, judging that the first recording result and the second recording result are different, and if it does not, judging that they are the same; in response to the two being the same, taking a weighted average of the first recording result and the second recording result according to set weight values to obtain a final result; and in response to the first recording result and the second recording result being different, determining the correct recording result of the first recording result and the second recording result and outputting the correct recording result as the final result;
wherein, in step 4, determining the correct recording result of the first recording result and the second recording result comprises: determining that the first recording result differs from the second recording result; for the same difference, determining a first obstacle indicated in the first recording result and a second obstacle indicated in the second recording result; determining, in the first recording result, a recorded volume defined by the three-dimensional coordinate set of the first obstacle and a first distance between the first obstacle and the infrared sensor; determining an expected volume range of the first obstacle from the first recording result and the first distance; and determining that the first recording result is erroneous in response to the recorded volume being outside the expected volume range;
and wherein, in step 4, determining the correct recording result of the first recording result and the second recording result further comprises: determining a recorded pixel area covered by the second obstacle in the second recording result; determining a second distance between the second obstacle and the image acquisition device from the first distance and calibration parameters between the infrared sensor and the image acquisition device; determining an expected pixel-area range of the second obstacle in the image data from the second recording result and the second distance; and determining that the second recording result is erroneous in response to the recorded pixel area being outside the expected pixel-area range.
2. A storage medium on which a computer program is stored, the computer program being adapted to carry out the steps of the method as claimed in claim 1.
CN202011256299.9A 2020-11-11 2020-11-11 Method for recognizing obstacle with irregular road condition of unmanned vehicle, computer program and storage medium Active CN112329670B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011256299.9A CN112329670B (en) 2020-11-11 2020-11-11 Method for recognizing obstacle with irregular road condition of unmanned vehicle, computer program and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011256299.9A CN112329670B (en) 2020-11-11 2020-11-11 Method for recognizing obstacle with irregular road condition of unmanned vehicle, computer program and storage medium

Publications (2)

Publication Number Publication Date
CN112329670A CN112329670A (en) 2021-02-05
CN112329670B true CN112329670B (en) 2021-10-19

Family

ID=74318921

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011256299.9A Active CN112329670B (en) 2020-11-11 2020-11-11 Method for recognizing obstacle with irregular road condition of unmanned vehicle, computer program and storage medium

Country Status (1)

Country Link
CN (1) CN112329670B (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4315991B2 (en) * 2007-04-20 2009-08-19 本田技研工業株式会社 Vehicle periphery monitoring device, vehicle periphery monitoring method, vehicle periphery monitoring program
CN102819263B (en) * 2012-07-30 2014-11-05 中国航天科工集团第三研究院第八三五七研究所 Multi-camera visual perception system for UGV (Unmanned Ground Vehicle)
US10007269B1 (en) * 2017-06-23 2018-06-26 Uber Technologies, Inc. Collision-avoidance system for autonomous-capable vehicle
CN109598187A (en) * 2018-10-15 2019-04-09 西北铁道电子股份有限公司 Obstacle recognition method, differentiating obstacle and railcar servomechanism

Also Published As

Publication number Publication date
CN112329670A (en) 2021-02-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant