CN112347953A - Recognition device for road condition irregular obstacles of unmanned vehicle - Google Patents

Recognition device for road condition irregular obstacles of unmanned vehicle Download PDF

Info

Publication number
CN112347953A
CN112347953A (application CN202011258635.3A)
Authority
CN
China
Prior art keywords
result
feature
obstacle
data
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011258635.3A
Other languages
Chinese (zh)
Other versions
CN112347953B (en)
Inventor
杨扬
Current Assignee
Shanghai Boonray Intelligent Technology Co Ltd
Original Assignee
Shanghai Boonray Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Boonray Intelligent Technology Co Ltd
Priority to CN202011258635.3A
Publication of CN112347953A
Application granted
Publication of CN112347953B
Legal status: Active
Anticipated expiration

Classifications

    • G - Physics
    • G06 - Computing; calculating or counting
    • G06V - Image or video recognition or understanding
    • G06V 20/00 - Scenes; scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/56 - Context or environment of the image exterior to a vehicle, by using sensors mounted on the vehicle
    • G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06F - Electric digital data processing
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; extraction of features in feature space; blind source separation
    • G06F 18/213 - Feature extraction, e.g. by transforming the feature space; summarisation; mappings, e.g. subspace methods
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 - Classification based on the proximity to a decision surface, e.g. support vector machines
    • G06F 18/2415 - Classification based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus false rejection rate
    • G06N - Computing arrangements based on specific computational models
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G06N 3/047 - Probabilistic or stochastic networks
    • G06N 3/08 - Learning methods
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/28 - Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; salient regional features
    • G06V 10/462 - Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to a device for recognizing irregular obstacles in the road conditions of a driverless vehicle. The device comprises an acquisition device for acquiring peripheral information, the acquisition device having a telescopic mechanism mounted on the vehicle body. A top cover plate is arranged on the upper side of the telescopic mechanism body; the telescopic mechanism has a hinged part into which an arc column is inserted; the free end of the telescopic mechanism carries a limit strip thinner than the telescopic mechanism body; an infrared sensor, a buffer rubber block and an air quantity sensor are arranged in sequence on the telescopic mechanism body; a coupler on the side surface of the air quantity sensor is connected through a connecting shaft to a reciprocating pump arranged close to the hinged part; and the top cover plate is provided with a water guide groove penetrating its upper surface. The invention offers high recognition efficiency and accuracy, a compact structure, good reliability and little interference between sensors.

Description

Recognition device for road condition irregular obstacles of unmanned vehicle
Technical Field
The invention belongs to the technical field of unmanned driving, and particularly relates to a device for recognizing irregular road-condition obstacles of an unmanned vehicle.
Background
The unmanned automobile is an intelligent automobile that senses the road environment through a vehicle-mounted sensing system, automatically plans a driving route, and controls the automobile to reach a preset destination.
Vehicle-mounted sensors sense the surrounding environment of the vehicle, and the steering and speed of the vehicle are controlled according to the road, vehicle-position and obstacle information obtained by sensing, so that the vehicle can travel safely and reliably on the road.
Such a system integrates technologies including automatic control, system architecture, artificial intelligence and visual computing. It is a product of the advanced development of computer science, pattern recognition and intelligent control technology, is an important measure of a nation's scientific research strength and industrial level, and has broad application prospects in the fields of national defense and the national economy.
Correct identification of obstacles is of great importance to unmanned vehicles and to the autonomous driving modes of ordinary vehicles. To identify obstacles automatically, a lidar sensor, a millimeter-wave radar sensor or an image acquisition device is generally installed on the vehicle to acquire information about obstacles around it, yielding three-dimensional point-cloud data or two-dimensional image data. A trained machine-learning algorithm then recognizes the obstacles in the three-dimensional point-cloud data or two-dimensional image data. The algorithm is usually trained on three-dimensional point-cloud data or two-dimensional image data in which obstacles have been labeled.
Disclosure of Invention
The main aim of the invention is to provide a device for recognizing irregular road-condition obstacles of an unmanned vehicle that offers high recognition efficiency and accuracy, a compact structure, good reliability and little interference between sensors.
To achieve this aim, the technical scheme of the invention is realized as follows. The device for recognizing irregular road-condition obstacles of an unmanned vehicle comprises an acquisition device for acquiring peripheral information, the acquisition device having a telescopic mechanism mounted on the vehicle body. A top cover plate is arranged on the upper side of the telescopic mechanism body; the telescopic mechanism has a hinged part into which an arc column is inserted; the free end of the telescopic mechanism carries a limit strip thinner than the telescopic mechanism body; an infrared sensor, a buffer rubber block and an air quantity sensor are arranged in sequence on the telescopic mechanism body; a coupler on the side surface of the air quantity sensor is connected through a connecting shaft to a reciprocating pump arranged close to the hinged part; and the top cover plate is provided with a water guide groove penetrating its upper surface.
The acquisition device is configured to acquire infrared data of a target area by using the infrared sensor and acquire image data of the target area by using the image acquisition device;
the identification device also comprises an infrared identification device which is configured to identify the obstacle information in the infrared data by using a preset infrared identification model and record an identification result to obtain a first recording result;
the image recognition device is configured for recognizing the image data by using a preset obstacle feature recognition model to obtain an obstacle feature image of the image data, converting the obtained obstacle feature image into binary data, performing data feature recognition on the binary data to obtain a data feature recognition result, analyzing the data feature recognition result to obtain an obstacle information recognition result in the image data corresponding to the data feature recognition result, and recording the obstacle information recognition result to obtain a second recording result;
the result generating device is configured for comparing the first recording result with the second recording result, detecting whether the difference between the first recording result and the second recording result exceeds a set threshold value, if the difference exceeds the set threshold value, judging that the first recording result and the second recording result are different, and if the difference does not exceed the set threshold value, judging that the first recording result and the second recording result are the same; respectively taking the weighted average of the first recording result and the second recording result according to the set weighted value in response to the two being the same to obtain a final result; and in response to the first recording result and the second recording result being different, determining a correct recording result in the first recording result and the second recording result, and outputting the correct recording result as a final result.
Determining, by the result generation device, the correct one of the first and second recording results includes: determining that the first recording result differs from the second recording result; determining which obstacles indicated in the first recorded result and which indicated in the second recorded result are the same and which are different; determining, in the first recording result, the recorded volume given by the three-dimensional coordinate set of the first obstacle and a first distance between the first obstacle and the infrared sensor; determining an expected volume range of the first obstacle in the point-cloud data from the first recording result and the first distance; and, in response to the recorded volume lying outside the expected volume range, judging the first recording result to be in error.
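The volume plausibility check can be sketched as follows; the axis-aligned bounding-box volume and the helper names are illustrative assumptions, since the patent does not specify how the recorded volume is computed from the three-dimensional coordinate set.

```python
# Illustrative volume check: compute the recorded volume from the
# obstacle's 3-D coordinate set (here, an axis-aligned bounding box)
# and test it against the expected volume range for the first distance.

def bounding_volume(points):
    """Axis-aligned bounding-box volume of a 3-D point set."""
    xs, ys, zs = zip(*points)
    return (max(xs) - min(xs)) * (max(ys) - min(ys)) * (max(zs) - min(zs))

def first_result_ok(points, expected_range):
    """True if the recorded volume lies inside the expected range."""
    lo, hi = expected_range
    return lo <= bounding_volume(points) <= hi

# A unit cube's corners give a recorded volume of 1.0.
cube = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]
print(first_result_ok(cube, (0.5, 2.0)))  # inside range -> True
print(first_result_ok(cube, (2.0, 3.0)))  # outside -> first result in error
```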
Determining, by the result generation device, the correct one of the first and second recording results further includes: determining the recorded pixel area covered by the second obstacle in the second recording result; determining a second distance between the second obstacle and the image acquisition device from the first distance and the calibration parameters between the infrared sensor and the image acquisition device; determining an expected pixel-area range of the second obstacle in the image data from the second recording result and the second distance; and, in response to the recorded pixel area lying outside the expected pixel-area range, judging the second recording result to be in error.
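Similarly, the pixel-area check could look like the sketch below; the inverse-square scaling model relating area to distance and the tolerance value are assumptions standing in for whatever expected-range computation the patent intends.

```python
# Illustrative pixel-area check: project an expected area at the second
# distance with an inverse-square model, then test whether the recorded
# pixel area falls inside a tolerance band around it.

def expected_pixel_area(reference_area, reference_distance, distance):
    """Scale a known pixel area at a known distance to a new distance."""
    return reference_area * (reference_distance / distance) ** 2

def second_result_ok(recorded_area, reference_area, reference_distance,
                     distance, tol=0.5):
    """True if the recorded area is within +/- tol of the expected area."""
    expected = expected_pixel_area(reference_area, reference_distance, distance)
    return (1 - tol) * expected <= recorded_area <= (1 + tol) * expected

# An obstacle covering 10000 px at 10 m is expected near 2500 px at 20 m.
print(second_result_ok(2600, 10000, 10, 20))  # plausible -> True
print(second_result_ok(100, 10000, 10, 20))   # implausible -> result in error
```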
Recognizing the image data by the image recognition device using a preset obstacle-feature recognition model to obtain the obstacle feature image of the image data comprises the following steps: performing image enhancement on the image data and then binarizing the image; setting three preset feature-distribution sets, namely a first feature set, a second feature set and a third feature set, whose corresponding feature-probability values are P, X and M respectively; setting sample points and using them to test the sample points of the binarized image; and, after the test is finished, calculating the classification coincidence probability between the sample points of the binarized image and the preset sample points using the following formula:
(The formulas appear only as images in the source document and are not reproduced here.)
wherein k is the number of sample points, and j is the number of overlapped sample points; calculating a loss function:
(The loss-function formula appears only as an image in the source document and is not reproduced here.)
and (3) carrying out the following operation on the classification coincidence probability and the loss function to obtain the final classification coincidence probability:
(The formula appears only as an image in the source document and is not reproduced here.)
judgment of pNWhichever is closer to P, X and M, i.e. pNP, X and M are respectively subjected to difference absolute value operation, and the calculation result is closest to 0; if p isjIf the image data is closer to P, judging that the obstacle feature of the image data is closer to the first feature set, and if P is closer to P, judging that the obstacle feature of the image data is closer to the first feature setNIf the image data is closer to the X, judging that the obstacle feature of the image data is closer to a second feature set; if p isNAnd if the image data is closer to the P, judging that the obstacle feature of the image data is closer to the third feature set.
Converting the obtained obstacle feature image into binary data and performing data-feature recognition on the binary data by the image recognition device to obtain the data-feature recognition result comprises: obtaining the set corresponding to the obstacle features of the image data. If the obtained set is the first feature set or the second feature set: the data in the set are converted to obtain raw data of multiple dimensions, and the raw data are preprocessed to obtain a pixel matrix for each dimension; each pixel matrix is analyzed to obtain the correlations between the dimensions, and a broad-learning (width-learning) neural network model is built from these correlations; high-dimensional features of each dimension are extracted from the pixel matrices with the neural network model and fused into dimension information; and the dimension information is reduced in dimensionality to obtain the data-feature extraction result. If the obtained set is the third feature set, the same steps are carried out, except that a deep-learning neural network model is built from the correlations instead.
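The front half of this pipeline, enhancement followed by binarization into the "binary data" the text refers to, can be sketched as below; the linear contrast-stretch enhancement and the fixed threshold are assumptions, since the patent does not name specific methods.

```python
import numpy as np

def enhance(image):
    """Assumed enhancement: linear contrast stretch to the 0-255 range."""
    lo, hi = int(image.min()), int(image.max())
    if hi == lo:
        return np.zeros_like(image)
    return ((image.astype(np.float64) - lo) * 255.0 / (hi - lo)).astype(np.uint8)

def binarize(image, threshold=128):
    """Threshold to a 0/1 matrix: the 'binary data' referred to above."""
    return (image >= threshold).astype(np.uint8)

image = np.array([[10, 200], [90, 160]], dtype=np.uint8)
binary = binarize(enhance(image))
print(binary.tolist())  # [[0, 1], [0, 1]]
```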
The method for identifying irregular road-condition obstacles of an unmanned vehicle, the computer program and the storage medium have the following beneficial effects. The infrared sensor and the image acquisition device acquire information about a target area while the unmanned vehicle is driving; the two kinds of information are recognized separately, and the final result is judged from both recognition results, which improves the accuracy of the recognition result. At the same time, during image recognition, data features are extracted after the image information is converted, which improves recognition efficiency. This is mainly achieved as follows. 1. Image-data recognition: the image data are recognized with a preset obstacle-feature recognition model to obtain an obstacle feature image, the feature image is converted into binary data, data-feature recognition is performed on the binary data, and the data-feature recognition result is analyzed to obtain the obstacle-information recognition result for the image data; because the image data are converted into binary data before feature recognition, recognition is more efficient than direct image recognition. 2. Judgment of the recognition results: the first and second recording results are compared and their difference is checked against a set threshold; if it exceeds the threshold the results are judged different, otherwise the same; when the same, a weighted average of the two results with preset weights gives the final result; when different, the correct one of the two is determined and output as the final result. Rather than using a recognition result directly, the results are judged and analyzed to find the accurate one, which improves recognition accuracy. 3. Recognition based on data features: after several feature sets of the image are obtained through data-feature analysis, the set closest to the obstacle features of the image data is determined, so the result obtained is more accurate.
Drawings
Fig. 1 is a schematic view of the working state of a device for recognizing irregular road-condition obstacles of an unmanned vehicle according to an embodiment of the present invention;
Fig. 2 is a structural diagram of the device according to an embodiment of the present invention;
Fig. 3 is a schematic flow chart of a method for identifying irregular road-condition obstacles of an unmanned vehicle according to an embodiment of the present invention;
Fig. 4 is a schematic view of the connection mode of the device according to an embodiment of the present invention;
Fig. 5 is a graph comparing, against the number of experiments, the change in recognition accuracy of the device according to an embodiment of the present invention with that of the prior art.
Detailed Description
The following describes the technical solution of the present invention in further detail with reference to the detailed description and the accompanying drawings.
Description of reference numerals: telescopic mechanism 1, top cover plate 1a, water guide groove 1b, inserted arc column 1c, limit strip 1d, infrared sensor 11, air quantity sensor 12, coupler 13, connecting shaft 14, reciprocating pump 15, buffer rubber block 16, notch 16a, vehicle body 2, vehicle roof 21, bearing arc hole 21a.
Example 1
As shown in figs. 1-5, the device for recognizing irregular road-condition obstacles of an unmanned vehicle comprises an acquisition device for acquiring peripheral information, the acquisition device having a telescopic mechanism 1 mounted on the vehicle body 2. A top cover plate 1a is arranged on the upper side of the body of the telescopic mechanism 1; the telescopic mechanism 1 has a hinged part into which an arc column 1c is inserted; the free end of the telescopic mechanism 1 carries a limit strip 1d thinner than the body of the telescopic mechanism 1; an infrared sensor 11, a buffer rubber block 16 and an air quantity sensor 12 are arranged in sequence on the body of the telescopic mechanism 1; a coupler 13 on the side surface of the air quantity sensor 12 is connected through a connecting shaft 14 to a reciprocating pump 15 arranged close to the hinged part; and the top cover plate 1a is provided with a water guide groove 1b penetrating its upper surface.
The top cover plate 1a shields against rain, and rainwater drains along the water guide groove 1b. The air quantity sensor 12 monitors the ambient air flow; when the air flow is too strong, the power of the associated infrared sensor is increased. The air quantity sensor 12 sits on a slide rail so that it can slide back and forth, and the connecting shaft 14 is connected to the reciprocating pump 15 arranged near the hinged part, so that the sensor can slide under the pump's drive. When excessive vibration of the air quantity sensor 12 affects the image stability of the infrared sensor 11, the air quantity sensor 12 is pressed tightly against the buffer rubber block 16. The length of the notch 16a of the buffer rubber block 16 is one third to one half of the block's length, and its width is one third to one half of the block's width. If resonance cannot be eliminated for accidental reasons, the air quantity sensor 12 is pulled toward the reciprocating pump 15 until the resonance disappears.
The acquisition device is configured to acquire infrared data of a target area by using the infrared sensor 11 and acquire image data of the target area by using the image acquisition device; the image data acquired by the image acquisition device is an image of visible light.
The infrared recognition device is configured to recognize obstacle information in the infrared data using a preset infrared recognition model and to record the recognition result, obtaining a first recording result;
the image recognition device is configured for recognizing the image data by using a preset obstacle feature recognition model to obtain an obstacle feature image of the image data, converting the obtained obstacle feature image into binary data, performing data feature recognition on the binary data to obtain a data feature recognition result, analyzing the data feature recognition result to obtain an obstacle information recognition result in the image data corresponding to the data feature recognition result, and recording the obstacle information recognition result to obtain a second recording result;
the result generating device is configured for comparing the first recording result with the second recording result, detecting whether the difference between the first recording result and the second recording result exceeds a set threshold value, if the difference exceeds the set threshold value, judging that the first recording result and the second recording result are different, and if the difference does not exceed the set threshold value, judging that the first recording result and the second recording result are the same; respectively taking the weighted average of the first recording result and the second recording result according to the set weighted value in response to the two being the same to obtain a final result; and in response to the first recording result and the second recording result being different, determining a correct recording result in the first recording result and the second recording result, and outputting the correct recording result as a final result.
Determining, by the result generation device, the correct one of the first and second recording results includes: determining that the first recording result differs from the second recording result; determining which obstacles indicated in the first recorded result and which indicated in the second recorded result are the same and which are different; determining, in the first recording result, the recorded volume given by the three-dimensional coordinate set of the first obstacle and a first distance between the first obstacle and the infrared sensor 11; determining an expected volume range of the first obstacle in the point-cloud data from the first recording result and the first distance; and, in response to the recorded volume lying outside the expected volume range, judging the first recording result to be in error.
Determining, by the result generation device, the correct one of the first and second recording results further includes: determining the recorded pixel area covered by the second obstacle in the second recording result; determining a second distance between the second obstacle and the image acquisition device from the first distance and the calibration parameters between the infrared sensor 11 and the image acquisition device; determining an expected pixel-area range of the second obstacle in the image data from the second recording result and the second distance; and, in response to the recorded pixel area lying outside the expected pixel-area range, judging the second recording result to be in error.
Recognizing the image data by the image recognition device using a preset obstacle-feature recognition model to obtain the obstacle feature image of the image data comprises the following steps: performing image enhancement on the image data and then binarizing the image; setting three preset feature-distribution sets, namely a first feature set, a second feature set and a third feature set, whose corresponding feature-probability values are P, X and M respectively; setting sample points and using them to test the sample points of the binarized image; and, after the test is finished, calculating the classification coincidence probability between the sample points of the binarized image and the preset sample points using the following formula:
(The formulas appear only as images in the source document and are not reproduced here.)
wherein k is the number of sample points, and j is the number of overlapped sample points; calculating a loss function:
(The loss-function formula appears only as an image in the source document and is not reproduced here.)
and (3) carrying out the following operation on the classification coincidence probability and the loss function to obtain the final classification coincidence probability:
(The formula appears only as an image in the source document and is not reproduced here.)
judgment of pNWhichever is closer to P, X and M, i.e. pNP, X and M are respectively subjected to difference absolute value operation, and the calculation result is closest to 0; if p isjIf the image data is closer to P, judging that the obstacle feature of the image data is closer to the first feature set, and if P is closer to P, judging that the obstacle feature of the image data is closer to the first feature setNIf the image data is closer to the X, judging that the obstacle feature of the image data is closer to a second feature set; if p isNAnd if the image data is closer to the P, judging that the obstacle feature of the image data is closer to the third feature set.
Converting the obtained obstacle feature image into binary data and performing data-feature recognition on the binary data by the image recognition device to obtain the data-feature recognition result comprises: obtaining the set corresponding to the obstacle features of the image data. If the obtained set is the first feature set or the second feature set: the data in the set are converted to obtain raw data of multiple dimensions, and the raw data are preprocessed to obtain a pixel matrix for each dimension; each pixel matrix is analyzed to obtain the correlations between the dimensions, and a broad-learning (width-learning) neural network model is built from these correlations; high-dimensional features of each dimension are extracted from the pixel matrices with the neural network model and fused into dimension information; and the dimension information is reduced in dimensionality to obtain the data-feature extraction result. If the obtained set is the third feature set, the same steps are carried out, except that a deep-learning neural network model is built from the correlations instead.
The identification method of the device executes the following steps:
step 1: acquiring infrared data of a target area by using an infrared sensor and acquiring image data of the target area by using an image acquisition device;
step 2: recognizing obstacle information in the infrared data by using a preset infrared recognition model, and recording the recognition result to obtain a first recording result;
and step 3: recognizing the image data by using a preset obstacle feature recognition model to obtain an obstacle feature image of the image data, converting the obtained obstacle feature image into binary data, performing data feature recognition on the binary data to obtain a data feature recognition result, analyzing the data feature recognition result to obtain an obstacle information recognition result in the image data corresponding to the data feature recognition result, and recording the obstacle information recognition result to obtain a second recording result;
and step 4: comparing the first recording result with the second recording result and detecting whether the difference between them exceeds a set threshold; if it exceeds the threshold, judging that the two results are different, otherwise judging that they are the same; in response to the two being the same, taking the weighted average of the first and second recording results according to the set weight values to obtain the final result; and in response to the two being different, determining the correct one of the first and second recording results and outputting it as the final result.
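Step 4's comparison-and-fusion logic can be sketched as below; the numeric recording results, the threshold and the weights are illustrative assumptions, and the correct-result determination for the differing case (detailed in examples 2 and 3) is deferred to the caller:

```python
def fuse_results(first, second, threshold, w_first=0.5, w_second=0.5):
    """If the two recorded results differ by no more than the set threshold,
    they are judged the same and the weighted average is the final result;
    otherwise they are judged different, and None signals that the correct
    result must be determined separately."""
    if abs(first - second) <= threshold:
        return w_first * first + w_second * second
    return None

assert fuse_results(10.0, 10.5, threshold=1.0) == 10.25   # same: average
assert fuse_results(10.0, 12.0, threshold=0.5) is None    # different: defer
```

Returning a sentinel for the differing case keeps the fusion step independent of the volume and pixel-area checks that decide which result is correct.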
By adopting this technical scheme, the infrared sensor and the image acquisition device acquire information about the target area while the unmanned vehicle is driving; the two data streams are then recognized separately, and the final result is judged from both recognition results, which improves the accuracy of the recognition result. Meanwhile, in the image recognition process, data feature extraction is performed after the image information is converted into data, which improves recognition efficiency. The scheme is mainly realized as follows:
1. Image data identification: the image data is recognized with the preset obstacle feature recognition model to obtain the obstacle feature image of the image data; the obstacle feature image is converted into binary data; data feature recognition is performed on the binary data to obtain a data feature recognition result; and the data feature recognition result is analyzed to obtain the obstacle information recognition result of the image data. Because the image data is converted into binary data before feature recognition, the recognition efficiency is higher than that of direct image recognition.
2. Judging the recognition result: the first recording result is compared with the second recording result, and whether the difference between them exceeds a set threshold is detected; if it exceeds the threshold, the two results are judged to be different, otherwise they are judged to be the same. In response to the two being the same, the weighted average of the first and second recording results is taken according to the set weight values to obtain the final result; in response to the two being different, the correct one of the first and second recording results is determined and output as the final result. The recognition results are thus not used directly but are judged and analyzed to find the accurate recognition result, which improves recognition accuracy.
3. Image recognition based on data features: after the plurality of feature sets of the image are obtained through data feature analysis, which of the first, second and third feature sets the obstacle feature of the image data is closest to is judged, so the obtained result is more accurate.
Example 2
On the basis of example 1, in step 4, determining the correct one of the first and second recording results comprises: determining that the first recording result is different from the second recording result; determining the first obstacle indicated in the first recording result and the second obstacle indicated in the second recording result that give rise to the difference; determining, in the first recording result, the recorded volume determined by the three-dimensional coordinate set of the first obstacle and the first distance between the first obstacle and the infrared sensor; determining the expected volume range of the first obstacle in the point cloud data according to the first recording result and the first distance; and, in response to the recorded volume being outside the expected volume range, determining that the first recording result is erroneous.
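The volume consistency check of this example can be sketched as below. How the expected volume range is derived from the point cloud at the first distance is not specified in the text, so the bounds are passed in directly, and all numbers are illustrative:

```python
def first_result_is_erroneous(recorded_volume, expected_range):
    """The first recording result is judged erroneous when the volume
    determined from the first obstacle's three-dimensional coordinate set
    lies outside the volume range expected at the measured first distance."""
    low, high = expected_range
    return not (low <= recorded_volume <= high)

assert first_result_is_erroneous(2.5, (0.8, 2.0))       # outside -> error
assert not first_result_is_erroneous(1.2, (0.8, 2.0))   # inside  -> ok
```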
Specifically, in the prior art a camera or a laser radar (lidar) is generally used to identify the obstacle. The camera scheme works in scenes with sufficient illumination and a relatively stable environment; however, in bad weather or cluttered road environments the camera's view is unstable, so the acquired information about the obstacle to be identified is inaccurate. Lidar, although very expensive, is very stable and safe in identifying obstacles. In the prior art, when a lidar is used to identify an obstacle, the type of the obstacle is judged from the point cloud size and the local features acquired by scanning the obstacle with the lidar. For example, whether the obstacle is a person may be judged from whether a local feature of its point cloud matches the head features of a person, and whether it is a bicycle from whether a local feature of its point cloud matches the head features of a bicycle.
Example 3
On the basis of example 2, in step 4, determining the correct one of the first and second recording results further comprises: determining the recorded pixel area covered by the second obstacle in the second recording result; determining the second distance between the second obstacle and the image acquisition device according to the first distance and the calibration parameters between the infrared sensor and the image acquisition device; determining the expected pixel area range of the second obstacle in the image data according to the second recording result and the second distance; and, in response to the recorded pixel area being outside the expected pixel area range, determining that the second recording result is erroneous.
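This example's pixel-area check can be sketched with a pinhole-style projection, in which the projected area shrinks with the square of the distance. The calibration parameters that convert the infrared first distance into the camera's second distance are assumed to have been applied already, and every number below is illustrative:

```python
def expected_pixel_area(real_area, distance, focal_length):
    """Projected pixel area of an object of known frontal area at the given
    second distance, under a simple pinhole camera model (illustrative)."""
    scale = focal_length / distance
    return real_area * scale * scale

def second_result_is_erroneous(recorded_area, expected, tolerance=0.3):
    """The second recording result is judged erroneous when the recorded
    pixel area falls outside the expected range (here +/- tolerance)."""
    return not (expected * (1 - tolerance) <= recorded_area
                <= expected * (1 + tolerance))

exp = expected_pixel_area(real_area=2.0, distance=10.0, focal_length=800.0)
assert exp == 12800.0
assert second_result_is_erroneous(20000.0, exp)       # too large -> error
assert not second_result_is_erroneous(13000.0, exp)   # plausible -> ok
```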
Example 4
On the basis of the above embodiment, in step 3, recognizing the image data with the preset obstacle feature recognition model to obtain the obstacle feature image of the image data comprises the following steps: performing image binarization after the image data has undergone image enhancement; setting three preset feature distribution sets, respectively a first feature set, a second feature set and a third feature set, whose corresponding characteristic probability values are: first feature set: P; second feature set: X; third feature set: M; setting sample points, detecting the sample points of the binarized image against the set sample points, and, after detection is finished, calculating the classification coincidence probability of the binarized image's sample points with the set sample points using the following formula:
[Formula rendered as an image in the original (BDA0002773871520000081): the classification coincidence probability, computed from the number of sample points k and the number of overlapped sample points j.]
wherein k is the number of sample points, and j is the number of overlapped sample points; calculating a loss function:
[Formula rendered as an image in the original (BDA0002773871520000082): the loss function.]
and (3) carrying out the following operation on the classification coincidence probability and the loss function to obtain the final classification coincidence probability:
[Formula rendered as an image in the original (BDA0002773871520000083): the final classification coincidence probability p_N, obtained by combining the classification coincidence probability with the loss function.]
Judge which of P, X and M the final classification coincidence probability p_N is closest to, i.e., take the absolute difference between p_N and each of P, X and M; the closest is the one whose result is nearest to 0. If p_N is closest to P, judge that the obstacle feature of the image data is closer to the first feature set; if p_N is closest to X, judge that it is closer to the second feature set; and if p_N is closest to M, judge that it is closer to the third feature set.
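The nearest-set decision above amounts to a minimum-absolute-difference comparison. A minimal sketch, with the characteristic probability values for P, X and M chosen purely for illustration:

```python
def nearest_feature_set(p_n, prob_values):
    """Return the name of the feature set whose characteristic probability
    value is closest to the final classification coincidence probability
    p_N, i.e. the set minimizing |p_N - value|."""
    return min(prob_values, key=lambda name: abs(p_n - prob_values[name]))

# Hypothetical characteristic probability values for P, X and M.
sets = {"first": 0.30, "second": 0.55, "third": 0.80}
assert nearest_feature_set(0.78, sets) == "third"
assert nearest_feature_set(0.33, sets) == "first"
```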
Specifically, the loss function (or cost function) is a function that maps a random event or the values of its related random variables to non-negative real numbers, representing the "risk" or "loss" of that event. In applications, the loss function is usually associated with an optimization problem as a learning criterion, i.e., the model is solved and evaluated by minimizing the loss function. It is used, for example, for parameter estimation of models in statistics and machine learning, for risk management and decision making in macroeconomics, and in optimal control theory.
Example 5
On the basis of example 4, in step 3, converting the obtained obstacle feature image into binary data and performing data feature recognition on the binary data to obtain a data feature recognition result comprises the following steps: acquiring the feature set corresponding to the obstacle feature of the image data; if the acquired set is the first feature set or the second feature set, converting the data in the set to obtain raw data of multiple dimensions, and preprocessing the raw data to obtain a pixel matrix for each dimension; analyzing each pixel matrix to obtain the association relations between the dimensions, and establishing a broad-learning neural network model from those relations; extracting the high-dimensional features of each dimension from the pixel matrices with the neural network model, and fusing all the high-dimensional features into dimension information; and performing dimensionality reduction on the dimension information to obtain the data feature extraction result. If the acquired set is the third feature set, the same steps are performed, except that a deep-learning neural network model is established from the association relations.
In particular, with the development of lightweight and ultra-thin display devices, display devices are gradually being applied to more varied electronic products to present relevant information to the user. To adapt to increasingly diverse applications, such as wearable devices, touch devices, or the display components of household appliances, the light-emitting type, gray-scale display and power consumption of display devices vary, while the pixel matrix itself changes far less: constrained by the differently shaped outer frames of different electronic products, most display devices still use square or rectangular pixels as the basic unit of the pixel matrix, combining them to cover the display surface within the frame limits, so as to cover more pixel area and control the display state of the pixels with a simpler control circuit.
Example 6
A computer program implementing the method of the invention is recorded on a corresponding storage medium, and the storage medium is connected to a recognition device, equipped with a central processing unit, for road-condition irregular obstacles of an unmanned vehicle; the recognition device further comprises:
the acquisition device is configured to acquire infrared data of a target area by using the infrared sensor and acquire image data of the target area by using the image acquisition device;
the infrared recognition device is configured to recognize obstacle information in the infrared data by using a preset infrared recognition model, and to record the recognition result to obtain a first recording result;
the image recognition device is configured for recognizing the image data by using a preset obstacle feature recognition model to obtain an obstacle feature image of the image data, converting the obtained obstacle feature image into binary data, performing data feature recognition on the binary data to obtain a data feature recognition result, analyzing the data feature recognition result to obtain an obstacle information recognition result in the image data corresponding to the data feature recognition result, and recording the obstacle information recognition result to obtain a second recording result;
the result generating device is configured to compare the first recording result with the second recording result and detect whether the difference between them exceeds a set threshold; if it exceeds the threshold, the two results are judged to be different, otherwise they are judged to be the same; in response to the two being the same, the weighted average of the first and second recording results is taken according to the set weight values to obtain the final result; and in response to the two being different, the correct one of the first and second recording results is determined and output as the final result.
Specifically, the infrared sensor and the image acquisition device acquire information about the target area while the unmanned vehicle is driving; the two data streams are recognized separately, and the result is judged from both recognition results, which improves the accuracy of the recognition result; meanwhile, during image recognition, data feature extraction is performed after the image information is converted into data, which improves recognition efficiency.
Example 7
On the basis of embodiment 6, the result generating device determining the correct one of the first and second recording results comprises: determining that the first recording result is different from the second recording result; determining the first obstacle indicated in the first recording result and the second obstacle indicated in the second recording result that give rise to the difference; determining, in the first recording result, the recorded volume determined by the three-dimensional coordinate set of the first obstacle and the first distance between the first obstacle and the infrared sensor; determining the expected volume range of the first obstacle in the point cloud data according to the first recording result and the first distance; and, in response to the recorded volume being outside the expected volume range, determining that the first recording result is erroneous.
Specifically, the image data identification method includes: the image data are recognized by the aid of the preset obstacle feature recognition model, the obstacle feature image of the image data is obtained, the obtained obstacle feature image is converted into binary data, data feature recognition is conducted on the binary data, a data feature recognition result is obtained, the data feature recognition result is analyzed, and an obstacle information recognition result in the image data corresponding to the data feature recognition result is obtained.
Example 8
On the basis of embodiment 7, the result generating device determining the correct one of the first and second recording results further comprises: determining the recorded pixel area covered by the second obstacle in the second recording result; determining the second distance between the second obstacle and the image acquisition device according to the first distance and the calibration parameters between the infrared sensor and the image acquisition device; determining the expected pixel area range of the second obstacle in the image data according to the second recording result and the second distance; and, in response to the recorded pixel area being outside the expected pixel area range, determining that the second recording result is erroneous.
Specifically, the first recording result is compared with the second recording result, and whether the difference between them exceeds a set threshold is detected; if it exceeds the threshold, the two results are judged to be different, otherwise they are judged to be the same; in response to the two being the same, the weighted average of the first and second recording results is taken according to the set weight values to obtain the final result; in response to the two being different, the correct one of the first and second recording results is determined and output as the final result. The recognition results are thus not used directly but are judged and analyzed to find the accurate result, which improves recognition accuracy.
Example 9
On the basis of embodiment 8, the image recognition device recognizing the image data with the preset obstacle feature recognition model to obtain the obstacle feature image of the image data comprises: performing image binarization after the image data has undergone image enhancement; setting three preset feature distribution sets, respectively a first feature set, a second feature set and a third feature set, whose corresponding characteristic probability values are: first feature set: P; second feature set: X; third feature set: M; setting sample points, detecting the sample points of the binarized image against the set sample points, and, after detection is finished, calculating the classification coincidence probability of the binarized image's sample points with the set sample points using the following formula:
[Formula rendered as an image in the original (BDA0002773871520000111): the classification coincidence probability, computed from the number of sample points k and the number of overlapped sample points j.]
wherein k is the number of sample points, and j is the number of overlapped sample points; calculating a loss function:
[Formula rendered as an image in the original (BDA0002773871520000112): the loss function.]
and (3) carrying out the following operation on the classification coincidence probability and the loss function to obtain the final classification coincidence probability:
[Formula rendered as an image in the original (BDA0002773871520000113): the final classification coincidence probability p_N, obtained by combining the classification coincidence probability with the loss function.]
Judge which of P, X and M the final classification coincidence probability p_N is closest to, i.e., take the absolute difference between p_N and each of P, X and M; the closest is the one whose result is nearest to 0. If p_N is closest to P, judge that the obstacle feature of the image data is closer to the first feature set; if p_N is closest to X, judge that it is closer to the second feature set; and if p_N is closest to M, judge that it is closer to the third feature set.
Specifically, after the plurality of feature sets of the image are obtained through data feature analysis, the application judges which of the first, second and third feature sets the obstacle feature of the image data is closest to, so the obtained result is more accurate.
Example 10
On the basis of embodiment 9, the image recognition device converting the obtained obstacle feature image into binary data and performing data feature recognition on the binary data to obtain the data feature recognition result comprises: acquiring the feature set corresponding to the obstacle feature of the image data; if the acquired set is the first feature set or the second feature set, converting the data in the set to obtain raw data of multiple dimensions, and preprocessing the raw data to obtain a pixel matrix for each dimension; analyzing each pixel matrix to obtain the association relations between the dimensions, and establishing a broad-learning neural network model from those relations; extracting the high-dimensional features of each dimension from the pixel matrices with the neural network model, and fusing all the high-dimensional features into dimension information; and performing dimensionality reduction on the dimension information to obtain the data feature extraction result. If the acquired set is the third feature set, the same steps are performed, except that a deep-learning neural network model is established from the association relations.
In the prior art, an intelligent vehicle is a comprehensive intelligent system integrating functions of environmental perception, path planning decision making, control and the like, and can greatly improve traffic safety, improve vehicle passing efficiency of the existing road and reduce pollution. The environment perception system is the foundation and the core in an intelligent vehicle architecture and provides necessary basic information for planning decision and control execution. The environment sensing system mainly has the functions of acquiring vehicle and environment information through a sensor, specifically acquiring pose and state information of the vehicle, recognizing and tracking lane lines and lane edges in a structured road, recognizing and tracking traffic signs and traffic signals, recognizing and tracking obstacles around the vehicle and the like.
Sensors commonly used for environment perception include cameras, lidar, millimeter-wave radar, GPS, inertial navigation and the like. Camera vision data cannot provide accurate distance information for obstacles, or can do so only with a huge amount of computation that makes the real-time requirements of an intelligent vehicle hard to meet. Lidar, by contrast, offers high ranging precision, a high scanning frequency and a rich data volume; it is not affected by factors such as weather and illumination, does not depend on texture or color for discrimination, and is insensitive to shadow noise, so it has received much attention in intelligent-vehicle environment perception in recent years.
Compared with the prior art, the identification method has higher identification accuracy and identification efficiency.
The above is only an embodiment of the present invention, but the scope of the present invention is not limited thereby; any structural change made according to the present invention without departing from its gist shall be considered as falling within the scope of the present invention.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process and related description of the system described above may refer to the corresponding process in the foregoing method embodiments, and will not be described herein again.
It should be noted that, the system provided in the foregoing embodiment is only illustrated by dividing the functional modules, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the modules or steps in the embodiment of the present invention are further decomposed or combined, for example, the modules in the foregoing embodiment may be combined into one module, or may be further split into multiple sub-modules, so as to complete all or part of the functions described above. The names of the modules and steps involved in the embodiments of the present invention are only for distinguishing the modules or steps, and are not to be construed as unduly limiting the present invention.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes and related descriptions of the storage device and the processing device described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Those of skill in the art would appreciate that the various illustrative modules, method steps, and modules described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that programs corresponding to the software modules, method steps may be located in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. To clearly illustrate this interchangeability of electronic hardware and software, various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as electronic hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The terms "first," "second," and the like are used for distinguishing between similar elements and not necessarily for describing or implying a particular order or sequence.
The terms "comprises," "comprising," or any other similar term are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can fall into the protection scope of the invention.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.

Claims (6)

1. The recognition device for the road condition irregular obstacles of the unmanned vehicle comprises an acquisition device for acquiring peripheral information, and is characterized in that the acquisition device is provided with a telescopic mechanism (1) arranged on a vehicle body (2); the upper side of the body of the telescopic mechanism (1) is provided with a top cover plate (1 a); the telescopic mechanism (1) is provided with a hinged part inserted with an arc column (1 c); the free end of the telescopic mechanism (1) is provided with a limiting strip (1d) with the thickness smaller than that of the telescopic mechanism (1) body; the body of the telescopic mechanism (1) is sequentially provided with an infrared sensor (11), a buffer rubber block (16) and an air quantity sensor (12); the side surface of the air volume sensor (12) is provided with a coupler (13), and the coupler (13) is connected with a reciprocating pump (15) which is arranged close to the hinged part through a connecting shaft (14); the top cover plate (1a) is provided with a water chute (1b) penetrating through the upper surface.
2. The apparatus according to claim 1, wherein the acquiring means is configured to acquire infrared data of the target area using the infrared sensor (11) and to acquire image data of the target area using the image capturing means;
the infrared recognition device is configured to recognize obstacle information in the infrared data by using a preset infrared recognition model, and to record the recognition result to obtain a first recording result; the apparatus further comprising,
the image recognition device is configured to recognize the image data by using a preset obstacle feature recognition model to obtain an obstacle feature image of the image data, convert the obtained obstacle feature image into binary data, perform data feature recognition on the binary data to obtain a data feature recognition result, analyze the data feature recognition result to obtain an obstacle information recognition result in the image data corresponding to the data feature recognition result, and record the obstacle information recognition result to obtain a second recording result;
the result generating device is configured to compare the first recording result with the second recording result, detect whether the difference between the first recording result and the second recording result exceeds a set threshold, judge that the first recording result and the second recording result are different if the difference exceeds the set threshold, and judge that the first recording result and the second recording result are the same if the difference does not exceed the set threshold; respectively taking the weighted average of the first recording result and the second recording result according to the set weighted value in response to the two being the same to obtain a final result; and in response to the first recording result and the second recording result being different, determining a correct recording result in the first recording result and the second recording result, and outputting the correct recording result as a final result.
3. The apparatus of claim 2, wherein the result generation means determining the correct one of the first recorded result and the second recorded result comprises: determining that the first recorded result differs from the second recorded result; for the same difference, determining a first obstacle indicated in the first recorded result and a second obstacle indicated in the second recorded result; determining, in the first recorded result, a recorded volume defined by the three-dimensional coordinate set of the first obstacle, and a first distance between the first obstacle and the infrared sensor (11); determining an expected volume range of the first obstacle in the point cloud data according to the first recorded result and the first distance; and determining that the first recorded result is erroneous in response to the recorded volume being outside the expected volume range.
4. The apparatus of claim 3, wherein the result generation means determining the correct one of the first recorded result and the second recorded result comprises: determining a recorded pixel area covered by the second obstacle in the second recorded result; determining a second distance between the second obstacle and the image acquisition device according to the first distance and the calibration parameters between the infrared sensor (11) and the image acquisition device; determining an expected pixel area range of the second obstacle in the image data according to the second recorded result and the second distance; and determining that the second recorded result is erroneous in response to the recorded pixel area being outside the expected pixel area range.
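Claims 3 and 4 are symmetric plausibility checks: each sensor's recorded quantity is tested against an expected range derived with the help of the other sensor's distance estimate. A minimal sketch, with all names and ranges hypothetical:

```python
# Sketch of the cross-checks in claims 3 and 4. The expected ranges
# would in practice be derived from the distance estimates; here they
# are passed in directly as (low, high) tuples.

def is_erroneous(recorded_value, expected_range):
    """A recorded volume (claim 3) or pixel area (claim 4) is judged
    erroneous when it falls outside its expected range."""
    low, high = expected_range
    return not (low <= recorded_value <= high)

def pick_correct(first, second, first_range, second_range):
    """Return which recorded result survives the plausibility check."""
    if is_erroneous(first, first_range):
        return "second"
    if is_erroneous(second, second_range):
        return "first"
    return "undecided"  # neither check fails; the claims leave this open
```

So a recorded volume outside its expected range disqualifies the infrared-side result, and an out-of-range pixel area disqualifies the image-side result.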
5. The apparatus according to claim 4, wherein the image recognition means recognizing the image data by using a preset obstacle feature recognition model to obtain the obstacle feature image of the image data comprises: performing image enhancement on the image data and then carrying out image binarization processing; setting three preset feature distribution sets, namely a first feature set, a second feature set and a third feature set, whose corresponding feature probability values are P, X and M respectively; setting sample points, detecting the binarized image with the sample points, and, after the detection is finished, calculating the classification coincidence probability between the sample points of the binarized image and the set sample points by the following formula:
[Formula image FDA0002773871510000021 in the original: classification coincidence probability]
wherein k is the number of sample points, and j is the number of overlapped sample points; calculating a loss function:
[Formula image FDA0002773871510000022 in the original: loss function]
and carrying out the following operation on the classification coincidence probability and the loss function to obtain the final classification coincidence probability p_N:
[Formula images FDA0002773871510000023 and FDA0002773871510000024 in the original: final classification coincidence probability p_N]
judgment of pNWhichever is closer to P, X and M, i.e. pNP, X and M are respectively subjected to difference absolute value operation, and the calculation result is closest to 0; if p isjIf the image data is closer to P, judging that the obstacle feature of the image data is closer to the first feature set, and if P is closer to P, judging that the obstacle feature of the image data is closer to the first feature setNIf the image data is closer to the X, judging that the obstacle feature of the image data is closer to a second feature set; if p isNAnd if the image data is closer to the P, judging that the obstacle feature of the image data is closer to the third feature set.
6. The apparatus according to claim 5, wherein the image recognition means converting the obtained obstacle feature image into binary data and performing data feature recognition on the binary data to obtain a data feature recognition result comprises: acquiring the set corresponding to the obstacle feature of the image data; if the acquired sets are the first feature set and the second feature set, converting the data in the sets to obtain original data of multiple dimensions, and preprocessing the original data to obtain a pixel matrix corresponding to each dimension; analyzing each pixel matrix to obtain the association relation between the dimensions, and establishing a broad-learning neural network model by using the association relation; extracting a high-dimensional feature of each dimension from the pixel matrices by using the neural network model, and fusing all the high-dimensional features into dimension information; performing dimension reduction on the dimension information to obtain a data feature extraction result; if the acquired set is the third feature set, converting the data in the set to obtain original data of multiple dimensions, and preprocessing the original data to obtain a pixel matrix corresponding to each dimension; analyzing each pixel matrix to obtain the association relation between the dimensions, and establishing a deep-learning neural network model by using the association relation; extracting a high-dimensional feature of each dimension from the pixel matrices by using the neural network model, and fusing all the high-dimensional features into dimension information; and performing dimension reduction on the dimension information to obtain the data feature extraction result.
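The branching in claim 6 can be sketched as follows. The patent does not disclose the structure of either neural network model, so the "models" here are stand-in labels, and per-dimension feature extraction and dimension reduction are reduced to simple averages purely for illustration.

```python
# Minimal sketch of claim 6: first/second feature sets route to a
# broad-learning model, the third feature set to a deep-learning model;
# both branches fuse per-dimension features, then reduce dimensionality.
# All computations below are placeholders, not the patent's models.

def fuse_and_reduce(pixel_matrices, model):
    # Stand-in "high-dimensional feature" per dimension: the mean of
    # each dimension's pixel matrix (rows of equal length assumed).
    high_dim = [sum(sum(row) for row in m) / (len(m) * len(m[0]))
                for m in pixel_matrices]
    # Stand-in "dimension reduction": collapse fused features to one value.
    return model, sum(high_dim) / len(high_dim)

def extract_data_features(feature_set, pixel_matrices):
    """Route to the model family named in claim 6 and return
    (model_name, reduced_feature)."""
    model = ("broad-learning" if feature_set in ("first", "second")
             else "deep-learning")
    return fuse_and_reduce(pixel_matrices, model)
```

The only substantive point carried over from the claim is the routing: the first and second feature sets share one model family, the third gets another, and both branches end in the same fuse-then-reduce pipeline.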
CN202011258635.3A 2020-11-11 2020-11-11 Recognition device for road condition irregular obstacles of unmanned vehicle Active CN112347953B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011258635.3A CN112347953B (en) 2020-11-11 2020-11-11 Recognition device for road condition irregular obstacles of unmanned vehicle

Publications (2)

Publication Number Publication Date
CN112347953A true CN112347953A (en) 2021-02-09
CN112347953B CN112347953B (en) 2021-09-28

Family

ID=74363554

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011258635.3A Active CN112347953B (en) 2020-11-11 2020-11-11 Recognition device for road condition irregular obstacles of unmanned vehicle

Country Status (1)

Country Link
CN (1) CN112347953B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114779790A (en) * 2022-06-16 2022-07-22 小米汽车科技有限公司 Obstacle recognition method, obstacle recognition device, vehicle, server, storage medium and chip

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103176185A (en) * 2011-12-26 2013-06-26 上海汽车集团股份有限公司 Method and system for detecting road barrier
CN106707293A (en) * 2016-12-01 2017-05-24 百度在线网络技术(北京)有限公司 Obstacle recognition method and device for vehicles
CN107554698A (en) * 2017-09-12 2018-01-09 无锡宝宏船舶机械有限公司 Marine clean-out cover fitted with radar for long-range obstacle detection and automatic alarm
CN107627827A (en) * 2017-09-11 2018-01-26 太仓史瑞克工业设计有限公司 Automobile-based intelligent shading system and control method therefor
CN207045262U (en) * 2017-06-13 2018-02-27 纵目科技(上海)股份有限公司 Vehicle-mounted information acquisition system, vehicle-mounted terminal and driverless vehicle
CN107909024A (en) * 2017-11-13 2018-04-13 哈尔滨理工大学 Vehicle tracking system, method and vehicle based on image recognition and infrared obstacle avoidance
WO2018073778A1 (en) * 2016-10-20 2018-04-26 Rail Vision Ltd System and method for object and obstacle detection and classification in collision avoidance of railway applications
CN109752748A (en) * 2017-11-03 2019-05-14 智飞智能装备科技东台有限公司 Beidou-based unmanned aerial vehicle onboard data backhaul device
CN109986605A (en) * 2019-04-09 2019-07-09 深圳市发掘科技有限公司 Intelligent automatic tracking robot system and method
EP3508937A1 (en) * 2018-01-05 2019-07-10 iRobot Corporation Mobile cleaning robot artificial intelligence for situational awareness
CN110309785A (en) * 2019-07-03 孙启城 Blind-guidance robot control method based on image recognition technology
CN110862033A (en) * 2019-11-12 2020-03-06 中信重工开诚智能装备有限公司 Intelligent early warning detection method applied to coal mine inclined shaft winch
CN111145538A (en) * 2019-12-06 2020-05-12 齐鲁交通信息集团有限公司 Stereo perception system suitable for audio and video acquisition, recognition and monitoring on highway
CN210835729U (en) * 2019-12-13 2020-06-23 博智安全科技股份有限公司 Intelligent obstacle-avoiding storage transport vehicle


Also Published As

Publication number Publication date
CN112347953B (en) 2021-09-28

Similar Documents

Publication Publication Date Title
CN109100741B (en) Target detection method based on 3D laser radar and image data
CN110992683B (en) Dynamic image perception-based intersection blind area early warning method and system
WO2022141914A1 (en) Multi-target vehicle detection and re-identification method based on radar and video fusion
CN103176185B (en) Method and system for detecting road barrier
CN103559791B (en) A kind of vehicle checking method merging radar and ccd video camera signal
CN111753797B (en) Vehicle speed measuring method based on video analysis
CN106845547A (en) A kind of intelligent automobile positioning and road markings identifying system and method based on camera
CN108196260A (en) The test method and device of automatic driving vehicle multi-sensor fusion system
CN111491093B (en) Method and device for adjusting field angle of camera
CN115943439A (en) Multi-target vehicle detection and re-identification method based on radar vision fusion
CN111179300A (en) Method, apparatus, system, device and storage medium for obstacle detection
CN104183127A (en) Traffic surveillance video detection method and device
CN112487905B (en) Method and system for predicting danger level of pedestrian around vehicle
CN111524365B (en) Method for classifying vehicle types by using multiple geomagnetic sensors
CN108711172B (en) Unmanned aerial vehicle identification and positioning method based on fine-grained classification
CN114898319B (en) Vehicle type recognition method and system based on multi-sensor decision level information fusion
CN113592905B (en) Vehicle driving track prediction method based on monocular camera
CN104915642A (en) Method and apparatus for measurement of distance to vehicle ahead
CN110796360A (en) Fixed traffic detection source multi-scale data fusion method
CN112347953B (en) Recognition device for road condition irregular obstacles of unmanned vehicle
CN114155720B (en) Vehicle detection and track prediction method for roadside laser radar
CN110909656A (en) Pedestrian detection method and system with integration of radar and camera
CN113820682A (en) Target detection method and device based on millimeter wave radar
CN117124332A (en) Mechanical arm control method and system based on AI vision grabbing
CN117197019A (en) Vehicle three-dimensional point cloud image fusion method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant