CN114913508A - Outdoor environment obstacle real-time detection method and system based on active and passive information fusion - Google Patents

Outdoor environment obstacle real-time detection method and system based on active and passive information fusion

Info

Publication number
CN114913508A
Authority
CN
China
Prior art keywords
preset
obstacle
normal vector
real
view field
Prior art date
Legal status (assumed; not a legal conclusion)
Pending
Application number
CN202210815916.7A
Other languages
Chinese (zh)
Inventor
贺亮
陈建林
袁建平
马川
于洋
Current Assignee (the listed assignees may be inaccurate)
Jiangsu Yunmu Zhizao Technology Co ltd
Taicang Yangtze River Delta Research Institute of Northwestern Polytechnical University
Original Assignee
Jiangsu Yunmu Zhizao Technology Co ltd
Taicang Yangtze River Delta Research Institute of Northwestern Polytechnical University
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Jiangsu Yunmu Zhizao Technology Co ltd, Taicang Yangtze River Delta Research Institute of Northwestern Polytechnical University filed Critical Jiangsu Yunmu Zhizao Technology Co ltd
Priority to CN202210815916.7A
Publication of CN114913508A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/10 Image acquisition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Geophysics And Detection Of Objects (AREA)
  • Train Traffic Observation, Control, And Security (AREA)

Abstract

The invention discloses a real-time outdoor environment obstacle detection method and system based on active and passive information fusion. The method comprises the following steps. Step S1: detect first feature points in a preset number of preset environment images, calculate their average density, and generate a judgment threshold. Step S2: acquire a preset field-of-view region image and detect second feature points with the SURF algorithm. Step S3: judge whether the number of second feature points exceeds the judgment threshold; if so, mark the preset field-of-view region as a potential obstacle area, otherwise return to step S2. Step S4: calculate a first plane normal vector from the point cloud data in a preset direction of the potential obstacle area, then calculate the included angle between the first plane normal vector and a reference normal vector. Step S5: judge whether the included angle is larger than a preset angle; if so, conclude that an obstacle exists, otherwise return to step S2. Through active/passive image fusion, the invention compensates for the shortcomings of obstacle detection with purely active or purely passive light, enhancing reliability and success rate.

Description

Outdoor environment obstacle real-time detection method and system based on active and passive information fusion
Technical Field
The invention relates to the technical field of obstacle detection, and in particular to a real-time outdoor environment obstacle detection method and system based on active and passive information fusion, as well as a corresponding computer medium and computer.
Background
Leading-edge applications of time-of-flight (TOF) technology are a research hotspot in the field of depth imaging. A 3D area-array laser camera based on TOF is a novel, miniaturized three-dimensional imaging device that can efficiently capture the intensity and depth information of a dynamic target in real time. Such cameras are simple to operate, deliver a large amount of information, and have broad development potential and market prospects. They are widely applied in three-dimensional reconstruction, virtual reality, urban surveying and mapping, human posture recognition, and obstacle-avoidance navigation for robots, and they are an important component for realizing outdoor environment exploration.
At present, obstacle detection is the prerequisite for outdoor environment exploration. An outdoor environment is an uncertain, unstructured, unknown, and complex environment: the surface soil is soft and rugged, and large numbers of stones and pits are scattered about. Such an environment undoubtedly increases the difficulty of navigation and obstacle avoidance for an experimental robot. A single sensor data modality cannot detect and localize obstacles comprehensively and accurately, and existing outdoor obstacle detection methods cannot adapt to multi-scale changes of targets, so the target recognition error rate is high and detection precision needs improvement.
Therefore, there is a need for an outdoor environment obstacle detection method that improves the accuracy and completeness of obstacle detection and reduces the recognition error rate.
Disclosure of Invention
To this end, the present invention provides a real-time outdoor environment obstacle detection method and system based on active and passive information fusion that overcome the defects of the prior art.
In order to solve the above technical problem, the invention provides a real-time outdoor environment obstacle detection method based on active and passive information fusion, comprising the following steps:
step S1: detecting first feature points in a preset number of preset environment images, calculating an average density from the first feature points, and generating a judgment threshold;
step S2: acquiring a preset field-of-view region image with a preset resolution, and performing second feature point detection on the preset field-of-view region image through the SURF algorithm;
step S3: judging whether the number of second feature points is higher than the judgment threshold; if so, judging the preset field-of-view region to be a potential obstacle area, otherwise returning to step S2 and re-acquiring a preset field-of-view region image with a preset resolution;
step S4: calculating a real-time first plane normal vector from the point cloud data in a preset direction of the potential obstacle area, and further calculating the included angle between the first plane normal vector and a reference normal vector;
step S5: judging whether the included angle is larger than a preset angle; if so, judging that an obstacle exists in the preset direction of the potential obstacle area, otherwise returning to step S2 and re-acquiring a preset field-of-view region image with a preset resolution.
With this scheme, the SURF local-feature extraction algorithm analyzes the spatial and textural distribution of local features in the visible light image, and on this basis the plane normal vector of the current detection field is quickly calibrated by means of point cloud data. An obstacle evaluation method is thereby established that compares local-feature information with the plane normal vector, and obstacles that threaten forward travel in the current field of view are calibrated in combination with the local-feature extraction data, ensuring the reliability of obstacle detection and making it convenient to complete obstacle detection in an outdoor environment. A sketch of the overall detection loop follows.
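As an illustration only, the following Python sketch walks through steps S1-S5 under stated assumptions: the `capture_image` and `capture_point_cloud` callables are hypothetical stand-ins for the camera interface, and `average_density`, `detect_surf_keypoints`, `fit_plane_normal`, and `angle_between` are sketched in the sections below.

```python
# Hypothetical sketch of the detection loop (steps S1-S5); the two
# capture callables stand in for the camera interface.
def detect_obstacle(sample_gray_images, reference_normal,
                    capture_image, capture_point_cloud, beta_deg=20.0):
    threshold = average_density(sample_gray_images)    # step S1: judgment threshold
    while True:
        image = capture_image()                        # step S2: passive visible-light image
        keypoints, _ = detect_surf_keypoints(image)
        h, w = image.shape
        if len(keypoints) / (h * w) <= threshold:      # step S3: below threshold,
            continue                                   #   no potential obstacle area
        cloud = capture_point_cloud()                  # step S4: active point cloud data
        normal = fit_plane_normal(cloud)
        angle = angle_between(normal, reference_normal)
        if angle > beta_deg:                           # step S5: obstacle present
            return True
```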
Further, the method for detecting the first feature points from the preset number of preset environment images is as follows:
detecting the first feature points from a preset number of outdoor environment images, simulated outdoor environment images, and ground images.
With this scheme, selecting a number of outdoor environment images, simulated outdoor environment images, and ground images for first feature point detection improves the reliability and correctness of the judgment threshold.
Further, the method for performing second feature point detection on the preset field-of-view region image through the SURF algorithm is as follows:
extracting feature points from the preset field-of-view region image through the SURF algorithm, and further detecting the feature points by introducing a Hessian matrix, wherein the minHessian threshold of the Hessian matrix ranges from 400 to 600.
With this scheme, adopting the SURF algorithm and introducing the Hessian matrix gives the feature detection high reliability, robustness, and good real-time performance, and converting the original image into an integral image reduces the amount of computation. A sketch follows.
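A minimal sketch of SURF detection with a minHessian threshold, assuming an OpenCV build that ships the non-free contrib module `xfeatures2d` (SURF is patented and absent from default builds); the value 600 follows this embodiment.

```python
import cv2

def detect_surf_keypoints(gray_image, min_hessian=600):
    # minHessian in the 400-600 range per the text; larger values keep
    # only stronger blob-like responses of the Hessian determinant.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=min_hessian)
    keypoints, descriptors = surf.detectAndCompute(gray_image, None)
    return keypoints, descriptors
```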
Further, the reference normal vector is selected as follows:
selecting a preset field of view according to preset point cloud data, generating a matching second plane normal vector from the preset field of view, and further setting the second plane normal vector as the reference normal vector.
With this scheme, selecting the second plane normal vector of a set field of view as the reference normal vector improves the reliability and accuracy of obstacle detection.
Further, the method for calculating the real-time first plane normal vector according to the point cloud data comprises the following steps:
step S40: generating the plane equation by the least-squares plane fitting method:

$Ax + By + Cz + D = 0$

and further transforming it (for $C \neq 0$) into:

$z = a_0 x + a_1 y + a_2$

wherein $A$, $B$, $C$, $D$ and $a_0$, $a_1$, $a_2$ are all parameters of the plane equation;

step S41: calculating the minimization function from the plane equation:

$S = \sum_{i=1}^{n} (a_0 x_i + a_1 y_i + a_2 - z_i)^2$

and setting the partial derivatives with respect to $a_0$, $a_1$, $a_2$ to zero:

$\frac{\partial S}{\partial a_0} = 2\sum_{i=1}^{n} (a_0 x_i + a_1 y_i + a_2 - z_i)\,x_i = 0$

$\frac{\partial S}{\partial a_1} = 2\sum_{i=1}^{n} (a_0 x_i + a_1 y_i + a_2 - z_i)\,y_i = 0$

$\frac{\partial S}{\partial a_2} = 2\sum_{i=1}^{n} (a_0 x_i + a_1 y_i + a_2 - z_i) = 0$

step S42: converting into matrix form:

$\begin{pmatrix} \sum x_i^2 & \sum x_i y_i & \sum x_i \\ \sum x_i y_i & \sum y_i^2 & \sum y_i \\ \sum x_i & \sum y_i & n \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ a_2 \end{pmatrix} = \begin{pmatrix} \sum x_i z_i \\ \sum y_i z_i \\ \sum z_i \end{pmatrix}$

step S43: determining the parameter values $a_0$, $a_1$, $a_2$ by Cramer's rule, and further calculating the plane normal vector:

$\vec{n} = (a_0, a_1, -1)$
With this scheme, adopting the least-squares plane fitting method reduces the error between data points. A sketch of the fit follows.
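A minimal sketch of the fit in NumPy, following the derivation above; as a simplification, `numpy.linalg.solve` stands in for the explicit Cramer's-rule solution (both yield the same parameters for a non-singular system).

```python
import numpy as np

def fit_plane_normal(points):
    """points: (N, 3) array of (x, y, z) point cloud coordinates.
    Returns the unit normal of the least-squares plane z = a0*x + a1*y + a2."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    n = len(points)
    # Normal equations obtained from the partial derivatives in steps S41-S42.
    M = np.array([[np.sum(x * x), np.sum(x * y), np.sum(x)],
                  [np.sum(x * y), np.sum(y * y), np.sum(y)],
                  [np.sum(x),     np.sum(y),     n        ]])
    b = np.array([np.sum(x * z), np.sum(y * z), np.sum(z)])
    a0, a1, a2 = np.linalg.solve(M, b)
    normal = np.array([a0, a1, -1.0])
    return normal / np.linalg.norm(normal)

def angle_between(n1, n2):
    """Included angle between two plane normals, in degrees."""
    cos_t = abs(np.dot(n1, n2)) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
```

The reference normal vector can be obtained the same way, by passing the point cloud of a flat, obstacle-free field of view to `fit_plane_normal`.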
The invention also provides a real-time outdoor environment obstacle detection system based on active and passive information fusion, comprising:
a threshold calculation module for detecting first feature points in a preset number of preset environment images, calculating an average density from the first feature points, and generating a judgment threshold;
a real-time detection module for acquiring a preset field-of-view region image with a preset resolution and performing second feature point detection on it through the SURF algorithm;
an included angle calculation module for calculating a real-time first plane normal vector from the point cloud data in a preset direction of the potential obstacle area, and further calculating the included angle between the first plane normal vector and a reference normal vector;
an obstacle judgment module for judging whether the number of second feature points is higher than the judgment threshold, judging the preset field-of-view region to be a potential obstacle area if so, and re-acquiring a preset field-of-view region image with a preset resolution if not; and for judging whether the included angle is larger than a preset angle, judging that an obstacle exists in the preset direction of the potential obstacle area if so, and re-acquiring a preset field-of-view region image with a preset resolution if not.
Further, the system comprises:
an experimental mobile platform;
an image capture device, mounted on the experimental mobile platform, for outputting RGB images, depth maps, and three-dimensional point cloud data, and for measuring the distance to an object from the time difference between the emission of a signal and its return to the sensor after reflection from the object.
Further, the system also comprises:
an experimental robot for simulated obstacle detection.
With this scheme, the image capture device combined with the experimental mobile platform forms the experimental hardware platform; the image capture device outputs RGB images, depth maps, and three-dimensional point cloud data, realizing the complementarity of image data and point cloud data and ensuring the reliability of obstacle detection.
The invention also provides a computer medium storing a computer program which, when executed by a processor, implements the above real-time outdoor environment obstacle detection method based on active and passive information fusion.
The invention also provides a computer comprising the above computer medium.
Compared with the prior art, the real-time outdoor environment obstacle detection method and system based on active and passive information fusion have the following advantages:
1. the complementarity of image data and point cloud data is realized, ensuring the reliability of obstacle detection; obstacles are first evaluated with passive optical information and then finally confirmed with active optical information, which increases detection speed;
2. an obstacle evaluation algorithm based on the regional density of visible-light image feature points improves the flexibility of obstacle detection, and judging obstacles by the included angle between the real-time plane normal vector and the reference normal vector ensures detection accuracy;
3. active/passive image fusion compensates for the shortcomings of detecting obstacles with active or passive light alone, enhancing the experimental robot's adaptability to the environment and the reliability and success rate of the obstacle detection algorithm.
Drawings
In order that the present disclosure may be more readily and clearly understood, reference will now be made in detail to the present disclosure, examples of which are illustrated in the accompanying drawings.
FIG. 1 is a flow chart of the outdoor environmental obstacle real-time detection method based on active and passive information fusion.
Fig. 2 is a feature point detection diagram of an outdoor environment of the present invention.
Fig. 3 is a characteristic point detection diagram of an outdoor environmental obstacle of the present invention.
Fig. 4 is an operation schematic diagram of the image pickup apparatus of the present invention.
Fig. 5 is a diagram of the environmental obstacle detection results of the present invention.
Fig. 6 is a first connection relationship diagram of the outdoor environmental obstacle real-time detection system based on active and passive information fusion according to the invention.
Fig. 7 is a second connection relationship diagram of the outdoor environmental obstacle real-time detection system based on active and passive information fusion according to the invention.
Reference numerals: 1. threshold calculation module; 2. real-time detection module; 3. included angle calculation module; 4. obstacle judgment module; 5. reference detection module; 10. experimental mobile platform; 11. image capture device; 12. experimental robot; 110. transmitter; 111. receiver; 112. detection object; 113. timer.
Detailed Description
The present invention is further described below in conjunction with the following figures and specific examples so that those skilled in the art may better understand the present invention and practice it, but the examples are not intended to limit the present invention.
In the description of the present invention, it should be understood that the term "comprises/comprising" is intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements but may alternatively include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Generally, the presence of obstacles makes the texture distribution of the local features of a visible light image more complex. Since texture distribution reflects feature-point density to a certain extent, relatively flat, obstacle-free areas have a low feature-point density while rugged, obstructed areas have a high one; obstacles can therefore be evaluated indirectly by measuring feature-point density.
Referring to figs. 1-3 and 5, the present invention provides an embodiment of a real-time outdoor environment obstacle detection method based on active and passive information fusion, comprising the following steps:
step S1: detecting first feature points in a preset number of preset environment images, calculating an average density from the first feature points, and generating a judgment threshold;
step S2: acquiring a preset field-of-view region image with a preset resolution, and performing second feature point detection on the preset field-of-view region image through the SURF algorithm;
step S3: judging whether there is a second feature point region whose feature-point count is higher than the judgment threshold;
step S4: if so, judging that second feature point region to be a potential obstacle area; otherwise returning to step S2 and re-acquiring a preset field-of-view region image with a preset resolution;
step S5: calculating a real-time first plane normal vector from the point cloud data in a preset direction of the potential obstacle area, and further calculating the included angle between the first plane normal vector and a reference normal vector;
step S6: judging whether the included angle is larger than a preset angle;
step S7: if so, judging that an obstacle exists in the preset direction of the potential obstacle area; otherwise returning to step S2 and re-acquiring a preset field-of-view region image with a preset resolution.
In step S1, the preset number is set by the operator according to actual requirements and costs; 200 images are used in this embodiment, and a larger number gives higher accuracy and reliability. The preset environment images include, but are not limited to, outdoor environment images, simulated outdoor environment images, and ground images, and the operator selects at least one of these types. The first feature point detection may use algorithms including, but not limited to, SURF and SIFT, set by the operator according to actual requirements. The key to evaluating obstacles by feature-point density is determining the density judgment threshold: regions above the threshold are considered to contain obstacles, and regions below it are considered obstacle-free. For example, to improve the reliability of the threshold, 200 images of different outdoor environments were selected for feature point detection, and the judgment threshold for evaluating obstacles was calculated from all the experimental data. The judgment threshold represents the number of feature points present per square pixel: a region whose feature-point density is higher than the judgment threshold is considered to possibly contain an obstacle, and a region below it is considered obstacle-free. A sketch of this computation follows.
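A minimal sketch of the threshold computation, assuming (as the text suggests but does not state exactly) that the judgment threshold is the average feature-point density over the sample images; `detect_surf_keypoints` is the helper sketched above.

```python
def average_density(sample_gray_images, min_hessian=600):
    densities = []
    for img in sample_gray_images:
        keypoints, _ = detect_surf_keypoints(img, min_hessian)
        h, w = img.shape
        densities.append(len(keypoints) / (h * w))  # feature points per square pixel
    return sum(densities) / len(densities)          # the judgment threshold
```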
In step S2, considering the special conditions of an outdoor environment, the SURF algorithm is used for feature point extraction so that feature detection has high reliability, robustness, and good real-time performance. A Hessian matrix is introduced into the SURF algorithm, and the minHessian threshold is selected from the range 400-600; the specific value is set by the operator according to the actual computational requirements, and 600 is used in this embodiment. The preset field-of-view region image is a visible light image with effective resolution captured in real time by the image capture device 11. After the preset field-of-view region image is captured, feature points are extracted from each region of the image through the SURF algorithm and the number of feature points in each region is counted. The specific model of the image capture device 11 is set by the operator according to actual needs and costs; a Basler blaze 101 camera is used in this embodiment.
Referring to fig. 2, fig. 2 is a feature point detection diagram of an outdoor environment according to the present invention. The outdoor environment includes, but is not limited to, a real outdoor terrain environment and a simulated outdoor terrain environment built indoors, selected by the operator according to actual needs and costs; the feature points in fig. 2 are those detected by the algorithm.
In step S3, the number of feature points in each region of the preset field-of-view region image is compared with the judgment threshold to determine whether there is a second feature point region whose count is higher than the threshold.
In step S4, if the preset field-of-view region image contains a second feature point region whose feature-point count is higher than the judgment threshold, that region is judged to be a potential obstacle area, i.e. a region where an obstacle may exist; if no such region exists, the method returns to step S2 and re-acquires a preset field-of-view region image with a preset resolution, i.e. second feature point detection is performed again.
Referring to fig. 3, fig. 3 is a feature point detection diagram of an outdoor environment obstacle according to the present invention. Feature point detection is performed on a preset field-of-view region image (shown in the first image of fig. 3) according to the feature point detection method of the invention; then, as shown in the second image of fig. 3, the position of the obstacle is determined from the feature-point density information and marked with a bounding box.
In step S5, after the potential obstacle area has been detected from the passive preset field-of-view region image, the presence of an obstacle is further confirmed with point cloud data. The point cloud data are three-dimensional point cloud data of the potential obstacle area captured in real time with effective resolution in a preset direction; the preset direction may be the field-of-view direction of the potential obstacle area and is set by the operator according to the actual capture equipment and costs. Once the point cloud data for the field-of-view direction of the potential obstacle area are obtained, the real-time first plane normal vector is calculated, and then the included angle between the first plane normal vector and the reference normal vector is calculated.
In step S6, the preset angle is β degrees. If the included angle between the first plane normal vector and the reference normal vector is greater than or equal to β degrees, it is judged that an obstacle exists in the field-of-view direction of the potential obstacle area; otherwise it is judged that no obstacle exists there. Here β is a parameter related to the maximum obstacle-crossing gradient of the experimental robot 12, set by the operator according to the actual detection requirements.
In step S7, when the included angle between the first plane normal vector and the reference normal vector is greater than or equal to β degrees, it is determined that an obstacle exists in the field-of-view direction of the potential obstacle area; when the included angle is smaller than β degrees, it is determined that no obstacle exists there, the method returns to step S2, a preset field-of-view region image with a preset resolution is re-acquired, and second feature point detection is performed again.
Obstacles include, but are not limited to, bumps, pits, and rugged areas. β is set by the operator based on a number of simulation tests and is 20° in this embodiment. The simulation test procedure is as follows: first, ensure that the area in front of the experimental robot 12 is clear of obstacles; second, move the experimental robot 12 over a flat road and select the plane normal vector of the point cloud data at that moment as the reference normal vector; then place obstacles in front of the experimental robot 12 to simulate obstacle detection. A sketch of the resulting decision follows.
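A minimal sketch of the final decision, assuming β = 20° as in this embodiment and reusing the `fit_plane_normal` and `angle_between` helpers sketched earlier.

```python
BETA_DEG = 20.0  # related to the robot's maximum obstacle-crossing gradient

def confirm_obstacle(cloud_points, reference_normal, beta_deg=BETA_DEG):
    normal = fit_plane_normal(cloud_points)          # real-time first plane normal
    angle = angle_between(normal, reference_normal)  # included angle, degrees
    return angle >= beta_deg                         # True: obstacle present
```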
referring to fig. 5, fig. 5 is a diagram of a detection result of an environmental obstacle according to the present invention, where an upper left diagram is a grayscale diagram of original data, an upper right diagram is a disparity diagram based on image information, a lower left diagram is an obstacle evaluation algorithm based on a density of a feature point region of a visible light image, so as to improve flexibility of obstacle detection, and a lower right diagram is a diagram of an obstacle determination algorithm based on an angle between a real-time normal vector and a reference normal vector, so as to ensure accuracy of obstacle detection.
The reference normal vector is selected through the following steps:
step S10: the image capture device 11 acquires preset point cloud data;
step S11: a preset field of view is selected according to the preset point cloud data;
step S12: a second plane normal vector matching the preset field of view is generated;
step S13: the second plane normal vector is set as the reference normal vector.
The preset point cloud data are point cloud data acquired by the image capture device 11 that match the preset field-of-view region image; the preset field of view is set by the operator according to the actual detection requirements and is preferably a flat-road field of view in this embodiment.
With this scheme, the complementarity of image data and point cloud data is realized and the reliability of obstacle detection is ensured: obstacles are first evaluated with passive optical information and then finally confirmed with active optical information, which increases detection speed; judging obstacles by the included angle between the real-time normal vector and the reference normal vector ensures detection accuracy.
As a preferred mode of the present invention, the method for calculating the real-time first plane normal vector according to the point cloud data comprises:
step S40: generating the general equation of the plane by the least-squares plane fitting method:

$Ax + By + Cz + D = 0$

and further transforming it into:

$z = a_0 x + a_1 y + a_2$

step S41: calculating the minimization function from the plane equation:

$S = \sum_{i=1}^{n} (a_0 x_i + a_1 y_i + a_2 - z_i)^2$

and computing the partial derivatives with respect to $a_0$, $a_1$, $a_2$, obtaining:

$\frac{\partial S}{\partial a_0} = 2\sum_{i=1}^{n} (a_0 x_i + a_1 y_i + a_2 - z_i)\,x_i = 0$

$\frac{\partial S}{\partial a_1} = 2\sum_{i=1}^{n} (a_0 x_i + a_1 y_i + a_2 - z_i)\,y_i = 0$

$\frac{\partial S}{\partial a_2} = 2\sum_{i=1}^{n} (a_0 x_i + a_1 y_i + a_2 - z_i) = 0$

step S42: converting into matrix form:

$\begin{pmatrix} \sum x_i^2 & \sum x_i y_i & \sum x_i \\ \sum x_i y_i & \sum y_i^2 & \sum y_i \\ \sum x_i & \sum y_i & n \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ a_2 \end{pmatrix} = \begin{pmatrix} \sum x_i z_i \\ \sum y_i z_i \\ \sum z_i \end{pmatrix}$

step S43: determining the parameter values $a_0$, $a_1$, $a_2$ by Cramer's rule, and further calculating the plane normal vector:

$\vec{n} = (a_0, a_1, -1)$
In step S40, the plane in the least-squares plane fitting method is described by the general equation $Ax + By + Cz + D = 0$ and then transformed (for $C \neq 0$) into $z = a_0 x + a_1 y + a_2$, wherein $A$, $B$, $C$, $D$ and $a_0$, $a_1$, $a_2$ are the parameters of the plane equation.
In step S41, the equation obtained by the least-squares method is transformed into the minimization function $S = \sum_{i=1}^{n} (a_0 x_i + a_1 y_i + a_2 - z_i)^2$, and the partial derivatives with respect to the parameters are computed.
In step S42, the system of equations obtained from the partial derivatives is converted into matrix form; the parameters $a_0$, $a_1$, $a_2$ are then obtained by Cramer's rule, the first plane normal vector is calculated as $\vec{n}_1 = (a_0, a_1, -1)$, and this normal vector is compared with the reference plane normal vector to output the result.
Example two
Referring to fig. 4 and figs. 6-7, the present invention also provides an embodiment of a real-time outdoor environment obstacle detection system based on active and passive information fusion, comprising:
a threshold calculation module 1 for detecting first feature points in a preset number of preset environment images, further calculating an average density from the first feature points, and generating a judgment threshold;
a real-time detection module 2 for acquiring a preset field-of-view region image with a preset resolution and performing second feature point detection on it through the SURF algorithm;
an included angle calculation module 3 for calculating a real-time first plane normal vector from the point cloud data in a preset direction of the potential obstacle area, and further calculating the included angle between the first plane normal vector and a reference normal vector;
an obstacle judgment module 4 for judging whether the number of second feature points is higher than the judgment threshold, judging the preset field-of-view region to be a potential obstacle area if so, and re-acquiring a preset field-of-view region image with a preset resolution if not, and for judging whether the included angle is larger than a preset angle, judging that an obstacle exists in the preset direction of the potential obstacle area if so, and re-acquiring a preset field-of-view region image with a preset resolution if not;
a reference detection module 5 for selecting a preset field of view according to the preset point cloud data, generating a matching second plane normal vector from the preset field of view, and further setting the second plane normal vector as the reference normal vector.
As a preferred embodiment, the present invention further comprises:
an experimental mobile platform 10;
an image capture device 11, mounted on the experimental mobile platform 10, for outputting RGB images, depth maps, and three-dimensional point cloud data, and for measuring the distance to an object from the time difference between the emission of a signal and its return to the sensor after reflection from the object;
an experimental robot 12 for simulated obstacle tests.
The experimental mobile platform 10 is a four-wheeled test vehicle carrying the image capture device 11. The image capture device 11 is a Basler blaze 101 camera. Unlike common monocular and binocular cameras, the Basler blaze 101 is a ToF camera based on optical-pulse time-of-flight ranging. It combines the advantages of a camera and a lidar, is not affected by changes in ambient illumination, environmental texture, or weather conditions, and has a small size, a high degree of integration, and high resolution. It can output RGB images, depth maps, and three-dimensional point cloud data, which makes it highly advantageous as an obstacle-avoidance sensor.
Referring to fig. 4, the operating principle of the Basler blaze 101 camera is as follows: the transmitter 110 emits a signal toward the detection object 112; the signal is reflected by the detection object 112 and returns to the receiver 111; and the timer 113 measures the distance between the camera and the detection object from the time difference between the emission of the signal and its return to the receiver 111 after reflection. A sketch of this ranging principle follows.
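A minimal sketch of the pulse time-of-flight relationship described above: the camera-to-object distance is half the distance the light pulse travels during the measured round trip.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(round_trip_time_s):
    # One-way distance = (speed of light x round-trip time) / 2.
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0
```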
The experimental robot 12 is a quadruped robot used for the experiments; its specific model is set by the operator according to the actual test requirements and costs.
Example three
The invention also provides a computer medium storing a computer program which, when executed by a processor, implements the above real-time outdoor environment obstacle detection method based on active and passive information fusion.
The invention also provides a computer comprising the computer medium.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be understood that the above examples are only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to list all embodiments exhaustively, and obvious variations or modifications may be made without departing from the scope of the invention.

Claims (10)

1. A real-time outdoor environment obstacle detection method based on active and passive information fusion, characterized by comprising the following steps:
step S1: detecting first feature points in a preset number of preset environment images, calculating an average density from the first feature points, and generating a judgment threshold;
step S2: acquiring a preset field-of-view region image with a preset resolution, and performing second feature point detection on the preset field-of-view region image through the SURF algorithm;
step S3: judging whether the number of second feature points is higher than the judgment threshold; if so, judging the preset field-of-view region to be a potential obstacle area, otherwise returning to step S2 and re-acquiring a preset field-of-view region image with a preset resolution;
step S4: calculating a real-time first plane normal vector from the point cloud data in a preset direction of the potential obstacle area, and further calculating the included angle between the first plane normal vector and a reference normal vector;
step S5: judging whether the included angle is larger than a preset angle; if so, judging that an obstacle exists in the preset direction of the potential obstacle area, otherwise returning to step S2 and re-acquiring a preset field-of-view region image with a preset resolution.
2. The real-time outdoor environment obstacle detection method based on active and passive information fusion according to claim 1, characterized in that the first feature points are detected from the preset number of preset environment images as follows:
detecting the first feature points from a preset number of outdoor environment images, simulated outdoor environment images, and ground images.
3. The real-time outdoor environment obstacle detection method based on active and passive information fusion according to claim 1, characterized in that the second feature point detection is performed on the preset field-of-view region image through the SURF algorithm as follows:
extracting feature points from the preset field-of-view region image through the SURF algorithm, and further detecting the feature points by introducing a Hessian matrix, wherein the minHessian threshold of the Hessian matrix ranges from 400 to 600.
4. The real-time outdoor environment obstacle detection method based on active and passive information fusion according to claim 1, characterized in that the reference normal vector is selected as follows:
selecting a preset field of view according to the preset point cloud data, generating a matching second plane normal vector from the preset field of view, and setting the second plane normal vector as the reference normal vector.
5. The real-time outdoor environment obstacle detection method based on active and passive information fusion according to claim 4, characterized in that the real-time first plane normal vector is calculated from the point cloud data as follows:
step S40: generating the plane equation by the least-squares plane fitting method:

$Ax + By + Cz + D = 0$

and further transforming it (for $C \neq 0$) into:

$z = a_0 x + a_1 y + a_2$

wherein $A$, $B$, $C$, $D$ and $a_0$, $a_1$, $a_2$ are all parameters of the plane equation;

step S41: calculating the minimization function from the plane equation:

$S = \sum_{i=1}^{n} (a_0 x_i + a_1 y_i + a_2 - z_i)^2$

and computing the partial derivatives with respect to $a_0$, $a_1$, $a_2$, obtaining:

$\frac{\partial S}{\partial a_0} = 2\sum_{i=1}^{n} (a_0 x_i + a_1 y_i + a_2 - z_i)\,x_i = 0$

$\frac{\partial S}{\partial a_1} = 2\sum_{i=1}^{n} (a_0 x_i + a_1 y_i + a_2 - z_i)\,y_i = 0$

$\frac{\partial S}{\partial a_2} = 2\sum_{i=1}^{n} (a_0 x_i + a_1 y_i + a_2 - z_i) = 0$

step S42: converting into matrix form:

$\begin{pmatrix} \sum x_i^2 & \sum x_i y_i & \sum x_i \\ \sum x_i y_i & \sum y_i^2 & \sum y_i \\ \sum x_i & \sum y_i & n \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ a_2 \end{pmatrix} = \begin{pmatrix} \sum x_i z_i \\ \sum y_i z_i \\ \sum z_i \end{pmatrix}$

step S43: determining the parameter values $a_0$, $a_1$, $a_2$ by Cramer's rule, and further calculating the plane normal vector:

$\vec{n} = (a_0, a_1, -1)$
6. A real-time outdoor environment obstacle detection system based on active and passive information fusion, characterized by comprising:
a threshold calculation module (1) for detecting first feature points in a preset number of preset environment images, further calculating an average density from the first feature points, and generating a judgment threshold;
a real-time detection module (2) for acquiring a preset field-of-view region image with a preset resolution and performing second feature point detection on it through the SURF algorithm;
an included angle calculation module (3) for calculating a real-time first plane normal vector from point cloud data in a preset direction of the potential obstacle area, and further calculating the included angle between the first plane normal vector and a reference normal vector;
an obstacle judgment module (4) for judging whether the number of second feature points is higher than the judgment threshold, judging the preset field-of-view region to be a potential obstacle area if so, and re-acquiring a preset field-of-view region image with a preset resolution if not; and for judging whether the included angle is larger than a preset angle, judging that an obstacle exists in the preset direction of the potential obstacle area if so, and re-acquiring a preset field-of-view region image with a preset resolution if not.
7. The real-time outdoor environment obstacle detection system based on active and passive information fusion according to claim 6, characterized by further comprising:
an experimental mobile platform (10);
an image capture device (11), mounted on the experimental mobile platform, for outputting RGB images, depth maps, and three-dimensional point cloud data, and for measuring the distance to an object from the time difference between the emission of a signal and its return to the sensor after reflection from the object.
8. The real-time outdoor environment obstacle detection system based on active and passive information fusion according to claim 7, characterized by further comprising:
an experimental robot (12) for simulated obstacle detection.
9. A computer medium, characterized in that a computer program is stored on the computer medium, and when executed by a processor the computer program implements the real-time outdoor environment obstacle detection method based on active and passive information fusion according to any one of claims 1-5.
10. A computer comprising a computer medium according to claim 9.
CN202210815916.7A 2022-07-12 2022-07-12 Outdoor environment obstacle real-time detection method and system based on active and passive information fusion Pending CN114913508A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210815916.7A CN114913508A (en) 2022-07-12 2022-07-12 Outdoor environment obstacle real-time detection method and system based on active and passive information fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210815916.7A CN114913508A (en) 2022-07-12 2022-07-12 Outdoor environment obstacle real-time detection method and system based on active and passive information fusion

Publications (1)

Publication Number Publication Date
CN114913508A 2022-08-16

Family

ID=82772002

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210815916.7A Pending CN114913508A (en) 2022-07-12 2022-07-12 Outdoor environment obstacle real-time detection method and system based on active and passive information fusion

Country Status (1)

Country Link
CN (1) CN114913508A (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114663526A (en) * 2022-03-17 2022-06-24 深圳市优必选科技股份有限公司 Obstacle detection method, obstacle detection device, robot and computer-readable storage medium


Similar Documents

Publication Publication Date Title
US11455565B2 (en) Augmenting real sensor recordings with simulated sensor data
Broggi et al. Obstacle detection with stereo vision for off-road vehicle navigation
Häne et al. Obstacle detection for self-driving cars using only monocular cameras and wheel odometry
CN109427214A (en) It is recorded using simulated sensor data Augmented Reality sensor
US9224208B2 (en) Image-based surface tracking
Gangawane et al. Obstacle detection and object size measurement for autonomous mobile robot using sensor
US7764284B2 (en) Method and system for detecting and evaluating 3D changes from images and a 3D reference model
Joubert et al. Pothole tagging system
Fiala et al. Visual odometry using 3-dimensional video input
CN104899855A (en) Three-dimensional obstacle detection method and apparatus
CN104574393A (en) Three-dimensional pavement crack image generation system and method
CN111753609A (en) Target identification method and device and camera
CN113096183B (en) Barrier detection and measurement method based on laser radar and monocular camera
Kang et al. Accurate fruit localisation using high resolution LiDAR-camera fusion and instance segmentation
Morales et al. Ground truth evaluation of stereo algorithms for real world applications
CN106022266A (en) Target tracking method and target tracking apparatus
Zhu et al. A simple outdoor environment obstacle detection method based on information fusion of depth and infrared
Sun et al. Large-scale building height estimation from single VHR SAR image using fully convolutional network and GIS building footprints
CN116071424A (en) Fruit space coordinate positioning method based on monocular vision
CN101782386B (en) Non-visual geometric camera array video positioning method and system
CN112749584A (en) Vehicle positioning method based on image detection and vehicle-mounted terminal
Moroni et al. Underwater scene understanding by optical and acoustic data integration
CN114913508A (en) Outdoor environment obstacle real-time detection method and system based on active and passive information fusion
US20220309776A1 (en) Method and system for determining ground level using an artificial neural network
Griesser et al. CNN-based monocular 3D ship detection using inverse perspective

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220816