CN115760898A - World coordinate positioning method for road thrown objects in a mixed Gaussian domain - Google Patents
World coordinate positioning method for road thrown objects in a mixed Gaussian domain
- Publication number
- CN115760898A (application CN202211540766.XA)
- Authority
- CN
- China
- Prior art keywords: point cloud, dimensional, shadow, cloud data, time
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Processing (AREA)
Abstract
The invention provides a world coordinate positioning method for road thrown objects in a mixed Gaussian domain. The method uses a Gaussian mixture model with multiple Gaussian distributions to decide whether each pixel in an acquired image belongs to the foreground or the background; continuously updates the model and extracts the required foreground to obtain a relatively complete foreground; uses a pixel-mean method to decide whether a contour is a foreground or a shadow contour; matches target contours against shadow contours; pre-stores, updates and screens the matched object-shadow contours; initializes and stores the suspected thrown-object shadow contour template and the pose of the matching result; and outputs the coordinates of the thrown object obtained by tracking and positioning. A laser radar then collects three-dimensional point cloud data of the thrown object; a three-dimensional KD tree assisted by Euclidean clustering is established to sort the collected unordered three-dimensional point cloud data into ordered three-dimensional point cloud data, completing the world coordinate positioning of the road thrown object. The method effectively mitigates tracking drift of the thrown object in complex illumination environments and thereby improves detection accuracy.
Description
Technical Field
The invention belongs to the technical field of traffic safety management, and particularly relates to a world coordinate positioning method for road thrown objects in a mixed Gaussian domain.
Background
With the rapid development of highway construction, road traffic flow has increased greatly, and the number of accidents caused by spilled or thrown objects has risen sharply. As an important component of modern transport, highways carry enormous traffic volumes, and in recent years major accidents have frequently been triggered by small objects thrown or dropped onto the road. An efficient, vehicle-mountable road information collection and detection system is therefore urgently needed: one that sorts and judges the information collected by its sensors, rapidly tracks objects thrown onto the road, and achieves accurate positioning in world coordinates. As an effective means of detecting and locating road objects, such a system allows countermeasures to be taken in time, reduces the damage thrown objects cause to following vehicles, safeguards the lives of drivers and passengers, reduces traffic accidents, saves considerable manpower, improves efficiency, and accords with the trend of road management developing from extensive to information-based.
As the two most widely used sensor types offering convenient data acquisition, the camera and the laser radar can acquire many kinds of road information. In existing road thrown-object detection and tracking, detection algorithms such as YOLO are mostly used to detect and position the object in the frame where it appears, and traditional tracking algorithms such as Kalman filtering, mean-shift and pipeline tracking are used to follow it. In practice, road traffic flow is heavy and road conditions are very complex; the mirrors and windows fitted to vehicles indirectly create a very complex detection environment through reflection and refraction of light. Traditional tracking methods struggle to adapt to such complex changes and easily suffer target loss and tracking deviation, so a deep-learning-based target tracking algorithm needs to be adopted to track the target effectively.
Disclosure of Invention
To address these defects, the invention provides a world coordinate positioning method for road thrown objects in a mixed Gaussian domain. The method effectively improves the accuracy of thrown-object detection in complex illumination environments and effectively resolves the problem of thrown-object tracking drift under such illumination. Compared with traditional tracking algorithms, the method also effectively improves the tracking capability for small targets.
The invention provides the following technical scheme: a world coordinate positioning method for road thrown objects in a mixed Gaussian domain, the method comprising the following steps:
S1, collecting an image with a camera, and using a Gaussian mixture model with multiple Gaussian distributions to determine whether each pixel in the collected image belongs to the foreground or the background; continuously updating the model and performing shadow pixel detection, extracting the required foreground, and applying morphological processing with erosion and dilation (a closing operation) to the foreground to obtain a relatively complete foreground;
S2, analyzing the contour category of the real-time image acquired at time t with a pixel-mean method, so as to determine whether each contour is a foreground contour or a shadow contour;
S3, introducing the Matchshape operator and matching target contours with shadow contours using the translation, scaling and rotation invariance of the Hu moments; pre-storing the successfully matched object-shadow contours, then updating and screening them; and initializing and storing the suspected thrown-object shadow contour template and the pose of the matching result;
S4, taking the suspected thrown object positioned in step S3, performing shadow elimination on the new foreground with the method of step S2 to obtain the foreground target; matching object and shadow for the targets in the foreground using the template to complete tracking, and outputting the coordinates of the thrown object obtained by tracking and positioning;
S5, collecting three-dimensional point cloud data of the thrown object with a laser radar; establishing a three-dimensional KD tree and sorting the collected unordered three-dimensional point cloud data into ordered three-dimensional point cloud data; clustering the ordered data by calculating Euclidean distances; converting the clustered ordered point cloud data into thrown-object coordinates in the two-dimensional plane by matrix transformation; and fusing the converted two-dimensional coordinates with the thrown-object coordinates output in step S4 to complete the world coordinate positioning of the road thrown object.
Further, the Gaussian mixture model with multiple Gaussian distributions in step S1 is as follows:

$$\hat p\left(\vec x^{(t)} \mid \chi_T, BG+FG\right) = \sum_{m=1}^{M} \hat\pi_m\, \eta\!\left(\vec x^{(t)};\, \hat{\vec\mu}_m,\, \hat\sigma_m^{2} I\right)$$

wherein M is the number of Gaussian distributions in the Gaussian mixture model; $\chi_T = \{x^{(t)}, \ldots, x^{(t-T)}\}$ is the set of samples within the time window T starting at time t, indicating that new samples are continuously added within the window to update the background model and adapt to the changes of a complex environment; BG and FG denote the background and foreground components, respectively; $\hat\pi_m$ is the weight of the m-th of the M mixture components; $\eta$ is the probability density function of a Gaussian distribution; $\hat{\vec\mu}_m$ and $\hat\sigma_m^{2} I$ are the mean and covariance matrix of the pixel at time t, where $\hat\sigma_m^{2}$ is the Gaussian distribution variance estimate and I is the identity matrix.
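As an illustration (not the patent's own implementation), the per-pixel foreground/background decision under such a mixture can be sketched in Python with NumPy. The array shapes, the three-sigma matching rule and the toy parameter values below are assumptions for the example:

```python
import numpy as np

def foreground_mask(frame, means, variances, n_sigma=3.0):
    """Per-pixel GMM test: a pixel belongs to the background if it lies
    within n_sigma standard deviations of at least one of its M Gaussian
    components; otherwise it is foreground.

    frame:     (H, W) grayscale image
    means:     (H, W, M) per-pixel component means
    variances: (H, W, M) per-pixel component variances
    """
    diff = frame[..., None] - means                   # (H, W, M) deviations
    matched = diff ** 2 <= (n_sigma ** 2) * variances
    return ~matched.any(axis=-1)                      # True = foreground
```

OpenCV's `cv2.createBackgroundSubtractorMOG2` provides a full adaptive implementation of this kind of model, including the continuous update and a built-in shadow flag.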
Further, during the continuous updating of the Gaussian mixture model in step S1, shadow pixel detection is performed on the image, comprising the following steps:
S11, constructing the discriminant of a shadow pixel point $S_t(x,y)$:

$$\alpha \le \frac{I_t^V(x,y)}{B_t^V(x,y)} \le \beta$$
$$\left|I_t^H(x,y) - B_t^H(x,y)\right| \le \tau_H$$
$$\left|I_t^S(x,y) - B_t^S(x,y)\right| \le \tau_S$$

wherein $I_t(x,y)$ is the real-time image at time t acquired in step S1 and $B_t(x,y)$ is the corresponding background image at time t; the superscripts H, S and V denote the HSV components obtained by converting the image at time t from RGB to HSV space; $\alpha$ and $\beta$ are the first and second adjustment parameters of brightness sensitivity for shadow detection; $\tau_H$ and $\tau_S$ are the first and second adjustment parameters of noise sensitivity for shadow detection;
S12, judging whether each pixel point in the detected image satisfies the shadow pixel discriminant constructed in step S11, and if so, marking the pixel as a shadow pixel.
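A minimal sketch of this HSV shadow test in Python/NumPy, assuming float HSV arrays; the threshold values for `alpha`, `beta`, `tau_h` and `tau_s` are illustrative tuning parameters, not values from the patent:

```python
import numpy as np

def shadow_mask(frame_hsv, bg_hsv, alpha=0.4, beta=0.9, tau_h=10.0, tau_s=40.0):
    """Mark pixels as shadow when brightness (V) drops by a bounded ratio
    while hue (H) and saturation (S) stay close to the background."""
    h, s, v = (frame_hsv[..., i] for i in range(3))
    bh, bs, bv = (bg_hsv[..., i] for i in range(3))
    ratio = v / np.maximum(bv, 1e-6)                  # brightness ratio I_V / B_V
    return ((alpha <= ratio) & (ratio <= beta)
            & (np.abs(h - bh) <= tau_h)
            & (np.abs(s - bs) <= tau_s))
```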
Further, the update equations of the continuously updated Gaussian mixture model with multiple Gaussian distributions in step S1 are as follows:

$$\hat\pi_m \leftarrow \hat\pi_m + \alpha\left(o_m^{(t)} - \hat\pi_m\right)$$
$$\hat{\vec\mu}_m \leftarrow \hat{\vec\mu}_m + o_m^{(t)}\left(\alpha / \hat\pi_m\right)\vec\delta_m$$
$$\hat\sigma_m^2 \leftarrow \hat\sigma_m^2 + o_m^{(t)}\left(\alpha / \hat\pi_m\right)\left(\vec\delta_m^{\mathsf T}\vec\delta_m - \hat\sigma_m^2\right)$$

wherein $\vec\delta_m = \vec x^{(t)} - \hat{\vec\mu}_m$ is the central vector; $o_m^{(t)}$ is the attribution factor of the pixel $\vec x^{(t)}$ of the real-time image at time t with respect to the distribution models, set to 1 for the best-matching distribution and 0 for the remaining models; $\alpha = 1/T$ is an exponentially decaying envelope used to limit the influence of old data;
While the Gaussian mixture model with multiple Gaussian distributions is continuously updated, whether the pixel $\vec x^{(t)}$ of the real-time image at time t conforms to an existing Gaussian distribution is judged by whether its Mahalanobis distance $D_m$ is less than three standard deviations; the Mahalanobis distance of the pixel is calculated as follows:

$$D_m^2\left(\vec x^{(t)}\right) = \frac{\vec\delta_m^{\mathsf T}\vec\delta_m}{\hat\sigma_m^2}$$
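The three-standard-deviation match test for an isotropic component can be written directly; this is a generic sketch of the distance check, not the patent's code:

```python
import numpy as np

def mahalanobis_match(x, mu, sigma2, n_sigma=3.0):
    """For covariance sigma^2 * I the squared Mahalanobis distance reduces to
    D^2 = ||x - mu||^2 / sigma^2; the sample matches the component when
    D < n_sigma (within three standard deviations by default)."""
    delta = np.asarray(x, dtype=float) - np.asarray(mu, dtype=float)
    d2 = float(delta @ delta) / sigma2
    return d2, d2 < n_sigma ** 2
```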
Further, the discriminant used in step S2 to analyze the contour category of the real-time image acquired at time t by the pixel-mean method is as follows:

$$P = \frac{1}{I}\sum_{i=1}^{I} b_w(i)$$

wherein I is the total number of pixel points of the real-time image acquired at time t, i is the i-th pixel point of that image, and $b_w(i)$ is the target pixel value at point i;
when P is larger than 191, the target foreground points in the real-time image acquired at time t outnumber the shadow points, and the contour target of the image is judged to be a foreground contour.
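Under the reading that P is the mean pixel value over a region whose foreground is marked 255 and whose shadow marking is darker, the P > 191 test can be sketched as follows (the mask values in the test are assumptions for illustration):

```python
import numpy as np

def is_foreground_contour(region, threshold=191):
    """Pixel-mean test: average the mask values inside the contour region;
    a mean above the threshold means target foreground points dominate."""
    return float(np.mean(region)) > threshold
```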
Further, the step S5 of establishing a three-dimensional KD tree and sorting the collected unordered three-dimensional point cloud data into ordered three-dimensional point cloud data includes the following steps:
S51, acquiring three-dimensional unordered point cloud data $T = \{p_1, p_2, p_3, \ldots\}$ with the laser radar, where each $p_i = (x_i, y_i, z_i)$; initializing a segmentation axis by calculating the variance of the point cloud position data in each dimension and taking the dimension with the largest variance as the segmentation hyperplane, denoted r;
s52, retrieving the current point cloud position data according to the segmented hyperplane dimension, finding median data, and putting the median data on a current node, wherein the root node corresponds to a hyperplane area of a three-dimensional space containing T;
s53, dividing the current hyperplane dimension into two sub hyperplane dimensions, dividing all values smaller than the median into the left branch hyperplane dimension, and dividing all values larger than or equal to the median into the right branch hyperplane dimension;
S54, updating the segmentation hyperplane r obtained in step S51 and saving the points assigned to the sub-hyperplane dimensions on the current root node; the update formula for r is: r = (r + 1) % 3, where % denotes the remainder (modulo) operation;
s55: repeating the step S52 in the data of the dimension of the left branch hyperplane obtained in the step S53 until no data exists in the sub-dimension, and determining to obtain a left node; repeating the step S52 in the data of the right branch hyperplane dimension obtained in the step S53 until no data exists in the sub-dimension, and determining to obtain a right node;
s56: and outputting a KD tree corresponding to the point cloud, and sorting the collected disordered three-dimensional point cloud data according to the KD tree corresponding to the point cloud to form ordered three-dimensional point cloud data.
Further, the Euclidean distance used in step S5 to cluster the formed ordered three-dimensional point cloud data is calculated as follows:

$$d(p_i, p_j) = \sqrt{\left(x_i - x_j\right)^2 + \left(y_i - y_j\right)^2}$$

wherein $x_i$ is the abscissa and $y_i$ the ordinate of the i-th point cloud datum in the ordered three-dimensional point cloud data, and n is the total number of point cloud data in the ordered three-dimensional point cloud data.
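Assuming a distance threshold and the usual greedy region-growing reading of Euclidean clustering, the step can be sketched as follows. A real implementation would answer the neighbour queries with the KD tree built above; brute-force search is used here for brevity:

```python
import numpy as np

def euclidean_cluster(points, radius):
    """Grow clusters by repeatedly absorbing every unassigned point that
    lies within `radius` of a point already in the cluster."""
    pts = np.asarray(points, dtype=float)
    remaining = list(range(len(pts)))
    clusters = []
    while remaining:
        frontier = [remaining.pop(0)]                 # seed a new cluster
        cluster = list(frontier)
        while frontier:
            i = frontier.pop()
            near = [j for j in remaining
                    if np.linalg.norm(pts[i] - pts[j]) <= radius]
            for j in near:
                remaining.remove(j)
            cluster.extend(near)
            frontier.extend(near)
        clusters.append(sorted(cluster))
    return clusters
```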
Further, in the step S5, converting the clustered ordered three-dimensional point cloud data into coordinates of the projectile in the two-dimensional plane by means of matrix transformation includes the following steps:
1) Constructing a transfer matrix M that maps the three-dimensional coordinates (X, Y, Z) in the point cloud space of the target thrown object acquired by the laser radar to the coordinates (u, v) in the two-dimensional plane of the target:

$$Z_C\begin{bmatrix}u\\v\\1\end{bmatrix} = M\begin{bmatrix}X\\Y\\Z\\1\end{bmatrix},\qquad M = \begin{bmatrix}m_{11}&m_{12}&m_{13}&m_{14}\\m_{21}&m_{22}&m_{23}&m_{24}\\m_{31}&m_{32}&m_{33}&m_{34}\end{bmatrix}$$

wherein $m_{ab}$ is the transfer parameter in row a, column b of the transfer matrix M, a = 1, 2, 3; b = 1, 2, 3, 4; and $Z_C$ is the z-axis coordinate in the camera coordinate system of the camera;
2) calculating the coordinates (u, v) in the two-dimensional plane of the target thrown object from the transfer matrix:

$$u = \frac{m_{11}X + m_{12}Y + m_{13}Z + m_{14}}{m_{31}X + m_{32}Y + m_{33}Z + m_{34}},\qquad v = \frac{m_{21}X + m_{22}Y + m_{23}Z + m_{24}}{m_{31}X + m_{32}Y + m_{33}Z + m_{34}}$$

3) the calculation in step 2) yields a series of linear equations; solving these equations gives the calibration parameters and hence the coordinates (u, v) in the two-dimensional plane of the target thrown object.
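Writing the transfer as the standard 3x4 projective mapping, the coordinate computation of step 2) is a matrix-vector product followed by the perspective division; the matrix values in the test are illustrative, not calibration results:

```python
import numpy as np

def project_point(M, xyz):
    """Map a lidar point (X, Y, Z) to image coordinates (u, v) with the
    3x4 transfer matrix M; the third row of the product plays the role of
    Z_C, the depth along the camera's z-axis."""
    p = np.asarray(M, dtype=float) @ np.append(np.asarray(xyz, dtype=float), 1.0)
    return p[0] / p[2], p[1] / p[2]
```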
Further, for the three-dimensional coordinates (X, Y, Z) in the point cloud space in step 1), the acquired three-dimensional data are divided according to a voxel grid: a three-dimensional voxel grid with a volume of 1 cubic centimetre per cell is created, and within each voxel the centroid of all the points it contains is used to approximately represent the other points in that voxel. The centroid is calculated as follows:

$$x_{central} = \frac{1}{m}\sum_{i=1}^{m} x_i,\qquad y_{central} = \frac{1}{m}\sum_{i=1}^{m} y_i,\qquad z_{central} = \frac{1}{m}\sum_{i=1}^{m} z_i$$

wherein m is the total number of points contained in a voxel obtained by voxelizing the three-dimensional coordinates of the detected target object, and $x_{central}$, $y_{central}$ and $z_{central}$ are the x-, y- and z-axis coordinates of the centroid; all points in such a voxel are finally represented by the centroid, and in the three-dimensional coordinates (X, Y, Z) of the point cloud space, $X = x_{central}$, $Y = y_{central}$, $Z = z_{central}$.
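The voxel-grid downsampling described above (1 cm cells, one centroid per occupied cell) can be sketched as:

```python
import numpy as np

def voxel_downsample(points, voxel=0.01):
    """Bin points into cubes of side `voxel` (0.01 m -> a 1 cubic centimetre
    grid) and replace the contents of each occupied voxel by their centroid."""
    pts = np.asarray(points, dtype=float)
    keys = np.floor(pts / voxel).astype(int)          # integer voxel indices
    bins = {}
    for key, p in zip(map(tuple, keys), pts):
        total, count = bins.get(key, (np.zeros(3), 0))
        bins[key] = (total + p, count + 1)
    return np.array([total / count for total, count in bins.values()])
```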
The invention has the following beneficial effects:
1. The method effectively improves the accuracy of thrown-object detection in complex illumination environments.
2. The method effectively resolves the problem of thrown-object tracking drift in complex illumination environments.
3. Compared with traditional tracking algorithms, the method effectively improves the tracking capability for small targets.
4. The invention provides a world coordinate positioning method for road thrown objects in a mixed Gaussian domain. The method uses the camera's object-shadow matching and tracking algorithm to identify thrown objects and track their trajectories in real time; combines the laser radar's real-time localization and mapping computation with a voxel filtering algorithm and a Euclidean clustering algorithm to collect and process road data in real time; and achieves accurate positioning of thrown objects from the two-dimensional to the three-dimensional world coordinate system through matrix transformation and sensor fusion, giving early warning of high-speed object-throwing accidents, preventing secondary accidents and reducing unnecessary losses.
5. In this world coordinate positioning method for road thrown objects in the mixed Gaussian domain, the camera first collects images and positions the target, and the laser radar then collects three-dimensional point cloud data for data fusion. This avoids the unavoidable shortcomings of a single sensor in practice: using only a camera is prone to inaccuracy, provides no depth information and has a limited field of view. Collecting three-dimensional point cloud data with the laser radar after the camera has positioned the target further improves the robustness of the system, and the multi-sensor fusion scheme, which fuses the time synchronization and space synchronization of the different sensors, improves the positioning accuracy of road thrown objects.
Drawings
The invention will be described in more detail hereinafter on the basis of embodiments and with reference to the accompanying drawings. Wherein:
FIG. 1 is a flow chart of the coordinate positioning of a projectile performed in steps S1-S4 of the method provided by the present invention;
FIG. 2 is a schematic flow chart of establishing a three-dimensional KD tree to sort the unordered three-dimensional point cloud data in step S5 of the method provided by the invention;
fig. 3 is a schematic diagram of a process of clustering by using an euclidean distance assisted three-dimensional KD tree according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention provides a world coordinate positioning method for road thrown objects in a mixed Gaussian domain, which comprises the following steps:
S1, collecting an image with a camera, and using a Gaussian mixture model with multiple Gaussian distributions to determine whether each pixel in the collected image belongs to the foreground or the background; continuously updating the model and performing shadow pixel detection, extracting the required foreground, and applying morphological processing with erosion and dilation (a closing operation) to the foreground, which effectively eliminates noise generated in the complex environment and sharpens the image edges, so as to obtain a relatively complete foreground;
S2, analyzing the contour category of the real-time image acquired at time t with a pixel-mean method, so as to determine whether each contour is a foreground contour or a shadow contour;
S3, introducing the Matchshape operator and matching target contours with shadow contours using the translation, scaling and rotation invariance of the Hu moments (the normalized central moments of a discrete image); pre-storing the successfully matched object-shadow contours, then updating and screening them; and initializing and storing the suspected thrown-object shadow contour template and the pose of the matching result;
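OpenCV's `cv2.matchShapes` (the Matchshape operator) compares two contours through their Hu moment invariants; its I1 metric can be sketched in plain Python, assuming the seven Hu moments have already been computed for each contour:

```python
import numpy as np

def match_shapes_i1(hu_a, hu_b, eps=1e-12):
    """I1 dissimilarity between two Hu-moment vectors:
    sum over |1/m_a - 1/m_b| with m = sign(h) * log10(|h|).
    Smaller values mean more similar (translation-, scale- and
    rotation-invariant) contours; identical contours give 0."""
    score = 0.0
    for ha, hb in zip(hu_a, hu_b):
        if abs(ha) > eps and abs(hb) > eps:
            ma = np.sign(ha) * np.log10(abs(ha))
            mb = np.sign(hb) * np.log10(abs(hb))
            score += abs(1.0 / ma - 1.0 / mb)
    return score
```

On real contours, `cv2.matchShapes(c1, c2, cv2.CONTOURS_MATCH_I1, 0)` performs the same comparison directly.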
S4, taking the suspected thrown object positioned in step S3, performing shadow elimination on the new foreground with the method of step S2 to obtain the foreground target; matching object and shadow for the targets in the foreground using the template to complete tracking, and outputting the coordinates of the thrown object obtained by tracking and positioning;
S5, collecting three-dimensional point cloud data of the thrown object with a laser radar; establishing a three-dimensional KD tree and sorting the collected unordered three-dimensional point cloud data into ordered three-dimensional point cloud data; clustering the ordered data by calculating Euclidean distances; converting the clustered ordered point cloud data into thrown-object coordinates in the two-dimensional plane by matrix transformation; and fusing the converted two-dimensional coordinates with the thrown-object coordinates output in step S4 to complete the world coordinate positioning of the road thrown object.
As a preferred embodiment of the present invention, as shown in fig. 1, the Gaussian mixture model with multiple Gaussian distributions in step S1 is as follows:

$$\hat p\left(\vec x^{(t)} \mid \chi_T, BG+FG\right) = \sum_{m=1}^{M} \hat\pi_m\, \eta\!\left(\vec x^{(t)};\, \hat{\vec\mu}_m,\, \hat\sigma_m^{2} I\right)$$

wherein M is the number of Gaussian distributions in the Gaussian mixture model; $\chi_T = \{x^{(t)}, \ldots, x^{(t-T)}\}$ is the set of samples within the time window T starting at time t, indicating that new samples are continuously added within the window to update the background model and adapt to the changes of a complex environment; BG and FG denote the background and foreground components, respectively; $\hat\pi_m$ is the weight of the m-th of the M mixture components; $\eta$ is the probability density function of a Gaussian distribution; $\hat{\vec\mu}_m$ and $\hat\sigma_m^{2} I$ are the mean and covariance matrix of the pixel at time t, where $\hat\sigma_m^{2}$ is the Gaussian distribution variance estimate and I is the identity matrix.
As another preferred embodiment of the present invention, in order to optimize the background updating mode of the Gaussian mixture model, a shadow detection module is introduced during the continuous updating of the model in step S1 to perform shadow pixel detection on the image, comprising the following steps:
S11, constructing the discriminant of a shadow pixel point $S_t(x,y)$:

$$\alpha \le \frac{I_t^V(x,y)}{B_t^V(x,y)} \le \beta,\qquad \left|I_t^H(x,y) - B_t^H(x,y)\right| \le \tau_H,\qquad \left|I_t^S(x,y) - B_t^S(x,y)\right| \le \tau_S$$

wherein $I_t(x,y)$ is the real-time image at time t acquired in step S1 and $B_t(x,y)$ is the corresponding background image at time t; the superscripts H, S and V denote the HSV components obtained by converting the image at time t from RGB to HSV space; $\alpha$ and $\beta$ are the first and second adjustment parameters of brightness sensitivity for shadow detection; $\tau_H$ and $\tau_S$ are the first and second adjustment parameters of noise sensitivity for shadow detection;
S12, judging whether each pixel point in the detected image satisfies the shadow pixel discriminant constructed in step S11, and if so, marking the pixel as a shadow pixel.
This shadow detection is based on chromaticity analysis; it is suitable for detecting the shadows of moving objects and has the effect of suppressing background shadows.
Further preferably, the update equations of the continuously updated Gaussian mixture model with multiple Gaussian distributions in step S1 are as follows:

$$\hat\pi_m \leftarrow \hat\pi_m + \alpha\left(o_m^{(t)} - \hat\pi_m\right)$$
$$\hat{\vec\mu}_m \leftarrow \hat{\vec\mu}_m + o_m^{(t)}\left(\alpha / \hat\pi_m\right)\vec\delta_m$$
$$\hat\sigma_m^2 \leftarrow \hat\sigma_m^2 + o_m^{(t)}\left(\alpha / \hat\pi_m\right)\left(\vec\delta_m^{\mathsf T}\vec\delta_m - \hat\sigma_m^2\right)$$

wherein $\vec\delta_m = \vec x^{(t)} - \hat{\vec\mu}_m$ is the central vector; $o_m^{(t)}$ is the attribution factor of the pixel $\vec x^{(t)}$ of the real-time image at time t with respect to the distribution models, set to 1 for the best-matching distribution and 0 for the remaining models; $\alpha = 1/T$ is an exponentially decaying envelope used to limit the influence of old data;
While the Gaussian mixture model with multiple Gaussian distributions is continuously updated, whether the pixel $\vec x^{(t)}$ of the real-time image at time t conforms to an existing Gaussian distribution is judged by whether its Mahalanobis distance $D_m$ is less than three standard deviations; the Mahalanobis distance of the pixel is calculated as follows:

$$D_m^2\left(\vec x^{(t)}\right) = \frac{\vec\delta_m^{\mathsf T}\vec\delta_m}{\hat\sigma_m^2}$$
As another preferred embodiment of the present invention, the discriminant used in step S2 to analyze the contour category of the real-time image acquired at time t by the pixel-mean method is as follows:

$$P = \frac{1}{I}\sum_{i=1}^{I} b_w(i)$$

wherein I is the total number of pixel points of the real-time image acquired at time t, i is the i-th pixel point of that image, and $b_w(i)$ is the target pixel value at point i;
when P is larger than 191, the target foreground points in the real-time image acquired at time t outnumber the shadow points, and the contour target of the image is judged to be a foreground contour.
As another preferred embodiment of the present invention, as shown in fig. 2, the step S5 of establishing a three-dimensional KD tree and sorting the collected unordered three-dimensional point cloud data into ordered three-dimensional point cloud data includes the following steps:
S51, acquiring three-dimensional unordered point cloud data $T = \{p_1, p_2, p_3, \ldots\}$ with the laser radar, where each $p_i = (x_i, y_i, z_i)$; initializing a segmentation axis by calculating the variance of the point cloud position data in each dimension and taking the dimension with the largest variance as the segmentation hyperplane, denoted r;
s52, retrieving the current point cloud position data according to the segmented hyperplane dimension, finding median data, and putting the median data on a current node, wherein the root node corresponds to a hyperplane area of a three-dimensional space containing T;
s53, dividing the dimension of the current hyperplane into two sub hyperplane dimensions, dividing all values smaller than the median into the dimension of the left sub hyperplane, and dividing all values larger than or equal to the median into the dimension of the right sub hyperplane;
S54, updating the segmentation hyperplane r obtained in step S51 and saving the points assigned to the sub-hyperplane dimensions on the current root node; the update formula for r is: r = (r + 1) % 3, where % denotes the remainder (modulo) operation;
s55: repeating the step S52 in the data of the left branch hyperplane dimension obtained in the step S53 until no data exists in the sub-dimension, and determining to obtain a left node; repeating the step S52 in the data of the right branch hyperplane dimension obtained in the step S53 until no data exists in the sub-dimension, and determining to obtain a right node;
s56: outputting a KD tree corresponding to the point cloud, and sorting the collected unordered three-dimensional point cloud data according to the KD tree corresponding to the point cloud to form ordered three-dimensional point cloud data.
Further preferably, as shown in fig. 3, the Euclidean distance used in step S5 to cluster the formed ordered three-dimensional point cloud data is calculated as follows:

$$d(p_i, p_j) = \sqrt{\left(x_i - x_j\right)^2 + \left(y_i - y_j\right)^2}$$

wherein $x_i$ is the abscissa and $y_i$ the ordinate of the i-th point cloud datum in the ordered three-dimensional point cloud data, and n is the total number of point cloud data in the ordered three-dimensional point cloud data.
As another preferred embodiment of the present invention, in the step S5, converting the clustered ordered three-dimensional point cloud data into coordinates of a projectile in a two-dimensional plane by means of matrix transformation includes the following steps:
1) Constructing a transfer matrix M, and transferring three-dimensional coordinates (X, Y, Z) in a point cloud space of the target throwing object acquired by the laser radar into coordinates (u, v) in a two-dimensional plane of the target throwing object:
wherein m is ab For the row a, column b transfer parameters in the transfer matrix M, a =1,2,3; b =1,2,3,4; z C Is the z-axis coordinate of the camera coordinate system where the camera is located;
2) Calculate the coordinates (u, v) in the two-dimensional plane of the target thrown object from the transfer matrix as follows:

u = (m_11·X + m_12·Y + m_13·Z + m_14) / (m_31·X + m_32·Y + m_33·Z + m_34)
v = (m_21·X + m_22·Y + m_23·Z + m_24) / (m_31·X + m_32·Y + m_33·Z + m_34)

3) Step 2) yields a series of linear equations; solving them gives the calibration parameters, and thus the coordinates (u, v) of the target thrown object in the two-dimensional plane.
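The projection in steps 1)-3) can be sketched as below: apply the 3x4 transfer matrix to the homogeneous lidar point and divide by the depth term Z_C. The matrix used in the usage note is a made-up placeholder, not real calibration output.

```python
def project(M, X, Y, Z):
    """Apply the 3x4 transfer matrix M to (X, Y, Z, 1) and divide by Z_C."""
    h = [sum(M[r][c] * v for c, v in enumerate((X, Y, Z, 1.0))) for r in range(3)]
    zc = h[2]                       # third row gives the depth term Z_C
    return h[0] / zc, h[1] / zc     # (u, v) pixel coordinates
```

For example, with the trivial matrix M = [[1,0,0,0],[0,1,0,0],[0,0,1,0]], a point (2, 4, 2) projects to (u, v) = (1.0, 2.0), since each coordinate is divided by Z_C = 2.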
The method collects three-dimensional point cloud data with a laser radar, reads the three-dimensional coordinates (X, Y, Z) of the target thrown object in the point cloud space, and obtains the corresponding coordinates (u, v) of the target thrown object in the image plane through the matrix transformation. When the calculated image pixel coordinates are judged to lie inside the image read by the camera, the image pixel values (R, G, B) are read out and assigned to the point cloud data to form a colored 3D point cloud in the RGB camera's 3D coordinate system, realizing the fusion of the camera and the laser radar.
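The fusion described above can be sketched as follows. This is a minimal stand-in: the image is assumed to be a row-major grid of (R, G, B) tuples, and `project` is any callable mapping (X, Y, Z) to pixel coordinates, such as the matrix projection of steps 1)-3).

```python
def colorize(points, image, width, height, project):
    """Project lidar points into the image; keep in-bounds hits with their RGB."""
    colored = []
    for (X, Y, Z) in points:
        u, v = project(X, Y, Z)
        ui, vi = int(round(u)), int(round(v))
        if 0 <= ui < width and 0 <= vi < height:   # pixel lies inside the image
            r, g, b = image[vi][ui]
            colored.append((X, Y, Z, r, g, b))     # colored 3D point
    return colored
```

Points projecting outside the image bounds are simply dropped, so the output cloud contains only points visible to the camera.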
The laser radar completes ROI extraction, ground segmentation, Euclidean clustering and similar operations on the point cloud data and generates a rectangular positioning frame for the object. Image processing determines the approximate position of the thrown object, the three-dimensional position of the thrown object relative to the vehicle body is then located, and positioning is complete; clues for the vehicle's next driving route can be provided according to the topological relation between the lane containing the thrown object and the vehicle body.
Although the collected lidar data has been converted into an ordered data set through steps S51-S56, the lidar's viewing angle differs at each acquisition, the coordinates of some collected obstacle points vary widely, many obstacle points are irrelevant to obstacle tracking, and too many obstacle points hinder extraction of the bounding-frame contour, so the region of interest needs to be screened out of the raw point cloud. The invention takes the ordered point cloud data arranged by the KD tree, in which obstacles cannot yet be accurately determined, continues the Euclidean clustering, and regularly aggregates the point cloud data into point sets. The region containing the road surface and intersections is then retrieved through ROI extraction, while the data are ground-segmented by constructing a top view based on a planar grid, leaving the point cloud data of obstacles on the road surface. Next, the three main directions of each clustered point set are found by principal component analysis: the centroid is found, the covariance is computed to obtain a covariance matrix, and the eigenvalues and eigenvectors of the covariance matrix are solved by Jacobi iteration, the eigenvectors being the main directions. The (X, Y, Z) coordinates of each point are projected onto the computed axes, the position is obtained by accumulating all points and averaging, the center point and half-lengths are found, a minimum rotated rectangle is generated, and a bounding frame is formed along the target's principal component directions.
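The principal-component bounding-box step above can be sketched as follows. One stated substitution: numpy.linalg.eigh stands in for the Jacobi iteration named in the text (both diagonalize a symmetric covariance matrix); the function name and return layout are illustrative.

```python
import numpy as np

def oriented_bbox(points):
    """Center, half-lengths and axes of a box aligned to the principal directions."""
    pts = np.asarray(points, dtype=float)        # shape (n, 3)
    centroid = pts.mean(axis=0)                  # accumulate all points, average
    cov = np.cov((pts - centroid).T)             # 3x3 covariance matrix
    _, eigvecs = np.linalg.eigh(cov)             # columns = main directions
    local = (pts - centroid) @ eigvecs           # project points onto the new axes
    lo, hi = local.min(axis=0), local.max(axis=0)
    center = centroid + eigvecs @ ((lo + hi) / 2)
    half_lengths = (hi - lo) / 2
    return center, half_lengths, eigvecs
```

For an axis-aligned 4 x 2 rectangle of points in the z = 0 plane, the recovered half-lengths are 2, 1 and 0 (in some axis order) and the center is the rectangle's midpoint, as expected.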
Further preferably, for the three-dimensional coordinates (X, Y, Z) in the point cloud space in step 1), the acquired three-dimensional data are divided according to a voxel grid: a three-dimensional voxel grid with a volume of 1 cubic centimeter is created, and within each voxel the centroid of all contained points approximately represents the other points in the voxel. The centroid is calculated as follows:

x_central = (1/m) Σ_{i=1..m} x_i,  y_central = (1/m) Σ_{i=1..m} y_i,  z_central = (1/m) Σ_{i=1..m} z_i

where m is the total number of points contained in a voxel obtained by voxelizing the three-dimensional coordinates of the detected target object, and x_central, y_central and z_central are the x-, y- and z-axis coordinates of the centroid; all points in such a voxel are finally represented by the centroid, so that X = x_central, Y = y_central, Z = z_central in the three-dimensional coordinates (X, Y, Z) of the point cloud space.
The method achieves down-sampling with the voxel-grid approach: without destroying the geometric structure of the point cloud, the number of points is reduced while the shape characteristics are preserved, and noise points and outliers are removed to a certain degree. However, the voxel-filtered point cloud data are still unordered and the data volume remains large and difficult to process, so a KD tree is required, which greatly reduces time consumption while keeping the search and registration of associated point cloud points in real time.
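The voxel-grid down-sampling described above can be sketched with plain dictionary bucketing; a production system would more likely use a library filter such as PCL's VoxelGrid, so this is an illustrative stand-in with a 1 cm leaf size matching the 1 cubic centimeter voxel in the text.

```python
import math
from collections import defaultdict

def voxel_downsample(points, leaf=0.01):
    """Replace all points in each `leaf`-sized voxel by their centroid."""
    buckets = defaultdict(list)
    for p in points:
        # Integer voxel index along each axis identifies the containing voxel
        key = tuple(math.floor(c / leaf) for c in p)
        buckets[key].append(p)
    out = []
    for pts in buckets.values():
        m = len(pts)   # m = number of points in this voxel
        out.append(tuple(sum(q[d] for q in pts) / m for d in range(3)))
    return out
```

Two nearby returns falling in the same centimeter cube collapse to one centroid point, while isolated points pass through unchanged.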
It should be noted that the above numbering of the embodiments of the present invention is merely for description and does not represent the merits of the embodiments. The terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in a process, apparatus, article, or method that comprises the element.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method of the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but in many cases, the former is a better implementation. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (9)
1. A method for world coordinate positioning of a road projectile in a mixed gaussian domain, said method comprising the steps of:
S1: collect an image with a camera, and determine the foreground or background attribution of each pixel in the collected image using a mixed Gaussian distribution model with a plurality of Gaussian distributions; continuously update the model and perform shadow pixel detection, extract the required foreground, and apply morphological erosion and dilation closing operations to the foreground to obtain a relatively complete foreground;
S2: analyze the contour type of the real-time image acquired at time t using a pixel mean value method, so as to determine whether the contour is a foreground or shadow contour;
S3: introduce a MatchShapes operator and match the target contour with the shadow contour using the translation, scaling and rotation invariance of the Hu moments; pre-store the contours of the object shadows that have been successfully matched, and update and screen them; initialize and store the suspected-projectile shadow contour template and the pose of the matched result;
S4: obtain the suspected projectile located after the step S3, perform shadow elimination on the new foreground using the method of the step S2 to obtain the foreground target; perform object shadow matching on the target in the foreground using the template to complete tracking, and output the coordinates of the projectile obtained by tracking and positioning;
and S5: collect three-dimensional point cloud data of the projectile with a laser radar, establish a three-dimensional KD tree, arrange the collected unordered three-dimensional point cloud data into ordered three-dimensional point cloud data, then calculate Euclidean distances to cluster the ordered three-dimensional point cloud data, convert the clustered ordered three-dimensional point cloud data into projectile coordinates in a two-dimensional plane by matrix transformation, and fuse the converted two-dimensional plane projectile coordinates with the projectile coordinates output in the step S4 to complete the world coordinate positioning of the road projectile.
2. The method according to claim 1, wherein the mixed Gaussian distribution model in the step S1 is as follows:

p(x^(t) | χ_T, BG + FG) = Σ_{m=1..M} π_m · η(x^(t); μ̂_m, σ̂_m²·I)

where M is the number of Gaussian distributions in the mixed Gaussian distribution model; χ_T = {x^(t), ..., x^(t−T)} is the set of samples collected within the time window T up to time t, indicating that new samples are continuously added within the window T to update the background model and adapt to changes in a complex environment; BG and FG denote the background component and the foreground component, respectively; π_m are the weights of the M mixture models; η is the Gaussian probability density function; μ̂_m and σ̂_m²·I are the mean and covariance matrix of the pixel at time t, where σ̂_m² is the estimated Gaussian distribution variance and I is the identity matrix.
3. The method according to claim 1, wherein shadow pixel point detection is performed on the image during the continuous updating process of the Gaussian mixture distribution model in the step S1, and the method comprises the following steps:
S11: construct the discriminant of a shadow pixel S_t(x, y), which equals 1 when the following three conditions all hold and 0 otherwise:

α ≤ I_t(x, y)^V / B_t(x, y)^V ≤ β
|I_t(x, y)^H − B_t(x, y)^H| ≤ τ_H
|I_t(x, y)^S − B_t(x, y)^S| ≤ τ_S

where I_t(x, y) is the real-time image acquired at time t in the step S1; B_t(x, y) is the corresponding background image at time t; the superscripts H, S and V denote the HSV components obtained by converting the images acquired in the step S1 from RGB to HSV space; α is the first and β the second adjustment parameter of the luminance sensitivity for shadow detection; τ_H is the first and τ_S the second adjustment parameter of the noise sensitivity for shadow detection;
and S12: judge whether each pixel in the detected image satisfies the shadow pixel discriminant constructed in the step S11, and if so, mark the pixel as a shadow pixel.
4. The method according to claim 1, wherein the update equations for continuously updating the mixed Gaussian distribution model in the step S1 are as follows:

π̂_m ← π̂_m + α·(o_m^(t) − π̂_m)
μ̂_m ← μ̂_m + o_m^(t)·(α / π̂_m)·δ_m
σ̂_m² ← σ̂_m² + o_m^(t)·(α / π̂_m)·(δ_m^T·δ_m − σ̂_m²)

where δ_m = x^(t) − μ̂_m is the central vector; o_m^(t) is the attribution factor of the pixel x^(t) of the real-time image at time t to each distribution model, equal to 1 for the best-matching distribution model and 0 for the remaining models; α yields an exponentially decaying envelope that limits the influence of old data;

the mixed Gaussian distribution models are continuously updated, and whether the pixel x^(t) of the real-time image at time t conforms to an existing Gaussian distribution is judged by whether its Mahalanobis distance D_m(x^(t)) is less than three standard deviations; the Mahalanobis distance of the pixel x^(t) of the real-time image at time t is calculated as follows:

D_m²(x^(t)) = δ_m^T·δ_m / σ̂_m²
5. The method according to claim 1, wherein the discriminant used by the pixel mean value method in the step S2 to analyze the contour type of the real-time image acquired at time t is as follows:

P = (1/I) Σ_{i=1..I} b_w(i)

where I is the total number of pixels of the real-time image acquired at time t, i indexes the i-th pixel of that image, and b_w(i) is the target pixel value at the i-th pixel; when P is larger than 191, the target foreground points in the acquired real-time image at time t outnumber the shadow points, and the contour target of the acquired real-time image at time t is judged to be a foreground contour.
6. The method according to claim 1, wherein the step S5 of establishing a three-dimensional KD tree and arranging the collected unordered three-dimensional point cloud data into ordered three-dimensional point cloud data comprises the following steps:
S51: acquire unordered three-dimensional point cloud data T = {p_1, p_2, p_3, ...} with the laser radar, where p_i = (x_i, y_i, z_i); initialize the segmentation axis by calculating the variance of the point cloud position data in each dimension and taking the dimension with the largest variance as the segmentation hyperplane, denoted r;
s52, retrieving the current point cloud position data according to the segmented hyperplane dimension, finding median data, and putting the median data on a current node, wherein the root node corresponds to a hyperplane area of a three-dimensional space containing T;
s53, dividing the dimension of the current hyperplane into two sub hyperplane dimensions, dividing all values smaller than the median into the dimension of the left sub hyperplane, and dividing all values larger than or equal to the median into the dimension of the right sub hyperplane;
S54: update the segmentation axis r obtained in the step S51, store the points assigned to each sub-hyperplane dimension under the current root node, and update r by the formula r = (r + 1) % 3, where % denotes the remainder operation;
S55: repeat the step S52 on the data in the left sub-hyperplane dimension obtained in the step S53 until no data remains in that sub-dimension, obtaining the left node; repeat the step S52 on the data in the right sub-hyperplane dimension obtained in the step S53 until no data remains in that sub-dimension, obtaining the right node;
S56: output the KD tree corresponding to the point cloud, and order the collected unordered three-dimensional point cloud data according to that KD tree to form ordered three-dimensional point cloud data.
7. The method according to claim 6, wherein the Euclidean distance in the step S5 is calculated as follows:

d(p_i, p_j) = sqrt((x_i − x_j)² + (y_i − y_j)²), i, j = 1, 2, ..., n

where x_i is the abscissa and y_i the ordinate of the i-th point cloud datum in the ordered three-dimensional point cloud data, and n is the total number of point cloud data in the ordered three-dimensional point cloud data.
8. The method according to claim 1, wherein the step S5 of converting the clustered ordered three-dimensional point cloud data into coordinates of the projectile in the two-dimensional plane by means of matrix transformation comprises the following steps:
1) Construct a transfer matrix M, and transfer the three-dimensional coordinates (X, Y, Z) of the target projectile in the point cloud space acquired by the laser radar into the coordinates (u, v) of the target projectile in the two-dimensional plane:

Z_C · [u, v, 1]^T = M · [X, Y, Z, 1]^T

where m_ab is the transfer parameter in row a, column b of the transfer matrix M, with a = 1, 2, 3 and b = 1, 2, 3, 4; Z_C is the z-axis coordinate of the camera coordinate system in which the camera is located;
2) Calculate the coordinates (u, v) in the two-dimensional plane of the target projectile from the transfer matrix as follows:

u = (m_11·X + m_12·Y + m_13·Z + m_14) / (m_31·X + m_32·Y + m_33·Z + m_34)
v = (m_21·X + m_22·Y + m_23·Z + m_24) / (m_31·X + m_32·Y + m_33·Z + m_34)

3) Step 2) yields a series of linear equations; solving them gives the calibration parameters, and thus the coordinates (u, v) of the target projectile in the two-dimensional plane.
9. The method according to claim 8, wherein for the three-dimensional coordinates (X, Y, Z) in the point cloud space in step 1), the acquired three-dimensional data are divided according to a voxel grid: a three-dimensional voxel grid with a volume of 1 cubic centimeter is created, and within each voxel the centroid of all contained points approximately represents the other points in the voxel, the centroid being calculated as follows:

x_central = (1/m) Σ_{i=1..m} x_i,  y_central = (1/m) Σ_{i=1..m} y_i,  z_central = (1/m) Σ_{i=1..m} z_i

where m is the total number of points contained in a voxel obtained by voxelizing the three-dimensional coordinates of the detected target object, and x_central, y_central and z_central are the x-, y- and z-axis coordinates of the centroid; all points in such a voxel are finally represented by the centroid, so that X = x_central, Y = y_central, Z = z_central in the three-dimensional coordinates (X, Y, Z) of the point cloud space.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211540766.XA CN115760898A (en) | 2022-12-02 | 2022-12-02 | World coordinate positioning method for road sprinklers in mixed Gaussian domain |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115760898A true CN115760898A (en) | 2023-03-07 |
Family
ID=85342839
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211540766.XA Pending CN115760898A (en) | 2022-12-02 | 2022-12-02 | World coordinate positioning method for road sprinklers in mixed Gaussian domain |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115760898A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117522824A (en) * | 2023-11-16 | 2024-02-06 | 安徽大学 | Multi-source domain generalization cloud and cloud shadow detection method based on domain knowledge base |
CN117522824B (en) * | 2023-11-16 | 2024-05-14 | 安徽大学 | Multi-source domain generalization cloud and cloud shadow detection method based on domain knowledge base |
CN117975407A (en) * | 2024-01-09 | 2024-05-03 | 湖北鄂东长江公路大桥有限公司 | Road casting object detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||