CN107194989B - Traffic accident scene three-dimensional reconstruction system and method based on unmanned aerial vehicle (UAV) aerial photography - Google Patents


Info

Publication number
CN107194989B
CN107194989B (application CN201710343879.3A)
Authority
CN
China
Prior art keywords: image, dimensional, accident scene, traffic accident, aerial vehicle
Prior art date
Legal status
Active
Application number
CN201710343879.3A
Other languages
Chinese (zh)
Other versions
CN107194989A (en)
Inventor
张纪升
刘晓锋
耿杰
牛树云
张凡
Current Assignee
Research Institute of Highway Ministry of Transport
Original Assignee
Research Institute of Highway Ministry of Transport
Priority date
Filing date
Publication date
Application filed by Research Institute of Highway Ministry of Transport
Priority to CN201710343879.3A
Publication of CN107194989A
Application granted
Publication of CN107194989B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T — CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 — Road transport of goods or passengers
    • Y02T10/10 — Internal combustion engine [ICE] based vehicles
    • Y02T10/40 — Engine management systems

Abstract

The invention discloses a traffic accident scene three-dimensional reconstruction system based on unmanned aerial vehicle (UAV) aerial photography. The system comprises a UAV system, a ground control platform and an accident scene three-dimensional image processing system. The UAV system comprises a UAV body for carrying and flying, an inertial measurement unit, a positioning and orientation module, a communication module and an onboard camera; the ground control platform is in communication connection with the communication module of the UAV system to realize control communication and picture transmission; the accident scene three-dimensional image processing system is in communication connection with the ground control platform to realize image transmission. By setting the flight route and waypoints of the UAV, images of the road traffic accident scene are acquired from different heights and different angles, and a three-dimensional traffic accident scene model is then built using multi-view stereo vision processing, fully exploiting the UAV's maneuverability, flexibility and good field of view.

Description

Traffic accident scene three-dimensional reconstruction system and method based on unmanned aerial vehicle (UAV) aerial photography
Technical Field
The invention relates to the technical field of intelligent transportation, and in particular to a traffic accident scene three-dimensional reconstruction system and reconstruction method based on unmanned aerial vehicle (UAV) aerial photography.
Background
With the rapid growth of motor vehicle ownership, some drivers' safe-driving awareness remains low; combined with adverse road, weather and other conditions, road traffic accidents occur. After a traffic accident, traffic police usually adopt temporary traffic control measures, such as restricting or even closing the road, and then measure and photograph the accident scene to obtain first-hand data for subsequent accident identification and responsibility division. However, the conventional method has several drawbacks. First, a long road-closure time (e.g., more than one hour) is required, which seriously reduces the traffic capacity of the road. Second, the accident scene contains accident vehicles, casualties, scattered objects and vehicle brake marks, and rescue and other work must proceed at the same time, so the scene is at risk of being disturbed. Third, traditional measurement and photography mainly capture local, two-dimensional views of the scene, making it difficult to collect comprehensive accident information from a panoramic perspective (including accident vehicles, casualties, scattered objects, brake marks and the road environment).
Lu Guangquan and Li Yibing, in the Journal of Transportation Engineering and Information (2005, Vol. 3, No. 3, pp. 63-67), published "Traffic accident photogrammetry based on ordinary digital cameras and its research progress", which surveyed and envisaged two-dimensional and three-dimensional modeling methods for traffic accident scenes. However, these methods remain at the research stage, are affected by camera calibration and reconstruction accuracy, and are still far from practical application.
Chinese patent document CN200710045440.9 discloses a method for reconstructing a car collision accident based on photogrammetry and the external contour of the car body: photogrammetry is performed on the deformed car and on an intact car of the same model, a three-dimensional numerical model of the external contour is built, a finite element model is built for simulation, and the speed and collision angle at the moment of the accident are determined. The method requires multiple calibration objects to be placed for photogrammetry and imposes shooting-angle requirements, and it does not process other accident-scene information (such as brake marks and the road environment), which greatly limits its practicality.
Chinese patent document CN106295556 discloses a road detection method based on aerial images from a small UAV, in which road and non-road pixels are manually labeled through human-computer interaction, clustering and modeling are performed, and a max-flow algorithm is used to detect the road region. However, this approach does not consider three-dimensional modeling of traffic accident scenes.
In addition, aerial photo quality is easily degraded by poor light, object occlusion and similar factors, which makes feature-point extraction and image matching difficult.
Disclosure of Invention
The invention aims to overcome the technical defects of the prior art and provides a traffic accident scene three-dimensional reconstruction system and reconstruction method based on UAV aerial photography.
The technical scheme adopted for realizing the purpose of the invention is as follows:
a traffic accident scene three-dimensional reconstruction system based on unmanned aerial vehicle aircraft aerial photography, which comprises,
the unmanned plane system comprises an unmanned plane body for carrying and flying, an inertial measurement unit, a positioning and orientation module, a communication module and an airborne camera, wherein the inertial measurement unit is used for measuring three-axis attitude angle and acceleration information of the unmanned plane; the positioning and orientation module is used for generating real-time navigation data and POS data of each aerial image;
the ground control platform is in communication connection with the communication module of the unmanned plane system to realize control communication and picture information transmission;
the accident scene three-dimensional image processing system is in communication connection with the ground control platform to realize image information transmission.
The aircraft body is a four-rotor unmanned aircraft, and the onboard camera is mounted on the body through a 360-degree rotating gimbal module.
A traffic accident scene three-dimensional reconstruction method based on unmanned aerial vehicle aircraft aerial photography, which comprises the following steps,
1) A shooting step, wherein the unmanned plane takes the accident vehicle as a center, flies around the accident vehicle, hovers and shoots at a plurality of waypoints, and the shooting picture is transmitted to an accident scene three-dimensional image processing system in real time,
2) Screening the images, namely screening a preset number of aerial photos according to the content overlapping rate so as to facilitate image feature extraction and matching;
3) Convolution calculation of images: weighted convolution is applied to the screened images along the horizontal and vertical gradient directions using the Sobel and Laplacian operators, improving the robustness of the images to light changes and object occlusion. The convolution calculations in the horizontal and vertical directions are, respectively:

C_x = α(S_x ⊗ IM) + β(L_x ⊗ IM) + b
C_y = α(S_y ⊗ IM) + β(L_y ⊗ IM) + b

where IM is the gray-scale image matrix of the screened image, ⊗ is the convolution operation, S_x is the Sobel horizontal operator, S_y is the Sobel vertical operator, L_x is the Laplace horizontal operator, L_y is the Laplace vertical operator, and b is an offset with value range 0.1-0.3. When the light is poor, α ranges over 0.6-1 and β over 0-0.4; when an object is occluded, α ranges over 0-0.4 and β over 0.6-1; when the light is poor and an object is occluded, α ranges over 0.5-0.6 and β = 1 − α.

The convolution value of the image is:

C = sqrt(C_x² + C_y²)

From the convolution result, the convolved gray-scale image is obtained;
4) Importing POS data: importing longitude, latitude, altitude and course information of the processed image;
5) Image feature extraction and matching: feature points of the traffic accident scene images are extracted and matched using the scale-invariant feature transform (SIFT) algorithm;
6) Establishing sparse and dense point cloud models: a structure-from-motion (SfM) algorithm of multi-view stereo vision is applied to calculate the rotation and translation matrices between each matched image pair and the three-dimensional coordinates of the feature points, yielding a sparse three-dimensional point cloud; then, using the clustering views for multi-view stereo (CMVS) and patch-based multi-view stereo (PMVS) algorithms, the sparse point cloud obtained by SfM is taken as the seed-point input, the CMVS algorithm clusters the image sequence by view angle to reduce the data volume of dense reconstruction, and the PMVS algorithm, based on a micro-patch model, diffuses the seed points outward to obtain an oriented spatial point cloud (patches), completing dense reconstruction under the constraints of local photometric consistency and global visibility;
7) Three-dimensional model meshing and texturing: the surface mesh of the dense point cloud model is reconstructed using a Poisson surface reconstruction algorithm, and the surface texture information is mapped onto the mesh model;
8) Three-dimensional reconstruction quality evaluation: after the three-dimensional model of the accident scene is reconstructed, the degree of similarity between the reference image and the processed image of each evaluation object is analyzed; if it does not reach the preset value, modeling is performed again.
In the step 1), the flight radius is 3-10m, the number of waypoints is 12-24, the unmanned plane hovers at each waypoint for 2-5 seconds, and the photo is taken at an angle of 30-90 degrees.
The onboard camera of the UAV takes pictures at horizontal, 45° and 90° angles.
In the step 1), the unmanned plane takes the traffic accident scene as an object and is divided into three layers of a low layer, a middle layer and a high layer, wherein the height of the low layer is 2-5m, the height of the middle layer is 10-15m, and the height of the high layer is 20-25m.
In the step 1), the damaged position of the vehicle collision is subjected to aerial photographing at the distance of 2-5m from the horizontal, 45 DEG and vertical angles.
When more aerial images than needed are available, redundant low-quality images must be removed. Image sharpness is analyzed using the Brenner gradient function:

D(I) = Σ_x Σ_y |I(x+2, y) − I(x, y)|²

where I(x, y) is the gray value of pixel (x, y) of image I and D(I) is the sharpness score; the images whose D(I) values rank highest are retained and the others are deleted.
In step 8), the evaluation index is the structural similarity (SSIM), calculated as:

SSIM(x, y) = (2 μ_x μ_y + c_1)(2 σ_xy + c_2) / [(μ_x² + μ_y² + c_1)(σ_x² + σ_y² + c_2)]

where x and y are the reference image and the processed image, μ_x and μ_y are their means, σ_x and σ_y their standard deviations, σ_xy is the covariance of x and y, and c_1 and c_2 are constants used to maintain numerical stability. When the SSIM value is ≤ 0.85, the traffic accident scene must be remodeled; when the SSIM value is > 0.85, the three-dimensional model of the traffic accident scene meets the accuracy requirement.
The evaluation objects include accident vehicles, brake marks and accident scatterers.
Compared with the prior art, the invention has the beneficial effects that:
By setting the flight route and waypoints of the UAV, images of the road traffic accident scene are acquired from different heights and different angles, and a three-dimensional accident scene model is then built using multi-view stereo vision processing. Technically, the method fully exploits the UAV's maneuverability, flexibility and good field of view; because the POS data of the aerial images are used, no ground control points need to be set and no camera calibration is required, and the influence of poor light, object occlusion and similar factors on reconstruction quality is reduced. In use, the method can significantly shorten the time during which the road is restricted or closed, reduce the risk of the accident scene being disturbed, and acquire comprehensive accident-scene information from a global perspective, providing complete information for road traffic accident identification and responsibility division.
Drawings
Fig. 1 is a schematic diagram of the quad-rotor UAV system.
Fig. 2 is a flow chart of information between modules of the unmanned aircraft system.
Fig. 3 is a configuration diagram of an accident scene three-dimensional image processing system.
Fig. 4 is a horizontal aerial roadmap for an unmanned aircraft.
Fig. 5 is a vertical aerial height view of the unmanned aerial vehicle.
Fig. 6 is an aerial angle view of the damaged portion of the accident vehicle.
Fig. 7 is a SIFT algorithm flow chart.
Fig. 8 is a flow chart of the RANSAC algorithm.
Fig. 9 is a SFM algorithm flow chart.
Fig. 10 is a CMVS algorithm flow chart.
Fig. 11 is a PMVS algorithm flow chart.
Fig. 12 is a flow chart of a poisson surface reconstruction algorithm.
Fig. 13 is a system workflow diagram.
Fig. 14 is a sparse point cloud model.
Fig. 15 is a dense point cloud model.
Fig. 16 is a three-dimensional model of a traffic accident scene.
Detailed Description
The invention is described in further detail below with reference to the drawings and the specific examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
As shown in the figures, the traffic accident scene three-dimensional reconstruction system based on UAV aerial photography of the invention comprises,
the unmanned plane system comprises an unmanned plane body for carrying and flying, an inertial measurement unit, a positioning and orientation module, a communication module and an airborne camera, wherein the inertial measurement unit is used for measuring three-axis attitude angle and acceleration information of the unmanned plane; the positioning and orientation module comprises a differential GPS which is matched with the inertial measurement unit to generate real-time navigation data and POS data of each aerial image, wherein the data comprise longitude, latitude, altitude, course and the like of each aerial image;
the ground control platform, which is in communication connection with the communication module of the UAV system to realize control communication; the ground control platform also comprises a picture transmission station that links to the UAV's communication module to realize real-time transmission of accident scene images;
the accident scene three-dimensional image processing system, which is in communication connection with the ground control platform to realize image transmission. The accident scene three-dimensional image processing system is a cloud computing terminal or a computing workstation connected to the ground control platform over a network. The system may further comprise a data storage platform synchronously connected with the ground control platform, such as a police storage platform, to realize impartial preservation of the original data; through data transfer, remote storage and centralized computation of the original data are realized, and while data security is ensured, various resources are integrated via the Internet, improving the social service effect.
The aircraft body is a four-rotor unmanned aircraft, and the onboard camera is mounted on the body through a 360-degree rotating gimbal module, so that the camera can take aerial photographs of the accident scene from different angles.
The UAV is controlled by the ground control platform, which sends the control instructions and relays the aerial images. Ground control platforms can therefore be deployed at multiple points in a city, enabling nearby control, reducing flight time and improving response efficiency, while the centralized accident scene three-dimensional image processing system reduces processing cost and ensures uniform, faithful reproduction of the data.
The reconstruction method of the present invention comprises the steps of,
1) Flight route setting: the UAV first flies to the scene and, taking the accident vehicle as the center, flies around it in a circle. The flight radius is preferably 3-10 m and the number of waypoints 12-24, e.g., 16, i.e., an angle of 22.5° between adjacent waypoints. The UAV hovers at each waypoint for 2-5 seconds and the onboard camera takes photos at an angle of 30-90° (Fig. 4). The photos are transmitted in real time through the communication module to the ground control platform, which forwards them to the accident scene three-dimensional image processing system.
In addition, taking the accident scene as the object, the UAV is flown at three height layers, low, middle and high (Fig. 5), and aerial accident scene images are taken at each layer according to the flying and photographing method above; the low layer is at 2-5 m, the middle layer at 10-15 m and the high layer at 20-25 m. Furthermore, where necessary, the damaged position of the vehicle collision is photographed from a distance of 2-5 m at horizontal, 45° and vertical angles (Fig. 6).
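As an illustration of the flight-route setting above, the circular waypoint layout can be sketched as follows. This is a hypothetical helper, not part of the patent; coordinates are in a local metric frame centered on the accident vehicle, and the heading convention (each waypoint faces the center) is an assumption.

```python
import math

def circular_waypoints(center_x, center_y, radius_m, n_waypoints, altitude_m):
    """Generate evenly spaced hover waypoints on a circle around the
    accident vehicle; each waypoint's heading points back at the center."""
    waypoints = []
    for i in range(n_waypoints):
        theta = 2 * math.pi * i / n_waypoints   # e.g. 22.5 deg spacing for 16 waypoints
        x = center_x + radius_m * math.cos(theta)
        y = center_y + radius_m * math.sin(theta)
        heading_deg = (math.degrees(theta) + 180.0) % 360.0  # face the center
        waypoints.append((x, y, altitude_m, heading_deg))
    return waypoints

# 16 waypoints, 5 m radius, middle layer at 12 m, as in the example flight plan
plan = circular_waypoints(0.0, 0.0, 5.0, 16, 12.0)
```

Such a plan could then be uploaded to the ground control platform as the hover-and-shoot route.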
(2) Screening treatment of images: photos are taken continuously at multiple waypoints so that the content overlap rate of adjacent aerial accident photos is at least 60%, which facilitates image feature extraction and matching. In addition, when more aerial images than needed are available, redundant low-quality images must be removed. Image sharpness is analyzed using the Brenner gradient function:

D(I) = Σ_x Σ_y |I(x+2, y) − I(x, y)|²

where I(x, y) is the gray value of pixel (x, y) of image I and D(I) is the sharpness score; the images whose D(I) values rank in the top 10%-30% are retained and the others are deleted.
(3) Convolution calculation of images: the convolution objects are the screened images, and the calculation involves the gray value of every pixel. Specifically, weighted convolution is applied to the screened images along the horizontal and vertical gradient directions using the Sobel and Laplacian operators, improving the robustness of the images to light changes and object occlusion. The convolution calculations in the horizontal and vertical directions are, respectively:

C_x = α(S_x ⊗ IM) + β(L_x ⊗ IM) + b
C_y = α(S_y ⊗ IM) + β(L_y ⊗ IM) + b

where IM is the gray-scale image matrix of the screened image, ⊗ is the convolution operation, S_x is the Sobel horizontal operator, S_y is the Sobel vertical operator, L_x is the Laplace horizontal operator, L_y is the Laplace vertical operator, and b is an offset with value range 0.1-0.3. When the light is poor, α ranges over 0.6-1 and β over 0-0.4; when an object is occluded, α ranges over 0-0.4 and β over 0.6-1; when the light is poor and an object is occluded, α ranges over 0.5-0.6 and β = 1 − α. "Object occlusion" means that the covered proportion of the image area exceeds a predetermined value, e.g., 15% or more; "poor light" means that the ambient illuminance at image acquisition is below a predetermined value, e.g., 50 lux or less.

The convolution value of the image is:

C = sqrt(C_x² + C_y²)

From the convolution result, a convolved gray-scale image with good robustness to light changes and object occlusion is obtained.
(4) Importing POS data: the longitude, latitude, altitude, heading and related information of the processed images are imported.
(5) Image feature extraction and matching: feature points of the accident scene images, i.e., the pictures produced by the convolution calculation in step 3, are extracted and matched using the scale-invariant feature transform (SIFT) algorithm. First, feature points are extracted from each convolved image with the SIFT operator and their feature descriptors are obtained; then, image pairs that may overlap are selected according to the POS constraint relations of the images; finally, the descriptor pairs are matched, and mismatches are rejected using the random sample consensus (RANSAC) algorithm.
The feature extraction and matching flow of the SIFT algorithm is: build the scale space and detect image extrema; locate the feature points precisely; compute the feature orientations; construct the feature descriptors; and register the data across view angles according to the descriptors and a similarity measure (Fig. 7). The flow of the RANSAC algorithm is: randomly draw 4 sample correspondences and compute a transformation matrix, denoted model M; compute the projection error of each datum under M and add it to the inlier set if the error is below a threshold; if the current inlier set has more elements than the best inlier set so far, replace the best set with the current one; and iterate (Fig. 8).
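The RANSAC loop described above can be sketched as follows. For brevity this sketch fits an affine transform from 3-point minimal samples rather than the homography from 4 samples in the patent's flow; the structure (sample, fit, score inliers, keep the best set) is the same, and all names are illustrative.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform mapping src (N,2) to dst (N,2)."""
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])          # (N,3) homogeneous inputs
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # (3,2) affine parameters
    return M

def ransac_affine(src, dst, n_iters=200, threshold=1.0, seed=0):
    """RANSAC: draw a minimal sample, fit a model, count inliers by
    reprojection error, and keep the largest inlier set found."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), bool)
    for _ in range(n_iters):
        idx = rng.choice(len(src), size=3, replace=False)
        M = fit_affine(src[idx], dst[idx])
        proj = np.hstack([src, np.ones((len(src), 1))]) @ M
        err = np.linalg.norm(proj - dst, axis=1)
        inliers = err < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

Feeding it SIFT correspondences where a few matches are wrong leaves those matches outside the returned inlier mask.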
(6) Establishing sparse and dense point cloud models: a structure-from-motion (SfM) algorithm of multi-view stereo vision is applied to calculate the rotation and translation matrices between each matched image pair and the three-dimensional coordinates of the feature points, yielding a sparse three-dimensional point cloud. Then, using the clustering views for multi-view stereo (CMVS) and patch-based multi-view stereo (PMVS) algorithms, the sparse point cloud obtained by SfM is taken as the seed-point input; the CMVS algorithm clusters the image sequence by view angle to reduce the data volume of dense reconstruction, and the PMVS algorithm, based on a micro-patch model, diffuses the seed points outward to obtain an oriented spatial point cloud (patches), completing dense reconstruction under the constraints of local photometric consistency and global visibility. The SfM flow is shown in Fig. 9, the CMVS flow in Fig. 10 and the PMVS flow in Fig. 11.
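The geometric core of the SfM step, computing a feature point's three-dimensional coordinates once the rotation and translation between two views are known, can be sketched as a linear (DLT) triangulation. This is an illustrative sketch of the standard technique, not the patent's implementation; normalized camera coordinates are assumed.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation: given 3x4 projection matrices P1, P2
    and matched normalized image coordinates x1, x2, build A X = 0 from the
    cross-product constraints and solve for the homogeneous 3-D point via SVD."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # right singular vector of smallest singular value
    return X[:3] / X[3]        # de-homogenize
```

Repeating this over every matched feature pair, with the per-pair rotation and translation recovered by SfM, produces the sparse point cloud that seeds CMVS/PMVS.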
(7) Three-dimensional model meshing and texturing: the surface mesh of the dense point cloud model is reconstructed using a Poisson surface reconstruction algorithm, and the surface texture information is mapped onto the mesh model. The texturing flow is: parameterize the model surface; optimize the target image and reduce texture seams; and correct the model colors to reduce texture discontinuities. The Poisson surface reconstruction flow is shown in Fig. 12.
(8) Three-dimensional reconstruction quality evaluation: after the three-dimensional accident scene model is rebuilt, the reconstruction quality must be evaluated. The evaluation objects include the accident vehicles, brake marks and scattered objects. For evaluation, orthographic aerial images of the accident vehicles, brake marks and scattered objects are selected as reference images, the corresponding orthographic views of the three-dimensional accident scene model are selected as processed images, and the similarity between reference and processed images is analyzed. The evaluation index is the structural similarity (SSIM), calculated as:

SSIM(x, y) = (2 μ_x μ_y + c_1)(2 σ_xy + c_2) / [(μ_x² + μ_y² + c_1)(σ_x² + σ_y² + c_2)]

where x and y are the reference image and the processed image, μ_x and μ_y are their means, σ_x and σ_y their standard deviations, σ_xy is the covariance of x and y, and c_1 and c_2 are constants used to maintain numerical stability. When the SSIM value is ≤ 0.85, the traffic accident scene must be remodeled; when the SSIM value is > 0.85, the three-dimensional model of the traffic accident scene meets the accuracy requirement.
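The SSIM check can be sketched in NumPy as a single-window, whole-image form of the formula above. The constants follow the common K1 = 0.01, K2 = 0.03, L = 255 convention, which the patent does not specify, and the function names are illustrative.

```python
import numpy as np

def ssim_global(x, y, c1=6.5025, c2=58.5225):
    """Global SSIM between two grayscale images: one window covering the
    whole image, with the standard stabilizing constants c1, c2."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    sxy = ((x - mx) * (y - my)).mean()   # covariance of x and y
    return ((2 * mx * my + c1) * (2 * sxy + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (sx ** 2 + sy ** 2 + c2))

def meets_accuracy(reference, processed):
    """Decision rule from the text: remodel when SSIM <= 0.85."""
    return ssim_global(reference, processed) > 0.85
```

Identical reference and processed images give SSIM = 1, while a structurally different rendering falls below the 0.85 threshold and triggers remodeling.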
In the reconstruction method, after the accident scene three-dimensional image processing system acquires the accident images, the images are screened to remove low-quality surplus images; convolution calculation improves the robustness of the images to light changes and object occlusion; the POS data of the images are imported; image features are extracted and matched; sparse and dense point cloud models are established and the three-dimensional point cloud model is meshed and textured; finally, the quality of the three-dimensional accident scene model is evaluated. Because the POS data of the aerial images are used, no ground control points need to be set and no camera calibration is required, and the influence of poor light, object occlusion and similar factors on reconstruction quality is reduced. In use, the method can significantly shorten the time during which the road is restricted or closed, reduce the risk of the accident scene being disturbed, and acquire comprehensive accident-scene information from a global perspective, providing complete information for road traffic accident identification and responsibility division.
Specifically, a traffic accident that a car collides with a bicycle occurs in a certain district, the road environment is a district road and greening plants, and the three-dimensional reconstruction steps of the traffic accident site are as follows:
(1) Flight route setting: taking the accident vehicle as the center, the UAV flies around it in a circle with a flight radius of 5 m and 16 waypoints, i.e., 22.5° between adjacent waypoints; the UAV hovers at each waypoint for 3 seconds and takes photos at 45° and 90° angles. In addition, taking the accident scene as the object, aerial accident scene images are taken at a height of 22 m according to the same flying and photographing method. Furthermore, the damaged position of the vehicle collision is photographed at a 45° angle from a distance of 4 m, giving 80 aerial images in total.
(2) Image acquisition and processing: based on the requirement that the content overlap rate of the aerial accident photos be at least 60% and on the image sharpness ranking, 24 images are selected for subsequent work.
(3) Convolution calculation of images: weighted convolution is applied to the screened images along the horizontal and vertical gradient directions using the Sobel and Laplacian operators, improving robustness to light changes and object occlusion; the offset b is set to 0.2, α to 0.6 and β to 0.4.
(4) Importing POS data: the longitude, latitude, altitude, heading and related information of the processed images are imported.
(5) Image feature extraction and matching: feature points of the traffic accident scene images are extracted and matched using the scale-invariant feature transform (SIFT) algorithm. First, feature points are extracted from each image with the SIFT operator and their corresponding feature descriptors are obtained; then, image pairs that may have an overlapping relation are selected according to the POS constraint relation of the images; finally, each pair of descriptors is matched, and mismatches are rejected using the random sample consensus (RANSAC) algorithm.
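The RANSAC mismatch rejection of step (5) can be illustrated with a deliberately simplified model. The toy sketch below models the mapping between matched keypoint sets as a pure 2-D translation so it stays self-contained; a real pipeline, like the one the patent describes, fits a fundamental matrix or homography instead. All names here are illustrative assumptions.

```python
import random
import numpy as np

def ransac_inliers(pts_a, pts_b, n_iter=200, tol=2.0, seed=0):
    """Toy RANSAC: repeatedly fit a 1-point translation model and keep
    the correspondence set with the largest consensus (inlier) support."""
    rng = random.Random(seed)
    best = np.zeros(len(pts_a), dtype=bool)
    for _ in range(n_iter):
        i = rng.randrange(len(pts_a))
        t = pts_b[i] - pts_a[i]                   # 1-point minimal sample
        err = np.linalg.norm(pts_a + t - pts_b, axis=1)
        inliers = err < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best
```

Given 20 correctly translated matches and 5 gross mismatches, the consensus step isolates exactly the 20 consistent correspondences.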
(6) Establishing sparse and dense point cloud models: a structure-from-motion (SfM) algorithm of multi-view stereo vision is applied to calculate the rotation and translation matrices between each matched image pair and the three-dimensional coordinates of the feature points, yielding a sparse three-dimensional point cloud, as shown in fig. 14. Using the Clustering Views for Multi-View Stereo (CMVS) and Patch-based Multi-View Stereo (PMVS) algorithms, the sparse point cloud obtained by the SfM algorithm serves as seed-point input: the CMVS algorithm clusters the image sequence by view angle to reduce the amount of data for dense reconstruction, and the PMVS algorithm, based on a small-patch model, diffuses the seed points outward to obtain spatially oriented point clouds (patches), completing dense reconstruction under the constraints of local photometric consistency and global visibility and producing the dense point cloud model shown in fig. 15.
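The core geometric operation inside the SfM stage, recovering a 3-D point from two matched observations once the camera poses are known, can be sketched with linear (DLT) triangulation. This is a standard textbook building block shown for illustration, not the patent's specific implementation; the function name is an assumption.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one feature point from two views.
    P1, P2 are 3x4 projection matrices; x1, x2 are (u, v) image coords."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)          # null-space of A via SVD
    X = vt[-1]
    return X[:3] / X[3]                  # inhomogeneous 3-D point
```

Repeating this over every matched feature pair produces the sparse point cloud that CMVS/PMVS then densify.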
(7) Three-dimensional model meshing and texturing: a Poisson surface reconstruction algorithm is used to reconstruct the surface mesh of the three-dimensional dense point cloud model, and the surface texture information is mapped onto the mesh model. The meshed and textured three-dimensional model of the traffic accident scene is obtained as shown in fig. 16.
(8) Three-dimensional reconstruction quality evaluation: after the three-dimensional model of the traffic accident scene is reconstructed, the reconstruction quality must be evaluated. An orthographic image of the accident vehicle taken by the unmanned aerial vehicle is selected as the reference image, the corresponding orthographic view of the accident vehicle rendered from the three-dimensional model as the processed image, and the similarity between the two is analyzed; the value of the evaluation index SSIM is 0.92, which satisfies the accuracy requirement for three-dimensional reconstruction.
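The SSIM comparison of step (8) can be sketched as follows. For brevity this sketch computes SSIM over a single global window; the standard formulation (and most libraries) uses a sliding Gaussian window, so treat this as an illustration of the index, not a drop-in replacement.

```python
import numpy as np

def ssim_global(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Single-window structural similarity between two 8-bit images:
    SSIM = ((2*mu_x*mu_y + c1)(2*cov_xy + c2)) /
           ((mu_x^2 + mu_y^2 + c1)(var_x + var_y + c2))."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score 1.0, and scores fall toward (or below) zero as structure diverges, so the 0.85 acceptance threshold of claim 7 is a straightforward comparison on this value.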
The foregoing is merely a preferred embodiment of the present invention. It should be noted that those skilled in the art may make modifications and adaptations without departing from the principles of the present invention, and such modifications and adaptations are also to be regarded as falling within the scope of the present invention.

Claims (10)

1. A traffic accident scene three-dimensional reconstruction method based on unmanned aerial vehicle aircraft is characterized by comprising the following steps,
1) A shooting step, wherein the unmanned plane, taking the accident vehicle as the center, flies around the accident vehicle, hovers and shoots at a plurality of waypoints, and the captured pictures are transmitted to an accident scene three-dimensional image processing system in real time;
2) Screening the images, namely screening a preset number of aerial photos according to the content overlapping rate so as to facilitate image feature extraction and matching;
3) Convolution calculation of images: weighted convolution is performed on the screened images in the horizontal and vertical gradient directions using the Sobel operator and the Laplacian operator, improving the robustness of the images to light changes and object occlusion; the convolution calculation formulas in the horizontal and vertical directions are respectively:

G_x = α(IM ⊗ S_x) + β(IM ⊗ L_x) + b
G_y = α(IM ⊗ S_y) + β(IM ⊗ L_y) + b

wherein IM is the gray-scale image matrix of the screened image, ⊗ is the convolution operation, S_x is the Sobel horizontal operator, S_y is the Sobel vertical operator, L_x is the Laplacian horizontal operator, L_y is the Laplacian vertical operator, and b is an offset with a value range of 0.1-0.3; when the light is poor, α ranges over 0.6-1 and β over 0-0.4; when an object is occluded, α ranges over 0-0.4 and β over 0.6-1; when the light is poor and an object is also occluded, α ranges over 0.5-0.6 and β = 1 - α;

the convolution value of the image is:

G = sqrt(G_x^2 + G_y^2)

according to the convolution calculation result of the image, the gray-scale image after convolution processing is obtained;
4) Importing POS data: importing longitude, latitude, altitude and course information of the processed image;
5) Image feature extraction and matching: extracting and matching feature points of the traffic accident scene images using the scale-invariant feature transform (SIFT) algorithm;
6) Establishing sparse and dense point cloud models: calculating the rotation and translation matrices between each matched image pair and the three-dimensional coordinates of the feature points using a structure-from-motion (SfM) algorithm of multi-view stereo vision, thereby obtaining a sparse three-dimensional point cloud; using the Clustering Views for Multi-View Stereo (CMVS) and Patch-based Multi-View Stereo (PMVS) algorithms with the sparse point cloud obtained by the SfM algorithm as seed-point input, the CMVS algorithm clusters the image sequence by view angle to reduce the amount of data for dense reconstruction, and the PMVS algorithm, based on a small-patch model, diffuses the seed points outward to obtain spatially oriented point clouds or patches, completing dense reconstruction under the constraints of local photometric consistency and global visibility;
7) Three-dimensional model meshing and texturing: reconstructing the surface mesh of the three-dimensional dense point cloud model using a Poisson surface reconstruction algorithm, and mapping the surface texture information onto the mesh model;
8) Three-dimensional reconstruction quality evaluation: after the three-dimensional model of the traffic accident scene is reconstructed, the similarity degree of the evaluation object between the reference image and the processing image is analyzed, and if the similarity degree does not reach a preset value, the modeling is performed again.
2. The method of three-dimensional reconstruction according to claim 1, wherein in said step 1), the flying radius is 3-10m, the number of waypoints is 12-24, and the unmanned aerial vehicle hovers at each waypoint for 2-5 seconds, and photographs are taken at an angle of 30 ° -90 °.
3. The three-dimensional reconstruction method according to claim 2, wherein the onboard camera of the unmanned aerial vehicle takes pictures at horizontal, 45°, and 90° angles.
4. The three-dimensional reconstruction method according to claim 1, wherein in the step 1), taking the traffic accident scene as the object, the flight height of the unmanned plane is divided into three layers, lower, middle and higher, wherein the height of the lower layer is 2-5 m, the height of the middle layer is 10-15 m, and the height of the higher layer is 20-25 m.
5. The method of claim 1, wherein in said step 1), the damaged position of the vehicle is photographed aerially from a distance of 2-5 m at horizontal, 45°, and vertical angles.
6. The three-dimensional reconstruction method according to claim 1, wherein, when more aerial images than needed have been captured, redundant low-quality images are removed; the image processing uses the Brenner gradient function to analyze image sharpness, with the calculation formula:

D(I) = Σ_x Σ_y |I(x+2, y) - I(x, y)|^2

wherein I(x, y) represents the gray value of pixel (x, y) of image I and D(I) is the sharpness calculation result; the images whose D(I) values rank highest are retained and the other images are deleted.
7. The three-dimensional reconstruction method according to claim 1, wherein in the step 8), the evaluation index is the structural similarity (SSIM), whose calculation formula is:

SSIM(x, y) = ((2 μ_x μ_y + c_1)(2 σ_xy + c_2)) / ((μ_x^2 + μ_y^2 + c_1)(σ_x^2 + σ_y^2 + c_2))

wherein x and y are the reference image and the processed image respectively, μ_x and μ_y are their mean values, σ_x and σ_y their standard deviations, σ_xy the covariance of x and y, and c_1 and c_2 constants used to maintain numerical stability; when the SSIM value is 0.85 or less, the traffic accident scene must be remodeled; when the SSIM value is greater than 0.85, the three-dimensional model of the traffic accident scene meets the accuracy requirement.
8. The three-dimensional reconstruction method according to claim 1, wherein the evaluation object includes an accident vehicle, a brake mark, and accident debris.
9. A traffic accident scene three-dimensional reconstruction system based on an unmanned aerial vehicle for realizing the three-dimensional reconstruction method according to any one of claims 1-8, characterized by comprising,
the unmanned plane system comprises an unmanned plane body for carrying and flying, an inertial measurement unit, a positioning and orientation module, a communication module and an airborne camera, wherein the inertial measurement unit is used for measuring three-axis attitude angle and acceleration information of the unmanned plane; the positioning and orientation module is used for generating real-time navigation data and POS data of each aerial image;
the ground control platform is in communication connection with the communication module of the unmanned plane system to realize control communication and picture information transmission;
the accident scene three-dimensional image processing system is in communication connection with the ground control platform to realize image information transmission.
10. The traffic accident scene three-dimensional reconstruction system based on unmanned aerial vehicle aerial photography according to claim 9, wherein the aircraft body is a four-rotor unmanned aerial vehicle, and the onboard camera is mounted on the unmanned aerial vehicle body via a 360° rotating gimbal module.
CN201710343879.3A 2017-05-16 2017-05-16 Traffic accident scene three-dimensional reconstruction system and method based on unmanned aerial vehicle aircraft aerial photography Active CN107194989B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710343879.3A CN107194989B (en) 2017-05-16 2017-05-16 Traffic accident scene three-dimensional reconstruction system and method based on unmanned aerial vehicle aircraft aerial photography

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710343879.3A CN107194989B (en) 2017-05-16 2017-05-16 Traffic accident scene three-dimensional reconstruction system and method based on unmanned aerial vehicle aircraft aerial photography

Publications (2)

Publication Number Publication Date
CN107194989A CN107194989A (en) 2017-09-22
CN107194989B true CN107194989B (en) 2023-10-13

Family

ID=59873250

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710343879.3A Active CN107194989B (en) 2017-05-16 2017-05-16 Traffic accident scene three-dimensional reconstruction system and method based on unmanned aerial vehicle aircraft aerial photography

Country Status (1)

Country Link
CN (1) CN107194989B (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107680378A (en) * 2017-11-07 2018-02-09 中车株洲电力机车有限公司 A kind of accident surveying method, system, equipment and computer-readable storage medium
CN109813281A (en) * 2017-11-20 2019-05-28 南京模幻天空航空科技有限公司 Navigation channel incident management system based on unmanned plane aerial photography technology
CN108171790B (en) * 2017-12-25 2019-02-15 北京航空航天大学 A kind of Object reconstruction method dictionary-based learning
CN108680137A (en) * 2018-04-24 2018-10-19 天津职业技术师范大学 Earth subsidence detection method and detection device based on unmanned plane and Ground Penetrating Radar
CN108805869A (en) * 2018-06-12 2018-11-13 哈尔滨工业大学 It is a kind of based on the extraterrestrial target three-dimensional reconstruction appraisal procedure of the reconstruction model goodness of fit and application
CN109191447B (en) * 2018-08-31 2021-11-19 宁波大学 Three-dimensional grid quality evaluation method based on geometric curvature analysis
CN110969858B (en) * 2018-09-29 2022-05-13 比亚迪股份有限公司 Traffic accident processing method and device, storage medium and electronic equipment
CN111216668A (en) * 2018-11-23 2020-06-02 比亚迪股份有限公司 Vehicle collision processing method and unmanned aerial vehicle fixing device
CN109931912A (en) * 2019-04-12 2019-06-25 成都睿铂科技有限责任公司 A kind of aviation oblique photograph method and device
CN110059101B (en) * 2019-04-16 2021-08-13 北京科基中意软件开发有限公司 Vehicle data searching system and method based on image recognition
CN111102967A (en) * 2019-11-25 2020-05-05 桂林航天工业学院 Intelligent navigation mark supervision system and method based on unmanned aerial vehicle
CN111080794B (en) * 2019-12-10 2022-04-05 华南农业大学 Three-dimensional reconstruction method for farmland on-site edge cloud cooperation
CN111080685A (en) * 2019-12-17 2020-04-28 北京工业大学 Airplane sheet metal part three-dimensional reconstruction method and system based on multi-view stereoscopic vision
CN111141264B (en) * 2019-12-31 2022-06-28 中国电子科技集团公司信息科学研究院 Unmanned aerial vehicle-based urban three-dimensional mapping method and system
US11403851B2 (en) * 2020-08-04 2022-08-02 Verizon Connect Development Limited Systems and methods for utilizing machine learning and other models to reconstruct a vehicle accident scene from video
CN112446958B (en) * 2020-11-13 2023-07-28 山东产研信息与人工智能融合研究院有限公司 Road traffic accident auxiliary processing method and system based on laser point cloud
CN113160406B (en) * 2021-04-26 2024-03-01 北京车和家信息技术有限公司 Road three-dimensional reconstruction method and device, storage medium and electronic equipment
CN114777744B (en) * 2022-04-25 2024-03-08 中国科学院古脊椎动物与古人类研究所 Geological measurement method and device in ancient organism field and electronic equipment
CN114677429B (en) * 2022-05-27 2022-08-30 深圳广成创新技术有限公司 Positioning method and device of manipulator, computer equipment and storage medium
CN115862341A (en) * 2022-12-04 2023-03-28 桂林理工大学 Unmanned aerial vehicle traffic accident processing system based on edge calculation
CN117392328B (en) * 2023-12-07 2024-02-23 四川云实信息技术有限公司 Three-dimensional live-action modeling method and system based on unmanned aerial vehicle cluster

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105721751A (en) * 2016-03-28 2016-06-29 中国人民解放军第三军医大学第三附属医院 Method for collecting information of night traffic accident scene
CN106027980A (en) * 2016-06-22 2016-10-12 沈阳天择智能交通工程有限公司 Flight control system for aerial survey of traffic accident
WO2017030737A1 (en) * 2015-08-20 2017-02-23 Motionloft, Inc. Object detection and analysis via unmanned aerial vehicle


Also Published As

Publication number Publication date
CN107194989A (en) 2017-09-22

Similar Documents

Publication Publication Date Title
CN107194989B (en) Traffic accident scene three-dimensional reconstruction system and method based on unmanned aerial vehicle aircraft aerial photography
Wierzbicki et al. Assesment of the influence of UAV image quality on the orthophoto production
CN110648389A (en) 3D reconstruction method and system for city street view based on cooperation of unmanned aerial vehicle and edge vehicle
KR100912715B1 (en) Method and apparatus of digital photogrammetry by integrated modeling for different types of sensors
KR102200299B1 (en) A system implementing management solution of road facility based on 3D-VR multi-sensor system and a method thereof
CN207068060U (en) The scene of a traffic accident three-dimensional reconstruction system taken photo by plane based on unmanned plane aircraft
CN110186468B (en) High-precision map making method and device for automatic driving
CN111429528A (en) Large-scale distributed high-precision map data processing system
Zietara Creating Digital Elevation Model (DEM) based on ground points extracted from classified aerial images obtained from Unmanned Aerial Vehicle (UAV)
Sužiedelytė Visockienė et al. Comparison of UAV images processing softwares
JP2022039188A (en) Position attitude calculation method and position attitude calculation program
CN112446915A (en) Picture-establishing method and device based on image group
CN116433865B (en) Space-ground collaborative acquisition path planning method based on scene reconstructability analysis
Amin et al. Reconstruction of 3D accident scene from multirotor UAV platform
Leberl et al. Aerial computer vision for a 3d virtual habitat
Schleiss et al. VPAIR--Aerial Visual Place Recognition and Localization in Large-scale Outdoor Environments
CN109003295B (en) Rapid matching method for aerial images of unmanned aerial vehicle
Simon et al. 3D MAPPING OF A VILLAGE WITH A WINGTRAONE VTOL TAILSITER DRONE USING PIX4D MAPPER.
Chaudhry et al. A comparative study of modern UAV platform for topographic mapping
KR102587445B1 (en) 3d mapping method with time series information using drone
Seong et al. UAV Utilization for Efficient Estimation of Earthwork Volume Based on DEM
Fernández-Hernandez et al. A new trend for reverse engineering: Robotized aerial system for spatial information management
Chen et al. 3D model construction and accuracy analysis based on UAV tilt photogrammetry
KR102557775B1 (en) Drone used 3d mapping method
Ulziisaikhan et al. UAV and terrestrial laser scanner data processing for large scale topographic mapping

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant