CN107194989A - Traffic accident scene three-dimensional reconstruction system and method based on unmanned aerial vehicle aerial photography - Google Patents

Traffic accident scene three-dimensional reconstruction system and method based on unmanned aerial vehicle aerial photography

Info

Publication number: CN107194989A (application CN201710343879.3A)
Authority: CN (China)
Legal status: Granted; currently Active
Priority/filing date: 2017-05-16
Other languages: Chinese (zh)
Other versions: CN107194989B (en)
Inventors: 张纪升, 刘晓锋, 耿杰, 牛树云, 张凡
Original and current assignee: Research Institute of Highway Ministry of Transport
Application CN201710343879.3A filed by Research Institute of Highway Ministry of Transport; publication of CN107194989A; application granted; publication of CN107194989B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a traffic accident scene three-dimensional reconstruction system based on unmanned aerial vehicle aerial photography, comprising: an unmanned aircraft system, including an unmanned aircraft body for carrying and flight, an inertial measurement unit, a positioning and orientation module, a communication module and an airborne camera; a ground control platform, in communication connection with the communication module of the unmanned aircraft system to realize control communication and image transmission; and an accident scene stereo image processing system, in communication connection with the ground control platform to realize image transmission. By setting the flight route and waypoints of the unmanned aircraft, images of the traffic accident scene are collected from different heights and different angles, and a three-dimensional traffic accident scene model is then established using multi-view stereo vision processing technology. Technically, the method gives full play to the advantages of the unmanned aircraft: maneuverability, flexibility and a good field of view.

Description

Traffic accident scene three-dimensional reconstruction system and method based on unmanned aerial vehicle aerial photography
Technical Field
The invention relates to the technical field of intelligent traffic, in particular to a traffic accident scene three-dimensional reconstruction system and reconstruction method based on unmanned aerial vehicle aerial photography.
Background
With the rapid increase in the number of motor vehicles, the safe-driving awareness of some drivers remains low, and road traffic accidents occur from time to time under the influence of conditions such as poor roads and bad weather. After a traffic accident, the traffic police usually take temporary traffic control measures, such as lane restriction or even road closure, and then measure and photograph the accident scene to obtain first-hand data for subsequent accident identification and responsibility division. However, this conventional approach has several disadvantages. First, it requires a long road-closure time (e.g. more than 1 hour), which seriously reduces the traffic capacity of the road. Second, the scene contains accident vehicles, casualties, scattered debris and vehicle brake marks, and since accident rescue and other work must proceed at the same time, the scene is exposed to the risk of being disturbed. Third, traditional measurement and photography mainly capture local, two-dimensional views of the scene, making it difficult to collect accident information in all directions from a panoramic perspective (covering accident vehicles, casualties, scattered debris, brake marks, road markings, the road environment, and so on).
Lu Guangquan et al., in "Traffic accident photogrammetry technology based on a common digital camera and its research progress" (Journal of Traffic and Transportation Engineering and Information, Vol. 3, No. 3, pp. 63-67, 2005), summarized and looked ahead to two-dimensional and three-dimensional modeling methods for traffic accident scenes; however, these methods remain at the research stage, are constrained by camera calibration and reconstruction accuracy, and are still far from practical application.
Chinese patent document CN200710045440.9 discloses a method for reproducing a car collision accident based on photogrammetry and the vehicle body outer contour: photogrammetry is performed on the deformed vehicle and on an intact vehicle of the same model, a three-dimensional numerical model of the vehicle outer contour is established, and a finite element model is then built for simulation to determine the speed and collision angle at the moment of the accident. During photogrammetry, several calibration objects must be set up and there are shooting-angle requirements; moreover, the method does not handle other accident scene information (such as brake marks and the road environment), which greatly limits its practicality.
Chinese patent document CN106295556 discloses a road detection method based on aerial images from a small unmanned aerial vehicle, which manually extracts road and non-road pixels through human-computer interaction, performs clustering and modeling, and detects the road area with a max-flow algorithm. The method, however, does not consider three-dimensional modeling of the traffic accident scene.
In addition, aerial image quality is easily degraded by factors such as poor light and object occlusion, which makes aerial image feature point extraction and image matching difficult.
Disclosure of Invention
The invention aims to provide a traffic accident scene three-dimensional reconstruction system and a reconstruction method based on unmanned aerial vehicle aerial photography, aiming at the technical defects in the prior art.
The technical scheme adopted for realizing the purpose of the invention is as follows:
a traffic accident scene three-dimensional reconstruction system based on unmanned aerial vehicle aerial photography comprises,
the unmanned aerial vehicle system comprises an unmanned aerial vehicle body, an inertial measurement unit, a positioning and orientation module, a communication module and an airborne camera, wherein the unmanned aerial vehicle body is used for bearing and flying, and the inertial measurement unit is used for measuring three-axis attitude angle and acceleration information of the unmanned aerial vehicle; the positioning and orientation module is used for generating real-time navigation data and POS data of each aerial image;
the ground control platform is in communication connection with the communication module of the unmanned aircraft system to realize control communication and picture information transmission;
and the accident scene three-dimensional image processing system is in communication connection with the ground control platform to realize picture information transmission.
The aircraft body is a quad-rotor unmanned aircraft, and the airborne camera is mounted on the drone body through a 360-degree rotating gimbal module.
A traffic accident scene three-dimensional reconstruction method based on unmanned aerial vehicle aerial photography comprises the following steps,
1) a shooting step, wherein the unmanned aerial vehicle, with an accident vehicle as the center, flies around the accident vehicle and hovers at a plurality of waypoints to take photographs, which are transmitted in real time to the accident scene three-dimensional image processing system;
2) screening processing of images, namely screening a predetermined number of aerial photos according to the content overlapping rate so as to facilitate image feature extraction and matching;
3) convolution calculation of the image: weighted convolution is performed on the screened images along the horizontal and vertical gradients using the Sobel and Laplace operators, improving the robustness of the images to light changes and object occlusion; the convolution formulas in the horizontal and vertical directions are respectively

$$\mathrm{Convolution}_x = \left(1 + e^{-\left((IM \otimes S_x + b)\,\alpha + (IM \otimes L_x + b)\,\beta\right)}\right)^{-1}$$

$$\mathrm{Convolution}_y = \left(1 + e^{-\left((IM \otimes S_y + b)\,\alpha + (IM \otimes L_y + b)\,\beta\right)}\right)^{-1}$$

where IM is the grayscale image matrix of the screened image, $\otimes$ is the convolution operation, $S_x$ and $S_y$ are the horizontal and vertical Sobel operators, $L_x$ and $L_y$ the horizontal and vertical Laplace operators, and b an offset in the range 0.1-0.3; when light is poor, α is 0.6-1 and β is 0-0.4; when an object is occluded, α is 0-0.4 and β is 0.6-1; when light is poor and an object is occluded, α is 0.5-0.6 and β = 1 - α;

where

$$S_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix}, \quad S_y = \begin{bmatrix} +1 & +2 & +1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix}, \quad L_x = \begin{bmatrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \end{bmatrix}, \quad L_y = \begin{bmatrix} -1 & -1 & -1 \\ -1 & 8 & -1 \\ -1 & -1 & -1 \end{bmatrix}$$

the convolution value of the image is

$$G = \sqrt{\mathrm{Convolution}_x^2 + \mathrm{Convolution}_y^2}$$

and the convolution-processed grayscale image is obtained from the convolution calculation result;
4) importing POS data: importing longitude, latitude, height and course information of the processed image;
5) image feature extraction and matching: extracting and matching the feature points of the traffic accident scene images using a scale-invariant feature transform (SIFT) based algorithm;
6) establishing the sparse and dense point cloud models: using the structure-from-motion (SFM) algorithm of multi-view stereo vision, the rotation and translation matrices between each pair of matched images are computed and the three-dimensional coordinates of the feature points are calculated, yielding a sparse three-dimensional point cloud; then, using the clustering views for multi-view stereo (CMVS) and patch-based multi-view stereo (PMVS) algorithms, the sparse point cloud obtained by the SFM algorithm is input as seed points, the CMVS algorithm clusters the image sequence by viewing angle to reduce the data volume of dense reconstruction, and the PMVS algorithm, based on a small-patch model, diffuses the seed points outward to obtain a spatially oriented point cloud (patches), completing dense reconstruction under the constraints of local photometric consistency and global visibility;
7) three-dimensional model meshing and texturing: reconstructing the surface mesh of the three-dimensional dense point cloud model with a Poisson surface reconstruction algorithm, and mapping the surface texture information onto the mesh model;
8) three-dimensional reconstruction quality evaluation: after the traffic accident scene three-dimensional model is reconstructed, the similarity of the evaluation objects between the reference image and the processed image is analyzed, and the scene is modeled again if the similarity does not reach a preset value.
In the step 1), the flying radius is 3-10m, the number of the waypoints is 12-24, the unmanned aerial vehicle hovers at each waypoint for 2-5 seconds, and a picture is taken at an angle of 30-90 degrees.
The onboard camera of the drone takes pictures at horizontal (0°), 45° and 90° angles.
In step 1), with the traffic accident site as the object, the flight heights of the unmanned aerial vehicle are divided into a low level, a middle level and a high level, where the low level is 2-5 m, the middle level 10-15 m, and the high level 20-25 m.
In step 1), the damaged part of the vehicle collision is photographed from a distance of 2-5 m at horizontal, 45° and vertical angles.
When there are many aerial images, redundant low-quality images need to be removed; the sharpness of the images is analyzed with the Brenner gradient function method of image processing, calculated as

$$D(I) = \sum_x \sum_y \left[ I(x+2, y) - I(x, y) \right]^2$$

where I(x, y) is the gray value of pixel (x, y) of image I and D(I) is the resulting sharpness of the image; the images whose D(I) values rank highest are retained as appropriate and the other images are deleted.
In step 8), structural similarity is adopted as the evaluation index; the SSIM is calculated as

$$SSIM(x, y) = \frac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$$

where x and y are the reference and processed images respectively, $\mu_x$ and $\mu_y$ are their means, $\sigma_x$ and $\sigma_y$ their standard deviations, $\sigma_{xy}$ the covariance of x and y, and $c_1$ and $c_2$ constants used to maintain stability; when the SSIM value is 0.85 or less, the traffic accident scene must be remodeled; when the SSIM value exceeds 0.85, the three-dimensional model of the traffic accident scene meets the accuracy requirement.
The evaluation objects comprise accident vehicles, brake marks and accident scatterers.
Compared with the prior art, the invention has the beneficial effects that:
according to the method, the flight route and the navigation point of the unmanned aerial vehicle are set, images of a road traffic accident scene are acquired from different heights and different angles, then a three-dimensional traffic accident scene model is established by applying a multi-view stereoscopic vision processing technology, and the method gives full play to the advantages of maneuverability, flexibility and good vision field of the unmanned aerial vehicle from the technical point of view, uses POS data of aerial images, does not need to set ground control points, and does not need to calibrate a camera; the influence of factors such as poor light and object shielding on the reconstruction quality is reduced; the method has the advantages that the method can obviously shorten the time for restricting and closing roads, reduce the risk of damaging the traffic accident scene, and obtain various information of the traffic accident scene from the global perspective, thereby providing complete information for road traffic accident identification and responsibility division.
Drawings
Fig. 1 is a block diagram of a four-rotor unmanned aircraft system.
Fig. 2 is a flow chart of information between modules of the drone system.
Fig. 3 is a configuration diagram of an accident scene three-dimensional image processing system.
Fig. 4 is a horizontal aerial route map of an unmanned aircraft.
Fig. 5 is a vertical aerial altitude view of the drone.
Fig. 6 is an aerial photograph angle view of a damaged portion of an accident vehicle.
Fig. 7 is a SIFT algorithm flow chart.
Fig. 8 is a RANSAC algorithm flow chart.
Fig. 9 is a flow chart of the SFM algorithm.
FIG. 10 is a flow chart of the CMVS algorithm.
Fig. 11 is a flow chart of the PMVS algorithm.
Fig. 12 is a flow chart of a poisson surface reconstruction algorithm.
Fig. 13 is a system work flow diagram.
FIG. 14 is a sparse point cloud model.
FIG. 15 is a dense point cloud model.
Fig. 16 is a three-dimensional model of a traffic accident scene.
Detailed Description
The invention is described in further detail below with reference to the figures and specific examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in the figure, the traffic accident scene three-dimensional reconstruction system based on unmanned aerial vehicle aerial photography of the invention comprises,
the unmanned aircraft system comprises an unmanned aircraft body, an inertial measurement unit, a positioning and orientation module, a communication module and an airborne camera, wherein the unmanned aircraft body is used for carrying and flight, and the inertial measurement unit measures the three-axis attitude angles and acceleration of the drone; the positioning and orientation module comprises a differential GPS that works with the inertial measurement unit to generate real-time navigation data and the POS data of each aerial image, the POS data comprising the longitude, latitude, height, heading and the like of each aerial image;
the ground control platform is in communication connection with the communication module of the unmanned aircraft system to realize control communication; it further comprises an image transmission station that communicates with the drone's communication module to transmit traffic accident scene images in real time;
and the accident scene three-dimensional image processing system, in communication connection with the ground control platform to realize image transmission. The accident scene three-dimensional image processing system is a cloud computing terminal or a computing workstation. It is connected to the ground control platform through a network, and may further include a data storage platform in synchronous communication with the ground control platform, such as a police storage platform, so that the original data are stored faithfully. Transferring the data in this way realizes off-site storage and centralized computation, ensuring data security, while the internet is used to integrate resources across regions and improve the social service effect.
The aircraft body is a quad-rotor unmanned aircraft, and the airborne camera is mounted on the drone body through a 360-degree rotating gimbal module, so that the camera can photograph the traffic accident scene from different angles.
The drone is controlled by the ground control platform, which sends the control instructions and relays the aerial images. Ground control platforms can be deployed at multiple points in a city so that the nearest one takes control, reducing flight time and improving response efficiency; a centralized accident scene three-dimensional image processing system reduces processing cost and guarantees uniform data and faithful reconstruction.
The reconstruction method of the present invention comprises the steps of,
(1) Setting the flight route: the unmanned aerial vehicle is first flown to the scene. With the accident vehicle as the center, the drone flies around it; preferably the flight radius is 3-10 m and the number of waypoints 12-24, e.g. 16, i.e. an angle of 22.5° between adjacent waypoints. The drone hovers at each waypoint for 2-5 seconds while the onboard camera takes photographs at an angle of 30-90° (as shown in Fig. 4). The photographs are transmitted in real time through the communication module to the ground control platform, which forwards them in real time to the accident scene three-dimensional image processing system.
In addition, with the traffic accident scene as the object, the flight is divided into low, middle and high levels (as shown in Fig. 5), and aerial images of the scene are taken at the different height levels according to the above flight and photographing method; the low level is at 2-5 m, the middle level at 10-15 m and the high level at 20-25 m. Furthermore, the damaged part of the vehicle collision is photographed from a distance of 2-5 m, optionally at horizontal, 45° and vertical angles (as shown in Fig. 6).
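For illustration only (not part of the patent text), the following minimal Python sketch generates the circular waypoint pattern described above; the function name, the coordinate convention (a local metric frame centered on the accident vehicle) and the default values are assumptions.

```python
import math

def circular_waypoints(center_x, center_y, radius_m=5.0, n_points=16,
                       altitude_m=3.0):
    """Waypoints evenly spaced on a circle around the accident vehicle.

    With n_points=16 the angular spacing is 22.5 degrees, matching the
    example flight plan in the description.
    """
    step = 2.0 * math.pi / n_points
    return [(center_x + radius_m * math.cos(i * step),
             center_y + radius_m * math.sin(i * step),
             altitude_m)
            for i in range(n_points)]

# Example: 16 waypoints at 5 m radius for a low-level pass at 3 m altitude.
for x, y, z in circular_waypoints(0.0, 0.0):
    print("x=%.2f m, y=%.2f m, z=%.1f m" % (x, y, z))
```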
(2) Image screening: photographs are taken continuously at multiple waypoints, and the content overlap rate of the aerial photographs of the traffic accident is kept at 60% or more so that image features can be extracted and matched. In addition, when there are many aerial images, redundant low-quality images need to be removed; the sharpness of each image is analyzed with the Brenner gradient function method of image processing, calculated as

$$D(I) = \sum_x \sum_y \left[ I(x+2, y) - I(x, y) \right]^2$$

where I(x, y) is the gray value of pixel (x, y) of image I and D(I) is the resulting sharpness of the image; the images whose D(I) values rank in the top 10%-30% are retained as appropriate and the other images are deleted.
(3) Convolution calculation of the image: the convolution objects are the screened images, and the calculation involves the gray value of every pixel. Specifically, weighted convolution is performed along the horizontal and vertical gradients using the Sobel and Laplace operators, improving the robustness of the image to light changes and object occlusion. The convolution formulas in the horizontal and vertical directions are respectively

$$\mathrm{Convolution}_x = \left(1 + e^{-\left((IM \otimes S_x + b)\,\alpha + (IM \otimes L_x + b)\,\beta\right)}\right)^{-1}$$

$$\mathrm{Convolution}_y = \left(1 + e^{-\left((IM \otimes S_y + b)\,\alpha + (IM \otimes L_y + b)\,\beta\right)}\right)^{-1}$$

where IM is the grayscale image matrix of the screened image, $\otimes$ is the convolution operation, $S_x$ and $S_y$ are the horizontal and vertical Sobel operators, $L_x$ and $L_y$ the horizontal and vertical Laplace operators, and b an offset in the range 0.1-0.3. When light is poor, α is 0.6-1 and β is 0-0.4; when an object is occluded, α is 0-0.4 and β is 0.6-1; when light is poor and an object is occluded, α is 0.5-0.6 and β = 1 - α. Here object occlusion means that the proportion of the image covered by other objects (including shadows) exceeds a preset value, e.g. 15% or more, and poor light means that the ambient illuminance during image acquisition is below a preset value, e.g. 50 lux or less.

The operators are

$$S_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix}, \quad S_y = \begin{bmatrix} +1 & +2 & +1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix}, \quad L_x = \begin{bmatrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \end{bmatrix}, \quad L_y = \begin{bmatrix} -1 & -1 & -1 \\ -1 & 8 & -1 \\ -1 & -1 & -1 \end{bmatrix}$$

and the convolution value of the image is

$$G = \sqrt{\mathrm{Convolution}_x^2 + \mathrm{Convolution}_y^2}$$

From this result the convolution-processed grayscale image is obtained, which is more robust to light changes and object occlusion.
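A minimal sketch of this weighted convolution, assuming SciPy's convolve2d for the $\otimes$ operation and a grayscale image scaled to [0, 1]; the default α, β and b follow the poor-light case of the embodiment below and are otherwise assumptions:

```python
import numpy as np
from scipy.signal import convolve2d

# Operators S_x, S_y, L_x, L_y as given in the claims.
SX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SY = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], dtype=float)
LX = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
LY = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], dtype=float)

def weighted_convolution(im, alpha=0.6, beta=0.4, b=0.2):
    """Sigmoid-normalized fusion of Sobel and Laplace responses (G in the text)."""
    im = im.astype(np.float64)

    def conv(kernel):
        return convolve2d(im, kernel, mode="same", boundary="symm")

    cx = 1.0 / (1.0 + np.exp(-((conv(SX) + b) * alpha + (conv(LX) + b) * beta)))
    cy = 1.0 / (1.0 + np.exp(-((conv(SY) + b) * alpha + (conv(LY) + b) * beta)))
    return np.sqrt(cx ** 2 + cy ** 2)
```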
(4) Importing POS data: the longitude, latitude, height, heading and similar information of the processed images is imported.
(5) Image feature extraction and matching: feature points of the traffic accident scene images are extracted and matched using the scale-invariant feature transform (SIFT) algorithm; the scene images processed in this step are those produced by the convolution calculation of step (3). First, feature points are extracted from the convolved images with the SIFT operator and the corresponding feature descriptors are obtained; next, image pairs that may overlap are selected according to the POS constraint relationships of the images; finally, the descriptors of each image pair are matched and random sample consensus (RANSAC) is used for gross-error elimination to remove mismatches.
The feature extraction and matching process of the SIFT algorithm is: establish the scale space and detect image extrema; precisely locate the feature points; compute the feature orientations; construct the feature descriptors; and register the data across viewpoints according to the descriptors and a similarity measure (flow shown in Fig. 7). The RANSAC process is: randomly draw 4 sample data and compute a transformation matrix, denoted model M; compute the projection error of each element of the data set against M and add it to the inlier set if the error is below a threshold; if the current inlier set has more elements than the best inlier set so far, replace the latter with the former; and iterate (flow shown in Fig. 8).
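A hedged OpenCV sketch of this step (SIFT matching with a ratio test followed by RANSAC rejection of mismatches); it assumes an OpenCV build that ships SIFT (cv2.SIFT_create, OpenCV 4.4 or later) and uses the fundamental-matrix variant of RANSAC, one common choice for gross-error elimination:

```python
import cv2
import numpy as np

def match_pair(img1, img2, ratio=0.75, ransac_thresh=3.0):
    """SIFT feature matching between two grayscale images with RANSAC filtering."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)

    # Nearest/second-nearest ratio test on L2 descriptor distances.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(d1, d2, k=2)
    good = [p[0] for p in raw
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]

    pts1 = np.float32([k1[m.queryIdx].pt for m in good])
    pts2 = np.float32([k2[m.trainIdx].pt for m in good])

    # RANSAC on the fundamental matrix rejects remaining mismatches.
    _, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, ransac_thresh)
    inliers = mask.ravel().astype(bool)
    return pts1[inliers], pts2[inliers]
```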
(6) Establishing the sparse and dense point cloud models: using the structure-from-motion (SFM) algorithm of multi-view stereo vision, the rotation and translation matrices between each pair of matched images are computed and the three-dimensional coordinates of the feature points are calculated, yielding a sparse three-dimensional point cloud. Then, applying the clustering views for multi-view stereo (CMVS) and patch-based multi-view stereo (PMVS) algorithms, the sparse point cloud obtained by the SFM algorithm is input as seed points; the CMVS algorithm clusters the image sequence by viewing angle to reduce the data volume of dense reconstruction, and the PMVS algorithm, based on a small-patch model, diffuses the seed points outward to obtain a spatially oriented point cloud (patches), completing dense reconstruction under the constraints of local photometric consistency and global visibility. The SFM flow is shown in Fig. 9, the CMVS flow in Fig. 10, and the PMVS flow in Fig. 11.
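For the SFM stage, a two-view sketch under stated assumptions (OpenCV's essential-matrix pipeline, a known intrinsic matrix K from camera metadata, and inlier correspondences from the previous step); a full SFM system chains this over all image pairs and refines with bundle adjustment, while dense CMVS/PMVS reconstruction is normally delegated to dedicated tools:

```python
import cv2
import numpy as np

def two_view_reconstruction(pts1, pts2, K):
    """Recover relative pose (R, t) and triangulate a sparse point cloud.

    pts1/pts2 are Nx2 float32 inlier correspondences; K is the 3x3
    camera intrinsic matrix.
    """
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Projection matrices of the two views; the first camera is the origin.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])

    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    points3d = (pts4d[:3] / pts4d[3]).T  # Nx3 sparse point cloud
    return R, t, points3d
```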
(7) Three-dimensional model meshing and texturing: the surface mesh of the three-dimensional dense point cloud model is reconstructed with the Poisson surface reconstruction algorithm, and the surface texture information is mapped onto the mesh model. The process comprises: parameterizing the surface of the three-dimensional model; optimizing the target image and reducing texture seams; and correcting the model color to reduce texture discontinuities. The flow of the Poisson surface reconstruction algorithm is shown in Fig. 12.
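A sketch of the Poisson meshing step, assuming the Open3D library (its create_from_point_cloud_poisson implements Poisson surface reconstruction); the normal-estimation parameters and octree depth are illustrative assumptions:

```python
import numpy as np
import open3d as o3d

def mesh_from_dense_cloud(points_xyz, depth=9):
    """Poisson surface reconstruction of an Nx3 dense point cloud.

    Poisson reconstruction needs oriented normals, estimated here from
    local neighborhoods of the point cloud.
    """
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(np.asarray(points_xyz, dtype=float))
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
    pcd.orient_normals_consistent_tangent_plane(k=15)
    mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=depth)
    return mesh
```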
(8) Three-dimensional reconstruction quality evaluation: after the three-dimensional model of the traffic accident scene is reconstructed, the reconstruction quality must be evaluated. The evaluation objects include the accident vehicles, brake marks and accident scatters. The evaluation method selects orthographic images of the accident vehicle, brake marks and accident scatters aerially photographed by the drone as reference images, selects the corresponding orthographic images of the accident vehicle, brake marks and accident scatters from the traffic accident scene three-dimensional model as processed images, and analyzes the degree of similarity between the reference and processed images. The evaluation index is the Structural Similarity Index (SSIM), calculated as
$$SSIM(x, y) = \frac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$$

where x and y are the reference and processed images respectively, $\mu_x$ and $\mu_y$ are their means, $\sigma_x$ and $\sigma_y$ their standard deviations, $\sigma_{xy}$ the covariance of x and y, and $c_1$ and $c_2$ constants used to maintain stability. When the SSIM value is 0.85 or less, the traffic accident scene must be remodeled; when the SSIM value exceeds 0.85, the three-dimensional model of the traffic accident scene meets the accuracy requirement.
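A minimal global-statistics sketch of the SSIM check (the common SSIM definition computes local windowed statistics; this simplified whole-image version and the 8-bit stabilizing constants are assumptions):

```python
import numpy as np

def ssim_global(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Whole-image SSIM between reference x and processed y (8-bit grayscale)."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2) /
            ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))

# Per the description: remodel the scene if SSIM <= 0.85.
def needs_remodeling(ref, proc, threshold=0.85):
    return ssim_global(ref, proc) <= threshold
```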
According to the reconstruction method, after the accident scene three-dimensional image processing system acquires the traffic accident images, the images are screened to remove low-quality redundant ones, and the convolution calculation is performed to improve robustness to light changes and object occlusion. The POS data of the images are imported, image features are extracted and matched, the sparse and dense point cloud models are established, and the three-dimensional point cloud model is meshed and textured; finally the quality of the three-dimensional accident scene model is evaluated. Because the POS data of the aerial images are used, no ground control points need to be set and the camera need not be calibrated, and the influence of poor light, object occlusion and similar factors on reconstruction quality is reduced. Overall, the method markedly shortens the time for which roads must be restricted or closed, reduces the risk of damage to the accident scene, and obtains all kinds of accident scene information from a global perspective, providing complete information for road traffic accident identification and responsibility division.
Specifically, for a traffic accident in which a car and a bicycle collided in a residential community, with a road environment consisting of community roads and greenery, the three-dimensional reconstruction of the accident scene proceeded as follows:
(1) Setting the flight route: with the accident vehicle as the center, the drone flew around it with a flight radius of 5 m and 16 waypoints, i.e. an angle of 22.5° between adjacent waypoints, hovering at each waypoint for 3 seconds and photographing at angles of 45° and 90°. With the traffic accident scene as the object, aerial images were also taken at a height of 22 m according to the same flight and photographing method. Further, the damaged part of the vehicle was photographed at a 45° angle from a distance of 4 m. In total, 80 aerial images were obtained.
(2) Image acquisition and processing: 24 images were selected for subsequent work according to the requirement that the content overlap rate of the traffic accident aerial photographs be at least 60%, and according to the image sharpness ranking.
(3) Convolution calculation of the image: weighted convolution was performed on the screened images along the horizontal and vertical gradients with the Sobel and Laplace operators, improving robustness to light changes and object occlusion; the offset b was 0.2, α was 0.6 and β was 0.4.
(4) Importing POS data: the longitude, latitude, height, heading and similar information of the processed images was imported.
(5) Image feature extraction and matching: feature points of the traffic accident scene images were extracted and matched using the scale-invariant feature transform (SIFT) algorithm. First, feature points were extracted from each image with the SIFT operator and the corresponding feature descriptors obtained; then, image pairs that may overlap were selected according to the POS constraint relationships of the images; finally, the descriptors of each image pair were matched and random sample consensus (RANSAC) was used for gross-error elimination to remove mismatches.
(6) Establishing the sparse and dense point cloud models: using the structure-from-motion (SFM) algorithm of multi-view stereo vision, the rotation and translation matrices between matched image pairs were computed and the three-dimensional coordinates of the feature points calculated, yielding the sparse three-dimensional point cloud shown in Fig. 14. Using the clustering views for multi-view stereo (CMVS) and patch-based multi-view stereo (PMVS) algorithms, the sparse point cloud obtained by the SFM algorithm was input as seed points; the CMVS algorithm clustered the image sequence by viewing angle to reduce the data volume of dense reconstruction, and the PMVS algorithm, based on a small-patch model, diffused the seed points outward to obtain a spatially oriented point cloud (patches), completing dense reconstruction under the constraints of local photometric consistency and global visibility to obtain the dense point cloud model shown in Fig. 15.
(7) Three-dimensional model meshing and texturing: the surface mesh of the three-dimensional dense point cloud model was reconstructed with the Poisson surface reconstruction algorithm, and the surface texture information was mapped onto the mesh model, yielding the meshed and textured three-dimensional model of the traffic accident scene shown in Fig. 16.
(8) Three-dimensional reconstruction quality evaluation: after the three-dimensional model of the traffic accident scene was reconstructed, its quality was evaluated. An orthographic image of the accident vehicle taken by the drone was selected as the reference image, and the corresponding orthographic image of the accident vehicle from the three-dimensional model as the processed image; the similarity between the two was analyzed and the SSIM evaluated to be 0.92, meeting the three-dimensional reconstruction accuracy requirement.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the invention, and these improvements and modifications should also be regarded as falling within the protection scope of the invention.

Claims (10)

1. A traffic accident scene three-dimensional reconstruction system based on unmanned aerial vehicle aerial photography is characterized by comprising,
the unmanned aerial vehicle system comprises an unmanned aerial vehicle body, an inertial measurement unit, a positioning and orientation module, a communication module and an airborne camera, wherein the unmanned aerial vehicle body is used for bearing and flying, and the inertial measurement unit is used for measuring three-axis attitude angle and acceleration information of the unmanned aerial vehicle; the positioning and orientation module is used for generating real-time navigation data and POS data of each aerial image;
the ground control platform is in communication connection with the communication module of the unmanned aircraft system to realize control communication and picture information transmission;
and the accident scene three-dimensional image processing system is in communication connection with the ground control platform to realize picture information transmission.
2. The three-dimensional reconstruction system for the traffic accident scene based on the unmanned aerial vehicle aerial photography of the aircraft as claimed in claim 1, wherein the aircraft body is a quad-rotor unmanned aircraft, and the onboard camera is arranged on the unmanned aircraft body through a 360-degree rotating cradle head module.
3. A traffic accident scene three-dimensional reconstruction method based on unmanned aerial vehicle aerial photography is characterized by comprising the following steps,
1) a shooting step, wherein the unmanned aerial vehicle, with an accident vehicle as the center, flies around the accident vehicle and hovers at a plurality of waypoints to take photographs, which are transmitted in real time to the accident scene three-dimensional image processing system;
2) screening processing of images, namely screening a predetermined number of aerial photos according to the content overlapping rate so as to facilitate image feature extraction and matching;
3) convolution calculation of the image: carrying out weighted convolution calculation on the screened image from the horizontal gradient and the vertical gradient by using a Sobel operator and a Laplace operator, so that the robustness of the image on light change and object shielding is improved; the convolution calculation formulas in the horizontal and vertical directions are respectively:
$$\mathrm{Convolution}_x = \left(1 + e^{-\left((IM \otimes S_x + b)\,\alpha + (IM \otimes L_x + b)\,\beta\right)}\right)^{-1}$$

$$\mathrm{Convolution}_y = \left(1 + e^{-\left((IM \otimes S_y + b)\,\alpha + (IM \otimes L_y + b)\,\beta\right)}\right)^{-1}$$
wherein IM is the grayscale image matrix of the screened image, $\otimes$ is the convolution operation, $S_x$ is the horizontal Sobel operator, $S_y$ the vertical Sobel operator, $L_x$ the horizontal Laplace operator and $L_y$ the vertical Laplace operator; b is an offset in the range 0.1-0.3; when light is poor, α is 0.6-1 and β is 0-0.4; when an object is occluded, α is 0-0.4 and β is 0.6-1; when light is poor and an object is occluded, α is 0.5-0.6 and β = 1 - α;
wherein ,
$$S_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix}, \quad S_y = \begin{bmatrix} +1 & +2 & +1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix}, \quad L_x = \begin{bmatrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \end{bmatrix}, \quad L_y = \begin{bmatrix} -1 & -1 & -1 \\ -1 & 8 & -1 \\ -1 & -1 & -1 \end{bmatrix}$$
the convolution values of the image are:
$$G = \sqrt{\mathrm{Convolution}_x^2 + \mathrm{Convolution}_y^2}$$
obtaining a gray level image after convolution processing according to the convolution calculation result of the image;
4) importing POS data: importing longitude, latitude, height and course information of the processed image;
5) image feature extraction and matching: extracting and matching the feature points of the traffic accident scene images using a scale-invariant feature transform (SIFT) based algorithm;
6) establishing the sparse and dense point cloud models: using the structure-from-motion (SFM) algorithm of multi-view stereo vision, the rotation and translation matrices between each pair of matched images are computed and the three-dimensional coordinates of the feature points are calculated, yielding a sparse three-dimensional point cloud; then, using the clustering views for multi-view stereo (CMVS) and patch-based multi-view stereo (PMVS) algorithms, the sparse point cloud obtained by the SFM algorithm is input as seed points, the CMVS algorithm clusters the image sequence by viewing angle to reduce the data volume of dense reconstruction, and the PMVS algorithm, based on a small-patch model, diffuses the seed points outward to obtain a spatially oriented point cloud (patches), completing dense reconstruction under the constraints of local photometric consistency and global visibility;
7) three-dimensional model meshing and texturing: reconstructing the surface mesh of the three-dimensional dense point cloud model with a Poisson surface reconstruction algorithm, and mapping the surface texture information onto the mesh model;
8) three-dimensional reconstruction quality evaluation: after the traffic accident scene three-dimensional model is reconstructed, the similarity of the evaluation objects between the reference image and the processed image is analyzed, and the scene is modeled again if the similarity does not reach a preset value.
4. The three-dimensional reconstruction method according to claim 3, wherein in the step 1), the flight radius is 3-10m, the number of waypoints is 12-24, the unmanned aerial vehicle hovers at each waypoint for 2-5 seconds, and the photo is taken at an angle of 30-90 °.
5. The three-dimensional reconstruction method of claim 4, wherein the onboard camera of the unmanned aerial vehicle takes pictures at horizontal angles, 45 ° and 90 °.
6. The three-dimensional reconstruction method according to claim 3, wherein in the step 1), the unmanned aerial vehicle is divided into a low level, a middle level and a high level by taking a traffic accident scene as an object, wherein the height of the low level is 2-5m, the height of the middle level is 10-15m, and the height of the high level is 20-25 m.
7. The three-dimensional reconstruction method according to claim 3, wherein in the step 1), aerial photography is performed on the vehicle collision damage at a distance of 2-5m from horizontal, 45 ° and vertical angles.
8. The three-dimensional reconstruction method of claim 3, wherein, when there are many aerial images, redundant low-quality images need to be eliminated; the sharpness of the images is analyzed with the Brenner gradient function method of image processing, calculated as:
$$D(I) = \sum_x \sum_y \left[ I(x+2, y) - I(x, y) \right]^2$$
wherein: i (x, y) represents the gray value of the pixel point (x, y) corresponding to the image I, D (I) is the calculation result of the image definition, D (I) is reserved according to the situation of the image with the value ranked at the front, and other images are deleted.
9. The three-dimensional reconstruction method according to claim 3, wherein in step 8) structural similarity is adopted as the evaluation index, the SSIM being calculated as:
$$SSIM(x, y) = \frac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$$
wherein x and y are the reference and processed images respectively, $\mu_x$ and $\mu_y$ are their means, $\sigma_x$ and $\sigma_y$ their standard deviations, $\sigma_{xy}$ the covariance of x and y, and $c_1$ and $c_2$ constants used to maintain stability; when the SSIM value is 0.85 or less, the traffic accident scene must be remodeled; when the SSIM value exceeds 0.85, the three-dimensional model of the traffic accident scene meets the accuracy requirement.
10. A three-dimensional reconstruction method as claimed in claim 3, wherein said evaluation objects include accident vehicles, brake imprints and accident scatterers.
CN201710343879.3A 2017-05-16 2017-05-16 Traffic accident scene three-dimensional reconstruction system and method based on unmanned aerial vehicle aircraft aerial photography Active CN107194989B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710343879.3A CN107194989B (en) 2017-05-16 2017-05-16 Traffic accident scene three-dimensional reconstruction system and method based on unmanned aerial vehicle aircraft aerial photography

Publications (2)

Publication Number Publication Date
CN107194989A true CN107194989A (en) 2017-09-22
CN107194989B CN107194989B (en) 2023-10-13

Family

ID=59873250

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710343879.3A Active CN107194989B (en) 2017-05-16 2017-05-16 Traffic accident scene three-dimensional reconstruction system and method based on unmanned aerial vehicle aircraft aerial photography

Country Status (1)

Country Link
CN (1) CN107194989B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017030737A1 (en) * 2015-08-20 2017-02-23 Motionloft, Inc. Object detection and analysis via unmanned aerial vehicle
CN105721751A (en) * 2016-03-28 2016-06-29 中国人民解放军第三军医大学第三附属医院 Method for collecting information of night traffic accident scene
CN106027980A (en) * 2016-06-22 2016-10-12 沈阳天择智能交通工程有限公司 Flight control system for aerial survey of traffic accident

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107680378A (en) * 2017-11-07 2018-02-09 中车株洲电力机车有限公司 A kind of accident surveying method, system, equipment and computer-readable storage medium
CN109813281A (en) * 2017-11-20 2019-05-28 南京模幻天空航空科技有限公司 Navigation channel incident management system based on unmanned plane aerial photography technology
CN108171790A (en) * 2017-12-25 2018-06-15 北京航空航天大学 A kind of Object reconstruction method based on dictionary learning
CN108171790B (en) * 2017-12-25 2019-02-15 北京航空航天大学 A kind of Object reconstruction method dictionary-based learning
CN108680137A (en) * 2018-04-24 2018-10-19 天津职业技术师范大学 Earth subsidence detection method and detection device based on unmanned plane and Ground Penetrating Radar
CN108805869A (en) * 2018-06-12 2018-11-13 哈尔滨工业大学 It is a kind of based on the extraterrestrial target three-dimensional reconstruction appraisal procedure of the reconstruction model goodness of fit and application
CN109191447A (en) * 2018-08-31 2019-01-11 宁波大学 A kind of three-dimensional grid quality evaluating method based on geometric buckling analysis
CN109191447B (en) * 2018-08-31 2021-11-19 宁波大学 Three-dimensional grid quality evaluation method based on geometric curvature analysis
CN110969858A (en) * 2018-09-29 2020-04-07 比亚迪股份有限公司 Traffic accident processing method and device, storage medium and electronic equipment
CN111216668A (en) * 2018-11-23 2020-06-02 比亚迪股份有限公司 Vehicle collision processing method and unmanned aerial vehicle fixing device
CN109931912A (en) * 2019-04-12 2019-06-25 成都睿铂科技有限责任公司 A kind of aviation oblique photograph method and device
CN110059101B (en) * 2019-04-16 2021-08-13 北京科基中意软件开发有限公司 Vehicle data searching system and method based on image recognition
CN110059101A (en) * 2019-04-16 2019-07-26 北京科基中意软件开发有限公司 A kind of vehicle data lookup system and lookup method based on image recognition
CN111102967A (en) * 2019-11-25 2020-05-05 桂林航天工业学院 Intelligent navigation mark supervision system and method based on unmanned aerial vehicle
CN111080794B (en) * 2019-12-10 2022-04-05 华南农业大学 Three-dimensional reconstruction method for farmland on-site edge cloud cooperation
CN111080794A (en) * 2019-12-10 2020-04-28 华南农业大学 Three-dimensional reconstruction method for farmland on-site edge cloud cooperation
CN111080685A (en) * 2019-12-17 2020-04-28 北京工业大学 Airplane sheet metal part three-dimensional reconstruction method and system based on multi-view stereoscopic vision
CN111141264A (en) * 2019-12-31 2020-05-12 中国电子科技集团公司信息科学研究院 Unmanned aerial vehicle-based urban three-dimensional mapping method and system
CN111141264B (en) * 2019-12-31 2022-06-28 中国电子科技集团公司信息科学研究院 Unmanned aerial vehicle-based urban three-dimensional mapping method and system
US11403851B2 (en) * 2020-08-04 2022-08-02 Verizon Connect Development Limited Systems and methods for utilizing machine learning and other models to reconstruct a vehicle accident scene from video
US11798281B2 (en) 2020-08-04 2023-10-24 Verizon Connect Development Limited Systems and methods for utilizing machine learning models to reconstruct a vehicle accident scene from video
CN112446958A (en) * 2020-11-13 2021-03-05 山东产研信息与人工智能融合研究院有限公司 Road traffic accident auxiliary processing method and system based on laser point cloud
CN112446958B (en) * 2020-11-13 2023-07-28 山东产研信息与人工智能融合研究院有限公司 Road traffic accident auxiliary processing method and system based on laser point cloud
CN113160406A (en) * 2021-04-26 2021-07-23 北京车和家信息技术有限公司 Road three-dimensional reconstruction method and device, storage medium and electronic equipment
CN113160406B (en) * 2021-04-26 2024-03-01 北京车和家信息技术有限公司 Road three-dimensional reconstruction method and device, storage medium and electronic equipment
CN114777744A (en) * 2022-04-25 2022-07-22 中国科学院古脊椎动物与古人类研究所 Geological measurement method and device in ancient biology field and electronic equipment
CN114777744B (en) * 2022-04-25 2024-03-08 中国科学院古脊椎动物与古人类研究所 Geological measurement method and device in ancient organism field and electronic equipment
CN114677429A (en) * 2022-05-27 2022-06-28 深圳广成创新技术有限公司 Positioning method and device of manipulator, computer equipment and storage medium
CN117392328A (en) * 2023-12-07 2024-01-12 四川云实信息技术有限公司 Three-dimensional live-action modeling method and system based on unmanned aerial vehicle cluster
CN117392328B (en) * 2023-12-07 2024-02-23 四川云实信息技术有限公司 Three-dimensional live-action modeling method and system based on unmanned aerial vehicle cluster

Also Published As

Publication number Publication date
CN107194989B (en) 2023-10-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant