CN115115859A - Long linear engineering construction progress intelligent identification and analysis method based on unmanned aerial vehicle aerial photography - Google Patents


Info

Publication number
CN115115859A
CN115115859A
Authority
CN
China
Prior art keywords
image
construction
point
aerial vehicle
unmanned aerial
Prior art date
Legal status
Pending
Application number
CN202210682673.4A
Other languages
Chinese (zh)
Inventor
刘东海
马子茹
Current Assignee
Tianjin University
Original Assignee
Tianjin University
Priority date
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN202210682673.4A priority Critical patent/CN115115859A/en
Publication of CN115115859A publication Critical patent/CN115115859A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/17 Terrestrial scenes taken from planes or by drones
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection


Abstract

The invention relates to an intelligent identification method for the construction progress of long linear engineering based on unmanned aerial vehicle aerial photography. The method comprises six steps: unmanned aerial vehicle inspection aerial photography, target detection of the construction completion area, construction node pixel coordinate positioning, construction node spatial coordinate conversion, construction progress identification, and construction progress analysis. The design is scientific and reasonable. By combining unmanned aerial vehicle inspection, rapid full-coverage detection and intelligent identification of the progress of a long linear work area are realized, overcoming the low efficiency and delayed information feedback of traditional manual inspection; by using image recognition technology, automatic processing, recognition and analysis of the unmanned aerial vehicle inspection image data are realized, avoiding the low processing efficiency, susceptibility to human factors and delayed feedback of results when image data are processed manually.

Description

Long linear engineering construction progress intelligent identification and analysis method based on unmanned aerial vehicle aerial photography
Technical Field
The invention belongs to the technical field of engineering construction, relates to an intelligent identification technology of construction progress, and particularly relates to an intelligent identification and analysis method of long-linear engineering construction progress based on unmanned aerial vehicle aerial photography.
Background
Progress control is one of the important links in overall project management, and the construction progress directly affects the economic and social benefits of an engineering project. Long linear hydraulic works, such as river channel lining and ecological revetment in small-watershed treatment projects in mountainous areas, are long, wide-ranging and dispersed, and mountain traffic is inconvenient. Traditional manual inspection is time-consuming and laborious, making it difficult to grasp the overall construction progress in time and to carry out remote control and real-time management decisions for the construction.
At present, researchers have proposed various automated construction progress detection methods. For example, time-lapse pictures are collected from a fixed viewing angle on the construction site, mask images of the same viewing angle are obtained from a 4D BIM (3D BIM + planned schedule) model, and the two groups of pictures are compared to obtain the site progress. In recent years, with the rapid development of three-dimensional reconstruction technologies such as SFM (structure from motion), progress monitoring based on three-dimensional point clouds has become the mainstream research trend: for example, a point cloud of the site building is generated with SFM, the scale and reference plane of the site point cloud are recovered with control points arranged on site, a CAD model is generated with point cloud software and compared with the planned BIM model to judge the site construction progress. SFM-MVS (multi-view stereo matching) can generate a dense point cloud of the site building, which is then aligned with the planned BIM point cloud by manual registration, and geometric occupancy judgment of the two point clouds yields the site construction progress.
The methods disclosed above mainly focus on construction progress recognition for buildings and vertical infrastructure. Because the work area of long linear projects such as river lining and ecological revetment is far larger than that of conventional building projects, the existing methods have certain limitations in recognizing the progress of long linear projects.
As a new aerial detection platform that has attracted wide attention in recent years, the unmanned aerial vehicle is flexible and has a wide field of view, and can be used to overcome the limitations of traditional engineering inspection. However, manually processing and analyzing the massive image data acquired by the unmanned aerial vehicle is inefficient; if construction progress information can be intelligently identified from the image data by image recognition technology, the problems of the existing methods can be effectively solved.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an intelligent identification and analysis method for the construction progress of long linear engineering based on aerial photography of an unmanned aerial vehicle.
The technical problem to be solved by the invention is realized by adopting the following technical scheme:
the intelligent identification method for the construction progress of the long linear project based on the aerial photography of the unmanned aerial vehicle comprises six steps of aerial photography inspection by the unmanned aerial vehicle, target detection of a construction completion area, construction node pixel coordinate positioning, construction node space coordinate conversion, construction progress identification and construction progress analysis, and is described as follows:
the method comprises the following steps: unmanned aerial vehicle inspection aerial photography
The long linear project is inspected with an unmanned aerial vehicle, and aerial video images of the project are acquired. The unmanned aerial vehicle must carry an aerial camera of sufficiently high resolution and a high-precision Real Time Kinematic (RTK) positioning system, so that it can acquire its own coordinates and shoot images in real time, and can synchronously store and transmit the acquired image data together with the corresponding unmanned aerial vehicle coordinate information.
Step two: target detection of construction completion area
Target detection of the project construction completion area in the aerial images acquired by the unmanned aerial vehicle comprises the following steps:
2-1. data set Collection and labeling
Using unmanned aerial vehicle aerial images, sample data sets of the corresponding engineering type are collected and constructed in advance for different types of long linear engineering, and the target areas are labeled.
2-2 image data preprocessing
The image data acquired by the unmanned aerial vehicle are enhanced to improve the precision and robustness of the target detection model. The data enhancement methods include changing the brightness, contrast, hue, saturation and Gaussian noise of the image data, as well as cropping, flipping, rotating and randomly scaling the images.
2-3, constructing a construction completion area target detection network based on a deep learning algorithm
2-4, training a target detection network to obtain a target detection model of a construction completion area
2-5, based on the trained target detection model, realizing the target detection of the construction completion area, and outputting the angular point coordinate values of the target detection frame
Step three: construction node pixel coordinate positioning
The pixel coordinates of the engineering construction node in the aerial image are located in the following two steps:
3-1, determining the position of a construction node according to the image recognition result
The unmanned aerial vehicle inspection aerial video is recognized with the method of step two, the last frame in which the target project is identified is selected, and the center point of the target detection frame in that image is taken as the construction progress node. For the image A in which the target project is identified in the last frame, the pixel coordinates (u_min, v_min) of the upper-left corner and (u_max, v_max) of the lower-right corner of the detection frame output in step two are used to calculate the center coordinates of the target detection frame, where u is the average of u_min and u_max and v is the average of v_min and v_max. The pixel coordinates (u, v) of the center point of the target detection frame are taken as the pixel coordinates of the construction node.
3-2, matching the positions of construction nodes in different images and locating the pixel coordinates
For other images that contain the construction node besides image A, the SIFT (scale-invariant feature transform) algorithm is used to match and locate the construction node, in the following steps:
1. feature point extraction and description generation
First, feature points in the image B to be detected and in the image A of the last frame identifying the target project are extracted with the SIFT built-in function SIFT.detectAndCompute(), and SIFT descriptors are generated; each descriptor is a 128-dimensional vector containing scale, position and direction information. Then the pixel coordinates of each feature point in the image are obtained with the pt attribute of the KeyPoint class in OpenCV. The feature point sets in image A and image B are (A_1, A_2, …, A_n) and (B_1, B_2, …, B_m) respectively; the description vectors corresponding to the feature points in image A and image B are A_i = (a_i1, a_i2, …, a_i128) and B_j = (b_j1, b_j2, …, b_j128) respectively.
2. Feature point matching
The feature points of the image A in which the target project is identified in the last frame are matched with those of the image B to be detected. The matching process uses a Kd-tree algorithm to traverse the feature points (A_1, A_2, …, A_n) of image A; for each feature point A_i (i = 1, 2, …, n), the point B_S1 with the closest descriptor distance and the point B_S2 with the second closest descriptor distance are found in the feature point set (B_1, B_2, …, B_m) of image B, the descriptor distances between A_i and B_S1 and between A_i and B_S2 are calculated, and whether the feature point in the reference image and its closest point form a matched pair is judged from the distance ratio.
The vector distance is calculated as:
L(A_i, B_j) = sqrt( Σ_{k=1}^{n} (a_ik − b_jk)² )
where n is the vector dimension, n = 128, and L(A_i, B_j) is the distance between the description vector of feature point A_i and that of feature point B_j.
The criterion for judging that feature points A_i and B_S1 form a matching pair is:
L(A_i, B_S1) / L(A_i, B_S2) ≤ threshold
If the distance ratio satisfies this formula, i.e. is not greater than the set threshold (generally 0.8), the point B_S1 in the image B to be detected and the point A_i in image A are considered a matched pair.
3. Feature point pixel coordinate positioning
Based on the feature point matching step, the set of matching point pairs (M_1, M_2, …, M_t) of the two images, namely the image A of the last frame identifying the target project and the image B to be detected, is obtained, where M_k(A_i, B_j) (k = 1, 2, …, t) is a matching point pair representing the k-th matching pair between image A and image B, and i and j are the indexes of the corresponding feature points in image A and image B. The indexes of the feature points in each matching pair establish the correspondence between the feature points of the two images. The queryIdx and trainIdx attributes of the DMatch class in OpenCV return, for each matching pair, the indexes of the matched feature points in the two images: queryIdx returns the index of the feature point of image A in the matching pair, and trainIdx returns the index of the feature point of image B.
The distance between the pixel coordinates of each feature point in image A and the center coordinates of the target detection frame is calculated, and the feature point A_p closest to the center point of the target detection frame is taken as the construction node in image A. From the index p of feature point A_p and the set of matching point pairs, the correspondence between the feature points of image A and image B is established, and the index q of the corresponding feature point in image B is obtained. The coordinates of feature point B_q in image B are then obtained with the pt attribute of the KeyPoint class; these are the pixel coordinates of the construction node in the image B to be detected.
Step four: construction node space coordinate transformation
Converting the pixel coordinates of the construction nodes into actual work area coordinates, and specifically comprising the following two steps:
4-1 unmanned aerial vehicle camera calibration
Before the unmanned aerial vehicle inspection, the unmanned aerial vehicle camera is calibrated with Zhang Zhengyou's planar calibration method to obtain the internal reference matrix and distortion parameters of the camera.
The longitude and latitude coordinates of the camera are obtained from the RTK positioning of the unmanned aerial vehicle and converted by a coordinate conversion method into the camera's actual work area coordinates in a local East-North-Up (ENU) coordinate system (the origin and axis directions of the selected local ENU coordinate system coincide with the actual work area coordinate system), giving the translation vector of the camera; the camera attitude (yaw angle, pitch angle and roll angle) is obtained from the three-axis accelerometer and three-axis gyroscope of the unmanned aerial vehicle, giving the rotation matrix of the camera. The translation vector and the rotation matrix constitute the external reference matrix of the camera.
4-2 spatial localization based on motion parallax
First, the pixel coordinates of a construction node are located with the methods of steps two and three, two images that contain the construction node but were shot at different times are selected, and the pixel coordinates (u_t, v_t) and (u_{t+Δt}, v_{t+Δt}) of the construction node in the two images are calculated. Then the depth information z_C of the construction node is recovered with the motion parallax method of computer vision, and the actual work area coordinates (x_W, y_W, z_W) of the construction node in the local ENU coordinate system are solved from:
z_C^t · [u_t, v_t, 1]^T = M_t · [x_W, y_W, z_W, 1]^T
z_C^{t+Δt} · [u_{t+Δt}, v_{t+Δt}, 1]^T = M_{t+Δt} · [x_W, y_W, z_W, 1]^T
where M_t and M_{t+Δt} are the camera matrices corresponding to times t and t+Δt, i.e. the products of the internal reference matrix and the corresponding external reference matrices.
Step five: construction progress identification
The actual construction progress of the project is identified from the actual work area coordinates of the current construction node obtained in step four and the information in the plan layout drawing of the long linear project (such as a CAD electronic drawing). First, the project axis in the plan layout is divided into small segments of equal length, the equally spaced segment end points are obtained, and their coordinate data are extracted to Excel. Then the distances between the actual work area coordinates of the construction node and each end point coordinate in the design drawing are calculated and compared, and the end point closest to the spatial coordinates of the construction node is taken as the coordinate point representing the construction progress. Finally, the sequence number of that equally spaced end point is multiplied by the segment length to give the constructed length, which is taken as the estimate of the actual construction progress.
Step six: construction progress analysis
The actual construction progress obtained in step five is compared with the planned progress, and the advance or lag of the actual progress is analyzed. The construction progress analysis covers: the cumulative completion percentage of the project, the construction lead/lag length and percentage, the construction lead/lag time, and the estimated completion time.
The construction progress analysis result can visually reflect the construction progress situation of the engineering site, and assists engineering managers in remotely mastering and judging the actual construction progress situation of the site, so that a basis is provided for engineering decision making.
The invention has the advantages and positive effects that:
the method is scientific and reasonable in design, realizes quick full-coverage detection and intelligent identification of a long linear work area of the project progress by combining unmanned aerial vehicle inspection, and overcomes the defects of low efficiency and untimely information feedback of manual inspection in the traditional progress; by utilizing the image recognition technology, the automatic processing, recognition and analysis of the unmanned aerial vehicle inspection image data are realized, and the problems of low processing efficiency, easiness in interference of human factors and processing result feedback lag during manual processing of the image data are avoided.
Drawings
FIG. 1 is a flow chart of a construction progress identification and analysis method of a long linear project according to the present invention;
FIG. 2 is a flow chart of the construction node pixel coordinate positioning of the present invention;
FIG. 3 is a schematic diagram of a detection result of a target in a construction completion area according to the present invention;
FIG. 4 is a schematic diagram of the location matching of construction nodes in different images based on the SIFT algorithm.
Detailed Description
The present invention is further illustrated by the following specific examples, which are intended to be illustrative, not limiting and are not intended to limit the scope of the invention.
The embodiment of the invention provides an intelligent identification method for the construction progress of long linear engineering based on unmanned aerial vehicle aerial photography, comprising six steps: unmanned aerial vehicle inspection aerial photography, target detection of the construction completion area, construction node pixel coordinate positioning, construction node spatial coordinate conversion, construction progress identification, and construction progress analysis, as shown in figure 1. This example describes the detailed embodiment for lining construction in a long linear river regulation project.
The method comprises the following steps: unmanned aerial vehicle inspection aerial photography
The long linear river regulation project is inspected with an unmanned aerial vehicle, and aerial video images of the project are acquired. To suit the long distances and complex meteorological and geographical conditions of long linear engineering, an unmanned aerial vehicle with long-distance airworthiness (wind resistance, freezing resistance, water resistance and long range) and a Real Time Kinematic (RTK) system is selected; it carries an aerial camera of sufficiently high resolution and storage capacity, inspects along the long linear project, and synchronously stores the acquired aerial video and the corresponding unmanned aerial vehicle data.
Step two: target detection of construction completion area
Target detection of the project construction completion area in the aerial images acquired by the unmanned aerial vehicle comprises the following steps:
2-1. data set Collection and labeling
A sample data set of the river lining construction state is collected in advance from unmanned aerial vehicle aerial images. The samples are selected from images acquired by the unmanned aerial vehicle along different river channels; the selected aerial images cover different viewing angles, engineering states and illumination conditions as far as possible, so as to improve the diversity of the image data samples.
The labelImg annotation tool is used to mark the lining areas in the pictures with rectangular boxes, generating data containing the image name, the lining pixel coordinates and the lining type.
2-2 image data preprocessing
The image data acquired by the unmanned aerial vehicle are preprocessed. First, the images are resized to a uniform size (e.g. 640 × 640 pixels). Then the brightness, contrast, hue, saturation and Gaussian noise of the images are changed, or the images are cropped, flipped, rotated and randomly scaled, yielding augmented images that are used as the data set input to the target detection network.
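As an illustration of the preprocessing described above, the following sketch (assuming OpenCV and NumPy are available; the file path, 640-pixel size and jitter ranges are placeholders, not values prescribed by the invention) resizes one aerial frame and applies a few of the listed augmentations.

```python
import cv2
import numpy as np

def augment(path, out_size=640):
    """Resize one aerial frame and apply simple augmentations (illustrative only)."""
    img = cv2.imread(path)                       # BGR frame from the UAV video
    img = cv2.resize(img, (out_size, out_size))  # uniform input size for the detector

    # brightness / contrast jitter: new = alpha * img + beta
    alpha = np.random.uniform(0.8, 1.2)          # contrast factor
    beta = np.random.uniform(-20, 20)            # brightness offset
    jittered = cv2.convertScaleAbs(img, alpha=alpha, beta=beta)

    # additive Gaussian noise
    noise = np.random.normal(0, 8, img.shape).astype(np.float32)
    noisy = np.clip(jittered.astype(np.float32) + noise, 0, 255).astype(np.uint8)

    # random horizontal flip (box annotations must be flipped accordingly)
    if np.random.rand() < 0.5:
        noisy = cv2.flip(noisy, 1)
    return noisy
```

Hue and saturation changes would be applied in the same way after converting the image to HSV color space.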
2-3, constructing a construction completion area target detection network based on a deep learning algorithm
The deep learning algorithm adopts the YOLOv5 algorithm. The target detection network comprises an input layer, a backbone network, a multi-scale feature fusion network and a prediction layer connected in sequence.
The input layer comprises a Mosaic data enhancement module, an adaptive anchor box calculation module and an adaptive picture scaling module. The Mosaic data enhancement module randomly takes 4 pictures from the data set and splices them by random scaling, random cropping and random arrangement, which enriches the sample data set and enhances the robustness of the network; the adaptive anchor box calculation and adaptive picture scaling modules reduce the amount of inference computation and increase the target detection speed.
The backbone network adopts a CSPDarknet network with the SiLU activation function and is used to extract image features; the multi-scale feature fusion network consists of FPN + PAN modules and fuses the features output by the backbone network, obtaining feature maps with richer semantic information and improving the diversity and robustness of the features; the prediction layer adopts the GIoU loss function to regress the higher-level semantic features and obtain the prediction results.
The SiLU activation function is:
f(x) = x · σ(x) = x / (1 + e^(−x))
The GIoU loss function is:
GIoU = IoU − |C − (A ∪ B)| / |C|
GIoU_Loss = 1 − GIoU
wherein A is a prediction box, B is an actual box, and C is a minimum bounding rectangle containing A and B.
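A minimal numeric sketch of the GIoU loss above, assuming axis-aligned boxes given as (x1, y1, x2, y2); this illustrates the formula only and is not the YOLOv5 implementation itself.

```python
def giou_loss(box_a, box_b):
    """GIoU_Loss = 1 - GIoU for two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a   # prediction box A
    bx1, by1, bx2, by2 = box_b   # ground-truth box B

    # intersection area of A and B
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih

    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union

    # C: smallest enclosing rectangle containing A and B
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    area_c = cw * ch

    giou = iou - (area_c - union) / area_c
    return 1.0 - giou
```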
2-4, training a target detection network to obtain a target detection model of a construction completion area
The construction completion area target detection network is trained with the preprocessed unmanned aerial vehicle aerial images as the sample data set, obtaining the construction completion area target detection model. If a new category of long linear project is added later, the corresponding data set is supplemented and the model is retrained.
2-5, realizing target detection of the construction completion area based on the trained target detection model
The images shot during unmanned aerial vehicle inspection are input into the construction completion area target detection model to identify the construction completion area, namely the lining area, as shown in fig. 3(a)-(c). The threshold of the intersection over union IoU (which measures the overlap of the prediction box and the ground-truth box) is set to 0.5; when IoU is greater than 0.5 an object is considered detected, the prediction results are classified as TP (true positives), FP (false positives), TN (true negatives) and FN (false negatives), and the model performance evaluation indices P (precision), R (recall), AP (average precision) and mAP (mean average precision, i.e. the mean of the AP values of all classes) are calculated.
The precision, recall and AP values are calculated as:
P = TP / (TP + FP)
R = TP / (TP + FN)
AP = ∫₀¹ P(R) dR
in the image that unmanned aerial vehicle gathered along different river courses, 351 river course aerial photograph pictures of this example were selected among them as sample data set. The size of an original image is uniformly adjusted to 640 multiplied by 640 pixels, data expansion is carried out, and a training set and a verification set are divided according to the ratio of 9: 1. And according to different lining types, dividing target detection categories into concrete, masonry and gabion nets. The freezing training is 50epoch, the unfreezing training is 200epoch, and the mAP value of the target detection model obtained by training is 90.11%, as shown in fig. 3(d), the reliability of the target detection model is proved.
Step three: construction node pixel coordinate positioning
The pixel coordinates of the engineering construction node in the aerial image are located; the basic flow is shown in figure 2. This comprises the following two steps:
3-1, determining the position of a construction node according to the image recognition result
The long linear river regulation project to be detected is inspected with the unmanned aerial vehicle to obtain the aerial video, and the video is recognized frame by frame with the construction completion area target detection model of step two. The first frame or the last frame in which the target project (lining) is identified is selected, and the center point of the target detection frame in that image is taken as the construction progress node. Specifically, if no lining is detected from frame 1 to frame (k−1) and lining is detected in frame k, the center point of the target detection frame in the image of frame k is selected as the construction progress node; if lining is detected from frame 1 to frame (t−1) and no lining is detected in frame t, the center point of the target detection frame in the image of frame (t−1) is selected as the construction progress node.
For the image A in which the target project is identified in the last frame, the center coordinates of the target detection frame are calculated from the corner coordinates of the detection frame output in step two, and the pixel coordinates of this point are taken as the pixel coordinates of the construction node. The pixel coordinates (u, v) of the center point of the target detection frame are calculated as:
u = (u_min + u_max) / 2
v = (v_min + v_max) / 2
where (u_min, v_min) and (u_max, v_max) are the pixel coordinates of the upper-left corner and the lower-right corner of the target detection frame, respectively.
3-2, matching the positions of construction nodes in different images and locating the pixel coordinates
For other images containing the construction node besides image A, the scale-invariant feature transform (SIFT) algorithm is used to match and locate the construction node. The specific steps are as follows:
1. feature point extraction and description generation
Feature points in the image B to be detected and in the image A of the last frame identifying the target project are extracted with the built-in SIFT function SIFT.detectAndCompute() of the opencv-contrib-python library, and SIFT descriptors are generated; each descriptor is a 128-dimensional vector containing scale, position and direction information. The pixel coordinates of each feature point are then obtained with the pt attribute of the KeyPoint class in OpenCV. The feature point sets in image A and image B are (A_1, A_2, …, A_n) and (B_1, B_2, …, B_m) respectively, and the feature point extraction results of the two images are shown in fig. 4(a); the description vectors corresponding to the feature points in image A and image B are A_i = (a_i1, a_i2, …, a_i128) and B_j = (b_j1, b_j2, …, b_j128) respectively.
2. Feature point matching
The feature points of image A and image B are matched. In the matching process, the SIFT algorithm uses a Kd-tree algorithm to traverse the feature points (A_1, A_2, …, A_n) of image A; for each feature point A_i (i = 1, 2, …, n), the point B_S1 with the closest descriptor distance and the point B_S2 with the second closest descriptor distance are found in the feature point set (B_1, B_2, …, B_m) of image B, as shown in fig. 4(b). The descriptor distances between A_i and B_S1 and between A_i and B_S2 are calculated, and whether the feature point in the reference image and its closest point form a matched pair is judged from the distance ratio.
The vector distance is calculated as:
L(A_i, B_j) = sqrt( Σ_{k=1}^{n} (a_ik − b_jk)² )
where n is the vector dimension, n = 128, and L(A_i, B_j) is the distance between the description vector of feature point A_i and that of feature point B_j.
The criterion for judging that feature points A_i and B_S1 form a matching pair is:
L(A_i, B_S1) / L(A_i, B_S2) ≤ threshold
If the distance ratio satisfies this formula, i.e. is not greater than the set threshold (0.8 in this embodiment), the point B_S1 in the image B to be detected and the point A_i in image A are considered a matched pair. The matching result of the feature point pairs in the two images is shown in fig. 4(c), where all match lines form the set of matching point pairs (M_1, M_2, …, M_t); each match line corresponds to one matching point pair M_k(A_i, B_j), whose two endpoints are A_i and B_j.
3. Feature point pixel coordinate positioning
Based on the feature point matching step, the set of matching point pairs (M_1, M_2, …, M_t) of the two images, namely the image A of the last frame identifying the target project and the image B to be detected, is obtained, where M_k(A_i, B_j) (k = 1, 2, …, t) is a matching point pair representing the k-th matching pair between image A and image B, and i and j are the indexes of the corresponding feature points in image A and image B. The queryIdx and trainIdx attributes of the DMatch class in OpenCV return, for each matching pair, the indexes of the matched feature points in the two images: queryIdx returns the index of the feature point of image A in the matching pair, and trainIdx returns the index of the feature point of image B.
Then the distance between the pixel coordinates of each feature point in image A and the center coordinates (u, v) of the target detection frame is calculated, and the coordinates (u_A, v_A) of the feature point A_p closest to the center point of the target detection frame are taken as the construction node coordinates in image A. From the index p of feature point A_p and the set of matching point pairs, the correspondence between the feature points of image A and image B is established, and the index q of the corresponding feature point in image B is obtained. The coordinates (u_B, v_B) of feature point B_q in image B are then obtained with the pt attribute of the KeyPoint class; these are the pixel coordinates of the construction node in the image B to be detected.
The mapping relationship of the construction nodes in different images obtained based on the central coordinates (u, v) of the target detection frame is shown in fig. 4 (d).
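Continuing the previous sketch, the construction-node pixel coordinates in image B can be obtained roughly as follows; it assumes the `kp_a`, `kp_b` and `good_matches` variables from the matching sketch and that `box_center` is the detection-box centre (u, v) in image A.

```python
import numpy as np

def locate_node_in_b(kp_a, kp_b, good_matches, box_center):
    """Map the construction node (detection-box centre in image A) into image B."""
    u, v = box_center
    # pixel coordinates of the matched feature points in A and their partners in B
    pts_a = np.array([kp_a[m.queryIdx].pt for m in good_matches])
    pts_b = np.array([kp_b[m.trainIdx].pt for m in good_matches])

    # feature point A_p of image A closest to the box centre (u, v)
    d = np.linalg.norm(pts_a - np.array([u, v]), axis=1)
    p = int(np.argmin(d))

    # its matched point B_q gives the node's pixel coordinates in image B
    return tuple(pts_b[p])
```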
Step four: construction node space coordinate transformation
Converting the pixel coordinates of the construction nodes into actual work area coordinates, and specifically comprising the following two steps:
4-1 unmanned aerial vehicle camera calibration
The mapping relation between the pixel coordinate system and the world coordinate system is shown as follows:
z_C · [u, v, 1]^T = K · [R T] · [x_W, y_W, z_W, 1]^T
where K is the internal reference matrix of the unmanned aerial vehicle camera, formed as the product of the affine transformation matrix (from image coordinates to pixel coordinates) and the perspective projection matrix; [R T] is the external reference matrix of the camera, in which R is the rotation matrix and T is the translation vector; and z_C is the depth, i.e. the value of the target point along the Z axis of the camera coordinate system. The specific calibration methods for the internal reference matrix and the external reference matrix are as follows:
1. internal reference matrix calibration
Before the unmanned aerial vehicle inspection, the camera is calibrated with Zhang Zhengyou's planar calibration method to obtain its internal reference matrix and distortion parameters. First, a checkerboard for Zhang's calibration method is pasted onto a flat board to make a calibration board, the unmanned aerial vehicle camera parameters are set consistent with those used in actual application, and several images of the calibration board are shot from different angles. For each calibration board image, checkerboard corner detection is performed with the OpenCV function cv2.findChessboardCorners() to obtain the pixel coordinates of the corners. The plane of the checkerboard is defined as the XY plane (Z = 0) of the world coordinates, with the origin at the upper-left corner of the checkerboard, and the world coordinates of the checkerboard corners are calculated from the known square size. Based on the pixel coordinates and world coordinates of the checkerboard corners, calibration with the cv2.calibrateCamera() function returns the internal reference matrix and distortion parameters of the unmanned aerial vehicle camera.
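A condensed sketch of that chessboard calibration (OpenCV; the board pattern, square size and image folder are placeholders, not values fixed by this embodiment).

```python
import glob
import cv2
import numpy as np

PATTERN = (9, 6)        # inner corners per chessboard row/column (placeholder)
SQUARE = 0.025          # square edge length in metres (placeholder)

# world coordinates of the corners: Z = 0 on the board plane, origin at the top-left corner
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_points, img_points = [], []
for path in glob.glob("calib/*.jpg"):                      # calibration-board photos
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# returns the internal reference matrix K and the distortion coefficients of the camera
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
```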
2. External reference matrix calibration
(1) Solving the translation vector. The longitude and latitude coordinates of the camera are obtained from the RTK positioning of the unmanned aerial vehicle and converted by a coordinate conversion method into the camera's actual work area coordinates in a local East-North-Up (ENU) coordinate system, giving the translation vector of the camera. The origin and axis directions of the selected local ENU coordinate system coincide with the actual work area coordinate system.
The longitude and latitude coordinates of the unmanned aerial vehicle camera are first converted to the Earth-centered Earth-fixed (ECEF) coordinate system and then to the ENU coordinate system; the position coordinates of the camera in the ENU coordinate system are the translation vector of the camera. The conversion from geodetic coordinates (lat, lon, alt) to a point (x, y, z) in the ECEF coordinate system is:
x = (N + alt) · cos(lat) · cos(lon)
y = (N + alt) · cos(lat) · sin(lon)
z = (N · (1 − e²) + alt) · sin(lat)
where e is the eccentricity of the reference ellipsoid, e² = (a² − b²) / a²; N is the radius of curvature of the reference ellipsoid, N = a / sqrt(1 − e² · sin²(lat)); and a and b are the semi-major and semi-minor axes of the reference ellipsoid. For the WGS84 coordinate system the ellipsoid parameters are a = 6378137 and b = 6356752.3142.
Let the geodetic coordinates of the origin of the ENU coordinate system be (lat_0, lon_0, alt_0) and its ECEF coordinates be (x_0, y_0, z_0). The conversion of any point (x, y, z) in the ECEF coordinate system to the ENU coordinate system is:
[e]   [ −sin(lon_0)               cos(lon_0)               0          ] [x − x_0]
[n] = [ −sin(lat_0)·cos(lon_0)   −sin(lat_0)·sin(lon_0)    cos(lat_0) ] [y − y_0]
[u]   [  cos(lat_0)·cos(lon_0)    cos(lat_0)·sin(lon_0)    sin(lat_0) ] [z − z_0]
The coordinates (e, n, u) of the unmanned aerial vehicle camera in the local ENU coordinate system are calculated with this equation, and the translation vector of the camera is T = [e n u]^T.
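The latitude/longitude to ECEF to ENU chain above can be sketched as follows (WGS84 constants from the text; angles in degrees; the sample coordinates are placeholders, and this is an illustrative implementation rather than the one used on the aircraft).

```python
import numpy as np

A, B = 6378137.0, 6356752.3142          # WGS84 semi-major / semi-minor axes (m)
E2 = (A**2 - B**2) / A**2               # first eccentricity squared

def geodetic_to_ecef(lat, lon, alt):
    lat, lon = np.radians(lat), np.radians(lon)
    n = A / np.sqrt(1.0 - E2 * np.sin(lat)**2)          # prime-vertical radius of curvature N
    x = (n + alt) * np.cos(lat) * np.cos(lon)
    y = (n + alt) * np.cos(lat) * np.sin(lon)
    z = (n * (1.0 - E2) + alt) * np.sin(lat)
    return np.array([x, y, z])

def ecef_to_enu(p, lat0, lon0, alt0):
    """ENU coordinates of ECEF point p relative to the chosen local origin."""
    la, lo = np.radians(lat0), np.radians(lon0)
    s = np.array([
        [-np.sin(lo),               np.cos(lo),              0.0],
        [-np.sin(la) * np.cos(lo), -np.sin(la) * np.sin(lo), np.cos(la)],
        [ np.cos(la) * np.cos(lo),  np.cos(la) * np.sin(lo), np.sin(la)]])
    return s @ (p - geodetic_to_ecef(lat0, lon0, alt0))

# translation vector T of the camera = its ENU coordinates from the RTK fix (placeholder values)
T = ecef_to_enu(geodetic_to_ecef(39.0, 117.0, 15.0), 39.0, 117.0, 10.0)
```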
(2) Solving the rotation matrix. The camera attitude (yaw angle ψ, pitch angle θ and roll angle φ) is obtained from the three-axis accelerometer and three-axis gyroscope of the unmanned aerial vehicle, giving the rotation matrix of the camera. The rotation matrix from the reference coordinate system (ENU coordinate system) to the body coordinate system is composed of the elementary rotations about the yaw, pitch and roll axes:
R = R_x(φ) · R_y(θ) · R_z(ψ)
through the translation vector T and the rotation matrix R of the unmanned aerial vehicle camera, the external parameter matrix of the camera can be obtained.
4-2 spatial localization based on motion parallax
The depth value z_C is solved with the motion parallax method of computer vision. First, the construction node is identified and its pixel coordinates are located with the methods of steps two and three, two images that contain the construction node but were shot at different times are selected, and the pixel coordinates (u_t, v_t) and (u_{t+Δt}, v_{t+Δt}) of the construction node in the two images are calculated. Then the depth information of the construction node is recovered from the triangular relation, and the actual work area coordinates (x_W, y_W, z_W) of the construction node in the local ENU coordinate system are solved from:
z_C^t · [u_t, v_t, 1]^T = M_t · [x_W, y_W, z_W, 1]^T
z_C^{t+Δt} · [u_{t+Δt}, v_{t+Δt}, 1]^T = M_{t+Δt} · [x_W, y_W, z_W, 1]^T
where M_t and M_{t+Δt} are the camera matrices corresponding to times t and t+Δt, i.e. the products of the internal reference matrix and the corresponding external reference matrices, M = K · [R T].
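Solving the two projection relations above for the work-area coordinates can be sketched as a homogeneous linear least-squares problem (a standard DLT triangulation, used here as an illustration of how the system may be solved; M1, M2 are the 3×4 camera matrices K·[R|t] at the two shooting times).

```python
import numpy as np

def triangulate(M1, uv1, M2, uv2):
    """Recover (x_W, y_W, z_W) of the construction node from two views.

    Each view contributes two linear equations: u*(m3·X) - m1·X = 0 and v*(m3·X) - m2·X = 0,
    where m1, m2, m3 are the rows of the 3x4 camera matrix and X is homogeneous."""
    rows = []
    for M, (u, v) in ((M1, uv1), (M2, uv2)):
        rows.append(u * M[2] - M[0])
        rows.append(v * M[2] - M[1])
    A = np.stack(rows)
    # homogeneous least squares: X is the right singular vector of the smallest singular value
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```

OpenCV's cv2.triangulatePoints() provides an equivalent routine for the same two-view computation.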
step five: construction progress identification
The actual construction progress of the project is identified from the construction node coordinates obtained in step four and the information in the plan layout drawing of the long linear project (a CAD plan layout electronic drawing is used in this embodiment).
First, the axis of the long linear project in the plan layout is divided in AutoCAD software into small segments of equal length (0.1 m in this embodiment), the equally spaced segment end points are obtained, and their Position X and Position Y coordinate data are exported to Excel with the data extraction command. Then the distances between the world coordinates of the construction node obtained in step four and all end point coordinates in the design drawing are calculated and compared, and the end point closest to the spatial coordinates of the construction node is taken as the coordinate point representing the construction progress.
The sequence number of the equally spaced end point corresponding to that point is multiplied by the segment length (0.1 m) to give the constructed length, which is taken as the estimate of the actual construction progress.
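The nearest-endpoint lookup described above reduces to the following sketch; the axis end points exported from the CAD drawing are assumed to be loaded as an array in chainage order, and 0.1 m is the segment length of this embodiment.

```python
import numpy as np

SEG_LEN = 0.1   # equal-division length along the design axis (m), as in this embodiment

def constructed_length(node_enu, axis_points):
    """Estimate the constructed length from the node's work-area coordinates.

    axis_points : (N, 3) array of the equally spaced axis end points, in chainage order,
    exported from the CAD plan layout (e.g. via the Excel extraction described above)."""
    d = np.linalg.norm(axis_points - np.asarray(node_enu), axis=1)
    idx = int(np.argmin(d))          # index of the closest axis end point
    return idx * SEG_LEN             # constructed length = end point index * segment length
```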
Step six: construction progress analysis
The actual construction progress obtained in step five is compared with the planned progress, and the advance or lag of the actual progress is analyzed. The construction progress analysis results are updated after each unmanned aerial vehicle inspection. The analysis covers the cumulative completion percentage of the project, the construction lead/lag length and percentage, the construction lead/lag time, and the estimated completion time, which are presented graphically.
The concrete calculation and analysis method of the construction progress comprises the following steps:
1. The construction progress percentage (completion percentage) is calculated as:
completion percentage = L / L_total × 100%
where L is the identified actual construction length and L_total is the total designed construction length.
2. The construction lead/lag length and percentage are calculated as:
construction lead/lag length = L − L_0
construction lead/lag percentage = (L − L_0) / L_total × 100%
where L_0 is the planned construction length corresponding to the acquisition time T of the unmanned aerial vehicle inspection image, the acquisition time being the time corresponding to the first or last frame identifying the target project in step three. If the actual construction length is greater than the planned construction length, i.e. L > L_0, the current construction progress is ahead of schedule; otherwise it lags.
In the planned construction information the relation between construction length and time is discrete, so L_0 is determined approximately by interpolation:
L_0 = L_1 + (L_2 − L_1) · (T − T_1) / (T_2 − T_1)
where T_1 and L_1 are the planned construction time and length immediately preceding the unmanned aerial vehicle inspection image acquisition time T in the planned construction information, and T_2 and L_2 are the planned construction time and length immediately following it.
3. The construction lead/lag time is calculated as:
construction lead/lag time = T_0 − T
where T is the unmanned aerial vehicle inspection image acquisition time and T_0 is the planned construction time corresponding to the actual construction length L. If the inspection image acquisition time is earlier than the planned construction time, i.e. T < T_0, the current construction progress is ahead of schedule; otherwise it lags.
As above, T_0 is calculated by interpolation:
T_0 = T_1 + (T_2 − T_1) · (L − L_1) / (L_2 − L_1)
where L_1 and T_1 are the planned construction length and time immediately preceding the actual construction length L in the planned construction information, and L_2 and T_2 are the planned construction length and time immediately following it.
4. The estimated completion time is calculated as:
estimated completion time = T + (L_total − L) / V
where V is the construction speed, V = L / t, i.e. the ratio of the identified actual construction length L to the construction time t already used, so that (L_total − L) / V is the remaining construction time.
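The analysis quantities above can be combined into one short routine; this is a sketch only, and the planned-schedule arrays, the time units (days) and the use of linear interpolation for the plan data are assumptions about how the schedule is stored.

```python
import numpy as np

def progress_report(L, T, t_elapsed, L_total, plan_t, plan_len):
    """Cumulative completion, lead/lag length and time, and estimated completion time.

    plan_t, plan_len : planned schedule as parallel increasing arrays (time, planned length)
    L : identified actual length, T : inspection time, t_elapsed : construction time so far."""
    pct_complete = L / L_total * 100.0

    L0 = np.interp(T, plan_t, plan_len)          # planned length at inspection time T
    lead_len = L - L0                            # > 0 means ahead of schedule
    lead_pct = lead_len / L_total * 100.0

    T0 = np.interp(L, plan_len, plan_t)          # planned time for the actual length L
    lead_time = T0 - T                           # > 0 means ahead of schedule

    v = L / t_elapsed                            # construction speed V = L / t
    est_completion = T + (L_total - L) / v       # estimated completion time

    return pct_complete, lead_len, lead_pct, lead_time, est_completion
```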
The construction progress analysis result can visually reflect the construction progress of the engineering site, and assists engineering managers in remotely mastering and judging the actual construction progress condition of the site, so that a basis is provided for engineering decision making.
Although the embodiments of the present invention have been disclosed for illustrative purposes, those skilled in the art will appreciate that: various substitutions, changes and modifications are possible without departing from the spirit and scope of the invention and the appended claims, and therefore the scope of the invention is not limited to the embodiments disclosed.

Claims (3)

1. An intelligent identification method for long linear engineering construction progress based on unmanned aerial vehicle aerial photography comprises the following steps:
step one, unmanned aerial vehicle patrols and examines aerial photograph
The method comprises the steps that an unmanned aerial vehicle is used for polling long linear projects, long linear project aerial video images are collected, the unmanned aerial vehicle is provided with a high-resolution aerial camera and a high-precision real-time differential positioning system and is used for collecting coordinates of the unmanned aerial vehicle and shooting images in real time, and collected image data and corresponding coordinate information of the unmanned aerial vehicle are synchronously stored and transmitted;
step two, detecting the target of the construction completion area
Target detection of the project construction completion area in the aerial images acquired by the unmanned aerial vehicle comprises the following steps:
(1) collecting and labeling a data set, collecting and constructing a sample data set of a corresponding engineering type in advance aiming at different types of long linear engineering by using an unmanned aerial vehicle aerial image, and labeling a target area;
(2) image data preprocessing: the image data acquired by the unmanned aerial vehicle are enhanced to improve the precision and robustness of the target detection model, the data enhancement methods comprising changing the brightness, contrast, hue, saturation and Gaussian noise of the image data, and cropping, flipping, rotating and randomly scaling the images;
(3) constructing a construction completion area target detection network based on a deep learning algorithm;
(4) training a target detection network to obtain a target detection model of a construction completion area;
(5) based on the trained target detection model, realizing target detection of the construction completion area, and outputting the angular point coordinate values of the target detection frame;
thirdly, positioning the pixel coordinates of the construction nodes
The method specifically comprises the following two steps of positioning the pixel coordinates of the engineering construction node in the aerial photography image:
(1) determining the position of a construction node according to the image recognition result
The unmanned aerial vehicle inspection aerial video is recognized with the method of step two, the last frame in which the target project is identified is selected, and the center point of the target detection frame in that image is taken as the construction progress node; for the image A in which the target project is identified in the last frame, the pixel coordinates (u_min, v_min) of the upper-left corner and (u_max, v_max) of the lower-right corner of the detection frame output in step two are used to calculate the center coordinates of the target detection frame, where u is the average of u_min and u_max and v is the average of v_min and v_max, and the pixel coordinates (u, v) of the center point of the target detection frame are taken as the pixel coordinates of the construction node;
(2) position matching and pixel coordinate positioning of construction nodes in different images
For other images containing construction nodes in the image A, matching and positioning the construction nodes by adopting an SIFT scale invariant feature transform algorithm;
step four, converting the space coordinates of the construction nodes
Converting the pixel coordinates of the construction nodes into actual work area coordinates, and specifically comprising the following two steps:
(1) unmanned aerial vehicle camera calibration
Before the unmanned aerial vehicle inspection, the unmanned aerial vehicle camera is calibrated with Zhang Zhengyou's planar calibration method to obtain the internal reference matrix and distortion parameters of the camera; the longitude and latitude coordinates of the camera are obtained from the RTK positioning of the unmanned aerial vehicle and converted by a coordinate conversion method into the camera's actual work area coordinates in a local East-North-Up (ENU) coordinate system, the origin and axis directions of the selected local ENU coordinate system being consistent with the actual work area coordinate system, thereby obtaining the translation vector of the camera; the camera attitude (yaw angle, pitch angle and roll angle) is obtained from the three-axis accelerometer and three-axis gyroscope of the unmanned aerial vehicle, thereby obtaining the rotation matrix of the camera; and the translation vector and the rotation matrix constitute the external reference matrix of the camera;
(2) motion parallax based spatial localization
First, the pixel coordinates of the construction node are located with the methods of steps two and three, two images that contain the construction node but were shot at different times are selected, and the pixel coordinates (u_t, v_t) and (u_{t+Δt}, v_{t+Δt}) of the construction node in the two images are calculated; then the depth information z_C of the construction node is recovered with the motion parallax method of computer vision, and the actual work area coordinates (x_W, y_W, z_W) of the construction node in the local ENU coordinate system are solved from:
z_C^t · [u_t, v_t, 1]^T = M_t · [x_W, y_W, z_W, 1]^T
z_C^{t+Δt} · [u_{t+Δt}, v_{t+Δt}, 1]^T = M_{t+Δt} · [x_W, y_W, z_W, 1]^T
where M_t and M_{t+Δt} are the camera matrices corresponding to times t and t+Δt, i.e. the products of the internal reference matrix and the corresponding external reference matrices;
step five, identifying the construction progress
Identifying the actual construction progress of the project from the actual work area coordinates of the current construction node and the information in the plan layout CAD drawing of the long linear project: first, the project axis in the plan layout is divided into small segments of equal length, the equally spaced segment end points are obtained, and the coordinate data of the end points are extracted to Excel; then the distances between the actual work area coordinates of the construction node and each end point coordinate in the design drawing are calculated and compared, and the end point closest to the spatial coordinates of the construction node is taken as the coordinate point representing the construction progress; finally, the sequence number of the equally spaced end point corresponding to that point is multiplied by the segment length to give the constructed length, which is taken as the estimate of the actual construction progress;
step six, analyzing the construction progress
And D, comparing the actual construction progress obtained in the step five with the planned progress, analyzing the advance or delay state of the actual progress, and drawing and representing.
2. The intelligent identification method for the construction progress of the long linear project based on the aerial photography of the unmanned aerial vehicle as claimed in claim 1, wherein: the specific method for matching and positioning the construction nodes by the SIFT scale invariant feature transform algorithm comprises the following steps:
(1) feature point extraction and description generation
Firstly, feature points in the image B to be detected and in the image A of the last frame identifying the target project are extracted with the SIFT built-in function SIFT.detectAndCompute() and SIFT descriptors are generated, each descriptor being a 128-dimensional vector containing scale, position and direction information; then the pixel coordinates of each feature point in the image are obtained with the pt attribute of the KeyPoint class in OpenCV, and the feature point sets (A_1, A_2, …, A_n) and (B_1, B_2, …, B_m) of image A and image B are extracted, the description vectors corresponding to the feature points in image A and image B being A_i = (a_i1, a_i2, …, a_i128) and B_j = (b_j1, b_j2, …, b_j128) respectively;
(2) Feature point matching
Matching the image A of the target project identified in the last frame with the characteristic points of the image B to be detected, traversing the characteristic points (A) in the image A by adopting a Kd-tree algorithm in the matching process 1 ,A 2 ,…,A n ) For each feature point A in the image A i (i is 1,2, …, n), and in the feature point set (B) of image B 1 ,B 2 ,…,B m ) To find out and A i Point B of closest vector distance S1 Point B next closest to the vector S2 Respectively calculate the feature points A i 、B S1 And A i 、B S2 The vector distance between the characteristic points and the point closest to the characteristic point in the reference image is judged whether to be a matching pair or not through the distance ratio,
the vector distance is calculated as follows:
L(A_i, B_j) = sqrt( Σ_{k=1}^{n} (a_ik − b_jk)² )
where n is the vector dimension, n = 128, and L(A_i, B_j) is the distance between the description vector of feature point A_i and the description vector of feature point B_j.
The formula for judging whether feature points A_i and B_S1 form a matching pair is:
L(A_i, B_S1) / L(A_i, B_S2) ≤ T
If the distance ratio satisfies the above formula, that is, it is not greater than the set threshold T, the point B_S1 in the image B to be detected and the point A_i in image A are considered to be a matching pair;
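A sketch of sub-step (2), continuing from the descriptors des_a and des_b of the previous sketch; OpenCV's FLANN matcher is used here as one common Kd-tree implementation, and the 0.7 ratio threshold is illustrative rather than the value fixed by the claim:

import cv2

# FLANN with a Kd-tree index performs the nearest/second-nearest search described above
flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))

RATIO = 0.7                                   # illustrative threshold T
good = []
for pair in flann.knnMatch(des_a, des_b, k=2):        # B_S1 and B_S2 for each A_i
    if len(pair) == 2 and pair[0].distance <= RATIO * pair[1].distance:
        good.append(pair[0])                           # keep (A_i, B_S1) as a matching pair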
(3) Feature point pixel coordinate positioning
Based on the feature point matching step, the set of matching point pairs (M_1, M_2, …, M_t) between the two images, namely image A (the last frame in which the target project was identified) and the image B to be detected, is obtained,
where M_k(A_i, B_j) (k = 1, 2, …, t) denotes the k-th matching point pair between image A and image B, and i and j are the indexes of the corresponding feature points of that pair in image A and image B respectively; the indexes of the feature points in each matching pair establish the correspondence between the feature points of the two images. The queryIdx and trainIdx attributes of the DMatch class in OpenCV return the indexes of the two matched feature points in a matching pair: queryIdx returns the index of the feature point of image A, and trainIdx returns the index of the feature point of image B,
The distance between the pixel coordinates of each feature point in image A and the center coordinates of the target detection frame is calculated, and the feature point A_p closest to the center of the detection frame is taken as the construction node coordinate in image A. From the index p of feature point A_p and the matching point pair set, the correspondence between feature points of image A and image B yields the index q of the corresponding feature point in image B; the coordinates of feature point B_q in image B, obtained via the KeyPoint.pt attribute, are then the construction node pixel coordinates of the image B to be detected.
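A sketch of sub-step (3), continuing the keypoints and matches of the previous sketches; box_center stands for the centre of the target detection frame in image A and is an assumed input here:

import numpy as np

def locate_node_in_b(kp_a, kp_b, good, box_center):
    """Transfer the construction-node pixel coordinate from image A to image B.

    kp_a, kp_b : keypoints of image A and image B
    good       : list of cv2.DMatch kept after the ratio test
                 (queryIdx indexes kp_a, trainIdx indexes kp_b)
    box_center : (u, v) centre of the target detection frame in image A
    """
    # Among the matched points, find the feature point of A closest to the frame centre
    centers = np.array([kp_a[m.queryIdx].pt for m in good])
    k = int(np.argmin(np.linalg.norm(centers - np.asarray(box_center), axis=1)))
    # Its counterpart in B gives the construction node pixel coordinate in image B
    return kp_b[good[k].trainIdx].pt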
3. The intelligent identification method for the construction progress of the long linear project based on unmanned aerial vehicle aerial photography as claimed in claim 1, wherein the construction progress analysis comprises: the accumulated completion percentage of the project, the construction lead/lag length and percentage, the construction lead/lag time, and the estimated completion time.
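A hedged sketch of how the quantities listed in claim 3 could be derived, assuming the total design length and a planned construction rate are known; every name below is illustrative rather than taken from the patent:

def progress_analysis(actual_len, planned_len, total_len, rate_per_day, elapsed_days):
    """Derive the analysis quantities listed in claim 3.

    actual_len   : constructed length identified from the UAV images (m)
    planned_len  : length planned to be complete at the same date (m)
    total_len    : total design length of the long linear project (m)
    rate_per_day : planned construction rate (m/day)
    elapsed_days : days since construction started
    """
    completion_pct = 100.0 * actual_len / total_len           # accumulated completion percentage
    lead_lag_len = actual_len - planned_len                    # >0 ahead, <0 behind (m)
    lead_lag_pct = 100.0 * lead_lag_len / total_len            # lead/lag percentage
    lead_lag_days = lead_lag_len / rate_per_day                # construction lead/lag time (days)
    est_completion_days = elapsed_days + (total_len - actual_len) / rate_per_day
    return completion_pct, lead_lag_len, lead_lag_pct, lead_lag_days, est_completion_days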
CN202210682673.4A 2022-06-16 2022-06-16 Long linear engineering construction progress intelligent identification and analysis method based on unmanned aerial vehicle aerial photography Pending CN115115859A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210682673.4A CN115115859A (en) 2022-06-16 2022-06-16 Long linear engineering construction progress intelligent identification and analysis method based on unmanned aerial vehicle aerial photography

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210682673.4A CN115115859A (en) 2022-06-16 2022-06-16 Long linear engineering construction progress intelligent identification and analysis method based on unmanned aerial vehicle aerial photography

Publications (1)

Publication Number Publication Date
CN115115859A true CN115115859A (en) 2022-09-27

Family

ID=83328476

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210682673.4A Pending CN115115859A (en) 2022-06-16 2022-06-16 Long linear engineering construction progress intelligent identification and analysis method based on unmanned aerial vehicle aerial photography

Country Status (1)

Country Link
CN (1) CN115115859A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI802514B (en) * 2022-10-07 2023-05-11 國立中興大學 Processing method of target identification for unmanned aerial vehicle (uav)
CN116301055A (en) * 2023-04-25 2023-06-23 西安玖安科技有限公司 Unmanned aerial vehicle inspection method and system based on building construction
CN117367425A (en) * 2023-09-18 2024-01-09 广州里工实业有限公司 Mobile robot positioning method and system based on multi-camera fusion
CN117367425B (en) * 2023-09-18 2024-05-28 广州里工实业有限公司 Mobile robot positioning method and system based on multi-camera fusion
CN117037075A (en) * 2023-10-08 2023-11-10 深圳市金众工程检验检测有限公司 Engineering detection method and system based on image processing
CN117115414A (en) * 2023-10-23 2023-11-24 西安羚控电子科技有限公司 GPS-free unmanned aerial vehicle positioning method and device based on deep learning
CN117115414B (en) * 2023-10-23 2024-02-23 西安羚控电子科技有限公司 GPS-free unmanned aerial vehicle positioning method and device based on deep learning

Similar Documents

Publication Publication Date Title
Toft et al. Long-term visual localization revisited
CN111462329B (en) Three-dimensional reconstruction method of unmanned aerial vehicle aerial image based on deep learning
US11443444B2 (en) Interior photographic documentation of architectural and industrial environments using 360 panoramic videos
CN110956651B (en) Terrain semantic perception method based on fusion of vision and vibrotactile sense
CN115115859A (en) Long linear engineering construction progress intelligent identification and analysis method based on unmanned aerial vehicle aerial photography
Fathi et al. Automated as-built 3D reconstruction of civil infrastructure using computer vision: Achievements, opportunities, and challenges
CN111340797A (en) Laser radar and binocular camera data fusion detection method and system
CN111429514A (en) Laser radar 3D real-time target detection method fusing multi-frame time sequence point clouds
CN112734765B (en) Mobile robot positioning method, system and medium based on fusion of instance segmentation and multiple sensors
CN106529538A (en) Method and device for positioning aircraft
CN107967457A (en) A kind of place identification for adapting to visual signature change and relative positioning method and system
CN109596121B (en) Automatic target detection and space positioning method for mobile station
CN111856963A (en) Parking simulation method and device based on vehicle-mounted looking-around system
CN113095152B (en) Regression-based lane line detection method and system
CN108648194A (en) Based on the segmentation of CAD model Three-dimensional target recognition and pose measuring method and device
CN111862673A (en) Parking lot vehicle self-positioning and map construction method based on top view
CN116258817B (en) Automatic driving digital twin scene construction method and system based on multi-view three-dimensional reconstruction
CN113221647A (en) 6D pose estimation method fusing point cloud local features
CN111998862A (en) Dense binocular SLAM method based on BNN
CN115861619A (en) Airborne LiDAR (light detection and ranging) urban point cloud semantic segmentation method and system of recursive residual double-attention kernel point convolution network
CN111860651A (en) Monocular vision-based semi-dense map construction method for mobile robot
CN115049821A (en) Three-dimensional environment target detection method based on multi-sensor fusion
CN113192200A (en) Method for constructing urban real scene three-dimensional model based on space-three parallel computing algorithm
CN115032648A (en) Three-dimensional target identification and positioning method based on laser radar dense point cloud
CN114429435A (en) Wide-field-of-view range target searching device, system and method in degraded visual environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination