CN115908706B - High-speed railway completion acceptance method with fusion of live three-dimensional model and image - Google Patents
High-speed railway completion acceptance method with fusion of live three-dimensional model and image
- Publication number: CN115908706B
- Application number: CN202211425950.XA
- Authority
- CN
- China
- Prior art keywords
- image
- live
- projection
- model
- action
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Abstract
The invention discloses a high-speed railway completion acceptance method that fuses a live-action three-dimensional model with imagery, comprising the following steps: acquire imagery and POS data and process them to obtain undistorted image data and interior and exterior orientation elements; perform live-action three-dimensional reconstruction and compute a linear priority matrix; search for and extract the best-matching image using the linear priority matrix; model the projection of the best image in the three-dimensional scene to obtain a projection model; establish an image texture mapping and project the image by levels to achieve data visualization and fused interaction. Extracting the best-matching image with the linear priority matrix, modelling it, and then projecting the image level by level through the established texture mapping largely eliminates visual-interpretation error and avoids texture smearing, holes and similar bottlenecks in the model.
Description
Technical Field
The invention relates to the technical field of intelligent rail transit, and in particular to a high-speed railway completion acceptance method that fuses a live-action three-dimensional model with imagery.
Background
Completion acceptance of a high-speed railway involves many disciplines and topics, and the workload is heavy. The conventional approach is for professional technicians to carry out field survey work according to the relevant technical specifications and to compile the issues requiring rectification. With the advent of high-spatial-resolution unmanned-aerial-vehicle imagery and live-action three-dimensional modelling technology, office-based interpretation and analysis for visual-inspection acceptance has become feasible.
When a live-action three-dimensional model is applied to completion acceptance, texture smearing and hole artifacts are a bottleneck: the model alone cannot be used directly for visual-inspection acceptance, and visual-interpretation errors are introduced. This application therefore studies a high-speed railway completion acceptance method that fuses the live-action three-dimensional model with imagery.
Disclosure of Invention
Accordingly, the object of the invention is to provide a high-speed railway completion acceptance method that fuses a live-action three-dimensional model with imagery. A linear priority matrix is used to extract the best-matching image; the image is projection-modelled at the field-of-view observation point and texture-mapped; hierarchical images are then fused interactively level by level. This largely eliminates visual-interpretation error, avoids texture smearing, holes and similar model defects, achieves high-quality fusion of the three-dimensional result model with the imagery, and solves the problems that the live-action model alone looks unrealistic while a stand-alone image has no spatial reference.
To achieve this, the invention provides a high-speed railway completion acceptance method fusing a live-action three-dimensional model with imagery, comprising the following steps:
S1, acquire imagery and POS data, and perform aerial-triangulation processing to obtain undistorted image data and accurate interior and exterior orientation elements;
S2, perform live-action three-dimensional reconstruction by oblique modelling on the basis of the aerial triangulation to obtain a live-action three-dimensional model; obtain the observation position, attitude and exterior orientation elements in the live-action three-dimensional scene; compute the linear priority matrix; and extract the best-matching image with it;
S3, model the projection of the best image in the live-action three-dimensional scene to obtain a projection model, and compute the corner points of the projection model;
S4, form a projection plane from the corner points of the projection model, construct the enclosing polyhedron, and texture-map the image onto it;
S5, using the current field-of-view observation point from S1, the projection-model corner points from S3 and the texture mapping from S4, slice the original image hierarchically and then perform data visualization and fused interaction; whenever the current field-of-view observation point changes, screen the corresponding images and load their mappings synchronously;
S6, perform visual judgement and geometric dimension measurement of the completed works with the established live-action three-dimensional model, and output the identified and measured defect information as a templated whole.
Further, preferably, in S1 the interior orientation elements comprise the camera focal length, pixel size, image width, image height, and the abscissa and ordinate of the principal point; the exterior orientation elements comprise line elements and angle elements; the line elements are the exposure-point abscissa, ordinate and elevation; the angle elements are the pitch, roll and yaw angles.
Further, preferably, in S2, the computation of the linear priority matrix in the live-action three-dimensional scene comprises the following steps:
S201, perform live-action three-dimensional reconstruction by oblique modelling on the basis of the aerial triangulation to obtain a live-action three-dimensional model in OSGB format;
S202, perform top-level rebuilding, texture compression and format conversion on the OSGB oblique model to obtain live-action model data in 3dtiles format;
S203, obtain the observation position point and exterior orientation elements of the current scene in the visual interface of the live-action three-dimensional scene, and compute the Euclidean distance between the observation position point and the line elements of each exterior orientation, yielding a distance matrix A;
S204, sort the distance matrix A and form the first 10% of it into the linear priority matrix B.
Further, preferably, in S2, the extraction of the best-matching image using the linear priority matrix comprises the following steps:
S205, form a matrix C from the angle elements of the exterior orientation at the current field-of-view observation position point, and matrices B_j from the angle elements corresponding to the priority matrix B; compute the correlation coefficient r_j between C and each B_j, yielding a correlation-coefficient matrix R;
S206, sort the correlation-coefficient matrix R and select the image corresponding to the largest correlation coefficient as the best-matching image.
Further, preferably, in S3, the modelling of the best-image projection in the live-action three-dimensional scene to obtain a projection model comprises the following steps:
S301, compute the ground-projection abscissa and ordinate of the projection centre D from the interior and exterior orientation elements of the best-matching image;
S302, take the average elevation of the current field of view as the initial elevation value and iterate with the updated elevation to obtain the three-dimensional projection coordinates (X_C, Y_C, Z_C) of the image centre;
S303, starting from the obtained (X_C, Y_C, Z_C), iterate with the elevation values in the same way to compute, in turn, the corner projection coordinates of the upper-left, lower-left, upper-right and lower-right corners of the image.
Further, preferably, in S4, the formation of the projection plane from the corner points of the projection model, the construction of the enclosing polyhedron and the texture mapping of the photo comprise the following steps:
take the corner points of the projection model as the bottom face of the enclosing polyhedron;
set a reference plane at the distance from the highest point of the live-action model to the bottom face;
rotate the reference plane about one of its edges by θ degrees to obtain the corner projection plane, wherein θ = 90° − Phi and Phi is the camera pitch angle;
map the photo onto the corner projection plane as its texture.
Further, preferably, the hierarchical slicing of the original image comprises the following steps:
block the original image and resample its resolution with a binary-tree grading method;
place the slice data of each level produced by the binary-tree grading under its own folder;
name each folder with the image name plus the level number.
Further, preferably, in S5, for data visualization and fused interaction, the live-action three-dimensional model data are loaded with a WebGL open-source framework, the spatial polygon of the image-loading region is computed on the live-action three-dimensional scene, and the live-action model inside the loaded image region is hidden from display;
the image is mapped onto the live-action three-dimensional scene as the texture of the enclosing polyhedron, using the four image corner coordinates computed in S3 and the enclosing polyhedron constructed in S4;
changes of observation position, angle and attitude in the live-action three-dimensional model are taken as the events that trigger image screening and loading, and each viewpoint change performs one round of image screening and image-mapping loading.
Further, preferably, in S6, the visual identification of the completed works with the built three-dimensional model comprises: picking a point at the corresponding position of the three-dimensional model to obtain the spatial coordinates of the defect position as Point_QX_i(X, Y, Z);
interpolating the mileage stake coordinate table to obtain a detailed mileage coordinate index table;
computing the Euclidean distance D_i between the defect point and each entry of the mileage coordinate index table to obtain a distance matrix [D_i], sorting the distance matrix, and taking the mileage of the closest coordinate as the mileage of the defect position;
building a templated form with mileage, section, coordinate information, position description, problem description, photos and remarks as the header, and outputting a defect report.
Compared with the prior art, the high-speed railway completion acceptance method fusing a live-action three-dimensional model with imagery has at least the following advantages:
1. The linear priority matrix is used to extract the best-matching image, which is projection-modelled at the field-of-view observation point; the resulting texture mapping gives the otherwise reference-free original image a spatial reference. This largely eliminates visual-interpretation error, avoids texture smearing, holes and similar model defects, achieves high-quality fusion of the three-dimensional result model with the imagery, and solves the technical problem that a live-action three-dimensional model cannot be applied directly to engineering completion acceptance.
2. The method realises railway appearance-defect detection, engineering geometric-attribute extraction, and thematic analysis and demonstration based on the fused scene, reforms the current practice that appearance acceptance of railway works must be carried out on site, and greatly improves completion-acceptance efficiency.
Drawings
Fig. 1 is a flow chart of the high-speed railway completion acceptance method fusing a live-action three-dimensional model and imagery.
Fig. 2 shows the live-action three-dimensional reconstruction effect of the present application.
Fig. 3 shows the effect of fusing the three-dimensional model with imagery in the present application.
Fig. 4 is a schematic diagram of arch width measurement and slope gradient analysis in the fusion-scene-based dimension measurement of the present application.
Fig. 5 is a schematic diagram of sound-barrier height and position measurement in the fusion-scene-based dimension measurement of the present application.
FIG. 6 is a schematic diagram of model-based defect review in the defect detection and close-out analysis of the present application.
FIG. 7 is a schematic diagram of another defect inspection in the defect detection and close-out analysis of the present application.
FIG. 8 is a schematic diagram of visual inspection of a sound barrier under imagery in the defect detection and close-out analysis of the present application.
Fig. 9 is a schematic diagram of the construction of the enclosing polyhedron of the present application.
Fig. 10 is a schematic flow chart of the hierarchical slicing of the original image in the present application.
Detailed Description
The invention is described in further detail below with reference to the drawings and specific embodiments.
As shown in fig. 1, the high-speed railway completion acceptance method fusing a live-action three-dimensional model and imagery provided by an embodiment of the invention comprises the following steps:
S1, acquire imagery and POS data, and perform aerial-triangulation processing to obtain undistorted image data and accurate interior and exterior orientation elements;
S2, perform live-action three-dimensional reconstruction by oblique modelling on the basis of the aerial triangulation to obtain a live-action three-dimensional model; obtain the observation position, attitude and exterior orientation elements in the live-action three-dimensional scene; compute the linear priority matrix; and extract the best-matching image with it;
S3, model the projection of the best image in the live-action three-dimensional scene to obtain a projection model, and compute the corner points of the projection model;
S4, form a projection plane from the corner points of the projection model, construct the enclosing polyhedron, and texture-map the image onto it;
S5, using the current field-of-view observation point from S1, the projection-model corner points from S3 and the texture mapping from S4, slice the original image hierarchically and then perform data visualization and fused interaction; whenever the current field-of-view observation point changes, screen the corresponding images and load their mappings synchronously;
S6, perform visual judgement and geometric dimension measurement of the completed works with the established live-action three-dimensional model, and output the identified and measured defect information as a templated whole.
In a specific embodiment, in S1, the interior orientation elements comprise the camera focal length, pixel size, image width, image height, and the abscissa and ordinate of the principal point; the exterior orientation elements comprise line elements and angle elements; the line elements are the exposure-point abscissa, ordinate and elevation; the angle elements are the pitch, roll and yaw angles.
For a given device the interior orientation elements are the same for all images, while each image has its own exterior orientation elements. Regional imagery (generally in jpg or tiff format) and POS data (the geospatial position at the moment each photo was taken) are collected in the field, and accurate interior and exterior orientation elements together with undistorted image data are obtained by aerial triangulation of the input data and camera parameters. The interior orientation elements are the camera parameters, chiefly: focal length F, pixel size P, image width W, image height H, and principal point position. The exterior orientation elements comprise the precise object-space coordinates and precise angle elements. The interior orientation elements obtained by camera calibration are shown in Table 1; the exterior orientation elements of the photos, obtained from the field acquisition, are shown in Table 2, where i = 0 to count and count is the total number of photos in the job.
TABLE 1 Interior orientation elements

| Camera focal length | Pixel size | Image width | Image height | Principal point abscissa | Principal point ordinate |
| --- | --- | --- | --- | --- | --- |
| F | P | W | H | x | y |

TABLE 2 Exterior orientation elements

| Exposure-point abscissa | Exposure-point ordinate | Exposure-point elevation | Pitch | Roll | Yaw |
| --- | --- | --- | --- | --- | --- |
| X_i | Y_i | Z_i | Phi_i | Omega_i | Kappa_i |
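As an illustration only (class and field names are assumptions, not from the patent), the two element sets of tables 1 and 2 can be held in simple containers:

```python
from dataclasses import dataclass

@dataclass
class InteriorOrientation:
    # One set per camera (Table 1): focal length F, pixel size P,
    # image width W, image height H, principal point (x0, y0).
    F: float
    P: float
    W: int
    H: int
    x0: float
    y0: float

@dataclass
class ExteriorOrientation:
    # One set per image (Table 2): exposure-point position (line
    # elements) and pitch/roll/yaw (angle elements).
    X: float
    Y: float
    Z: float
    phi: float    # pitch
    omega: float  # roll
    kappa: float  # yaw
```

Keeping the two sets separate mirrors the text: the interior set is shared by every image from one device, while each image carries its own exterior set.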
Further, in S2, the computation of the linear priority matrix in the live-action three-dimensional scene comprises the following steps:
S201, perform live-action three-dimensional reconstruction by oblique modelling on the basis of the aerial triangulation to obtain a live-action three-dimensional model in OSGB format;
S202, perform top-level rebuilding, texture compression and format conversion on the OSGB oblique model to obtain live-action model data in 3dtiles format;
S203, obtain the observation position point and exterior orientation elements of the current scene in the visual interface of the live-action three-dimensional scene, and compute the Euclidean distance between the observation position point and the line elements of each exterior orientation, yielding a distance matrix A;
S204, sort the distance matrix A and form the first 10% of it into the linear priority matrix B.
Further, preferably, in S2, the extraction of the best-matching image using the linear priority matrix comprises the following steps:
S205, form a matrix C from the angle elements of the exterior orientation at the current field-of-view observation position point, and matrices B_j from the angle elements corresponding to the priority matrix B; compute the correlation coefficient r_j between C and each B_j, yielding a correlation-coefficient matrix R;
S206, sort the correlation-coefficient matrix R and select the image corresponding to the largest correlation coefficient as the best-matching image.
The linear priority matrix is computed as in the following example.
In the visual interface of the live-action three-dimensional scene, obtain the observation position point View_s(X_S, Y_S, Z_S) and the pitch, roll and yaw of the view angle. Image_i has exterior line elements X_i, Y_i, Z_i and angle elements Omega_i, Phi_i, Kappa_i. Compute the Euclidean distance D_i between View_s(X_S, Y_S, Z_S) and the line elements (X_i, Y_i, Z_i) of each Image_i, yielding the distance matrix A[D_i]:

D_i = sqrt((X_S − X_i)² + (Y_S − Y_i)² + (Z_S − Z_i)²) ……… (1)

Sort the matrix [D_i] and take the first 10% of its components to form the linear priority matrix B[D_j], where j = i/10.
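The distance matrix and top-10% selection of steps S203–S204 can be sketched as follows (a minimal illustration; function and variable names are my own, and the viewpoint and exposure points are assumed to be plain coordinate triples):

```python
import math

def linear_priority_matrix(view, exposures, keep_ratio=0.10):
    """Compute Euclidean distances D_i from the viewpoint to every
    exposure point (matrix A), sort them, and return the indices of
    the closest 10% (matrix B), as in S203-S204."""
    dists = [
        (math.dist(view, (x, y, z)), idx)
        for idx, (x, y, z) in enumerate(exposures)
    ]
    dists.sort()                              # sort matrix A ascending
    k = max(1, int(len(dists) * keep_ratio))  # first 10%, at least one
    return [idx for _, idx in dists[:k]]
```

Returning indices rather than distances keeps the link back to the candidate images, which the next step needs for the angle-element comparison.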
(4) Best-matching image extraction
The angle elements at the field-of-view observation point form the matrix C[Phi_s, Omega_s, Kappa_s]; the angle elements corresponding to the priority matrix B form the matrices B_j[Phi_j, Omega_j, Kappa_j]. Compute the correlation coefficient r_j between C and each B_j, yielding the correlation matrix R:

r_j = Cov(B_j, C) / sqrt(Var[B_j] · Var[C]) ……… (2)

where Cov(B_j, C) is the covariance of B_j and C, Var[B_j] is the variance of B_j, and Var[C] is the variance of C. Sort the correlation-coefficient matrix R and select the r_k with the largest value; the corresponding Image_k is the best-matching image.
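Formula (2) and the selection in S205–S206 can be sketched as follows (assuming each angle set is simply a triple of pitch/roll/yaw values with nonzero variance):

```python
def pearson(a, b):
    """Correlation coefficient r_j = Cov(B_j, C) / sqrt(Var[B_j] * Var[C])."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    var_a = sum((x - ma) ** 2 for x in a) / n
    var_b = sum((y - mb) ** 2 for y in b) / n
    return cov / (var_a * var_b) ** 0.5

def best_match(view_angles, candidate_angles):
    """Return the index of the candidate whose angle triple has the
    largest correlation coefficient with C (S206)."""
    r = [pearson(view_angles, ang) for ang in candidate_angles]
    return max(range(len(r)), key=r.__getitem__)
```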
As shown in fig. 2, in S3, the modelling of the best-image projection in the live-action three-dimensional scene to obtain a projection model comprises the following steps:
S301, compute the ground-projection abscissa and ordinate of the projection centre D from the interior and exterior orientation elements of the best-matching image;
S302, take the average elevation of the current field of view as the initial elevation value and iterate with the updated elevation to obtain the three-dimensional projection coordinates (X_C, Y_C, Z_C) of the image centre;
S303, starting from the obtained (X_C, Y_C, Z_C), iterate with the elevation values in the same way to compute, in turn, the corner projection coordinates of the upper-left, lower-left, upper-right and lower-right corners of the image.
Specifically, let the interior orientation elements obtained above be F, P, W, H, x, y, and let the exterior orientation elements of the best-matching Image_k be X_k, Y_k, Z_k, Phi_k, Omega_k, Kappa_k, with Z the ground elevation of the projected point. The projection-centre coordinate values (X, Y) are computed from the exterior orientation elements of the image through the intermediate variables of formulas (3)–(8):

……………………………… (3)
……………………………… (4)
……………………………… (5)
……………………………… (6)
……………………………… (7)
……………………………… (8)

From the computed intermediate variables, the ground-projection abscissa X is obtained through formulas (9)–(12):

…………………… (9)
…………………… (10)
…………………… (11)
…………………… (12)

and, from the same intermediate variables, the ground-projection ordinate Y:

…………………… (13)
First compute the ground-projection coordinates of the image centre point. Take the average model elevation Z_0 under the current field of view as the initial value of Z, with the image-plane offsets set to 0, and compute the image-centre coordinates (X_C1, Y_C1) with formulas (9) and (13); read the elevation Z_C1 at these coordinates. Compute Z_re = Z_C1 − Z_0. If Z_re > 1, enter the iteration: set Z = Z_C1, compute (X_C2, Y_C2), extract the elevation Z_C2 at the new coordinates, and compute Z_re = Z_C2 − Z_C1; continue until Z_re <= 1, which ends the iteration and yields the image-centre projection coordinates (X_C, Y_C, Z_C).
(2) Compute the model projection coordinates of the upper-left corner LU of the image: set Z = Z_C with image-plane offsets H and −W, compute (X_LU1, Y_LU1) with formulas (9) and (13), and read the elevation Z_LU1. Compute Z_re = Z_LU1 − Z_C; if Z_re > 1, iterate with Z = Z_LU1 and offsets H and −W to obtain (X_LU2, Y_LU2), extract the elevation Z_LU2, and compute Z_re = Z_LU2 − Z_LU1; continue until Z_re <= 1, yielding the upper-left corner projection coordinates (X_LU, Y_LU, Z_LU).
(3) Compute the model projection coordinates of the upper-right corner RU in the same way, with offsets H and W, yielding (X_RU, Y_RU, Z_RU).
(4) Compute the model projection coordinates of the lower-right corner RD in the same way, with offsets −H and W, yielding (X_RD, Y_RD, Z_RD).
(5) Compute the model projection coordinates of the lower-left corner LD in the same way, with offsets −H and −W, yielding (X_LD, Y_LD, Z_LD).
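The elevation iteration shared by the centre and corner computations can be sketched generically as follows. Since the bodies of formulas (9) and (13) are not reproduced in this text, `project_xy` is a stand-in for them, and `ground_z` stands for reading the model elevation at a planimetric position; both are assumptions:

```python
def iterate_projection(project_xy, ground_z, z0, tol=1.0, max_iter=50):
    """Generic elevation iteration: project at elevation Z, read back
    the model elevation at the projected (X, Y), and repeat until the
    change Z_re falls within the tolerance (1 m in the text).

    project_xy(z) -> (X, Y): stand-in for formulas (9) and (13).
    ground_z(x, y) -> Z: elevation of the live-action model at (x, y).
    """
    z = z0
    for _ in range(max_iter):
        x, y = project_xy(z)
        z_new = ground_z(x, y)
        if abs(z_new - z) <= tol:
            return x, y, z_new
        z = z_new
    return x, y, z  # give up after max_iter; return last estimate
```

Each of the five points (centre, LU, RU, RD, LD) would call this with its own image-plane offsets baked into `project_xy`.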
As shown in fig. 9, in S4, a projection plane is formed from the corner points of the projection model, the enclosing polyhedron is constructed, and the image is texture-mapped; this comprises the following steps:
take the corner points of the projection model as the bottom face of the enclosing polyhedron;
set a reference plane at the distance from the highest point of the live-action model to the bottom face;
rotate the reference plane about one of its edges by θ degrees to obtain the corner projection plane;
map the photo onto the corner projection plane as its texture.
From the four corners LU, LD, RD, RU computed above, draw the bottom face of the enclosing polyhedron. The value h is the distance from the highest point of the live-action model within the region to the bottom face. Construct a reference plane LU′, RU′, RD′, LD′ parallel to the bottom face LU, RU, RD, LD. Rotate the plane LU′, RU′, RD′, LD′ about one of its edges by the angle θ to obtain the projection plane with corner points LU″, RU″, RD″, LD″, and map the photo onto it as texture, where θ = 90° − Phi, the complement of the camera pitch angle.
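A small sketch of the tilt geometry (assuming Phi is given in degrees; the rotation helper is illustrative, rotating about the X axis rather than about an arbitrary edge of the reference plane):

```python
import math

def projection_plane_tilt(pitch_deg):
    """theta = 90 deg - Phi: rotating the reference plane by this
    angle yields the corner projection plane."""
    return 90.0 - pitch_deg

def rotate_about_x(point, theta_deg):
    """Rotate a 3D point about the X axis by theta degrees
    (illustrative stand-in for rotating the plane about one edge)."""
    t = math.radians(theta_deg)
    x, y, z = point
    return (x,
            y * math.cos(t) - z * math.sin(t),
            y * math.sin(t) + z * math.cos(t))
```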
As shown in fig. 10, the original image is also sliced hierarchically:
block the original image and resample its resolution with a binary-tree grading method;
place the slice data of each level produced by the binary-tree grading under its own folder;
name each folder with the image name plus the level number.
Let the resolution of the original image be m × n (m > n), and define the slice size as 64 pixels on the long side. The number of levels is then q1, where q1 should satisfy 2^q1 < floor(m/64), floor being the round-down function. The original image is partitioned by bisection: level 0 is not partitioned, level 1 has 2^1 × 2^1 blocks, and level q1 has 2^q1 × 2^q1 blocks.
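The level count and block counts above can be sketched as follows (the 64-pixel tile and the strict inequality are taken from the text; the folder-name separator is an assumption):

```python
def slice_levels(m, n, tile=64):
    """Largest q1 with 2**q1 < floor(long_side / tile), per the text's
    strict inequality; level q has 2**q x 2**q blocks, level 0 being
    the unsliced image."""
    limit = max(m, n) // tile
    q1 = 0
    while 2 ** (q1 + 1) < limit:
        q1 += 1
    return q1

def blocks_at_level(q):
    return (2 ** q) * (2 ** q)

def folder_name(image_name, level):
    # Folders are named "image name + level number"; the underscore
    # separator is assumed.
    return f"{image_name}_{level}"
```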
The result model data, converted to 3dtiles format, and the hierarchically sliced single-image data are stored centrally on a cloud server; the image folders are published as static data, and a database manages the relative address structure of the network-published model and image data for the different mileage sections. The relevant database table is structured as follows:
TABLE 3 Model/image database table structure

| Sequence number | Engineering name | Section | Start mileage | End mileage | Model_URL | Photo_URL |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | Chaoling high-speed rail | DK | 32.0 | 35.5 | /CL/GEOdata/Model/DK/32.0-35.5/ | /CL/GEOdata/Photo/DK/32.0-35.5/ |
| 2 | Hangzhou–Shaoxing–Taizhou high-speed rail | YDK | 1.0 | 2.5 | /HST/GEOdata/Model/DK/1-2.5/ | /HST/GEOdata/Photo/DK/1-2.5/ |
Further, preferably, in S5, for data visualization and fused interaction, the live-action three-dimensional model data are loaded with a WebGL open-source framework, the spatial polygon of the image-loading region is computed on the live-action three-dimensional scene, and the live-action model inside the loaded image region is hidden from display;
on the live-action three-dimensional scene, the image is mapped as the texture of the enclosing polyhedron, using the four image corner coordinates computed in S3 and the enclosing polyhedron constructed in S4;
changes of observation position, angle and attitude in the live-action three-dimensional model are taken as the events that trigger image screening and loading, and each viewpoint change performs one round of image screening and image-mapping loading.
As shown in figs. 6-8, in S6, the visual identification of the completed works with the built three-dimensional model comprises: picking a point at the corresponding position of the three-dimensional model to obtain the spatial coordinates of the defect position as Point_QX_i(X, Y, Z);
interpolating the mileage stake coordinate table to obtain a detailed mileage coordinate index table, as shown in table 4.
TABLE 4 Mileage stake coordinate table (simulated test data)

| ID | Mileage | X | Y | Z | Remarks |
| --- | --- | --- | --- | --- | --- |
| 1 | DK100+250 | 21260562.77 | 3334516.71 | 152.33 | Raw data |
| 2 | DK100+251 | 21260567.47 | 3334515.31 | 152.45 | Interpolated data |
| 3 | DK100+252 | 21260572.17 | 3334513.92 | 152.55 | Interpolated data |
| 4 | DK100+253 | 21260576.87 | 3334512.524 | 152.67 | Interpolated data |
| 5 | DK100+254 | 21260581.57 | 3334511.13 | 152.75 | Interpolated data |
| 6 | … | … | … | … | … |
| 7 | DK100+260 | 21260567.47 | 3334515.31 | 155.77 | Raw data |
The Euclidean distance D_i between the defect point and each coordinate in the mileage coordinate index table is calculated to obtain a distance matrix [D_i]; the distance matrix is sorted, and the mileage corresponding to the coordinate with the smallest distance is taken as the mileage information of the defect position. A templated form is then established with mileage, segment, coordinate information, position description, problem description, photo and remarks as the header, and a defect report is output, as shown in Table 5:
TABLE 5 Defect detection information template
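The densification and nearest-stake lookup described above can be sketched as follows; the raw stake values, the 1 m step and all names are illustrative assumptions, not data from the patent:

```python
import numpy as np

# Two "raw data" stakes in the spirit of Table 4: metres past DK100 -> (X, Y, Z).
raw_stakes = {
    250: (21260562.77, 3334516.71, 152.33),
    260: (21260567.47, 3334515.31, 155.77),
}

def densify(stakes, step=1):
    """Linear interpolation between consecutive raw stakes, one entry per metre."""
    miles = sorted(stakes)
    table = {}
    for m0, m1 in zip(miles, miles[1:]):
        p0, p1 = np.array(stakes[m0]), np.array(stakes[m1])
        for m in range(m0, m1 + 1, step):
            t = (m - m0) / (m1 - m0)
            table[m] = p0 + t * (p1 - p0)
    return table

def defect_mileage(point_qx, table):
    """Build the distance matrix [D_i] from the picked defect point to every
    interpolated stake and return the mileage of the nearest one."""
    miles = list(table)
    dists = [np.linalg.norm(np.asarray(point_qx) - table[m]) for m in miles]
    return miles[int(np.argmin(dists))]
```

The returned mileage, together with the picked coordinates and a problem description, would populate one row of the templated defect report (Table 5).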
As shown in figs. 4-5, the model can also be used for geometric measurement. A geometric dimension measuring toolset based on the live-action three-dimensional model is constructed, including tools for horizontal distance d, vertical height h, spatial distance l and gradient i. Measurements are carried out according to the dimension-measurement provisions of the Technical Specification for Static Acceptance of High-speed Railway Engineering (TB 10760-2013), in combination with the design and construction requirements. The relevant structured table is shown in Table 6:
TABLE 6 engineering target geometry measurement inspection statistics
The relevant calculation formulas are as follows. Let the space coordinates of the two points to be measured be (x₁, y₁, z₁) and (x₂, y₂, z₂).
(1) Horizontal distance d:
d = √[(x₂ − x₁)² + (y₂ − y₁)²] …………………………(14)
(2) Vertical height h:
h = |z₂ − z₁|
(3) Spatial distance l:
l = √[(x₂ − x₁)² + (y₂ − y₁)² + (z₂ − z₁)²] …………………(15)
(4) Gradient i:
i = h / d ………………………………………(16)
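The four measurements follow directly from the two picked points; a minimal sketch, in which the function name and return order are assumptions for illustration:

```python
import math

def measure(p1, p2):
    """Horizontal distance d, vertical height h, spatial distance l and
    gradient i between two points picked on the live-action model."""
    (x1, y1, z1), (x2, y2, z2) = p1, p2
    d = math.hypot(x2 - x1, y2 - y1)   # horizontal distance, Eq. (14)
    h = abs(z2 - z1)                   # vertical height
    l = math.sqrt(d * d + h * h)       # spatial distance, Eq. (15)
    i = h / d if d else float("inf")   # gradient (often reported as % or per mille), Eq. (16)
    return d, h, l, i
```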
Obviously, the above examples are given only by way of illustration and do not limit the embodiments. Other variations or modifications based on the above teachings will be apparent to those of ordinary skill in the art; it is neither necessary nor possible to exhaust all embodiments here. Such obvious variations or modifications remain within the scope of the invention.
Claims (6)
1. A high-speed railway completion acceptance method fusing a live-action three-dimensional model and images, characterized by comprising the following steps:
S1, acquiring area images and the geospatial coordinate positions at the time of shooting, and performing aerial triangulation (space-three encryption) processing to obtain undistorted image data and accurate interior and exterior orientation elements;
S2, carrying out live-action three-dimensional reconstruction on the basis of space-three encryption by adopting an oblique modeling mode to obtain a live-action three-dimensional model; acquiring the observation position, attitude and exterior orientation elements in the live-action three-dimensional scene; calculating a linear optimal column matrix; and extracting the optimal matching image by using the linear optimal column matrix; wherein calculating the linear optimal column matrix in the live-action three-dimensional scene comprises the following steps:
s201, performing live-action three-dimensional reconstruction by adopting an oblique modeling mode on the basis of space three encryption to obtain an OSGB format live-action three-dimensional model;
s202, performing top layer reset, texture compression and format conversion on an OSGB format inclined model to obtain 3dtiles format live-action model data;
s203, acquiring an observation position point and an external azimuth element of a current scene under a visual interface of a live-action three-dimensional scene; the Euclidean distance between the observation position point and the line element in the external azimuth element is calculated, and a distance matrix A is obtained;
S204, sorting the distance matrix A, and taking the first 10% after sorting to form the linear optimal column matrix B;
in S2, extracting the optimal matching image by using the linear optimal column matrix comprises the following steps:
S205, forming a matrix C from the angle elements of the exterior orientation elements at the observation position point of the current field of view, and matrices B_j from the angle elements of the images corresponding to the column matrix B; calculating the correlation coefficient r_j between C and each matrix B_j to obtain a correlation coefficient matrix R;
s206, sorting the correlation coefficient matrix R, and selecting an image corresponding to the maximum value of the correlation coefficient as an optimal matching image;
S3, modeling the projection of the optimal matching image in the live-action three-dimensional scene to obtain a projection model, and calculating the corner points of the projection model; wherein modeling the projection of the optimal image in the live-action three-dimensional scene to obtain the projection model comprises the following steps:
S301, calculating the ground-projection abscissa and ordinate of the projection center D by using the interior and exterior orientation elements of the optimal matching image;
S302, setting the average elevation of the current field-of-view observation points as the initial elevation value, and performing iterative calculation with the elevation value to obtain the three-dimensional projection coordinates (X_C, Y_C, Z_C) of the image center;
S303, on the basis of the obtained three-dimensional projection coordinates (X_C, Y_C, Z_C), performing iterative calculation with elevation values respectively, and calculating in turn the corner projection coordinates of the upper-left, lower-left, upper-right and lower-right corners of the image;
s4, forming a projection surface by using the corner points of the projection model, constructing an outsourcing polyhedron, and performing texture mapping on the image;
S5, utilizing the current field-of-view observation point in S1, the corner points of the projection model in S3 and the texture mapping in S4 to carry out data visualization and fusion interaction after the original image is hierarchically sliced; when the current field-of-view observation point changes, screening of the corresponding images and image-mapping loading are carried out synchronously;
s6, performing visual judgment and geometric dimension measurement on the completion project by using the established live-action three-dimensional model; and carrying out templating integral output on the identified and measured defect information.
2. The high-speed railway completion acceptance method fusing a live-action three-dimensional model and images according to claim 1, characterized in that: in S1, the interior orientation elements include the camera focal length, pixel size, frame width and height, and the abscissa and ordinate of the image principal point; the exterior orientation elements include line elements and angle elements; the line elements comprise the exposure-point abscissa, ordinate and elevation; and the angle elements comprise the pitch, roll and yaw angles.
3. The high-speed railway completion acceptance method fusing a live-action three-dimensional model and images according to claim 1, characterized in that: in S4, forming a projection surface from the corner points of the projection model, mapping the photo texture and constructing the outsourcing polyhedron comprises the following steps:
taking the corner points of the projection model as the bottom surface of the outsourcing polyhedron;
setting a reference surface according to the distance from the highest point of the live-action model to the bottom surface;
rotating the reference surface by θ degrees about one of its edges as the axis to obtain the corner projection surface; wherein θ = 90° − Phi, and Phi is the camera pitch angle;
and mapping the photo as texture onto the corner projection surface.
4. The high-speed railway completion acceptance method fusing a live-action three-dimensional model and images according to claim 1, characterized in that: in S5, the original image is hierarchically sliced as follows:
the obtained original image is subjected to image blocking and image-resolution resampling by a binary-tree grading method;
the image slice data of each level produced by the binary-tree grading is placed separately under a folder;
and the folders are named in the form of the image name plus the level number.
5. The high-speed railway completion acceptance method fusing a live-action three-dimensional model and images according to claim 1, characterized in that: in S5, when data visualization and fusion interaction are carried out, the live-action three-dimensional model data is loaded with a WebGL open-source framework, the spatial polygon of the image-loading region is calculated against the live-action three-dimensional model scene, and the live-action model within the loaded image region is hidden from display;
on the live-action three-dimensional scene, the image is mapped as the texture of the outsourcing polyhedron by using the four corner coordinates of the image calculated in step S3 and the outsourcing polyhedron constructed in step S4;
and changes of the observation position, angle and attitude of the live-action three-dimensional model are taken as the events triggering image screening and loading, with one round of image screening and image-mapping loading carried out on each change of viewing angle.
6. The high-speed railway completion acceptance method fusing a live-action three-dimensional model and images according to claim 1, characterized in that: in S6, when the built three-dimensional model is used for visual identification of the completed project, the spatial coordinates of the defect position are acquired as Point_QX_i(X, Y, Z) by picking a point at the corresponding position of the three-dimensional model;
Interpolation calculation is carried out through the mileage stake height index table to obtain a detailed mileage coordinate index table;
calculating the Euclidean distance D_i between the defect point and the mileage coordinate index table to obtain a distance matrix [D_i]; sorting the distance matrix, and selecting the mileage corresponding to the coordinate with the smallest distance as the mileage information of the defect position;
and establishing a templated form with mileage, segment, coordinate information, position description, problem description, photos and remarks as the header, and outputting a defect report.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211425950.XA CN115908706B (en) | 2022-11-15 | 2022-11-15 | High-speed railway completion acceptance method with fusion of live three-dimensional model and image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115908706A CN115908706A (en) | 2023-04-04 |
CN115908706B true CN115908706B (en) | 2023-08-08 |
Family
ID=86473968
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211425950.XA Active CN115908706B (en) | 2022-11-15 | 2022-11-15 | High-speed railway completion acceptance method with fusion of live three-dimensional model and image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115908706B (en) |
Citations (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2716257A1 (en) * | 2010-10-01 | 2012-04-01 | Martin Habbecke | System and method for interactive painting of 2d images for iterative 3d modeling |
CN103226838A (en) * | 2013-04-10 | 2013-07-31 | 福州林景行信息技术有限公司 | Real-time spatial positioning method for mobile monitoring target in geographical scene |
CN104361628A (en) * | 2014-11-27 | 2015-02-18 | 南宁市界围工程咨询有限公司 | Three-dimensional real scene modeling system based on aviation oblique photograph measurement |
CN105262958A (en) * | 2015-10-15 | 2016-01-20 | 电子科技大学 | Panoramic feature splicing system with virtual viewpoint and method thereof |
WO2016138161A1 (en) * | 2015-02-24 | 2016-09-01 | HypeVR | Lidar stereo fusion live action 3d model virtual reality video |
WO2017027638A1 (en) * | 2015-08-10 | 2017-02-16 | The Board Of Trustees Of The Leland Stanford Junior University | 3d reconstruction and registration of endoscopic data |
KR101912396B1 (en) * | 2017-06-13 | 2018-10-26 | 주식회사 아이닉스 | Apparatus and Method for Generating Image at any point-view based on virtual camera |
CN110246221A (en) * | 2019-06-25 | 2019-09-17 | 中煤航测遥感集团有限公司 | True orthophoto preparation method and device |
CN110570466A (en) * | 2019-09-09 | 2019-12-13 | 广州建通测绘地理信息技术股份有限公司 | Method and device for generating three-dimensional live-action point cloud model |
CN111260777A (en) * | 2020-02-25 | 2020-06-09 | 中国电建集团华东勘测设计研究院有限公司 | Building information model reconstruction method based on oblique photography measurement technology |
CN111429498A (en) * | 2020-03-26 | 2020-07-17 | 中国铁路设计集团有限公司 | Railway business line three-dimensional center line manufacturing method based on point cloud and image fusion technology |
CN111537515A (en) * | 2020-03-31 | 2020-08-14 | 国网辽宁省电力有限公司朝阳供电公司 | Iron tower bolt defect display method and system based on three-dimensional live-action model |
CN111629193A (en) * | 2020-07-28 | 2020-09-04 | 江苏康云视觉科技有限公司 | Live-action three-dimensional reconstruction method and system |
CN111836012A (en) * | 2020-06-28 | 2020-10-27 | 航天图景(北京)科技有限公司 | Video fusion and video linkage method based on three-dimensional scene and electronic equipment |
CN112085844A (en) * | 2020-09-11 | 2020-12-15 | 中国人民解放军军事科学院国防科技创新研究院 | Unmanned aerial vehicle image rapid three-dimensional reconstruction method for field unknown environment |
CN112258624A (en) * | 2020-09-15 | 2021-01-22 | 广东电网有限责任公司 | Three-dimensional live-action fusion modeling method |
CN112927360A (en) * | 2021-03-24 | 2021-06-08 | 广州蓝图地理信息技术有限公司 | Three-dimensional modeling method and system based on fusion of tilt model and laser point cloud data |
CN113192193A (en) * | 2021-04-23 | 2021-07-30 | 安徽省皖北煤电集团有限责任公司 | High-voltage transmission line corridor three-dimensional reconstruction method based on Cesium three-dimensional earth frame |
CN113192200A (en) * | 2021-04-26 | 2021-07-30 | 泰瑞数创科技(北京)有限公司 | Method for constructing urban real scene three-dimensional model based on space-three parallel computing algorithm |
CN113192183A (en) * | 2021-04-29 | 2021-07-30 | 山东产研信息与人工智能融合研究院有限公司 | Real scene three-dimensional reconstruction method and system based on oblique photography and panoramic video fusion |
CN113506370A (en) * | 2021-07-28 | 2021-10-15 | 自然资源部国土卫星遥感应用中心 | Three-dimensional geographic scene model construction method and device based on three-dimensional remote sensing image |
CN113706698A (en) * | 2021-10-25 | 2021-11-26 | 武汉幻城经纬科技有限公司 | Live-action three-dimensional road reconstruction method and device, storage medium and electronic equipment |
CN113706623A (en) * | 2021-11-01 | 2021-11-26 | 中国测绘科学研究院 | Air-to-three encryption method suitable for aviation oblique images |
WO2022001590A1 (en) * | 2020-06-30 | 2022-01-06 | 中兴通讯股份有限公司 | Camera system, mobile terminal, and three-dimensional image acquisition method |
CN114387198A (en) * | 2022-03-24 | 2022-04-22 | 青岛市勘察测绘研究院 | Fusion display method, device and medium for image and live-action model |
CN114443793A (en) * | 2022-01-25 | 2022-05-06 | 陈进雄 | Method for designing detailed planning three-dimensional scene of space-time data visualization homeland space |
CN114494388A (en) * | 2022-01-27 | 2022-05-13 | 中国铁建重工集团股份有限公司 | Three-dimensional image reconstruction method, device, equipment and medium in large-view-field environment |
CN114859374A (en) * | 2022-07-11 | 2022-08-05 | 中国铁路设计集团有限公司 | Newly-built railway cross measurement method based on unmanned aerial vehicle laser point cloud and image fusion |
CN115147538A (en) * | 2022-02-22 | 2022-10-04 | 山东赛瑞智能科技有限公司 | Method for dynamically updating live-action three-dimensional modeling based on environment monitoring unmanned aerial vehicle |
Non-Patent Citations (1)
Title |
---|
Research on the accuracy of low-altitude UAV oblique photogrammetry results; Li Huan; Journal of Gansu Sciences (No. 02); full text *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||