CN116977581A - Traffic data display method, device, computer equipment and storage medium - Google Patents
- Publication number
- CN116977581A CN116977581A CN202310927009.6A CN202310927009A CN116977581A CN 116977581 A CN116977581 A CN 116977581A CN 202310927009 A CN202310927009 A CN 202310927009A CN 116977581 A CN116977581 A CN 116977581A
- Authority
- CN
- China
- Prior art keywords
- data
- result
- map
- traffic
- correction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 68
- 238000003860 storage Methods 0.000 title claims abstract description 25
- 238000012545 processing Methods 0.000 claims abstract description 49
- 238000005259 measurement Methods 0.000 claims abstract description 33
- 238000013461 design Methods 0.000 claims abstract description 30
- 238000005516 engineering process Methods 0.000 claims abstract description 23
- 238000012937 correction Methods 0.000 claims description 82
- 238000004422 calculation algorithm Methods 0.000 claims description 25
- 238000011049 filling Methods 0.000 claims description 23
- 238000004590 computer program Methods 0.000 claims description 22
- 238000001514 detection method Methods 0.000 claims description 13
- 238000000605 extraction Methods 0.000 claims description 11
- 238000011068 loading method Methods 0.000 claims description 11
- 230000011218 segmentation Effects 0.000 claims description 11
- 238000004364 calculation method Methods 0.000 claims description 9
- 230000008520 organization Effects 0.000 claims description 7
- 238000004519 manufacturing process Methods 0.000 claims description 6
- 230000003993 interaction Effects 0.000 abstract description 14
- 230000000007 visual effect Effects 0.000 abstract description 14
- 238000010586 diagram Methods 0.000 description 14
- 238000013527 convolutional neural network Methods 0.000 description 7
- 230000000694 effects Effects 0.000 description 7
- 230000006870 function Effects 0.000 description 7
- 230000008901 benefit Effects 0.000 description 5
- 230000008569 process Effects 0.000 description 5
- 230000002452 interceptive effect Effects 0.000 description 4
- 238000002955 isolation Methods 0.000 description 4
- 238000007726 management method Methods 0.000 description 4
- 238000006243 chemical reaction Methods 0.000 description 3
- 238000013079 data visualisation Methods 0.000 description 3
- 238000013507 mapping Methods 0.000 description 3
- 238000009877 rendering Methods 0.000 description 3
- 230000003068 static effect Effects 0.000 description 3
- 230000008859 change Effects 0.000 description 2
- 238000010276 construction Methods 0.000 description 2
- 238000013499 data model Methods 0.000 description 2
- 230000007547 defect Effects 0.000 description 2
- 238000012423 maintenance Methods 0.000 description 2
- 238000012544 monitoring process Methods 0.000 description 2
- 230000004044 response Effects 0.000 description 2
- 230000002159 abnormal effect Effects 0.000 description 1
- 238000004458 analytical method Methods 0.000 description 1
- 238000003491 array Methods 0.000 description 1
- 238000013528 artificial neural network Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000015572 biosynthetic process Effects 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 230000000295 complement effect Effects 0.000 description 1
- 238000013497 data interchange Methods 0.000 description 1
- 238000013523 data management Methods 0.000 description 1
- 238000013135 deep learning Methods 0.000 description 1
- 238000009826 distribution Methods 0.000 description 1
- 230000008846 dynamic interplay Effects 0.000 description 1
- 230000007613 environmental effect Effects 0.000 description 1
- 238000003709 image segmentation Methods 0.000 description 1
- 238000003780 insertion Methods 0.000 description 1
- 230000037431 insertion Effects 0.000 description 1
- 238000013178 mathematical model Methods 0.000 description 1
- 230000007246 mechanism Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 239000002245 particle Substances 0.000 description 1
- 230000008447 perception Effects 0.000 description 1
- 230000002093 peripheral effect Effects 0.000 description 1
- 238000002360 preparation method Methods 0.000 description 1
- 238000003672 processing method Methods 0.000 description 1
- 238000011084 recovery Methods 0.000 description 1
- 230000009467 reduction Effects 0.000 description 1
- 238000000926 separation method Methods 0.000 description 1
- 238000004088 simulation Methods 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 230000009466 transformation Effects 0.000 description 1
- 238000013519 translation Methods 0.000 description 1
- 238000012800 visualization Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G01C11/36—Videogrammetry, i.e. electronic processing of video signals from a single source or from different sources to give parallax or range information
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/04—Indexing scheme for image data processing or generation, in general involving 3D image data
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Remote Sensing (AREA)
- Software Systems (AREA)
- Geometry (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Computer Graphics (AREA)
- Signal Processing (AREA)
- Radar, Positioning & Navigation (AREA)
- Processing Or Creating Images (AREA)
- Instructional Devices (AREA)
Abstract
The embodiment of the invention discloses a traffic data display method and device, computer equipment, and a storage medium. The method comprises the following steps: acquiring measurement data of an aerial unmanned aerial vehicle and engineering design drawing files to obtain original data; performing data processing on the original data to obtain map structured data; and displaying the map structured data based on a 3D model and 3D animation technology. By implementing the method provided by the embodiment of the invention, faithful on-screen restoration of traffic conditions can be achieved in combination with the visual interaction technology of the 3D equipment model, meeting traffic supervision personnel's need for a three-dimensional, visual, high-definition map.
Description
Technical Field
The present invention relates to map data display methods, and more particularly to a traffic data display method and device, computer equipment, and a storage medium.
Background
When an unmanned aerial vehicle (UAV) is used for road mapping, it is well suited to low-altitude flight: the flight approval process is simple and easy to complete, the aircraft is little affected by weather factors, it does not need an airport to take off and land, it is suitable for mapping towns and villages, and it can reach a survey area quickly. Exploiting these characteristics of UAV map mapping simplifies measurement: once a flight route is set, the target objects in the survey area can be scanned from all directions and the data summarized and output. The safety of survey staff is greatly improved, and since UAVs are generally inexpensive to buy and convenient to maintain, the cost advantage over traditional surveying and mapping is obvious.
However, map data surveyed by a UAV is obtained with oblique photogrammetry. When an orthographic image is acquired, distortion appears wherever stereoscopic surfaces occur, so data correction processing is required to achieve a high-precision orthographic image. Three-dimensional models of buildings, facilities, and equipment must also be integrated to form a three-dimensional high-precision map, and data visualization processing must be performed to meet the requirement of a faithful 3D restoration of the real road.
The industry already has applications that combine mature 2D electronic maps with 3D data visualization tools; however, this mode cannot reflect the real traffic running state. A 3D model needs to be extended to implement custom interaction events, and a data source is accessed through a data-driven 3D scene to realize the required 3D visual application scenario, comprehensively improving the data value of traffic control through digital twin technology.
Therefore, it is necessary to design a new method that combines the visual interaction technology of the 3D equipment model to achieve faithful on-screen restoration of traffic conditions and meet traffic supervision personnel's need for a three-dimensional, visual, high-definition map.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a traffic data display method, a traffic data display device, computer equipment and a storage medium.
In order to achieve the above purpose, the present invention adopts the following technical scheme: the traffic data display method comprises the following steps:
acquiring measurement data and engineering design drawing files of the aerial unmanned aerial vehicle to obtain original data;
performing data processing on the original data to obtain map structured data;
the map structured data is presented based on a 3D model and 3D animation techniques.
The further technical scheme is as follows: the data processing is performed on the original data to obtain map structured data, including:
carrying out data correction on the original data to obtain a correction result;
extracting geographic characteristic data from the correction result to obtain geographic characteristic elements;
classifying the correction result, converting and storing the geographic characteristic elements to obtain map structured data.
The further technical scheme is as follows: the step of carrying out data correction on the original data to obtain a correction result comprises the following steps:
image file splicing is carried out on the image files in the measurement data so as to obtain splicing results;
superposing geographic information in the engineering design drawing file in the splicing result to obtain a superposition result;
And correcting the superposition result to obtain a correction result.
The further technical scheme is as follows: the step of performing image file splicing on the image files in the measurement data to obtain a splicing result comprises the following steps:
slicing the deformed region of the image file in the measurement data to obtain a plurality of regions;
performing deformation restoration calculation on the regions, and rapidly filling each region according to an original standard algorithm to obtain a filling result;
adding Mercator projection information to the filling result on the image to obtain an addition result;
putting the addition result into a grid for correction to obtain a processing result;
and splicing the processing results by adopting a SIFT algorithm to obtain splicing results.
The further technical scheme is as follows: the extracting the geographic feature data from the correction result to obtain geographic feature elements includes:
dividing urban roads and lanes by combining the correction result with 2D electronic map data by adopting a Mask R-CNN algorithm to obtain a division result;
and carrying out identification label and vehicle detection on the segmentation result through extraction of connected components so as to obtain geographic characteristic elements.
The further technical scheme is as follows: the classifying the correction result, converting and storing the geographic feature element to obtain map structured data, including:
classifying the correction results according to the hierarchical directory organization, and loading image data of the corresponding file directory through a map display area;
and converting the geographic characteristic elements into a 3D Tiles format, and storing the geographic characteristic elements to obtain map structured data.
The further technical scheme is as follows: the displaying the map structured data based on the 3D model and the 3D animation technology comprises the following steps:
manufacturing a 3D model, and superposing map structured data with the 3D model;
setting a 3D model custom event;
and displaying the map structured data through the 3D model animation in combination with the custom event.
The invention also provides a traffic data display device, which comprises:
the data acquisition unit is used for acquiring measurement data of the aerial unmanned aerial vehicle and engineering design drawing files so as to obtain original data;
the data processing unit is used for carrying out data processing on the original data so as to obtain map structured data;
and the display unit is used for displaying the map structured data based on the 3D model and the 3D animation technology.
The invention also provides a computer device which comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the method when executing the computer program.
The present invention also provides a storage medium storing a computer program which, when executed by a processor, implements the above method.
Compared with the prior art, the invention has the beneficial effects that: according to the invention, by acquiring the measurement data and the engineering design drawing file of the aerial unmanned aerial vehicle, correcting and extracting the geographic characteristic elements and storing the data, and then loading and displaying the data by adopting a 3D model, the real restoration of the traffic condition of the interface is realized by combining the visual interaction technology of the 3D equipment model, and the three-dimensional visual high-definition map requirement of traffic supervision personnel is met.
The invention is further described below with reference to the drawings and specific embodiments.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic view of an application scenario of a traffic data display method according to an embodiment of the present invention;
fig. 2 is a flow chart of a traffic data display method according to an embodiment of the present invention;
fig. 3 is a schematic sub-flowchart of a traffic data display method according to an embodiment of the present invention;
fig. 4 is a schematic sub-flowchart of a traffic data display method according to an embodiment of the present invention;
fig. 5 is a schematic sub-flowchart of a traffic data display method according to an embodiment of the present invention;
fig. 6 is a schematic sub-flowchart of a traffic data display method according to an embodiment of the present invention;
fig. 7 is a schematic sub-flowchart of a traffic data display method according to an embodiment of the present invention;
fig. 8 is a schematic sub-flowchart of a traffic data display method according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of map data provided by an embodiment of the present invention;
FIG. 10 is a schematic diagram of a 3D model of a road according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of a 3D model animation display to achieve a signal lamp animation display effect of an intersection;
FIG. 12 is a schematic block diagram of a traffic data display device provided by an embodiment of the present invention;
FIG. 13 is a schematic block diagram of a data processing unit of a traffic data display device provided by an embodiment of the present invention;
FIG. 14 is a schematic block diagram of a corrective subunit of a traffic data display device provided in accordance with an embodiment of the present invention;
FIG. 15 is a schematic block diagram of a splice module of a traffic data display device provided by an embodiment of the present invention;
FIG. 16 is a schematic block diagram of an extraction subunit of a traffic data display device provided by an embodiment of the invention;
FIG. 17 is a schematic block diagram of a storage subunit of a traffic data display device provided in accordance with an embodiment of the present invention;
FIG. 18 is a schematic block diagram of a display unit of a traffic data display device provided by an embodiment of the present invention;
fig. 19 is a schematic block diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be understood that the terms "comprises" and "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
Referring to fig. 1 and fig. 2, fig. 1 is a schematic view of an application scenario of a traffic data display method according to an embodiment of the present invention, and fig. 2 is a schematic flow chart of the traffic data display method. The traffic data display method is applied to a server. The server exchanges data with a terminal and an aerial unmanned aerial vehicle. Based on the high-precision map data acquired by the aerial UAV and the data correction processing technology, faithful on-screen restoration of traffic conditions is realized in combination with the 3D equipment model visual interaction technology, meeting traffic supervision personnel's need for a three-dimensional, visual, high-definition map and providing lane-level fine map data support for traffic supervision.
Fig. 2 is a flow chart of a traffic data display method according to an embodiment of the present invention. As shown in fig. 2, the method includes the following steps S110 to S150.
S110, acquiring measurement data of the aerial unmanned aerial vehicle and an engineering design drawing file to obtain original data.
In this embodiment, the original data refers to the measurement data of the aerial unmanned aerial vehicle and the engineering design drawing files. The measurement data includes acquisition point number information, acquisition point image information, acquisition point position information, and the like; the image information includes the image file and the sharpness, contrast, saturation, exposure, and other properties of the image. The acquisition point position information contains longitude, latitude, and altitude, and the actual overlap degree is determined from the geographic information of the images: forward overlap must exceed 60% and side overlap must exceed 40% to meet the requirements of subsequent data processing.
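The overlap check described above can be sketched as a small calculation. This is a minimal illustration, assuming overlap is estimated from the footprint extent of consecutive images and the spacing of exposure centres; the function names and parameters are not from the patent.

```python
def overlap_ratio(footprint_length, spacing):
    """Overlap between two consecutive image footprints along one axis.

    footprint_length -- footprint extent along the considered axis (metres)
    spacing          -- distance between the two exposure centres (metres)
    Returns the overlap as a fraction of the footprint length.
    """
    return max(0.0, (footprint_length - spacing) / footprint_length)


def meets_requirements(footprint_along, shot_spacing,
                       footprint_across, strip_spacing):
    """Check the 60% forward / 40% side overlap thresholds from the text."""
    forward = overlap_ratio(footprint_along, shot_spacing)
    side = overlap_ratio(footprint_across, strip_spacing)
    return forward >= 0.60 and side >= 0.40


# Example: 100 m footprint, exposures every 35 m, flight strips every 55 m
print(meets_requirements(100.0, 35.0, 100.0, 55.0))  # True (65% / 45%)
```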
In this embodiment, the high-precision map data produced from the UAV measurement data combined with the engineering design drawing files achieves a high-fidelity, low-cost three-dimensional high-precision map display target through the data correction processing method. Data acquisition of the road center line, road width, lane center lines, main buildings, green areas, and the like is realized, as shown in fig. 9.
Specifically, taking an urban road in its completion period as the acquisition area, the aerial UAV sets a fixed flight track according to the acquisition area and then acquires data by aerial photography. The road elements visible on the road surface, including lane lines, road side lines, road traffic markings, speed bumps, isolation strips, signal lamp poles, electronic police poles, display boards, and the like, are mainly acquired through oblique photography, and all images obtained from the acquisition points and acquisition tracks are combined into full-road-section, lane-level high-precision image map data.
Geographic information such as related buildings, road lanes, isolation belt positions, green belt positions, and guard rail positions is obtained from the CAD or BIM engineering design drawing data of the acquisition area through a conventional map tool. This information is combined with the geographic information of the images acquired by the aerial UAV, namely the measurement data, providing technical support for the subsequent targeted data processing.
And S120, carrying out data processing on the original data to obtain map structured data.
In this embodiment, the map structured data refers to the correction result of the data and the extracted geographic feature elements.
In one embodiment, referring to fig. 3, the step S120 may include steps S121 to S123.
S121, carrying out data correction on the original data to obtain a correction result.
In this embodiment, the correction result refers to the geometric correction of the images. Geographic information such as main buildings and crossing isolation zones in the engineering design drawings is adopted as control point positioning data; the UAV image data is processed quickly through the data correction processing technology, and the high-definition map image and geographic information of the acquired road section are then obtained through image stitching.
In one embodiment, referring to fig. 4, the step S121 may include steps S1211 to S1213.
S1211, performing image file stitching on the image files in the measurement data to obtain a stitching result.
In this embodiment, the stitching result refers to the result formed by slicing the image files, performing restoration calculation on the regions, filling and correcting them, and stitching them together.
In one embodiment, referring to fig. 5, the step S1211 may include steps S12111 to S12115.
S12111, slicing the deformed region of the image file in the measurement data to obtain a plurality of regions;
s12112, performing deformation restoration calculation on the regions, and rapidly filling each region according to an original standard algorithm to obtain a filling result.
In the present embodiment, the filling result refers to the result formed by performing deformation restoration calculation on the regions and rapidly filling each region.
S12113, adding Mercator projection information to the filling result on the image to obtain an addition result.
In the present embodiment, the addition result refers to the result formed by adding Mercator projection information to the image for the filling result.
S12114, putting the added result into a grid for correction to obtain a processing result.
In this embodiment, the processing result refers to a result formed by putting the added result into a grid for correction.
S12115, splicing the processing results by adopting a SIFT algorithm to obtain splicing results.
S1212, superposing the geographic information in the engineering design drawing file in the splicing result to obtain a superposition result.
The geographic information in the engineering design drawing file specifically comprises marked buildings and marked points.
And S1213, correcting the superposition result to obtain a correction result.
In this embodiment, a data correction processing technology is used to correct the superposition result: incorrect data is corrected and missing data is complemented.
Specifically, a UAV camera system acquires remote sensing images with no frame marks, inaccurate orientation, and no geographic reference. Accurate geographic information such as main buildings, crossing isolation zones, and traffic light positions in the engineering design drawings is therefore adopted as control point positioning data, high-precision real shot images are adopted as base images, and a forward correction method that calculates corrected image coordinates from aerial image coordinates is used. Software optimization speeds up the computation: the deformed area is divided into a series of small regions, which can be refined without limit according to the precision requirement; deformation restoration calculation is then performed on these small regions, and each small rectangle is rapidly filled according to the original standard algorithm. Universal Mercator projection information is added to the image so that the processed image can be scaled, and the base map is put into a grid for correction using the high-resolution image.
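The Mercator projection information added in step S12113 amounts to a standard coordinate conversion. Below is a minimal sketch of the forward Web Mercator projection (EPSG:3857); the patent does not specify which Mercator variant is used, so this is an illustrative assumption.

```python
import math

R = 6378137.0  # WGS-84 semi-major axis, also the Web Mercator sphere radius (m)


def lonlat_to_mercator(lon_deg, lat_deg):
    """Forward Web Mercator projection: degrees -> projected metres."""
    x = R * math.radians(lon_deg)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y


# Example: a point roughly in southern China
x, y = lonlat_to_mercator(113.264, 23.129)
```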
The features extracted by the SIFT algorithm are local features of the image. They are invariant to rotation, scale change, and brightness change, remain stable to some degree under viewpoint change, affine transformation, and noise, and are suitable for fast and accurate matching among massive feature sets. The method uses the automatic matching of the SIFT algorithm, together with the control point positioning information, to synthesize the images into a high-definition large image.
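The matching step behind SIFT-based stitching can be illustrated with Lowe's ratio test, the criterion commonly paired with SIFT descriptors: a match is kept only when the nearest neighbour is clearly closer than the second nearest. The toy 2-D "descriptors" below stand in for real 128-dimensional SIFT vectors; this is a sketch of the matching criterion, not of the patent's full stitching pipeline.

```python
import math


def l2(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    """Match descriptors from image A to image B, keeping a match only when
    the nearest neighbour beats the second nearest by the ratio threshold
    (Lowe's ratio test, as commonly used with SIFT features)."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((l2(da, db), j) for j, db in enumerate(desc_b))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches


# Toy descriptors: the first point in A has an unambiguous partner in B,
# the second is ambiguous (two near-identical candidates) and is rejected.
a = [(0.0, 0.0), (5.0, 5.0)]
b = [(0.1, 0.0), (4.0, 4.0), (4.2, 4.1)]
print(ratio_test_matches(a, b))  # [(0, 0)]
```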
S122, extracting geographic characteristic data from the correction result to obtain geographic characteristic elements.
In the present embodiment, the geographic feature elements refer to road feature data, lane feature data, and road environment feature data.
In one embodiment, referring to fig. 6, the step S122 may include steps S1221 to S1223.
S1221, segmenting urban roads and lanes from the correction result by adopting a Mask R-CNN algorithm combined with 2D electronic map data, so as to obtain a segmentation result.
In the present embodiment, the segmentation result refers to the segmentation result of the urban road and the lane.
S1222, carrying out identification label and vehicle detection on the segmentation result through extraction of connected components so as to obtain geographic characteristic elements.
In this embodiment, deep learning methods based on convolutional neural networks have strong feature learning and expression capability and have become the mainstream algorithms for current target detection tasks. In the correction result, a scene semantic perception algorithm based on a deep neural network is used to accurately segment urban roads and lanes, identification labels and vehicles are detected through the extraction of connected components, and the detection performance target is improved by combining semantic segmentation of the correction result with target detection.
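The connected-component extraction mentioned above can be sketched as a flood fill over a binary segmentation mask. This is a minimal stand-in (4-connectivity, BFS) for the step that picks out sign and vehicle regions; a production pipeline would operate on real segmentation output rather than a toy grid.

```python
from collections import deque


def connected_components(mask):
    """Label 4-connected foreground regions in a binary mask (list of lists).

    Returns a list of components, each a list of (row, col) pixels -- a
    minimal stand-in for the connected-component extraction used to pick
    out signs and vehicles from the segmentation result."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    components = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                comp, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                components.append(comp)
    return components


mask = [[1, 1, 0],
        [0, 0, 0],
        [0, 1, 1]]
print(len(connected_components(mask)))  # 2
```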
In this embodiment, the Mask R-CNN algorithm is adopted to realize image segmentation and target detection and complete the extraction of road, lane, and environmental feature data. Extracting target features by combining target detection with 2D electronic map data and image semantic segmentation realizes the acquisition of road feature data, lane feature data, and road environment feature data; the geographic feature elements comprise these three kinds of data.
The road feature data is mainly divided into road markings and road facilities. Road markings include horizontal and vertical markings, marking type, marking color, sharpness, and so on. Road facilities include isolation belt positions, green belt positions, guard rail positions, landmark building positions, and the like.
The lane feature data includes information such as the number, gradient, curvature, heading, and elevation of lanes, and specifically comprises lane reference lines, lane connection points, lane types (such as ordinary lanes, through lanes, overtaking lanes, and auxiliary lanes), and lane functions (such as bus lanes, HOV lanes, and tidal lanes). The lane reference line reflects the association among different lanes; the lane connection points represent the connection among the lanes of different road sections.
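A lane record of the kind described above might be organized as follows. This is an illustrative sketch only; the field names are assumptions and are not taken from the patent or the data model standard it cites.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Lane:
    """Illustrative lane feature record; field names are assumptions."""
    lane_id: str
    lane_type: str                # e.g. "ordinary", "overtaking", "auxiliary"
    lane_function: str            # e.g. "bus", "HOV", "tidal"
    # Reference line as (x, y) vertices, reflecting lane association
    reference_line: List[Tuple[float, float]] = field(default_factory=list)
    # Ids of lanes connected at road-section boundaries
    connection_points: List[str] = field(default_factory=list)


lane = Lane("L-01", "ordinary", "bus",
            reference_line=[(0.0, 0.0), (120.0, 0.5)],
            connection_points=["L-02"])
```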
The road environment feature data mainly comprises traffic signal lamp positions, traffic sign positions and semantics, zebra crossing positions and semantics, and the like, covering objects such as traffic signal lamps, traffic signs, and traffic guidance signs.
Specifically, following the data model requirements of "Intelligent transportation systems - intelligent driving electronic map data model and interchange format - Part 2: urban roads", feature data is extracted from the acquired image data and labeled with the set tags, and on this basis a map layer of all road feature data is generated, supporting layers of different structures including road reference lines, lane center lines, lane lines, road markings, sign boards, lamp posts, peripheral accessories, and the like.
S123, classifying the correction result, converting and storing the geographic characteristic elements to obtain map structured data.
In one embodiment, referring to fig. 7, the step S123 may include steps S1231 to S1232.
S1231, classifying the correction results according to the hierarchical directory organization, and loading image data of the corresponding file directory through the map display area.
In this embodiment, after data correction of the high-definition captured images, the processed image files are classified into a two-level (first-level and second-level) directory organization, the high-definition image data under the corresponding file directory is loaded through the map display area, and images of different definitions are displayed by file classification, providing file-based classification as the image loading and display mode.
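The two-level directory classification can be sketched as follows (the file-naming convention `<level1>_<level2>_<name>` is an assumption for illustration; the actual classification keys are not specified here):

```python
from pathlib import Path

def classify_into_directories(image_names, root):
    """Sort corrected image files into a first-level/second-level directory tree.

    Assumes names like 'districtA_road3_0001.png'; the map display area can
    then load every image under a chosen directory, e.g. root/districtA/road3/.
    Returns the resulting relative paths, sorted.
    """
    for name in image_names:
        level1, level2, _ = name.split("_", 2)
        target = Path(root) / level1 / level2
        target.mkdir(parents=True, exist_ok=True)
        (target / name).touch()          # stand-in for moving the real image file
    return sorted(p.relative_to(root).as_posix() for p in Path(root).rglob("*.png"))
```

The map display area then only needs a directory path to load one classification's images.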
S1232, converting the geographic characteristic elements into a 3D Tiles format, and storing the geographic characteristic elements to obtain map structured data.
In this embodiment, for the geographic objects whose features have been extracted, a third-party conversion tool such as CesiumLab converts the map layer data into the 3D Tiles format. During conversion, the spatial and attribute information of each object is written in through singulation (feature individualization). For the display of large-scale, dense data, loading and display of the three-dimensional models within the view range are controlled automatically through the spatial-index tiling mechanism of the 3D Tiles format. The converted data uses a multi-file directory to realize layered LOD storage. Metadata is defined, configured, and managed through the automatically generated 3D_TILES.json, and only the data within the visible range is rendered and visualized at call time.
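A minimal tileset metadata skeleton in the 3D Tiles 1.0 style illustrates what such a generated file contains (the bounding-volume values and content URI are placeholders; real tools such as CesiumLab produce this automatically):

```python
import json

def minimal_tileset(content_uri, region, geometric_error=500.0):
    """Build a minimal 3D Tiles 1.0 tileset dict for a single leaf tile.

    `region` is [west, south, east, north, minHeight, maxHeight],
    angles in radians, per the 3D Tiles specification.
    """
    return {
        "asset": {"version": "1.0"},
        "geometricError": geometric_error,
        "root": {
            "boundingVolume": {"region": region},
            "geometricError": 0.0,       # leaf tile: no finer LOD below it
            "refine": "REPLACE",         # children replace the parent when loaded
            "content": {"uri": content_uri},
        },
    }

tileset_json = json.dumps(
    minimal_tileset("tiles/road_block_0.b3dm",
                    [2.034, 0.525, 2.036, 0.527, 0.0, 50.0]),  # placeholder region
    indent=2,
)
```

Layered LOD storage corresponds to nesting `children` tiles with decreasing `geometricError` values under `root`.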
S130, displaying the map structured data based on a 3D model and a 3D animation technology.
Road traffic systems are complex systems consisting of people, vehicles, and roads. This embodiment adopts a micro-service architecture and realizes three-dimensional scene browsing and business data management based on technologies such as Cesium and HTML5. On top of the 3D high-precision map data display, it achieves interactive design of road facilities and equipment and animated display of dynamic traffic information.
In one embodiment, referring to fig. 8, the step S130 may include steps S131 to S133.
S131, manufacturing a 3D model, and superposing the map structured data with the 3D model.
In this embodiment, 3D model files for the traffic facilities and equipment on both sides of the road are produced with 3ds Max. Each 3D model is converted into the glTF format that Cesium can load, and abstract mathematical modeling is performed with 3D technology based on HTML5 and three.js to achieve three-dimensional presentation of the facilities. When the 3D models are overlaid on the high-precision map geographic data, matching between the model and the spherical coordinates can be inaccurate; therefore each model is segmented into 100-meter sections, and positioning-point insertion and small-angle rotation bring the model and the spherical coordinates into precise alignment. After the 3D models are published via the upload server, browsing, querying, and similar operations are available at the browser end. After data processing, the data is loaded as map layers, and the traffic facilities are superimposed with the configured 3D models, so that the static traffic information of the urban road can be faithfully restored, as shown in fig. 10.
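The 100-meter segmentation with positioning points can be sketched as a walk along the road centerline (a simplified great-circle distance sketch with linear interpolation between vertices; the small-angle rotation step is omitted):

```python
import math

EARTH_R = 6378137.0  # WGS-84 equatorial radius, metres

def haversine_m(a, b):
    """Great-circle distance in metres between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_R * math.asin(math.sqrt(h))

def anchor_points(path, step=100.0):
    """Positioning points every `step` metres along a (lat, lon) polyline."""
    anchors = [path[0]]
    carried = 0.0                     # distance walked since the last anchor
    for a, b in zip(path, path[1:]):
        seg = haversine_m(a, b)
        d = step - carried            # distance into this segment of the next anchor
        while d <= seg:
            t = d / seg               # linear interpolation within the segment
            anchors.append((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))
            d += step
        carried = (carried + seg) % step
    return anchors
```

Each anchor would then receive a 100-meter model section, with a small rotation to align the section's local axes with the spherical surface.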
S132, setting a 3D model custom event.
In this embodiment, Cesium supports mouse interactions (e.g. panning, zooming, rotation) natively. Further interactions — mouse events, keyboard events, camera events, rendering events, and scene events — are supported through custom events. By extending the custom event set and its responses, interactive operation is realized: listening for keyboard, scene, mouse, camera, and rendering events implements the custom interaction functions, spatial analysis functions such as query and retrieval, highlighting, and overlay are realized, and mounting and display of traffic data in custom formats are supported.
S133, displaying the map structured data through the 3D model animation in combination with the custom event.
Cesium supports animated models; for glTF model animations, one or more animations defined in the model JSON must be specified for playback. In animation design, a separate track must be provided for each animation of the designed model, and the animation is attached to a specific 3D model for playback. The playback parameters support configuring the script execution steps, order, play, pause, loop, delay, and so on. Supported animation effects include custom camera animation, model movement, entity show/hide, blinking dots, and particle-simulation animation.
Through the 3D model animation display technology, facilities and equipment, weather, road conditions, vehicles, and pedestrian information are displayed accurately, and the corresponding state information can be presented dynamically. Video from roadside high-definition cameras, dynamic traffic information, abnormal traffic events, and similar information can be superimposed; the roadside video is called up automatically according to the vehicle position, video view-angle switching is realized, and vehicle speed, alarms, early warnings, and other information are displayed in real time, supporting component-level, individualized fine management of urban roads. The intersection signal-lamp animation achieved with the 3D model animation display of this embodiment is shown in fig. 11.
The method of this embodiment uses custom 3D model interaction events to realize visual display. A 3D model library is produced for the various road facilities — road surfaces, lane lines, sidewalks, zebra crossings, central separation belts, traffic lights, cameras, and so on — and each is rendered separately. The 3D animation design technology realizes visual display and real-time interactive display of the 3D models on a Web map, and custom interaction events based on JavaScript provide the dynamic interaction effects.
Based on the high-precision map collected by the aerial drone, combined with the geographic data of the engineering construction design drawings, all road elements are faithfully restored, with a data acquisition precision within 50 cm; the 3D models of traffic facilities and equipment are loaded and displayed with a page response time within 500 ms, supporting animation display through custom events; and the completeness of the restored static and dynamic urban road traffic information exceeds 90 percent.
Among acquisition modes such as lidar, high-definition cameras, and professional surveying vehicles, drone aerial photography is adopted for map acquisition, producing tiled high-definition map data that is synthesized from the longitude and latitude of the acquisition points and combined with the geographic information — roads, lanes, intersections, buildings, and greenery — of the engineering construction design drawings, making high-precision map acquisition and production lightweight and low-cost. Through targeted data correction and related processing, combined with custom 3D model interaction events and 3D animation display technology, the display effect of a high-definition electronic map is achieved and the traffic-control business requirements are met. The acquisition characteristics of the aerial drone are fully exploited, data processing is targeted, the display and business interaction of the high-precision map are satisfied, and traffic safety and efficiency are improved.
The method makes full use of existing aerial drones to collect high-precision map data and, combined with the business need for traffic data visualization in traffic supervision, supports integrated dynamic and static display of the high-precision map, faithfully restores the traffic environment and vehicle operation, provides an accurate basis for decision support, and builds a digital-twin display and interaction system supporting intelligent road management, operation, and maintenance. It provides refined technical support for traffic management, key-vehicle monitoring, and the like, improves the emergency-handling level, and yields good economic and social benefits.
In the traffic data display method above, the measurement data of the aerial drone and the engineering design drawing files are acquired, the data is corrected, and the geographic feature elements are extracted and stored; the result is then loaded and displayed with 3D models. Combined with the 3D equipment-model visual interaction technology, the traffic conditions are faithfully restored on screen, meeting traffic supervisors' need for a three-dimensional, visual, high-definition map.
Fig. 12 is a schematic block diagram of a traffic data display device 300 according to an embodiment of the present invention. As shown in fig. 12, the present invention further provides a traffic data display device 300 corresponding to the above traffic data display method. The traffic data display apparatus 300 includes a unit for performing the traffic data display method described above, and the apparatus may be configured in a server. Specifically, referring to fig. 12, the traffic data display device 300 includes a data acquisition unit 301, a data processing unit 302, and a display unit 303.
The data acquisition unit 301 is configured to acquire measurement data of the aerial unmanned aerial vehicle and an engineering design drawing file, so as to obtain original data; a data processing unit 302, configured to perform data processing on the raw data to obtain map structured data; and the display unit 303 is used for displaying the map structured data based on a 3D model and a 3D animation technology.
In one embodiment, as shown in fig. 13, the data processing unit 302 includes a correction subunit 3021, an extraction subunit 3022, and a storage subunit 3023.
A correction subunit 3021, configured to perform data correction on the raw data to obtain a correction result; an extraction subunit 3022, configured to extract geographic feature data from the correction result to obtain a geographic feature element; and the storage subunit 3023 is configured to classify the correction result, convert and store the geographic feature element, so as to obtain map structured data.
In an embodiment, as shown in fig. 14, the correction subunit 3021 includes a stitching module 30211, a superposition module 30212, and a result correction module 30213.
The splicing module 30211 is used for splicing the image files in the measurement data to obtain a splicing result; the superposition module 30212 is configured to superimpose the geographic information in the engineering design drawing file on the splicing result, so as to obtain a superimposed result; and the result correction module 30213 is configured to correct the superposition result to obtain a correction result.
In one embodiment, as shown in fig. 15, the stitching module 30211 includes a slicing sub-module 302111, a padding sub-module 302112, an adding sub-module 302113, a correction sub-module 302114, and a result stitching sub-module 302115.
The slicing submodule 302111 is used for slicing the deformed regions of the image files in the measurement data to obtain a plurality of regions; the filling submodule 302112 is used for performing deformation-restoration calculation on the regions and rapidly filling each region according to the original standard algorithm to obtain a filling result; the adding submodule 302113 is used for adding Mercator projection information to the image for the filling result to obtain an addition result; the correction submodule 302114 is used for putting the addition result into a grid for correction to obtain a processing result; and the result splicing submodule 302115 is used for splicing the processing results with the SIFT algorithm to obtain the splicing result.
In an embodiment, as shown in fig. 16, the extracting subunit 3022 includes a dividing module 30221 and a detecting module 30222.
The segmentation module 30221 is configured to segment the urban roads and lanes by combining the Mask R-CNN algorithm with the 2D electronic map data to obtain a segmentation result; the detection module 30222 is configured to perform sign recognition and vehicle detection on the segmentation result through connected-component extraction to obtain the geographic feature elements.
In an embodiment, as shown in fig. 17, the storage subunit 3023 includes a categorizing module 30231 and a converting module 30232.
The classifying module 30231 is configured to classify the correction result according to a hierarchical directory organization, and load image data of a corresponding file directory through a map display area; the conversion module 30232 is configured to convert the geographic feature element into a 3D Tiles format, and store the geographic feature element to obtain map structured data.
In one embodiment, as shown in fig. 18, the presentation unit 303 includes a production subunit 3031, an event setting subunit 3032, and an animation presentation subunit 3033.
A making subunit 3031, configured to make a 3D model, and superimpose the map structured data with the 3D model; an event setting subunit 3032, configured to set a 3D model custom event; and the animation display subunit 3033 is used for displaying the map structured data through the 3D model animation in combination with the custom event.
It should be noted that, as will be clearly understood by those skilled in the art, the specific implementation process of the traffic data display device 300 and each unit may refer to the corresponding description in the foregoing method embodiments, and for convenience and brevity of description, the detailed description is omitted herein.
The traffic data display apparatus 300 described above may be implemented in the form of a computer program that can be run on a computer device as shown in fig. 19.
Referring to fig. 19, fig. 19 is a schematic block diagram of a computer device according to an embodiment of the present application. The computer device 500 may be a server, where the server may be a stand-alone server or may be a server cluster formed by a plurality of servers.
With reference to FIG. 19, the computer device 500 includes a processor 502, memory, and a network interface 505 connected by a system bus 501, where the memory may include a non-volatile storage medium 503 and an internal memory 504.
The non-volatile storage medium 503 may store an operating system 5031 and a computer program 5032. The computer program 5032 includes program instructions that, when executed, cause the processor 502 to perform a traffic data presentation method.
The processor 502 is used to provide computing and control capabilities to support the operation of the overall computer device 500.
The internal memory 504 provides an environment for the execution of a computer program 5032 in the non-volatile storage medium 503, which computer program 5032, when executed by the processor 502, causes the processor 502 to perform a traffic data presentation method.
The network interface 505 is used for network communication with other devices. It will be appreciated by those skilled in the art that the structure shown in fig. 19 is merely a block diagram of part of the structure related to the present solution and does not constitute a limitation on the computer device 500 to which the solution is applied; a particular computer device 500 may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
Wherein the processor 502 is configured to execute a computer program 5032 stored in a memory to implement the steps of:
acquiring measurement data and engineering design drawing files of the aerial unmanned aerial vehicle to obtain original data; performing data processing on the original data to obtain map structured data; the map structured data is presented based on a 3D model and 3D animation techniques.
In one embodiment, when the step of performing data processing on the raw data to obtain map structured data is performed by the processor 502, the following steps are specifically implemented:
carrying out data correction on the original data to obtain a correction result; extracting geographic characteristic data from the correction result to obtain geographic characteristic elements; classifying the correction result, converting and storing the geographic characteristic elements to obtain map structured data.
In one embodiment, when the step of correcting the original data to obtain the correction result is implemented by the processor 502, the following steps are specifically implemented:
image file splicing is carried out on the image files in the measurement data so as to obtain splicing results; superposing geographic information in the engineering design drawing file in the splicing result to obtain a superposition result; and correcting the superposition result to obtain a correction result.
In an embodiment, when the step of performing image file stitching on the image file in the measurement data to obtain the stitching result is performed by the processor 502, the following steps are specifically implemented:
slicing the deformed regions of the image files in the measurement data to obtain a plurality of regions; performing deformation-restoration calculation on the regions, and rapidly filling each region according to the original standard algorithm to obtain a filling result; adding Mercator projection information to the image for the filling result to obtain an addition result; putting the addition result into a grid for correction to obtain a processing result; and splicing the processing results with the SIFT algorithm to obtain a splicing result.
In one embodiment, when the step of extracting the geographic feature data from the correction result to obtain the geographic feature element is implemented by the processor 502, the following steps are specifically implemented:
segmenting the urban roads and lanes by combining the correction result with the 2D electronic map data using the Mask R-CNN algorithm to obtain a segmentation result; and performing sign recognition and vehicle detection on the segmentation result through connected-component extraction to obtain geographic feature elements.
In one embodiment, when the step of classifying the correction result and converting and storing the geographic feature element to obtain the map structured data is implemented by the processor 502, the following steps are specifically implemented:
classifying the correction results according to the hierarchical directory organization, and loading image data of the corresponding file directory through a map display area; and converting the geographic characteristic elements into a 3D Tiles format, and storing the geographic characteristic elements to obtain map structured data.
In an embodiment, when the processor 502 performs the step of displaying the map structured data based on the 3D model and the 3D animation technology, the following steps are specifically implemented:
manufacturing a 3D model, and superposing map structured data with the 3D model; setting a 3D model custom event; and displaying the map structured data through the 3D model animation in combination with the custom event.
It should be appreciated that in an embodiment of the application, the processor 502 may be a central processing unit (CPU); the processor 502 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or any conventional processor.
Those skilled in the art will appreciate that all or part of the flow in a method embodying the above described embodiments may be accomplished by computer programs instructing the relevant hardware. The computer program comprises program instructions, and the computer program can be stored in a storage medium, which is a computer readable storage medium. The program instructions are executed by at least one processor in the computer system to implement the flow steps of the embodiments of the method described above.
Accordingly, the present application also provides a storage medium. The storage medium may be a computer readable storage medium. The storage medium stores a computer program which, when executed by a processor, causes the processor to perform the steps of:
Acquiring measurement data and engineering design drawing files of the aerial unmanned aerial vehicle to obtain original data; performing data processing on the original data to obtain map structured data; the map structured data is presented based on a 3D model and 3D animation techniques.
In one embodiment, when the processor executes the computer program to perform the step of performing data processing on the raw data to obtain map structured data, the following steps are specifically implemented:
carrying out data correction on the original data to obtain a correction result; extracting geographic characteristic data from the correction result to obtain geographic characteristic elements; classifying the correction result, converting and storing the geographic characteristic elements to obtain map structured data.
In one embodiment, when the processor executes the computer program to implement the step of correcting the original data to obtain a correction result, the following steps are specifically implemented:
image file splicing is carried out on the image files in the measurement data so as to obtain splicing results; superposing geographic information in the engineering design drawing file in the splicing result to obtain a superposition result; and correcting the superposition result to obtain a correction result.
In one embodiment, when the processor executes the computer program to implement the step of performing image file stitching on the image files in the measurement data to obtain a stitching result, the method specifically includes the following steps:
slicing the deformed regions of the image files in the measurement data to obtain a plurality of regions; performing deformation-restoration calculation on the regions, and rapidly filling each region according to the original standard algorithm to obtain a filling result; adding Mercator projection information to the image for the filling result to obtain an addition result; putting the addition result into a grid for correction to obtain a processing result; and splicing the processing results with the SIFT algorithm to obtain a splicing result.
In one embodiment, when the processor executes the computer program to implement the step of extracting the geographic feature data from the corrected result to obtain the geographic feature element, the steps are specifically implemented as follows:
segmenting the urban roads and lanes by combining the correction result with the 2D electronic map data using the Mask R-CNN algorithm to obtain a segmentation result; and performing sign recognition and vehicle detection on the segmentation result through connected-component extraction to obtain geographic feature elements.
In one embodiment, when the processor executes the computer program to implement the step of classifying the correction result, converting and storing the geographic feature element to obtain the map structured data, the steps are specifically implemented as follows:
classifying the correction results according to the hierarchical directory organization, and loading image data of the corresponding file directory through a map display area; and converting the geographic characteristic elements into a 3D Tiles format, and storing the geographic characteristic elements to obtain map structured data.
In one embodiment, when the processor executes the computer program to implement the step of displaying the map structured data based on the 3D model and the 3D animation technology, the processor specifically implements the following steps:
manufacturing a 3D model, and superposing map structured data with the 3D model; setting a 3D model custom event; and displaying the map structured data through the 3D model animation in combination with the custom event.
The storage medium may be a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, an optical disk, or any other computer-readable storage medium that can store program code.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of the two; to clearly illustrate the interchangeability of hardware and software, the composition and steps of the examples have been described above generally in terms of function. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality differently for each particular application, but such implementation decisions should not be interpreted as departing from the scope of the present invention.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the device embodiments described above are merely illustrative. For example, the division of each unit is only one logic function division, and there may be another division manner in actual implementation. For example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed.
The steps in the method of the embodiment of the invention can be sequentially adjusted, combined and deleted according to actual needs. The units in the device of the embodiment of the invention can be combined, divided and deleted according to actual needs. In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The integrated unit may be stored in a storage medium if implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present invention is essentially or a part contributing to the prior art, or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a terminal, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention.
While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and equivalent substitutions may be made without departing from the scope of the invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.
Claims (10)
1. The traffic data display method is characterized by comprising the following steps:
acquiring measurement data and engineering design drawing files of the aerial unmanned aerial vehicle to obtain original data;
performing data processing on the original data to obtain map structured data;
the map structured data is presented based on a 3D model and 3D animation techniques.
2. The traffic data display method according to claim 1, wherein the data processing the raw data to obtain map structured data comprises:
carrying out data correction on the original data to obtain a correction result;
extracting geographic characteristic data from the correction result to obtain geographic characteristic elements;
classifying the correction result, converting and storing the geographic characteristic elements to obtain map structured data.
3. The traffic data display method according to claim 2, wherein the performing data correction on the raw data to obtain a correction result includes:
image file splicing is carried out on the image files in the measurement data so as to obtain splicing results;
superposing geographic information in the engineering design drawing file in the splicing result to obtain a superposition result;
And correcting the superposition result to obtain a correction result.
4. The traffic data display method according to claim 3, wherein the performing image file stitching on the image file in the measurement data to obtain a stitching result includes:
slicing the deformed region of the image file in the measurement data to obtain a plurality of regions;
performing deformation restoration calculation on the regions, and rapidly filling each region according to an original standard algorithm to obtain a filling result;
adding Mercator projection information to the image for the filling result to obtain an addition result;
putting the increase result into a grid for correction to obtain a processing result;
and splicing the processing results by adopting a SIFT algorithm to obtain splicing results.
5. The traffic data display method according to claim 2, wherein the extracting the geographic feature data from the corrected result to obtain the geographic feature element includes:
dividing urban roads and lanes by combining the correction result with 2D electronic map data by adopting a Mask R-CNN algorithm to obtain a division result;
and carrying out identification label and vehicle detection on the segmentation result through extraction of connected components so as to obtain geographic characteristic elements.
6. The traffic data display method according to claim 2, wherein classifying the correction result, converting and storing the geographic feature element to obtain map structured data, comprises:
classifying the correction results according to the hierarchical directory organization, and loading image data of the corresponding file directory through a map display area;
and converting the geographic characteristic elements into a 3D Tiles format, and storing the geographic characteristic elements to obtain map structured data.
7. The traffic data display method according to claim 1, wherein the displaying the map structured data based on the 3D model and 3D animation technology comprises:
building a 3D model, and overlaying the map structured data on the 3D model;
setting a custom event for the 3D model;
and displaying the map structured data through 3D model animation in combination with the custom event.
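The "custom event" of claim 7 is an interaction hook: user actions on the 3D model trigger registered handlers that drive the animation. A tiny stand-in dispatcher is sketched below; the class, event name, and payload are hypothetical, illustrating the register/fire pattern rather than the patent's implementation.

```python
class Model3D:
    """Minimal stand-in for a displayed 3D model with custom events."""

    def __init__(self, name):
        self.name = name
        self._handlers = {}  # event name -> list of callbacks

    def on(self, event, handler):
        """Register a handler for a named custom event."""
        self._handlers.setdefault(event, []).append(handler)

    def fire(self, event, payload=None):
        """Dispatch an event and collect each handler's result."""
        return [h(payload) for h in self._handlers.get(event, [])]

# Hypothetical usage: clicking a model highlights a traffic layer.
bridge = Model3D("overpass")
bridge.on("click", lambda p: f"highlight traffic layer at {p}")
print(bridge.fire("click", (113.26, 23.13)))
```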
8. A traffic data display device, characterized by comprising:
a data acquisition unit for acquiring measurement data of an aerial unmanned aerial vehicle and engineering design drawing files to obtain original data;
a data processing unit for performing data processing on the original data to obtain map structured data;
and a display unit for displaying the map structured data based on the 3D model and 3D animation technology.
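The three units of claim 8 form an acquire → process → display pipeline. A minimal structural sketch is shown below; the callables standing in for each unit are placeholders, not the patent's implementation.

```python
class TrafficDataDisplayDevice:
    """Sketch of the claim-8 unit structure as injected callables."""

    def __init__(self, acquire, process, display):
        self.acquire = acquire    # data acquisition unit
        self.process = process    # data processing unit
        self.display = display    # display unit

    def run(self):
        raw = self.acquire()              # UAV imagery + design drawings
        structured = self.process(raw)    # -> map structured data
        return self.display(structured)   # -> 3D rendering

# Placeholder units wired together for illustration.
device = TrafficDataDisplayDevice(
    acquire=lambda: {"images": 3, "drawings": 1},
    process=lambda raw: {"tiles": raw["images"]},
    display=lambda data: f"rendering {data['tiles']} tiles",
)
print(device.run())
```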
9. A computer device, characterized by comprising a memory storing a computer program and a processor which, when executing the computer program, implements the method according to any one of claims 1-7.
10. A storage medium storing a computer program which, when executed by a processor, implements the method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310927009.6A CN116977581A (en) | 2023-07-26 | 2023-07-26 | Traffic data display method, device, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116977581A (en) | 2023-10-31 |
Family
ID=88478982
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310927009.6A Pending CN116977581A (en) | 2023-07-26 | 2023-07-26 | Traffic data display method, device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116977581A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||