CN116091723B - Fire emergency rescue live-action three-dimensional modeling method and system based on unmanned aerial vehicle - Google Patents


Info

Publication number
CN116091723B
Authority
CN
China
Prior art keywords
model
corresponding relation
image
points
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211712253.2A
Other languages
Chinese (zh)
Other versions
CN116091723A (en)
Inventor
滕波 (Teng Bo)
Current Assignee
Shanghai Wangluo Electronic Science & Technology Co ltd
Original Assignee
Shanghai Wangluo Electronic Science & Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Wangluo Electronic Science & Technology Co ltd filed Critical Shanghai Wangluo Electronic Science & Technology Co ltd
Priority to CN202211712253.2A priority Critical patent/CN116091723B/en
Publication of CN116091723A publication Critical patent/CN116091723A/en
Application granted granted Critical
Publication of CN116091723B publication Critical patent/CN116091723B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10: Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00: Indexing scheme for image generation or computer graphics
    • G06T2210/04: Architectural design, interior design
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Architecture (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides an unmanned-aerial-vehicle-based live-action three-dimensional modeling method and system for fire emergency rescue. The three-dimensional modeling method comprises the following steps. S1: photograph a building with an unmanned aerial vehicle to obtain image information. S2: perform feature point analysis on the image information and establish a correspondence between the image and the real object. S3: generate a three-dimensional building model with a three-dimensional modeling engine, match its position against the image information, and establish a correspondence between the image and the model. S4: fuse the image-to-object and image-to-model correspondences to generate a correspondence among image, real object, and model. S5: convert this three-way correspondence into a virtual three-dimensional model via mixed reality technology and present it on a mobile carrier. S6: match and correct the virtual three-dimensional model on the mobile carrier against the real building to complete the live-action three-dimensional modeling.

Description

Fire emergency rescue live-action three-dimensional modeling method and system based on unmanned aerial vehicle
[Technical Field]
The invention relates to the technical field of unmanned aerial vehicles, in particular to a fire-fighting emergency rescue live-action three-dimensional modeling method and system based on an unmanned aerial vehicle.
[Background Art]
As urban construction continues to grow, major and extremely large fires occur from time to time. Once a fire breaks out, the scene environment is highly complex and rescue work is difficult to carry out. At the same time, because of this complexity, rescuers face great danger at every moment, seriously endangering their lives. It is therefore critical to grasp the geographic position and related parameters of the disaster scene within a very short time, creating favorable conditions for rescuing trapped people and protecting lives and property.
At present, with their rapid development, unmanned aerial vehicles offer flexible maneuvering at low cost, a comprehensive field of view with strong global coverage, and strong extensibility across a wide range of applications, making them well suited to fire emergency rescue. It has been proposed to apply 5G and unmanned aerial vehicle technology to fire emergency rescue at hazardous chemical warehouses, enabling real-time transmission of images shot on site. It has also been proposed to build a three-dimensional fire-fighting auxiliary rescue system around an unmanned aerial vehicle: real-time data collected by the vehicle is modeled and processed in the cloud to predict the likely evolution path and progression of a fire, producing a fire rescue decision scheme that is dynamically adjusted according to real-time disaster information, giving rescuers a degree of situational awareness and improving rescue efficiency. However, in such systems the map position and the three-dimensional model are not bound together, so the model drifts when the building's geographic position changes; moreover, the building interior is not modeled in three dimensions and indoor fire-fighting facilities are not annotated, so full-element fire-fighting information cannot be provided. This patent proposes to use unmanned aerial vehicle oblique photography, with dedicated software performing aerial route planning and three-dimensional modeling. At the same time, the building's geographic position is bound to the constructed three-dimensional model, and automatic or manual deviation correction keeps the two highly matched.
More importantly, the method also constructs an indoor three-dimensional map of the building and annotates full-element fire-fighting facility information, including fire elevators, fire doors, fire hydrants, and the like, strongly supporting fire-extinguishing and rescue command decisions at a changing disaster scene.
Accordingly, there is a need to develop a fire emergency rescue live-action three-dimensional modeling method and system based on an unmanned aerial vehicle, so as to address or mitigate one or more of the deficiencies of the prior art described above.
[Summary of the Invention]
In view of the above, the invention provides a fire emergency rescue live-action three-dimensional modeling method and system based on an unmanned aerial vehicle. By means of unmanned aerial vehicles, oblique photography, 5G, image recognition, indoor positioning, mixed reality (MR), and other technologies, it realizes indoor-outdoor integrated, dynamically matched live-action three-dimensional modeling of buildings. It can grasp the dynamic information of a fire scene in real time, update the fire plan in a timely, accurate, and efficient manner according to on-site emergencies, provide scientific and reasonable auxiliary decisions for emergency rescue, and meet the actual combat requirements of present-day fire emergency rescue.
In one aspect, the invention provides a fire emergency rescue live-action three-dimensional modeling method based on an unmanned aerial vehicle, which comprises the following steps:
s1: shooting a building through an unmanned aerial vehicle to obtain image information;
s2: analyzing the characteristic points of the image information, and establishing a corresponding relation between the image and the object;
s3: generating a building three-dimensional model through a three-dimensional modeling engine, performing position matching with image information, and establishing a corresponding relation between an image and the model;
s4: fusing the corresponding relation between the image and the real object and the corresponding relation between the image and the model to generate the corresponding relation among the image, the real object and the model;
s5: converting the corresponding relation of the three into a virtual three-dimensional model through a mixed reality technology, and presenting the virtual three-dimensional model on a mobile carrier;
s6: and matching and correcting the virtual three-dimensional model on the mobile carrier with a real building to complete real-scene three-dimensional modeling.
In a further implementation of the above aspect, establishing the correspondence between the image and the real object in S2 specifically comprises: computing a feature for each pixel of the picture with the SIFT operator; forming pictures of a number of physical key points during shooting and naming each picture after its key point; and, from the feature point analysis results, establishing the correspondence between feature points and physical key points. The correspondence is presented as a two-tuple: <feature point, physical point>.
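The pairing step in S2 can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: SIFT detection is abstracted away as a precomputed list of feature-point pixel coordinates, and the named physical key points, coordinates, and threshold below are all invented for the example.

```python
# Sketch of step S2: pair detected image feature points with named physical
# key points by nearest pixel distance, producing the <feature point,
# physical point> two-tuples. All names and coordinates are illustrative.

def build_image_to_object_pairs(feature_points, physical_keypoints, max_dist=5.0):
    """feature_points: list of (x, y) pixels from the detector.
    physical_keypoints: dict name -> (x, y) pixel where that named
    physical key point appears in the picture."""
    pairs = []
    for name, (px, py) in physical_keypoints.items():
        best, best_d = None, max_dist
        for fx, fy in feature_points:
            d = ((fx - px) ** 2 + (fy - py) ** 2) ** 0.5
            if d <= best_d:
                best, best_d = (fx, fy), d
        if best is not None:
            pairs.append((best, name))  # the <feature point, physical point> tuple
    return pairs

feats = [(100.0, 200.0), (340.0, 120.0), (512.0, 480.0)]
objects = {"hydrant_north": (101.0, 198.0), "fire_door_3F": (338.0, 123.0)}
result = build_image_to_object_pairs(feats, objects)
print(result)
```

In practice the detector output would come from a real SIFT implementation; the nearest-pixel pairing shown here is one plausible way to link named key-point pictures to detected features.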
In a further implementation of the above aspect, establishing the correspondence between the image and the model in S3 specifically comprises: using the feature points obtained in the previous step to match pixels across the multiple pictures, thereby estimating camera parameters and obtaining sparse 3D information; then performing dense reconstruction with the obtained camera parameters to produce a point cloud, realizing the mapping from feature points to model structure points. The correspondence is presented as a two-tuple: <feature point, model point>.
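The cross-picture matching that precedes camera-parameter estimation is conventionally done by nearest-neighbour descriptor matching with Lowe's ratio test. The sketch below assumes tiny 2-D descriptor vectors purely for illustration; real SIFT descriptors are 128-dimensional.

```python
# Sketch of the S3 matching step: each descriptor in picture A is matched
# to its nearest neighbour in picture B, kept only if the nearest match is
# clearly better than the second-nearest (Lowe's ratio test).

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Return index pairs (i, j): feature i in A matches feature j in B."""
    def dist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

    matches = []
    for i, da in enumerate(desc_a):
        scored = sorted((dist(da, db), j) for j, db in enumerate(desc_b))
        if len(scored) >= 2 and scored[0][0] < ratio * scored[1][0]:
            matches.append((i, scored[0][1]))
    return matches

A = [[1.0, 0.0], [0.0, 1.0]]
B = [[0.9, 0.1], [0.0, 5.0], [4.0, 4.0]]
result = match_descriptors(A, B)
print(result)
```

The resulting pixel correspondences are what a structure-from-motion solver would consume to estimate the camera parameters and triangulate the sparse 3D points mentioned in the text.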
In a further implementation of the above aspect, generating the correspondence among image, real object, and model in S4 specifically comprises: constructing the three-way correspondence <feature point, model point, physical point> from the tuples <feature point, physical point> and <feature point, model point>.
In a further implementation of the above aspect, converting the three-way correspondence into a virtual three-dimensional model via mixed reality technology in S5 specifically comprises: performing dense reconstruction from the sparse 3D information and the camera parameters to obtain a point cloud, post-processing the point cloud into a mesh, removing noise points and the like, and thereby obtaining the virtual three-dimensional model.
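The patent only says "removing noise points"; one common criterion for that (an assumption here, not stated in the source) is to drop points whose mean distance to their k nearest neighbours exceeds a threshold. A minimal sketch:

```python
# Sketch of the S5 point-cloud post-processing: statistical outlier removal
# before meshing. A point is kept if the mean distance to its k nearest
# neighbours is within max_mean_dist; k and the threshold are illustrative.

def remove_noise(points, k=2, max_mean_dist=2.0):
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    kept = []
    for i, p in enumerate(points):
        others = sorted(dist(p, q) for j, q in enumerate(points) if j != i)
        if sum(others[:k]) / k <= max_mean_dist:
            kept.append(p)
    return kept

cloud = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (50, 50, 50)]
result = remove_noise(cloud)
print(result)  # the isolated point at (50, 50, 50) is dropped
```

The cleaned cloud would then be meshed (e.g., by Poisson or ball-pivoting surface reconstruction) to obtain the virtual three-dimensional model the text describes.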
In a further implementation of the above aspect, the matching method in S6 specifically comprises: matching the physical points against the points in the virtual three-dimensional model according to the constructed <feature point, model point, physical point> relationship.
In a further implementation of the above aspect, the deviation correction method in S6 specifically comprises: correcting the deviation between the physical points and the points in the virtual three-dimensional model according to the constructed <feature point, model point, physical point> relationship.
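The S6 correction can be sketched as follows. Reducing the correction to a pure translation is a simplifying assumption made only for this example; a full solution would also estimate rotation and scale (e.g., a similarity transform).

```python
# Sketch of the S6 deviation correction: from the matched
# <feature point, model point, physical point> triples, estimate the mean
# offset between physical points and model points, then shift the model
# so the two coincide.

def correction_offset(triples):
    """triples: list of (feature_pt, model_pt, physical_pt), 3-D points."""
    n = len(triples)
    return tuple(
        sum(phys[d] - model[d] for _, model, phys in triples) / n
        for d in range(3)
    )

def apply_offset(model_points, offset):
    return [tuple(p[d] + offset[d] for d in range(3)) for p in model_points]

triples = [
    ((0, 0), (10.0, 20.0, 0.0), (10.5, 20.5, 0.0)),
    ((1, 1), (12.0, 22.0, 0.0), (12.5, 22.5, 0.0)),
]
off = correction_offset(triples)
print(off)
```

With enough well-distributed triples, repeated application of this estimate-and-shift loop is one way an "automatic deviation correction" mode could keep the model and the building bound together.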
The above aspects may further be embodied as a fire emergency rescue live-action three-dimensional modeling system based on an unmanned aerial vehicle, the three-dimensional modeling system comprising:
the image information acquisition module is used for shooting a building through the unmanned aerial vehicle to acquire image information;
the characteristic point analysis module is used for carrying out characteristic point analysis on the image information and establishing a corresponding relation between the image and the object;
the building model matching module is used for generating a building three-dimensional model through the three-dimensional modeling engine, performing position matching with the image information and establishing a corresponding relation between the image and the model;
the data fusion module is used for fusing the corresponding relation between the image and the real object and the corresponding relation between the image and the model to generate the corresponding relation among the image, the real object and the model;
the virtual modeling module converts the corresponding relation of the three into a virtual three-dimensional model through a mixed reality technology and displays the virtual three-dimensional model on a mobile carrier;
and the live-action modeling module is used for matching and correcting the virtual three-dimensional model on the mobile carrier with the real building to complete live-action three-dimensional modeling.
The above aspects further provide a readable storage medium, comprising: a memory storing a program; and a processor that implements any of the above three-dimensional modeling methods when executing the program.
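The module list above maps naturally onto one pipeline object. The class and method names below are illustrative assumptions, and each stage is a placeholder standing in for the corresponding S1-S6 step rather than a real implementation:

```python
# Sketch of how the claimed modules could compose: each method stands in
# for one module, and run() chains them in the S1-S6 order.

class LiveActionModelingPipeline:
    """Chains the modules of the claimed system in order."""

    def acquire_images(self):            # image information acquisition (S1)
        return ["img_001.jpg", "img_002.jpg"]

    def analyze_features(self, images):  # feature point analysis (S2)
        return {("f", i): ("obj", i) for i, _ in enumerate(images)}

    def match_model(self, images):       # building model matching (S3)
        return {("f", i): ("mdl", i) for i, _ in enumerate(images)}

    def fuse(self, f2o, f2m):            # data fusion (S4)
        return [(fp, f2m[fp], f2o[fp]) for fp in f2o if fp in f2m]

    def run(self):                       # virtual + live-action modeling (S5/S6)
        images = self.acquire_images()
        return self.fuse(self.analyze_features(images), self.match_model(images))

result = LiveActionModelingPipeline().run()
print(result)
```

A real system would replace each placeholder with the corresponding drone-capture, SIFT, structure-from-motion, and mixed-reality components described in the text.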
Compared with the prior art, the invention can obtain the following technical effects:
the invention can realize the binding of the geographical position of the building and the position of the three-dimensional model, and the automatic deviation correction or the manual deviation correction is adopted, so that the two are highly matched, the precision can reach the centimeter level, and the precision is far better than the prior decimeter level precision. More importantly, the method also aims at building an indoor three-dimensional map, marks fire-fighting facility information of all elements, including a fire-fighting elevator, a fire door, a fire hydrant and the like, and can be used for marking real-time position information of fire-fighting rescue workers, fire-fighting vehicles and equipment on a constructed live-action three-dimensional model by combining a three-dimensional visual fire-fighting rescue plan, so that the method is more visual, quicker and accurate.
Of course, it is not necessary for any of the products embodying the invention to achieve all of the technical effects described above at the same time.
[Description of the Drawings]
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a three-dimensional modeling method provided by one embodiment of the present invention.
[Detailed Description of the Invention]
For a better understanding of the technical solution of the present invention, the following detailed description of the embodiments of the present invention refers to the accompanying drawings.
It should be understood that the described embodiments are merely some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
The invention provides a fire emergency rescue live-action three-dimensional modeling method based on an unmanned aerial vehicle, which comprises the following steps:
s1: shooting a building through an unmanned aerial vehicle to obtain image information;
s2: analyzing the characteristic points of the image information, and establishing a corresponding relation between the image and the object;
s3: the three-dimensional modeling engine is used for extracting and analyzing the coordinate information of the shot image, establishing accurate two-dimensional information of the building, and constructing the indoor and outdoor three-dimensional frame map information with basic characteristics through the layer height processing of the two-dimensional information of the building. Generating a building three-dimensional model, performing position matching with image information, and establishing a corresponding relation between the image and the model;
s4: fusing the corresponding relation between the image and the real object and the corresponding relation between the image and the model to generate the corresponding relation among the image, the real object and the model;
s5: converting the corresponding relation of the three into a virtual three-dimensional model through a mixed reality technology, and presenting the virtual three-dimensional model on a mobile carrier;
s6: matching and correcting the virtual three-dimensional model on the mobile carrier with a real building to complete real-scene three-dimensional modeling, wherein the virtual three-dimensional model is presented at a mobile terminal for facilitating various intelligent terminals to check, and the correction process is mainly based on shooting high-resolution pictures by the unmanned aerial vehicle.
The correspondence between the image and the real object in S2 is established specifically as follows: a feature is computed for each pixel of the picture with the SIFT operator; pictures of a number of physical key points are formed during shooting and named after their key points; and the correspondence between feature points and physical key points is established from the feature point analysis results. The correspondence is presented as a two-tuple: <feature point, physical point>.

The correspondence between the image and the model in S3 is established specifically as follows: the feature points obtained in the previous step are used to match pixels across the multiple pictures, from which camera parameters are estimated and sparse 3D information is obtained; dense reconstruction with the obtained camera parameters then produces a point cloud, realizing the mapping from feature points to model structure points. The correspondence is presented as a two-tuple: <feature point, model point>.

The correspondence among image, real object, and model in S4 is generated specifically as follows: the three-way correspondence <feature point, model point, physical point> is constructed from <feature point, physical point> and <feature point, model point>.

In S5, the three-way correspondence is converted into a virtual three-dimensional model by mixed reality technology as follows: dense reconstruction from the sparse 3D information and camera parameters yields a point cloud; post-processing the point cloud produces a mesh, noise points and the like are removed, and the virtual three-dimensional model is obtained.

The matching method in S6 is specifically: the physical points are matched against the points in the virtual three-dimensional model according to the constructed <feature point, model point, physical point> relationship. The deviation correction method in S6 is specifically: the deviation between the physical points and the points in the virtual three-dimensional model is corrected according to the same relationship.
The invention also provides a fire emergency rescue live-action three-dimensional modeling system based on the unmanned aerial vehicle, which comprises:
the image information acquisition module is used for shooting a building through the unmanned aerial vehicle to acquire image information;
the characteristic point analysis module is used for carrying out characteristic point analysis on the image information and establishing a corresponding relation between the image and the object;
the building model matching module is used for generating a building three-dimensional model through the three-dimensional modeling engine, performing position matching with the image information and establishing a corresponding relation between the image and the model;
the data fusion module is used for fusing the corresponding relation between the image and the real object and the corresponding relation between the image and the model to generate the corresponding relation among the image, the real object and the model;
the virtual modeling module converts the corresponding relation of the three into a virtual three-dimensional model through a mixed reality technology and displays the virtual three-dimensional model on a mobile carrier;
and the live-action modeling module is used for matching and correcting the virtual three-dimensional model on the mobile carrier with the real building to complete live-action three-dimensional modeling.
The present invention also provides a readable storage medium comprising: a memory storing a program; a processor that implements the three-dimensional modeling method of any of the above when executing the program.
Example 1:
As shown in FIG. 1, the invention acquires ground image information of a target building by unmanned aerial vehicle oblique photography. Because the three-dimensional spatial coordinates of the unmanned aerial vehicle's camera and its shooting angle toward the target building are known, the building's three-dimensional spatial position can be determined. This avoids the limitations of traditional three-dimensional modeling, which requires extensive manual field measurement and complex modeling and texture-mapping work; building a large-area three-dimensional digital city in particular entails a heavy workload, a long task cycle, a complex process, and high cost. On this basis, the positions of the image feature points in the real environment are determined from the shot images, and a correspondence between the shot images and the real building's geographic position information is established. More innovatively, the three-dimensional modeling engine rapidly generates a three-dimensional building model, matches its position against the shot images, and establishes a correspondence between the model and the images, thereby linking the geographic position information of the real environment, the three-dimensional coordinate information of the shot images, and the generated three-dimensional model. By means of this three-way correspondence, the building's geographic position is bound to the three-dimensional model, and automatic or manual deviation correction keeps the two highly matched, achieving centimeter-level precision, far better than the existing decimeter-level precision.
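The pose-to-position idea in this example can be sketched geometrically: with the camera's 3-D coordinates and shooting angle known, the ground point it is aimed at follows from intersecting the view ray with the ground plane. A flat ground at z = 0 and the simple heading/pitch model below are assumptions for illustration, not details from the patent.

```python
import math

# Sketch: locate the ground point a drone camera is aimed at from its
# position and shooting angle, by intersecting the view ray with z = 0.

def ground_target(cam_xyz, heading_deg, pitch_down_deg):
    """heading_deg: compass-style azimuth of the view direction (0 = +y).
    pitch_down_deg: angle below horizontal (must be > 0 to hit ground)."""
    x, y, z = cam_xyz
    h = math.radians(heading_deg)
    p = math.radians(pitch_down_deg)
    horiz = z / math.tan(p)  # horizontal distance covered while descending z
    return (x + horiz * math.sin(h), y + horiz * math.cos(h), 0.0)

# Drone 100 m up, looking due north, 45 degrees downward:
tx, ty, tz = ground_target((0.0, 0.0, 100.0), heading_deg=0.0, pitch_down_deg=45.0)
print(tx, ty, tz)
```

Real oblique-photography pipelines refine such single-ray estimates by triangulating the same point across many overlapping images, which is what drives the centimeter-level binding described above.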
More importantly, an indoor three-dimensional map is also constructed for the building and annotated with full-element fire-fighting facility information, including fire elevators, fire doors, fire hydrants, and the like. Combined with a three-dimensional visualized fire rescue plan, the real-time positions of rescuers, fire trucks, and equipment can be marked on the constructed live-action three-dimensional model, making command more intuitive, rapid, and accurate.
In view of the current situation in which fires in China present "complex conditions and changeable scenes", the invention assists decision-making in fire emergency rescue. Because the live-action three-dimensional model generated from the unmanned aerial vehicle is bound to the geographic position of the real environment, the real building and the generated virtual three-dimensional model can be presented together on a carrier (mobile phone, tablet, etc.) through mixed reality technology.
For hazardous chemical warehouses, the collected fire information from the disaster scene can be uploaded in real time over 5G for intelligent background analysis, assisting the commander's combat decisions. More advantageously, the fire-fighting facilities, personnel, and vehicles recognized by the method are identified in the three-dimensional model: by clicking the burning building on a carrier, the commander can grasp basic tank information (including volume, stored gas, temperature, wind direction, and the like) and, combining this with the positions of firefighters and fire trucks, promptly and efficiently formulate a reasonable combat scheme on the basis of the original fire plan.

For a large complex or an underground building, the method can show the specific ignition position and the distribution of firefighters and fire trucks from multiple views. Since the live-action three-dimensional model is bound to the geographic position of the real environment, when the commander rotates the three-dimensional model the view of the real burning building rotates with it, keeping the two highly coincident at all times. In addition, the fire commander can use the indoor three-dimensional map and its annotations to grasp the building's internal structure and the distribution of its fire-fighting facilities, and formulate reasonable combat routes for rescuers operating inside. This truly realizes indoor-outdoor three-dimensional live-action construction with the target building and the three-dimensional model linked: the fire commander can execute and assess the fire-fighting scheme more accurately, adjust it online in real time as the fire develops, make actual combat deployments efficiently and promptly, improve combat effectiveness, and achieve information linkage and precise command.
The fire emergency rescue live-action three-dimensional modeling method and system based on an unmanned aerial vehicle provided by the embodiments of the application have been described in detail above. The above description of the embodiments is only intended to aid understanding of the method of the application and its core ideas; meanwhile, those skilled in the art may make modifications to the specific embodiments and the scope of application in accordance with the ideas of the application, and this description should therefore not be construed as limiting the application.
Certain terms are used throughout the description and claims to refer to particular components. Those skilled in the art will appreciate that hardware manufacturers may refer to the same component by different names. The description and claims distinguish components not by name but by function. The terms "comprising" and "including", as used throughout the specification and claims, are open-ended and should be interpreted as "including, but not limited to". "Substantially" means that, within an acceptable error range, a person skilled in the art can solve the technical problem and substantially achieve the technical effect. The following description sets forth preferred embodiments for the purpose of illustrating the general principles of the application and is not intended to limit its scope; the scope of the application is defined by the appended claims.
It should also be noted that the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a product or system comprising a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a product or system. Without further limitation, an element preceded by "comprising a …" does not exclude the presence of additional identical elements in the product or system comprising that element.
It should be understood that the term "and/or" as used herein is merely one relationship describing the association of the associated objects, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" herein generally indicates that the front and rear associated objects are an "or" relationship.
While the foregoing illustrates and describes preferred embodiments of the application, it is to be understood that the application is not limited to the forms disclosed herein; these should not be regarded as excluding other embodiments, and the application is capable of use in various other combinations, modifications, and environments, and of being changed within the scope of the concepts described herein, through the above teachings or the skill or knowledge of the relevant art. All modifications and variations made by those skilled in the art that do not depart from the spirit and scope of the application shall fall within the protection scope of the appended claims.

Claims (3)

1. A fire emergency rescue live-action three-dimensional modeling method based on an unmanned aerial vehicle, characterized by comprising the following steps: S1: photographing a building with an unmanned aerial vehicle to obtain image information;
S2: performing feature point analysis on the image information and establishing a correspondence between the image and the physical object;
S3: generating a three-dimensional building model with a three-dimensional modeling engine, performing position matching with the image information, and establishing a correspondence between the image and the model;
S4: fusing the image-object correspondence and the image-model correspondence to generate a correspondence among the image, the physical object, and the model;
S5: converting the three-way correspondence into a virtual three-dimensional model through mixed reality technology and presenting it on a mobile carrier;
S6: matching and correcting the virtual three-dimensional model on the mobile carrier against the real building to complete the live-action three-dimensional modeling;
wherein establishing the correspondence between the image and the physical object in S2 specifically comprises: computing features for each pixel of the picture with a SIFT operator; forming a plurality of physical key-point pictures during shooting, naming the physical pictures after the key points, and establishing correspondences between feature points and physical key points according to the feature-point analysis results, each correspondence represented as a two-tuple: <feature point, physical point>;
wherein establishing the correspondence between the image and the model in S3 specifically comprises: matching the feature points obtained in S2 across the pixels of multiple pictures to estimate camera parameters and obtain sparse 3D information, then performing dense reconstruction with the estimated camera parameters to obtain a point cloud, thereby mapping feature points to model structure points, each correspondence represented as a two-tuple: <feature point, model point>;
wherein generating the correspondence among the image, the physical object, and the model in S4 specifically comprises: constructing the three-way correspondence <feature point, model point, physical point> from <feature point, physical point> and <feature point, model point>;
wherein converting the three-way correspondence into a virtual three-dimensional model through mixed reality technology in S5 specifically comprises: performing dense reconstruction from the sparse 3D information and camera parameters to obtain a point cloud, then post-processing the point cloud to obtain a mesh and remove noise points, thereby obtaining the virtual three-dimensional model;
wherein the matching in S6 specifically comprises: matching the physical points with the points in the virtual three-dimensional model according to the constructed <feature point, model point, physical point> relationship;
wherein the correction in S6 specifically comprises: correcting deviations between the physical points and the points in the virtual three-dimensional model according to the constructed <feature point, model point, physical point> relationship.
2. A fire emergency rescue live-action three-dimensional modeling system based on an unmanned aerial vehicle, implementing the three-dimensional modeling method of claim 1, characterized by comprising: an image information acquisition module for photographing a building with the unmanned aerial vehicle to obtain image information;
a feature point analysis module for performing feature point analysis on the image information and establishing a correspondence between the image and the physical object;
a building model matching module for generating a three-dimensional building model with the three-dimensional modeling engine, performing position matching with the image information, and establishing a correspondence between the image and the model;
a data fusion module for fusing the image-object correspondence and the image-model correspondence to generate a correspondence among the image, the physical object, and the model;
a virtual modeling module for converting the three-way correspondence into a virtual three-dimensional model through mixed reality technology and presenting it on a mobile carrier;
and a live-action modeling module for matching and correcting the virtual three-dimensional model on the mobile carrier against the real building to complete the live-action three-dimensional modeling.
3. A readable storage medium, comprising: a memory storing a program; and a processor which, when executing the program, implements the three-dimensional modeling method of claim 1.
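The module decomposition of claim 2 is a straight pipeline: each module consumes the previous module's output, ending with the corrected live-action model. A schematic Python sketch of that chaining (every name here is hypothetical; real modules would wrap SIFT analysis, the modeling engine, and the mixed-reality renderer):

```python
def live_action_pipeline(drone_photos,
                         analyze_features,      # feature point analysis module
                         match_building_model,  # building model matching module
                         fuse,                  # data fusion module
                         build_virtual_model,   # virtual modeling module
                         refine_against_real):  # live-action modeling module
    """Chain the modules of claim 2: each stage consumes the previous
    stage's output, ending with the corrected live-action model."""
    feat_to_phys = analyze_features(drone_photos)
    feat_to_model = match_building_model(drone_photos)
    triples = fuse(feat_to_phys, feat_to_model)
    virtual = build_virtual_model(triples)
    return refine_against_real(virtual)

# Toy usage with stand-in callables:
result = live_action_pipeline(
    ["photo_1.jpg"],
    analyze_features=lambda photos: {"f1": "door_main"},
    match_building_model=lambda photos: {"f1": (4.2, 0.0, 1.8)},
    fuse=lambda phys, model: {k: (model[k], phys[k]) for k in phys},
    build_virtual_model=lambda triples: ("mesh", triples),
    refine_against_real=lambda virtual: virtual,
)
```

Passing the modules in as callables mirrors the claim's structure: any one module (say, a different feature detector) can be swapped without touching the rest of the chain.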
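Underpinning steps S2-S3 of claim 1 (and the feature point analysis module of claim 2) is the matching of SIFT descriptors across pictures, conventionally done by nearest-neighbour search with Lowe's ratio test. A toy NumPy sketch of that matching step (synthetic 4-D descriptors stand in for real 128-D SIFT descriptors; the ratio threshold and all data are illustrative):

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """For each descriptor in desc_a, find its two nearest neighbours in
    desc_b and keep the match only if the closest is sufficiently better
    than the runner-up (Lowe's ratio test), rejecting ambiguous matches."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        nearest, second = np.argsort(dists)[:2]
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, int(nearest)))
    return matches

# Synthetic descriptors; real SIFT descriptors are 128-D gradient histograms.
desc_a = np.array([[1.0, 0.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0]])
desc_b = np.array([[0.9, 0.1, 0.0, 0.0],   # close to desc_a[0]
                   [0.0, 0.0, 1.0, 0.0],   # unrelated
                   [0.1, 0.9, 0.0, 0.0]])  # close to desc_a[1]

print(match_descriptors(desc_a, desc_b))  # [(0, 0), (1, 2)]
```

The matched index pairs are what the method turns into <feature point, model point> two-tuples once the model-side 3D positions are known from reconstruction.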
CN202211712253.2A 2022-12-29 2022-12-29 Fire emergency rescue live-action three-dimensional modeling method and system based on unmanned aerial vehicle Active CN116091723B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211712253.2A CN116091723B (en) 2022-12-29 2022-12-29 Fire emergency rescue live-action three-dimensional modeling method and system based on unmanned aerial vehicle

Publications (2)

Publication Number Publication Date
CN116091723A CN116091723A (en) 2023-05-09
CN116091723B true CN116091723B (en) 2024-01-05

Family

ID=86198541

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211712253.2A Active CN116091723B (en) 2022-12-29 2022-12-29 Fire emergency rescue live-action three-dimensional modeling method and system based on unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN116091723B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117152592B (en) * 2023-10-26 2024-01-30 青岛澳西智能科技有限公司 Building information and fire information visualization system and method

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110706328A (en) * 2019-08-21 2020-01-17 重庆特斯联智慧科技股份有限公司 Three-dimensional scene virtual generation method and system based on GAN network
CN111091613A (en) * 2019-10-31 2020-05-01 中国化学工程第六建设有限公司 Three-dimensional live-action modeling method based on unmanned aerial vehicle aerial survey
CN111260777A (en) * 2020-02-25 2020-06-09 中国电建集团华东勘测设计研究院有限公司 Building information model reconstruction method based on oblique photography measurement technology
CN114037799A (en) * 2021-11-08 2022-02-11 深圳星寻科技有限公司 Projection system for automatic real scene modeling of oblique photography model and use method thereof
WO2022084796A1 (en) * 2020-10-19 2022-04-28 刘卫敏 System for managing building progress on basis of lidar technology
CN114463489A (en) * 2021-12-28 2022-05-10 上海网罗电子科技有限公司 Oblique photography modeling system and method for optimizing unmanned aerial vehicle air route
CN114894253A (en) * 2022-05-18 2022-08-12 威海众合机电科技有限公司 Emergency visual sense intelligent enhancement method, system and equipment
CN115527008A (en) * 2021-06-24 2022-12-27 中国石油化工股份有限公司 Safety simulation experience training system based on mixed reality technology

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111599001B (en) * 2020-05-14 2023-03-14 星际(重庆)智能装备技术研究院有限公司 Unmanned aerial vehicle navigation map construction system and method based on image three-dimensional reconstruction technology

Also Published As

Publication number Publication date
CN116091723A (en) 2023-05-09

Similar Documents

Publication Publication Date Title
CN110874391B (en) Data fusion and display method based on urban space three-dimensional grid model
Jiang et al. UAV-based 3D reconstruction for hoist site mapping and layout planning in petrochemical construction
CN107367262B (en) A kind of unmanned plane display interconnection type control method of positioning mapping in real time at a distance
US7944547B2 (en) Method and system of generating 3D images with airborne oblique/vertical imagery, GPS/IMU data, and LIDAR elevation data
CN105847750B (en) The method and device of UAV Video image real-time display based on geocoding
CN112184890B (en) Accurate positioning method of camera applied to electronic map and processing terminal
CN107292989A (en) High voltage power transmission cruising inspection system based on 3DGIS technologies
CN116091723B (en) Fire emergency rescue live-action three-dimensional modeling method and system based on unmanned aerial vehicle
CN106919987B (en) Method for manufacturing and managing operation and maintenance information model of high-speed railway passenger dedicated line equipment in virtual reality environment
CN110737742A (en) map platform modeling and personnel track display method and system
CN112783196A (en) Distribution network line unmanned aerial vehicle autonomous flight path planning method and system
US8395760B2 (en) Unified spectral and geospatial information model and the method and system generating it
Bi et al. Research on the construction of City information modelling basic platform based on multi-source data
Kim et al. Data management framework of drone-based 3D model reconstruction of disaster site
CN105865413A (en) Method and device for acquiring building height
CN112785686A (en) Forest map construction method based on big data and readable storage medium
Yeh et al. The Evaluation of GPS techniques for UAV-based Photogrammetry in Urban Area
CN112150622A (en) Construction method of three-dimensional urban landscape and three-dimensional planning aid decision-making system
Gruen Next generation smart cities-the role of geomatics
CN106528554A (en) System for quickly determining initial position of personnel and construction parameters
CN108954016A (en) Fuel gas pipeline leakage disposal system based on augmented reality
CN113763216A (en) WebGIS-based smart campus system
Sohn et al. Resilient Heritage Using Aerial and Ground-Based Multi-sensor Imagery
Liu et al. Design and implementation of community safety management oriented public information platform for a smart city
Fukuda et al. Availability of mobile augmented reality system for urban landscape simulation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant