CN111968221A - Dual-mode three-dimensional modeling method and device based on temperature field and live-action video stream - Google Patents
- Publication number: CN111968221A
- Application number: CN202010769149.1A
- Authority
- CN
- China
- Prior art keywords
- dimensional model
- temperature field
- live
- video stream
- initial
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications (all under G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING)
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects (G06T—Image data processing or generation, in general)
- G06Q50/26—Government or public services (G06Q—ICT specially adapted for administrative, commercial, financial, managerial or supervisory purposes; G06Q50/00—ICT specially adapted for implementation of business processes of specific business sectors; G06Q50/10—Services)
- G06T19/006—Mixed reality (G06T19/00—Manipulating 3D models or images for computer graphics)
- G06T2200/08—Indexing scheme for image data processing or generation, in general, involving all processing steps from image acquisition to 3D model generation (G06T2200/00—Indexing scheme for image data processing or generation, in general)
Abstract
Embodiments of this application disclose a dual-mode three-dimensional modeling method and device based on a temperature field and a live-action video stream. In the disclosed technical scheme, point cloud data of a fire scene are collected, an infrared image of the scene is captured by an infrared camera, a video stream image is captured by a visible light camera, and the first pose information of the infrared camera and the second pose information of the visible light camera are recorded. An initial three-dimensional model of the fire scene is then constructed from the point cloud data; the infrared image is matched to the initial model based on the first pose information to build a temperature field three-dimensional model, and the video stream image is fused with the initial model based on the second pose information to build a live-action three-dimensional model. Finally, the temperature field three-dimensional model and the live-action three-dimensional model are output and displayed on the same screen. With these technical means, both a temperature field model and a live-action model of the fire scene can be constructed and displayed, restoring the situation of the fire scene and showing the distribution of the fire source in detail.
Description
Technical Field
The embodiment of the application relates to the technical field of three-dimensional model construction, in particular to a dual-mode three-dimensional modeling method and device based on a temperature field and a live-action video stream.
Background
At present, when a fire breaks out and is not extinguished in time, economic loss and even casualties inevitably follow, so timely fire extinguishing is always the primary concern. When a large fire occurs, three-dimensional modeling of the fire scene can accurately restore the layout of buildings, obstacles and other features at the scene. Firefighters can then plan the rescue deployment and formulate a corresponding suppression plan based on the three-dimensional model of the fire scene, improving fire-fighting efficiency, suppressing the fire source in time, and safeguarding people's lives and property.
However, because the situation at a fire scene is complex and the environment changes constantly, a bare three-dimensional model can neither faithfully restore the actual situation of the scene nor reveal the specific distribution of the fire source. A rescue deployment based on such a simple model is easily misled, resulting in errors in the rescue plan.
Disclosure of Invention
The embodiment of the application provides a dual-mode three-dimensional modeling method and device based on a temperature field and a live-action video stream, which can accurately restore the situation of a fire scene, display the distribution of a fire source in detail and optimize the information display effect of the fire scene.
In a first aspect, an embodiment of the present application provides a dual-mode three-dimensional modeling method based on a temperature field and a live-action video stream, including:
acquiring point cloud data of a fire scene, acquiring an infrared image of the fire scene through an infrared camera, acquiring a video stream image of the fire scene through a visible light camera, and recording first pose information of the infrared camera and second pose information of the visible light camera;
constructing an initial three-dimensional model of the fire scene based on the point cloud data, matching the infrared image with the initial three-dimensional model based on the first pose information to construct a temperature field three-dimensional model, and fusing the video stream image with the initial three-dimensional model based on the second pose information to construct a live-action three-dimensional model;
and outputting and displaying the temperature field three-dimensional model and the live-action three-dimensional model on the same screen.
Further, matching the infrared image with the initial three-dimensional model based on the first pose information to construct a temperature field three-dimensional model, including:
establishing a matching relation between each point on the infrared image and a spatial point of the initial three-dimensional model based on the first pose information;
determining the temperature value of each point on the infrared image, and mapping the temperature value of each point on the infrared image to the initial three-dimensional model based on the matching relation;
and carrying out interpolation processing on the initial three-dimensional model to construct a temperature field three-dimensional model.
Further, determining the temperature value of each point on the infrared image includes:
and determining the temperature value of each point on the infrared image based on the image characteristics of each point.
Further, fusing the video stream image with the initial three-dimensional model based on the second pose information to construct a live-action three-dimensional model, including:
constructing a virtual view volume of the visible light camera on the initial three-dimensional model based on the second pose information;
projecting the video stream image into the initial three-dimensional model using projective texturing, based on the shooting range of the virtual view volume;
and performing texture fusion on the video stream image and the initial three-dimensional model to construct a live-action three-dimensional model.
Further, outputting and displaying the temperature field three-dimensional model and the live-action three-dimensional model on the same screen further comprises:
and displaying the temperature field three-dimensional model and the live-action three-dimensional model in the same direction on a display screen, and responding to the display direction adjustment of the temperature field three-dimensional model to correspondingly adjust the display direction of the live-action three-dimensional model, or responding to the display direction adjustment of the live-action three-dimensional model to correspondingly adjust the display direction of the temperature field three-dimensional model.
Further, after the temperature field three-dimensional model and the live-action three-dimensional model are output and displayed on the same screen, the method further comprises the following steps:
and comparing the temperature value of each space point on the three-dimensional model of the temperature field with a preset temperature threshold, determining a high-temperature position with an overproof temperature, and marking the high-temperature position on the three-dimensional model of the temperature field.
Further, after determining the high-temperature positions where the temperature exceeds the threshold and marking them on the temperature field three-dimensional model, the method further comprises:
And determining the corresponding coordinate points in the live-action three-dimensional model from the coordinate points of the high-temperature positions, and marking them.
In a second aspect, an embodiment of the present application provides a dual-mode three-dimensional modeling apparatus based on a temperature field and a live-action video stream, including:
an acquisition module, used for collecting point cloud data of a fire scene, collecting an infrared image of the fire scene through an infrared camera, collecting a video stream image of the fire scene through a visible light camera, and recording first pose information of the infrared camera and second pose information of the visible light camera;
a building module, used for building an initial three-dimensional model of the fire scene based on the point cloud data, matching the infrared image with the initial three-dimensional model based on the first pose information to build a temperature field three-dimensional model, and fusing the video stream image with the initial three-dimensional model based on the second pose information to build a live-action three-dimensional model;
and a display module, used for outputting and displaying the temperature field three-dimensional model and the live-action three-dimensional model on the same screen.
In a third aspect, an embodiment of the present application provides an electronic device, including:
a memory and one or more processors;
the memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the dual-mode three-dimensional modeling method based on a temperature field and a live-action video stream according to the first aspect.
In a fourth aspect, embodiments of the present application provide a storage medium containing computer-executable instructions for performing the dual-mode three-dimensional modeling method based on temperature field and live-action video stream according to the first aspect when executed by a computer processor.
According to the method and device of this application, point cloud data of the fire scene are collected, an infrared image is captured by an infrared camera, a video stream image is captured by a visible light camera, and the first pose information of the infrared camera and the second pose information of the visible light camera are recorded. An initial three-dimensional model of the fire scene is then constructed from the point cloud data; the infrared image is matched to the initial model based on the first pose information to construct a temperature field three-dimensional model, and the video stream image is fused with the initial model based on the second pose information to construct a live-action three-dimensional model. Finally, the temperature field model and the live-action model are output and displayed on the same screen. With these technical means, both models of the fire scene can be constructed and displayed, the situation of the fire scene can be accurately restored, and the distribution of the fire source can be shown in detail. Firefighters can thus grasp the real-time fire situation and formulate a fire-fighting response plan, optimizing both the display of fire scene information and the efficiency of fire rescue.
Drawings
FIG. 1 is a flowchart of a dual-mode three-dimensional modeling method based on a temperature field and a live-action video stream according to an embodiment of the present application;
FIG. 2 is a flow chart of a three-dimensional temperature field modeling in the first embodiment of the present application;
FIG. 3 is a schematic diagram of spatial point matching in the first embodiment of the present application;
FIG. 4 is a flow chart of the construction of a live-action three-dimensional model according to a first embodiment of the present application;
FIG. 5 is a schematic structural diagram of a dual-mode three-dimensional modeling apparatus based on a temperature field and a live-action video stream according to a second embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to a third embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, specific embodiments of the present application will be described in detail with reference to the accompanying drawings. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some but not all of the relevant portions of the present application are shown in the drawings. Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
The dual-mode three-dimensional modeling method based on a temperature field and a live-action video stream aims to restore the situation of a fire scene more vividly and show the distribution of fire sources in detail by constructing and displaying a temperature field three-dimensional model and a live-action three-dimensional model of the scene. Firefighters can learn the specific distribution of the fire source and high-temperature points from the live-action model and the temperature field distribution, and by combining the two models they can formulate a fire-fighting response plan, effectively improving fire-fighting efficiency. In contrast, the traditional approach constructs the fire scene model from point cloud data alone, so the resulting three-dimensional model provides relatively little information: firefighters can only learn the layout of buildings, obstacles and the like from it, and know nothing about the fire situation as it changes in real time. Clearly, it is difficult to understand the details of a fire scene from such a model alone. The dual-mode three-dimensional modeling method based on a temperature field and a live-action video stream is therefore proposed to solve the technical problem that existing fire scene three-dimensional models display only simple information.
The first embodiment is as follows:
Fig. 1 is a flowchart of a dual-mode three-dimensional modeling method based on a temperature field and a live-action video stream according to an embodiment of the present application. The method may be executed by a dual-mode three-dimensional modeling device based on a temperature field and a live-action video stream, which may be implemented in software and/or hardware and may consist of one physical entity or of two or more physical entities. Generally speaking, the device can be a computer, a tablet, a mobile phone or another intelligent terminal with a display screen.
The following description will be given taking the dual-mode three-dimensional modeling device based on the temperature field and the live-action video stream as an example of a main body for executing the dual-mode three-dimensional modeling method based on the temperature field and the live-action video stream. Referring to fig. 1, the dual-mode three-dimensional modeling method based on the temperature field and the live-action video stream specifically includes:
s110, collecting point cloud data of a fire scene, collecting an infrared image of the fire scene through an infrared camera, collecting a video stream image of the fire scene through a visible light camera, and recording first position and attitude information of the infrared camera and second position and attitude information of the visible light camera.
In this embodiment of the application, when a fire occurs, an unmanned aerial vehicle carrying a lidar, an infrared camera and a visible light camera scans and photographs the fire scene in real time. The lidar collects point cloud data of the scene, the infrared camera collects infrared images, and the visible light camera collects video stream images. The lidar, infrared camera and visible light camera may be carried on different drones or on the same drone, and according to actual needs the number of drones can be one or more; that is, any of the point cloud data, infrared images or video stream images may be collected by multiple devices. For example, four drones carrying visible light cameras may capture the video stream of a fire scene, each camera responsible for a different shooting direction, and the four streams are then stitched into a panoramic video stream image of the scene. It should be noted that, to facilitate subsequent model construction, the lidar, infrared camera and visible light camera are placed at the same height and orientation when collecting data, so that the data of the relevant spatial points can be matched during three-dimensional model construction. For this reason, a single drone is generally used to carry the lidar, infrared camera and visible light camera simultaneously when acquiring the point cloud data, infrared images and video stream images of the fire scene.
Furthermore, the data collected by the lidar, the infrared camera and the visible light camera are all sent to the dual-mode three-dimensional modeling device of this embodiment, so that the temperature field and live-action three-dimensional models can be constructed from them. While acquiring these data, the device also acquires the pose information of the infrared camera and the visible light camera, defining the pose information of the infrared camera as the first pose information and that of the visible light camera as the second pose information. Each piece of pose information is recorded together with the corresponding infrared image data or video stream image data, for later use in constructing the temperature field and live-action three-dimensional models.
S120, constructing an initial three-dimensional model of the fire scene based on the point cloud data, matching the infrared image with the initial three-dimensional model based on the first pose information to construct a temperature field three-dimensional model, and fusing the video stream image with the initial three-dimensional model based on the second pose information to construct a live-action three-dimensional model.
Further, after the relevant data are collected, an initial three-dimensional model is constructed from the point cloud data. Both the temperature field model and the live-action model of this embodiment are built on the initial model: they are obtained by mapping the acquired infrared image data and live-action video stream image data to the corresponding positions on it. To ensure that the mapping positions are accurate, the positions of the infrared camera and the visible light camera relative to the initial model must be determined from the recorded first and second pose information. It can be understood that the area onto which each camera projects on the three-dimensional model is exactly the area from which that camera collected its data. On this principle, the temperature field model and the live-action model can be constructed by matching and mapping the relevant data.
Before that, the initial three-dimensional model of the fire scene is constructed from the point cloud data previously collected by the lidar. Many techniques exist in the prior art for constructing a three-dimensional model from point cloud data; this embodiment imposes no fixed limitation on the choice, and the details are not repeated here. Once the initial model is built, the temperature field model and the live-action model can each be constructed from it. Referring to fig. 2, the process of constructing the temperature field three-dimensional model includes:
S1201, establishing a matching relation between each point on the infrared image and a spatial point of the initial three-dimensional model based on the first pose information;
S1202, determining the temperature values of the points on the infrared image, and mapping them to the initial three-dimensional model based on the matching relation;
S1203, performing interpolation processing on the initial three-dimensional model to construct a temperature field three-dimensional model.
Specifically, since the infrared image acquired by the infrared camera is two-dimensional, each point on the infrared image must be matched with a spatial point on the three-dimensional model to establish a matching relation. Moreover, since a temperature field model is being established, the temperature values actually to be determined are those of the spatial points in the initial three-dimensional model. Therefore, the temperature value of each point on the infrared image is determined first and then mapped, according to the matching relation, to the corresponding spatial point of the initial model, which completes the construction of the temperature field three-dimensional model. The temperature value of each point is determined from the image characteristics of that point on the infrared image: based on infrared thermometry, points at different temperatures have different infrared image characteristics, so the temperature value of each point can be determined by identifying and comparing these characteristics.
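The patent does not give a concrete radiometric formula, so the following sketch assumes a simple linear mapping from 8-bit infrared pixel intensity to temperature; real infrared cameras supply calibrated conversion curves, and the function names and temperature range here are illustrative assumptions only:

```python
import numpy as np

def pixel_to_temperature(gray, t_min=20.0, t_max=600.0):
    """Hypothetical linear mapping from an 8-bit IR pixel value (0-255)
    to a temperature in degrees Celsius."""
    return t_min + (np.asarray(gray, dtype=float) / 255.0) * (t_max - t_min)

def map_temperatures(ir_image, matches):
    """Assign each matched model point the temperature of its IR pixel.
    `matches` maps a model-point index to a (row, col) pixel coordinate."""
    return {idx: float(pixel_to_temperature(ir_image[rc]))
            for idx, rc in matches.items()}

ir = np.zeros((4, 4), dtype=np.uint8)
ir[1, 2] = 255                       # one saturated (hottest) pixel
temps = map_temperatures(ir, {0: (1, 2), 1: (0, 0)})
print(temps)  # {0: 600.0, 1: 20.0}
```

The `matches` dictionary stands in for the matching relation established from the first pose information; how that relation is computed is shown in the projection equations below it in the text.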
Further, when matching points of the infrared image with spatial points of the initial three-dimensional model, referring to fig. 3, assume a point p = [x, y, z]^T in three-dimensional space. Let R and t be the rotation matrix and translation vector given by the first pose information, and let A be the intrinsic matrix of the camera. The coordinates of the point p in the camera coordinate system are then:
p_t = R(p - t)
In the camera coordinate system, p_t = [x_t, y_t, z_t]^T. Dividing by the depth z_t gives the normalized coordinates p_t' = [x_t/z_t, y_t/z_t, 1]^T. The coordinates of p_t in the image coordinate system are then:
p_i = A * p_t'
all three-dimensional points in space pass throughThe projection transformation is projected on a two-dimensional image plane, and each space point can establish a relation with a point in the infrared image, namely the matching relation. As shown in FIG. 3, adjacent points P in three-dimensional spacel,P2,P3The point corresponding to the infrared image is Pl′,P2′,P3′。Pl,P2,P3And Pl′,P2′,P3The infrared temperature measurement method has the advantages that the infrared temperature measurement method has the same topological structure, the direct matching relation between the initial three-dimensional model and the infrared image is established, the temperature value of the corresponding point in the infrared image is returned to the initial three-dimensional model, and the preliminary three-dimensional temperature distribution can be obtained. However, the three-dimensional model information is not exactly equal to the information in the infrared image, that is, there may be some points in the infrared image that do not correspond to the corresponding points in the initial three-dimensional model during the matching process, and therefore, the three-dimensional model needs to be interpolated.
In this embodiment of the application, assume a point P_c' in the thermal infrared image lies inside the triangle formed by P_1', P_2', P_3'. Then P_c' and P_1', P_2', P_3' satisfy:
∠P_1'P_c'P_3' + ∠P_2'P_c'P_3' + ∠P_1'P_c'P_2' = 2π (1)
From equation (1), the ray corresponding to P_c' in the initial three-dimensional model space can be determined:
p_c(s) = s * R^(-1) A^(-1) p_c' + t, s > 0
The plane through P_1, P_2, P_3 can be expressed as:
n · (x - P_1) = 0, with normal n = (P_2 - P_1) × (P_3 - P_1)
Intersecting the ray with this plane yields the spatial point P_c corresponding to P_c', which is assigned the temperature value of P_c'. In this way, interpolation on the initial three-dimensional model produces a complete temperature field three-dimensional model.
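The interpolation step — casting the ray of an unmatched infrared point back into model space and intersecting it with the plane through P_1, P_2, P_3 — can be sketched geometrically as follows (a minimal illustration, not the patent's exact implementation):

```python
import numpy as np

def ray_plane_intersection(origin, direction, p1, p2, p3):
    """Intersect the ray origin + s*direction (s >= 0) with the plane
    through p1, p2, p3, whose normal is n = (p2 - p1) x (p3 - p1)."""
    n = np.cross(p2 - p1, p3 - p1)
    denom = n @ direction
    if abs(denom) < 1e-12:
        return None                       # ray parallel to the plane
    s = n @ (p1 - origin) / denom
    if s < 0:
        return None                       # plane behind the ray origin
    return origin + s * direction

# Triangle in the plane z = 1, ray from the camera centre at the origin.
p1, p2, p3 = (np.array([0.0, 0.0, 1.0]),
              np.array([1.0, 0.0, 1.0]),
              np.array([0.0, 1.0, 1.0]))
hit = ray_plane_intersection(np.zeros(3), np.array([0.2, 0.2, 1.0]), p1, p2, p3)
print(hit)
```

The recovered point `hit` plays the role of P_c: it receives the temperature value of the unmatched infrared point P_c'.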
Further, referring to fig. 4, a process of constructing a live-action three-dimensional model according to an embodiment of the present application includes:
S1204, constructing a virtual view volume of the visible light camera on the initial three-dimensional model based on the second pose information;
S1205, projecting the video stream image into the initial three-dimensional model using projective texturing, based on the shooting range of the virtual view volume;
S1206, performing texture fusion on the video stream image and the initial three-dimensional model to construct a live-action three-dimensional model.
Specifically, when the live-action three-dimensional model is constructed, the spatial positions, relative orientations, and sizes of objects in the initial three-dimensional model must be consistent with the live-action video stream images, so the laser radar and the visible light camera are mounted on the same unmanned aerial vehicle for data acquisition. Based on the second pose information, the surface longitude and latitude coordinates of the visible light camera are converted into world coordinates expressed in Cartesian coordinates in the initial three-dimensional model, and a virtual projector model with a corresponding virtual view volume is added to the initial three-dimensional model. The virtual projector model is placed at the position in the initial three-dimensional model corresponding to the position of the visible light camera in the real scene and is used to project video textures into the initial three-dimensional model; its initial pose is set according to the pose information of the visible light camera. The actually captured video stream images are then preprocessed frame by frame to obtain dynamic video textures, and the preprocessed video data are projected onto the initial three-dimensional model using a projection texture technique. Finally, the static textures and/or original remote-sensing image textures of the ground surface in the initial three-dimensional model are fused with the dynamic video textures of the video stream images, and texture fusion is applied to the different intersecting coverage areas of the projectors in the virtual projector model. In this way, the construction of the live-action three-dimensional model is completed.
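The virtual-view-volume projection in steps S1204 and S1205 can be sketched as a pinhole projection of model vertices into the projector's image plane. The helper below is an illustrative assumption (intrinsics K and world-to-camera pose (R, t) are hypothetical names); a real implementation would use the rendering engine's projective-texturing facilities:

```python
import numpy as np

def projector_uv(points_w, K, R, t):
    """Project world-space model vertices into the image plane of the
    virtual projector (the visible-light camera), giving the pixel
    coordinates used to look up the dynamic video texture.
    K: 3x3 intrinsics; R, t: world-to-camera rotation and translation."""
    pts_cam = points_w @ R.T + t        # world -> camera coordinates
    pix = pts_cam @ K.T                 # camera -> homogeneous pixel coords
    uv = pix[:, :2] / pix[:, 2:3]       # perspective divide
    in_view = pix[:, 2] > 0             # keep only points in front of the projector
    return uv, in_view
```

Vertices for which `in_view` is false fall outside the projector's view volume and keep the static or remote-sensing texture instead of the projected video texture.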
It should be noted that, in practical application, the temperature field three-dimensional model and the real three-dimensional model may be constructed by adopting various model construction methods according to the actual model construction requirements, and the embodiment of the present application is not limited herein.
And S130, outputting and displaying the temperature field three-dimensional model and the real scene three-dimensional model on the same screen.
Finally, based on the constructed temperature-field three-dimensional model and live-action three-dimensional model, the two models are output to the display screen of the dual-mode three-dimensional modeling device based on the temperature field and live-action video stream, so that firefighters can learn the fire-scene situation and fire-source distribution from the models and thereby optimize the formulation of a firefighting response plan. Illustratively, in practical application, a firefighter receives the point cloud data, infrared images, and video stream images of a fire scene through a tablet computer, which records the first pose information corresponding to the infrared camera and the second pose information corresponding to the visible light camera. The dual-mode three-dimensional modeling method based on the temperature field and live-action video stream completes the construction of the temperature-field three-dimensional model and the live-action three-dimensional model, and both models are then output and displayed on the display interface of the tablet computer. From the two models displayed on the interface, firefighters can intuitively and vividly understand the specific situation of the fire scene, which assists them in formulating a firefighting response plan and optimizes the fire-scene information display effect and fire-rescue efficiency.
It should be noted that, because conditions at a fire scene are changeable, when updated infrared images and live-action video stream images of the fire scene are subsequently received, the dual-mode three-dimensional modeling method based on the temperature field and live-action video stream of the embodiment of the present application can also be used to update and display the temperature-field three-dimensional model and the live-action three-dimensional model. This ensures that the display of the two models remains real-time and further optimizes the fire-scene information display effect.
In one embodiment, the dual-mode three-dimensional modeling device based on the temperature field and live-action video stream also displays the temperature-field three-dimensional model and the live-action three-dimensional model on the display screen in the same orientation: in response to an adjustment of the display orientation of the temperature-field three-dimensional model, it correspondingly adjusts the display orientation of the live-action three-dimensional model, and vice versa. Displaying the two models in the same orientation makes it convenient for firefighters to compare the live-action situation with the temperature distribution of the fire scene, yielding a better information-viewing effect. Whenever the user operates the display orientation of either model, the device adjusts the other model accordingly in response, so the two models stay in the same orientation in real time. Of course, according to actual needs, the user may also adjust the display orientation of only one of the two models through a corresponding control operation while the other remains unchanged.
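The linked same-orientation display described above can be sketched as a simple two-way binding between the two views (a hypothetical structure; a real viewer would propagate the full camera state, not just yaw and pitch):

```python
class LinkedViews:
    """Two-way orientation binding between the temperature-field view and
    the live-action view."""
    def __init__(self):
        self.views = {"temperature": {"yaw": 0.0, "pitch": 0.0},
                      "live_action": {"yaw": 0.0, "pitch": 0.0}}
        self.linked = True  # same-orientation display enabled by default

    def set_orientation(self, name, yaw, pitch):
        self.views[name].update(yaw=yaw, pitch=pitch)
        if self.linked:  # mirror the adjustment to the other view
            other = "live_action" if name == "temperature" else "temperature"
            self.views[other].update(yaw=yaw, pitch=pitch)
```

Setting `linked = False` corresponds to the case where the user rotates only one model while the other remains unchanged.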
Further, in an embodiment, after the temperature-field three-dimensional model and the live-action three-dimensional model are displayed on the display screen, the embodiment of the present application also compares the temperature values of all spatial points on the temperature-field three-dimensional model with a preset temperature threshold, determines the high-temperature positions where the temperature is excessive, and marks those positions on the temperature-field three-dimensional model. In addition, to facilitate comparison and viewing, the embodiment also determines the corresponding coordinate points in the live-action three-dimensional model from the coordinate points of the high-temperature positions and marks them. Marking the high-temperature positions on both models makes it easy for the user to locate the hot spots of the fire scene, which helps determine the fire source, the rescue path, and the firefighting response plan, further optimizing the fire-scene information display effect and fire-rescue efficiency.
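The threshold comparison and hotspot marking can be sketched as follows (a minimal illustration; the point format and threshold value are assumptions for the example):

```python
import numpy as np

def mark_hotspots(points, temps, threshold):
    """Return the coordinates of model points whose temperature exceeds the
    preset threshold, for marking on both three-dimensional models."""
    mask = np.asarray(temps) > threshold   # over-temperature test per point
    return np.asarray(points)[mask]        # coordinates of the hotspots
```

The returned coordinates can be marked directly on the temperature-field model and, via the shared world coordinate system, at the corresponding points of the live-action model.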
The method comprises the steps of collecting point cloud data of a fire scene, collecting an infrared image of the fire scene through an infrared camera, collecting a video stream image of the fire scene through a visible light camera, recording first position and posture information of the infrared camera and second position and posture information of the visible light camera, further constructing an initial three-dimensional model of the fire scene based on the point cloud data, matching the infrared image with the initial three-dimensional model based on the first position and posture information, constructing a temperature field three-dimensional model, fusing the video stream image with the initial three-dimensional model based on the second position and posture information, constructing a real three-dimensional model, and finally outputting and displaying the temperature field three-dimensional model and the real three-dimensional model on the same screen. By adopting the technical means, the construction and display of the three-dimensional model of the temperature field and the three-dimensional model of the real scene in the fire scene can be realized, the situation of the fire scene can be accurately restored, and the distribution of the fire source can be displayed in detail. Therefore, fire fighters can conveniently know the real-time fire situation, a fire fighting response scheme is formulated, and the fire scene information display effect and the fire rescue efficiency are optimized.
Example two:
based on the above embodiments, fig. 5 is a schematic structural diagram of a dual-mode three-dimensional modeling apparatus based on a temperature field and a live-action video stream according to a second embodiment of the present application. Referring to fig. 5, the dual-mode three-dimensional modeling apparatus based on a temperature field and a live-action video stream provided in the present embodiment specifically includes: an acquisition module 21, a construction module 22 and a display module 23.
The acquisition module 21 is used for acquiring point cloud data of a fire scene, acquiring an infrared image of the fire scene through an infrared camera, acquiring a video stream image of the fire scene through a visible light camera, and recording first position and orientation information of the infrared camera and second position and orientation information of the visible light camera;
the construction module 22 is configured to construct an initial three-dimensional model of a fire scene based on the point cloud data, match the infrared image with the initial three-dimensional model based on the first pose information, construct a temperature field three-dimensional model, and fuse the video stream image with the initial three-dimensional model based on the second pose information, thereby constructing a live-action three-dimensional model;
the display module 23 is configured to output and display the temperature field three-dimensional model and the live-action three-dimensional model on the same screen.
The method comprises the steps of collecting point cloud data of a fire scene, collecting an infrared image of the fire scene through an infrared camera, collecting a video stream image of the fire scene through a visible light camera, recording first position and posture information of the infrared camera and second position and posture information of the visible light camera, further constructing an initial three-dimensional model of the fire scene based on the point cloud data, matching the infrared image with the initial three-dimensional model based on the first position and posture information, constructing a temperature field three-dimensional model, fusing the video stream image with the initial three-dimensional model based on the second position and posture information, constructing a real three-dimensional model, and finally outputting and displaying the temperature field three-dimensional model and the real three-dimensional model on the same screen. By adopting the technical means, the construction and display of the three-dimensional model of the temperature field and the three-dimensional model of the real scene in the fire scene can be realized, the situation of the fire scene can be accurately restored, and the distribution of the fire source can be displayed in detail. Therefore, fire fighters can conveniently know the real-time fire situation, a fire fighting response scheme is formulated, and the fire scene information display effect and the fire rescue efficiency are optimized.
The dual-mode three-dimensional modeling device based on the temperature field and the live-action video stream provided by the second embodiment of the application can be used for executing the dual-mode three-dimensional modeling method based on the temperature field and the live-action video stream provided by the first embodiment, and has corresponding functions and beneficial effects.
Example three:
an embodiment of the present application provides an electronic device, and with reference to fig. 6, the electronic device includes: a processor 31, a memory 32, a communication module 33, an input device 34, and an output device 35. The number of processors in the electronic device may be one or more, and the number of memories in the electronic device may be one or more. The processor, memory, communication module, input device, and output device of the electronic device may be connected by a bus or other means.
The memory 32 is a computer readable storage medium, and can be used for storing software programs, computer executable programs, and modules, such as program instructions/modules corresponding to the dual-mode three-dimensional modeling method based on temperature field and live-action video stream according to any embodiment of the present application (for example, the acquisition module, the construction module, and the display module in the dual-mode three-dimensional modeling apparatus based on temperature field and live-action video stream). The memory can mainly comprise a program storage area and a data storage area, wherein the program storage area can store an operating system and an application program required by at least one function; the storage data area may store data created according to use of the device, and the like. Further, the memory may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the memory may further include memory located remotely from the processor, and these remote memories may be connected to the device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The communication module 33 is used for data transmission.
The processor 31 executes various functional applications of the device and data processing by running software programs, instructions and modules stored in the memory, namely, implements the dual-mode three-dimensional modeling method based on the temperature field and the live-action video stream.
The input device 34 may be used to receive entered numeric or character information and to generate key signal inputs relating to user settings and function controls of the apparatus. The output device 35 may include a display device such as a display screen.
The electronic device provided above can be used to execute the dual-mode three-dimensional modeling method based on the temperature field and the live-action video stream provided in the first embodiment, and has corresponding functions and advantages.
Example four:
embodiments of the present application further provide a storage medium containing computer-executable instructions, which when executed by a computer processor, perform a dual-mode three-dimensional modeling method based on a temperature field and a live-action video stream, the dual-mode three-dimensional modeling method based on a temperature field and a live-action video stream comprising: acquiring point cloud data of a fire scene, acquiring an infrared image of the fire scene through an infrared camera, acquiring a video stream image of the fire scene through a visible light camera, and recording first position and attitude information of the infrared camera and second position and attitude information of the visible light camera; constructing an initial three-dimensional model of a fire scene based on the point cloud data, matching the infrared image with the initial three-dimensional model based on the first attitude information, constructing a temperature field three-dimensional model, and fusing the video stream image with the initial three-dimensional model based on the second attitude information to construct a live-action three-dimensional model; and outputting and displaying the temperature field three-dimensional model and the real scene three-dimensional model on the same screen.
Storage medium — any of various types of memory devices or storage devices. The term "storage medium" is intended to include: installation media such as CD-ROM, floppy disk, or tape devices; computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; non-volatile memory such as flash memory or magnetic media (e.g., hard disk or optical storage); registers or other similar types of memory elements, etc. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in the first computer system in which the program is executed, or may be located in a different second computer system connected to the first computer system through a network (such as the internet). The second computer system may provide program instructions to the first computer for execution. The term "storage medium" may include two or more storage media residing in different locations, e.g., in different computer systems connected by a network. The storage medium may store program instructions (e.g., embodied as a computer program) that are executable by one or more processors.
Of course, the storage medium provided by the embodiments of the present application contains computer-executable instructions, and the computer-executable instructions are not limited to the dual-mode three-dimensional modeling method based on temperature field and live-action video stream as described above, and may also perform related operations in the dual-mode three-dimensional modeling method based on temperature field and live-action video stream as provided by any embodiments of the present application.
The dual-mode three-dimensional modeling apparatus, the storage medium, and the electronic device based on the temperature field and the live-action video stream provided in the foregoing embodiments may perform the dual-mode three-dimensional modeling method based on the temperature field and the live-action video stream provided in any embodiments of the present application, and reference may be made to the dual-mode three-dimensional modeling method based on the temperature field and the live-action video stream provided in any embodiments of the present application without detailed technical details described in the foregoing embodiments.
The foregoing is considered as illustrative of the preferred embodiments of the invention and the technical principles employed. The present application is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present application has been described in more detail with reference to the above embodiments, the present application is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present application, and the scope of the present application is determined by the scope of the claims.
Claims (10)
1. A dual-mode three-dimensional modeling method based on a temperature field and a live-action video stream is characterized by comprising the following steps:
acquiring point cloud data of a fire scene, acquiring an infrared image of the fire scene through an infrared camera, acquiring a video stream image of the fire scene through a visible light camera, and recording first position and attitude information of the infrared camera and second position and attitude information of the visible light camera;
constructing an initial three-dimensional model of a fire scene based on the point cloud data, matching the infrared image with the initial three-dimensional model based on the first attitude information, constructing a temperature field three-dimensional model, and fusing the video stream image with the initial three-dimensional model based on the second attitude information to construct a live-action three-dimensional model;
and outputting and displaying the temperature field three-dimensional model and the real scene three-dimensional model on the same screen.
2. The dual-mode three-dimensional modeling method based on temperature field and live-action video stream according to claim 1, wherein matching the infrared image with the initial three-dimensional model based on the first pose information, constructing a temperature field three-dimensional model, comprises:
establishing a matching relation between each point on the infrared image and a space point of the initial three-dimensional model based on the first attitude information;
determining the temperature value of each point on the infrared image, and mapping the temperature value of each point on the infrared image to the initial three-dimensional model based on the matching relation;
and carrying out interpolation processing on the initial three-dimensional model to construct a temperature field three-dimensional model.
3. The dual-mode three-dimensional modeling method based on temperature field and live-action video stream according to claim 1, wherein determining temperature values of points on the infrared image comprises:
and determining the temperature value of each point on the infrared image based on the image characteristics of each point.
4. The dual-mode three-dimensional modeling method based on temperature field and live-action video stream according to claim 1, wherein fusing the video stream image with the initial three-dimensional model based on the second pose information to construct a live-action three-dimensional model, comprises:
constructing a virtual view volume of the visible light camera on the initial three-dimensional model based on the second pose information;
projecting the video stream image into the initial three-dimensional model by utilizing a projection texture technology based on the shooting range of the virtual view;
and performing texture fusion on the video stream image and the initial three-dimensional model to construct a real-scene three-dimensional model.
5. The dual-mode three-dimensional modeling method based on temperature field and live-action video stream according to claim 1, wherein said temperature field three-dimensional model and said live-action three-dimensional model are displayed by being output on the same screen, further comprising:
and displaying the temperature field three-dimensional model and the live-action three-dimensional model in the same direction on a display screen, and responding to the display direction adjustment of the temperature field three-dimensional model to correspondingly adjust the display direction of the live-action three-dimensional model, or responding to the display direction adjustment of the live-action three-dimensional model to correspondingly adjust the display direction of the temperature field three-dimensional model.
6. The dual-mode three-dimensional modeling method based on temperature field and live-action video stream as claimed in claim 1, further comprising, after displaying said temperature field three-dimensional model and said live-action three-dimensional model in a same screen output mode:
and comparing the temperature value of each space point on the three-dimensional model of the temperature field with a preset temperature threshold, determining a high-temperature position with an overproof temperature, and marking the high-temperature position on the three-dimensional model of the temperature field.
7. The dual-mode three-dimensional modeling method based on temperature field and live-action video stream according to claim 6, characterized in that after determining the high temperature position with excessive temperature, marking the high temperature position on the temperature field three-dimensional model, further comprising:
and determining a corresponding coordinate point in the real three-dimensional model according to the coordinate point of the high-temperature position and marking the coordinate point.
8. A dual-mode three-dimensional modeling device based on a temperature field and a live-action video stream is characterized by comprising:
the system comprises an acquisition module, a video acquisition module and a display module, wherein the acquisition module is used for acquiring point cloud data of a fire scene, acquiring an infrared image of the fire scene through an infrared camera, acquiring a video stream image of the fire scene through a visible light camera, and recording first position and attitude information of the infrared camera and second position and attitude information of the visible light camera;
the building module is used for building an initial three-dimensional model of a fire scene based on the point cloud data, matching the infrared image with the initial three-dimensional model based on the first attitude information, building a temperature field three-dimensional model, and fusing the video stream image with the initial three-dimensional model based on the second attitude information to build a real scene three-dimensional model;
and the display module is used for outputting and displaying the temperature field three-dimensional model and the real scene three-dimensional model on the same screen.
9. An electronic device, comprising:
a memory and one or more processors;
the memory for storing one or more programs;
when executed by the one or more processors, cause the one or more processors to implement the dual-mode three-dimensional modeling method based on temperature field and live-action video stream of any of claims 1-7.
10. A storage medium containing computer-executable instructions for performing the dual-mode three-dimensional modeling method based on temperature field and live-action video stream according to any one of claims 1 to 7 when executed by a computer processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010769149.1A CN111968221B (en) | 2020-08-03 | 2020-08-03 | Dual-mode three-dimensional modeling method and device based on temperature field and live-action video stream |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010769149.1A CN111968221B (en) | 2020-08-03 | 2020-08-03 | Dual-mode three-dimensional modeling method and device based on temperature field and live-action video stream |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111968221A true CN111968221A (en) | 2020-11-20 |
CN111968221B CN111968221B (en) | 2024-10-15 |
Family
ID=73363890
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010769149.1A Active CN111968221B (en) | 2020-08-03 | 2020-08-03 | Dual-mode three-dimensional modeling method and device based on temperature field and live-action video stream |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111968221B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112465987A (en) * | 2020-12-17 | 2021-03-09 | 武汉第二船舶设计研究所(中国船舶重工集团公司第七一九研究所) | Navigation map construction method for three-dimensional reconstruction of visual fusion information |
CN113596335A (en) * | 2021-07-31 | 2021-11-02 | 重庆交通大学 | Highway tunnel fire monitoring system and method based on image fusion |
CN117994648A (en) * | 2023-12-29 | 2024-05-07 | 上海新高桥凝诚建设工程检测有限公司 | Method for detecting building outer facade by combining RTK unmanned aerial vehicle with infrared thermal imaging |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0239692A (en) * | 1988-07-29 | 1990-02-08 | Toshiba Corp | Monitoring equipment |
JP2003207596A (en) * | 2002-01-11 | 2003-07-25 | Hitachi Ltd | Abnormality monitoring device |
US20070065002A1 (en) * | 2005-02-18 | 2007-03-22 | Laurence Marzell | Adaptive 3D image modelling system and apparatus and method therefor |
KR100839090B1 (en) * | 2008-03-17 | 2008-06-20 | (주)나인정보시스템 | Image base fire monitoring system |
CN102147290A (en) * | 2011-01-14 | 2011-08-10 | 北京广微积电科技有限公司 | Infrared imaging temperature-monitoring method and system |
CN104915986A (en) * | 2015-06-26 | 2015-09-16 | 北京航空航天大学 | Physical three-dimensional model automatic modeling method |
CN107067470A (en) * | 2017-04-05 | 2017-08-18 | 东北大学 | Portable three-dimensional reconstruction of temperature field system based on thermal infrared imager and depth camera |
CN107808412A (en) * | 2017-11-16 | 2018-03-16 | 北京航空航天大学 | A kind of three-dimensional thermal source environmental model based on low cost determines environmental information method |
US20180209853A1 (en) * | 2017-01-23 | 2018-07-26 | Honeywell International Inc. | Equipment and method for three-dimensional radiance and gas species field estimation in an open combustion environment |
JP2019032600A (en) * | 2017-08-04 | 2019-02-28 | 日本電気株式会社 | Three-dimensional image generation device, three-dimensional image generation method, and three-dimensional image generation program |
CN109490899A (en) * | 2018-11-12 | 2019-03-19 | 广西交通科学研究院有限公司 | Fire source localization method in a kind of tunnel based on laser radar and infrared thermal imager |
- 2020-08-03: application CN202010769149.1A filed; granted as CN111968221B (status: active)
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0239692A (en) * | 1988-07-29 | 1990-02-08 | Toshiba Corp | Monitoring equipment |
JP2003207596A (en) * | 2002-01-11 | 2003-07-25 | Hitachi Ltd | Abnormality monitoring device |
US20070065002A1 (en) * | 2005-02-18 | 2007-03-22 | Laurence Marzell | Adaptive 3D image modelling system and apparatus and method therefor |
KR100839090B1 (en) * | 2008-03-17 | 2008-06-20 | (주)나인정보시스템 | Image base fire monitoring system |
CN102147290A (en) * | 2011-01-14 | 2011-08-10 | 北京广微积电科技有限公司 | Infrared imaging temperature-monitoring method and system |
CN104915986A (en) * | 2015-06-26 | 2015-09-16 | 北京航空航天大学 | Physical three-dimensional model automatic modeling method |
US20180209853A1 (en) * | 2017-01-23 | 2018-07-26 | Honeywell International Inc. | Equipment and method for three-dimensional radiance and gas species field estimation in an open combustion environment |
CN107067470A (en) * | 2017-04-05 | 2017-08-18 | 东北大学 | Portable three-dimensional reconstruction of temperature field system based on thermal infrared imager and depth camera |
JP2019032600A (en) * | 2017-08-04 | 2019-02-28 | 日本電気株式会社 | Three-dimensional image generation device, three-dimensional image generation method, and three-dimensional image generation program |
CN107808412A (en) * | 2017-11-16 | 2018-03-16 | 北京航空航天大学 | A kind of three-dimensional thermal source environmental model based on low cost determines environmental information method |
CN109490899A (en) * | 2018-11-12 | 2019-03-19 | 广西交通科学研究院有限公司 | Fire source localization method in a kind of tunnel based on laser radar and infrared thermal imager |
Non-Patent Citations (1)
Title |
---|
孙春辉 (Sun Chunhui); 成锡平 (Cheng Xiping): "Application of UAV-based live-action 3D modeling in firefighting" (基于无人机的实景三维建模在消防中的应用), 消防科学与技术 (Fire Science and Technology), no. 04 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112465987A (en) * | 2020-12-17 | 2021-03-09 | 武汉第二船舶设计研究所(中国船舶重工集团公司第七一九研究所) | Navigation map construction method for three-dimensional reconstruction of visual fusion information |
CN113596335A (en) * | 2021-07-31 | 2021-11-02 | 重庆交通大学 | Highway tunnel fire monitoring system and method based on image fusion |
CN117994648A (en) * | 2023-12-29 | 2024-05-07 | 上海新高桥凝诚建设工程检测有限公司 | Method for detecting building outer facade by combining RTK unmanned aerial vehicle with infrared thermal imaging |
Also Published As
Publication number | Publication date |
---|---|
CN111968221B (en) | 2024-10-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11074755B2 (en) | Method, device, terminal device and storage medium for realizing augmented reality image | |
US11165959B2 (en) | Connecting and using building data acquired from mobile devices | |
US11238666B2 (en) | Display of an occluded object in a hybrid-reality system | |
US11272165B2 (en) | Image processing method and device | |
CN111968221B (en) | Dual-mode three-dimensional modeling method and device based on temperature field and live-action video stream | |
US11748906B2 (en) | Gaze point calculation method, apparatus and device | |
US12033289B2 (en) | Method and system for visualizing overlays in virtual environments | |
CN111625091B (en) | Label overlapping method and device based on AR glasses | |
CN108304075B (en) | Method and device for performing man-machine interaction on augmented reality device | |
US20170186219A1 (en) | Method for 360-degree panoramic display, display module and mobile terminal | |
CN113570721A (en) | Method and device for reconstructing three-dimensional space model and storage medium | |
CN115641401A (en) | Construction method and related device of three-dimensional live-action model | |
US20220005281A1 (en) | Augmented reality (ar) imprinting methods and systems | |
CN104978077A (en) | Interaction method and interaction system | |
CN111885366A (en) | Three-dimensional display method and device for virtual reality screen, storage medium and equipment | |
CN114283243A (en) | Data processing method and device, computer equipment and storage medium | |
KR20180029690A (en) | Server and method for providing and producing virtual reality image about inside of offering | |
WO2023207354A1 (en) | Special effect video determination method and apparatus, electronic device, and storage medium | |
EP4227907A1 (en) | Object annotation information presentation method and apparatus, and electronic device and storage medium | |
CN111970504A (en) | Display method, device and system for reversely simulating three-dimensional sphere by utilizing virtual projection | |
WO2024055925A1 (en) | Image transmission method and apparatus, image display method and apparatus, and computer device | |
CN112286355B (en) | Interactive method and system for immersive content | |
CN118466518A (en) | Unmanned aerial vehicle aerial photographing method and device for photographing object and computer storage medium | |
CN115760964A (en) | Method and equipment for acquiring screen position information of target object | |
CN117036444A (en) | Three-dimensional model output method, device, equipment and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |