CN116485695A - Image-level fusion method, intermediate module, readable medium and program product based on models of different three-dimensional visualization engines


Info

Publication number
CN116485695A
Authority
CN
China
Prior art keywords
model
information
image
level fusion
visualization
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310465702.6A
Other languages
Chinese (zh)
Inventor
庞学雷 (Pang Xuelei)
李艳 (Li Yan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202310465702.6A
Publication of CN116485695A
Legal status: Pending

Classifications

    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 17/05 Three dimensional [3D] modelling: geographic models
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T 2207/10028 Range image; depth image; 3D point clouds
    • G06T 2207/20221 Image fusion; image merging
    • Y02A 30/60 Adapting or protecting infrastructure: planning or developing urban green infrastructure


Abstract

An image-level fusion method, an intermediate module, a readable medium, and a program product based on models of different three-dimensional visualization engines, which can easily realize image-level fusion of models generated by different kinds of three-dimensional visualization engines regardless of the kind of graphics engine. The method comprises: obtaining a camera view angle from a first engine; acquiring first image information of a first model from the first engine according to the camera view angle, the first image information being obtained by the first visualization engine processing the first model through a rendering pipeline technique; acquiring second image information of a second model from a second engine according to the camera view angle, the second image information being obtained by the second visualization engine processing the second model through a rendering pipeline technique; and drawing the image in the first visualization engine according to the first image information and the second image information.

Description

Image-level fusion method, intermediate module, readable medium and program product based on models of different three-dimensional visualization engines
Technical Field
The invention relates to an image-level fusion method, an intermediate module, a readable medium, and a program product based on models of different three-dimensional visualization engines.
Background
As is well known, with the continuous development of digital twin technology, and especially against the background of its explosive growth in recent years, the corresponding technical requirements keep rising, and comprehensive cross-domain applications spanning multiple professions are becoming ever more common.
Across the whole digital twin technology chain, one of the most important links is the application of graphics engines (visualization engines), including, for example, Unreal, Unity3D, Autodesk Forge, BimFace, ArcGIS, and Cesium. Generally, graphics engines are used to solve the visualization problems of specific industry domains. For example, Unreal and Unity3D address visualization in the gaming field, Autodesk Forge and BimFace address visualization in the construction industry, while ArcGIS and Cesium address visualization in the GIS industry.
On the other hand, in urban construction fields such as urban planning, architectural design, and civil engineering, the demand for building smart and intelligent digital twin cities keeps growing, in order to improve core competitiveness at the technical level and to meet the strategic needs of national digital transformation.
In particular, for realizing a digital twin city, the main key points are the construction and fusion of the live-action three-dimensional model, the building information model (i.e., BIM model), the geographic information model (i.e., GIS model), the city information model (i.e., CIM model), and the like.
Disclosure of Invention
Technical problem to be solved by the invention
However, fusing the above models generally presents technical difficulties. The reason is that the underlying architectures of some models are completely different, so the models cannot be integrated directly. For example, the data formats of the BIM model, the CIM model, the GIS model, and the live-action three-dimensional model are not identical. A "live-action three-dimensional model" is, for example, a three-dimensional geometric model constructed based at least on oblique photography techniques. As a construction method, a live-action three-dimensional model is built, for example, in the following manner (see the sketch after this list):
(1) Acquiring images of the target object through unmanned aerial vehicle (UAV) oblique photogrammetry;
(2) Extracting a point cloud using a specific graphics engine;
(3) Computing a triangle mesh using a specific graphics engine;
(4) Generating an oblique photography model, which forms the live-action three-dimensional model.
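Steps (2) to (4) resemble a standard surface-reconstruction pipeline. The following is a minimal sketch using the open-source Open3D library, which the patent does not name; the file names and parameter values are illustrative assumptions, and step (2) is taken as already done, i.e., a point cloud extracted from the aerial images is available on disk.

```python
import open3d as o3d

# Assumption: step (2) already produced a point cloud file from the aerial images.
pcd = o3d.io.read_point_cloud("target_object_points.ply")  # hypothetical file name
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.5, max_nn=30))

# Step (3): compute a triangle mesh from the point cloud (Poisson reconstruction).
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)

# Step (4): the resulting mesh serves as the live-action three-dimensional model.
o3d.io.write_triangle_mesh("live_action_model.ply", mesh)
```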
However, different graphics engines typically have different underlying architectures, and their data formats differ. As a result, if a live-action three-dimensional model constructed by a specific graphics engine is to be fused with a BIM model, a GIS model, or the like, a dedicated data interface has to be designed for that specific graphics engine, which is undesirable from the viewpoints of labor cost, time cost, and monetary cost.
In particular, unlike fields such as games and animation, the particularities of the urban construction industry mean that different types of three-dimensional visualization engines are often employed at different stages (e.g., the project stage, design stage, construction stage, operation-and-maintenance stage, city-renewal stage, and so on). As a result, when many types of three-dimensional visualization engines are in use, fusing their models is technically very difficult.
Moreover, visualization is an underlying foundational technology whose development is slow; considering in addition the difficulty of crossing professional fields, it is very hard for a specialized single-domain graphics engine to grow into a multi-domain engine. This creates a conflict between single-domain graphics visualization engines and the requirements of comprehensive applications.
Technical solution adopted to solve the technical problem
The present invention has been made in view of the above problems, and an object of the present invention is to provide an image-level fusion method, an intermediate module, a readable medium, and a computer program product for models based on different three-dimensional visualization engines, which can easily realize image-level fusion of models generated by different types of three-dimensional visualization engines, regardless of the types of the graphics engines.
A first aspect of the invention provides an image-level fusion method based on models of different three-dimensional visualization engines, characterized by comprising the following steps:
obtaining a camera perspective from a first visualization engine;
acquiring first image information of a first model from the first visualization engine according to the camera view angle, wherein the first image information is information obtained by processing the first model by the first visualization engine through a rendering pipeline technology;
acquiring second image information of a second model from a second visualization engine according to the camera view angle, wherein the second image information is information obtained by processing the second model by the second visualization engine through a rendering pipeline technology; and
drawing the image in the first visualization engine according to the first image information and the second image information.
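As a concrete illustration of the control flow of these four steps, the following sketch places each engine behind a hypothetical adapter interface; the names VisualizationEngine, get_camera_view, render, and draw are illustrative assumptions rather than interfaces defined by the patent.

```python
from typing import Protocol, Tuple
import numpy as np

class VisualizationEngine(Protocol):
    """Hypothetical adapter over an engine's general-purpose interface."""
    def get_camera_view(self) -> dict: ...
    def render(self, view: dict) -> Tuple[np.ndarray, np.ndarray]:
        """Return (color, depth) produced by the engine's rendering pipeline."""
        ...
    def draw(self, first_info, second_info) -> None: ...

def image_level_fusion_step(first: VisualizationEngine,
                            second: VisualizationEngine) -> None:
    view = first.get_camera_view()       # step 1: camera view from the host engine
    first_info = first.render(view)      # step 2: first image information
    second_info = second.render(view)    # step 3: second image information
    first.draw(first_info, second_info)  # step 4: drawing completed in the first engine
```

Run once per frame, such a loop keeps the two engines synchronized on the same camera view.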
In a second aspect of the present invention, the first image information includes at least first color information and first depth information, and the second image information includes at least second color information and second depth information.
In a third aspect of the present invention, the number of the second visualization engines is preferably two or more, and the types of the first visualization engines and the second visualization engines are the same or different.
In a fourth aspect of the present invention, the first model and/or the second model is preferably any one of an engineering information model, an engineering object three-dimensional model, and a model created by data acquired in a physical space.
In a fifth aspect of the present invention, it is preferable that the first model is a city information model, a building information model, or a geographic information model, and that the second model is a live-action three-dimensional model formed by modeling using any one of the data obtained by an oblique photography technique, a close-range photography technique, a laser point cloud technique, a high-precision photography technique, or the like, or by fusing a plurality of such data.
A sixth aspect of the present invention provides an intermediate module that interacts with a first visualization engine and a second visualization engine to implement the image-level fusion method according to any one of the first to fifth aspects, characterized in that the intermediate module includes:
a camera view angle information acquisition section that acquires a camera view angle from a first visualization engine;
an information extraction unit that, when the camera view angle is received, acquires second image information of a second model from a second visualization engine, the second image information being information obtained by the second visualization engine processing the second model using a rendering pipeline technique;
an instruction generation unit that generates, when the camera angle of view is received and the second image information is acquired, an image-level fusion instruction for causing the first visualization engine to draw an image from first image information of a first model and the second image information of the second model, the first image information being information obtained by the first visualization engine processing the first model using a rendering pipeline technique; and
and a transmitting unit configured to transmit the second image information and the image-level fusion instruction when the image-level fusion instruction is generated.
In a seventh aspect of the present invention, in the intermediate module according to the sixth aspect, the first image information includes at least first color information and first depth information, and the second image information includes at least second color information and second depth information.
In an eighth aspect of the present invention, the first model and/or the second model is preferably any one of an engineering information model, an engineering object three-dimensional model, and a model created by data acquired in a physical space.
In a ninth aspect of the present invention, it is preferable that the first model is a city information model, a building information model, or a geographic information model, and that the second model is a live-action three-dimensional model formed by modeling using any one of the data obtained by an oblique photography technique, a close-range photography technique, a laser point cloud technique, a high-precision photography technique, or the like, or by fusing a plurality of such data.
A tenth aspect of the present invention provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the image-level fusion method according to any one of the first to fifth aspects.
An eleventh aspect of the present invention provides a computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the image-level fusion method according to any one of the first to fifth aspects.
Effects of the invention
According to the image-level fusion method of the first aspect, camera view angle information is first acquired from the first visualization engine serving as the host engine, and first image information of the first model is then acquired from the first visualization engine according to the acquired camera view angle information. Meanwhile, second image information of the second model is acquired, based on the same camera view angle information, from a second visualization engine different from the host engine. The first visualization engine, as the host engine, then completes drawing of the image using the graphics processing unit (GPU), according to the acquired first image information of the first model and second image information of the second model. Since the data types of the pieces of information used to draw the image are the same, no additional data processing is required. In this way, regardless of whether the three-dimensional visualization engines on which the models are based are of the same type, the models can be fused at the image level, avoiding the various costs that would otherwise be incurred by designing a dedicated API for a specific graphics engine in order to fuse models based on different types of three-dimensional visualization engines.
According to the image-level fusion method of the second aspect, since the first image information includes at least first color information and first depth information and the second image information includes at least second color information and second depth information, not only the depth relations of a group of buildings (or buildings and infrastructure) at a given camera view angle but also the appearance colors of each building, each piece of infrastructure, and so on can be restored. Image-level fusion of different models is therefore realized better, with a better visualization effect.
According to the image-level fusion method of the third aspect, image-level fusion of multiple models of the same or different types can be achieved, avoiding the need to design a dedicated API for each engine when fusing models generated by three or more different types of three-dimensional visualization engines.
According to the image-level fusion method of the fourth aspect, either or both of the first model and the second model can be an engineering information model or an engineering-object three-dimensional model, so that in engineering design, construction, operation and maintenance, renovation, and other processes, different types of engineering models created for different engineering purposes can be fused at the image level, meeting the needs of the different stages of urban construction.
According to the image-level fusion method of the fifth aspect, image-level fusion can be realized between a city information model, building information model, or geographic information model and a live-action three-dimensional model formed by modeling with any one of the data obtained by oblique photography, close-range photography, laser point cloud, high-precision photography, or the like, or by fusing a plurality of such data, enabling rapid construction and display of digital city (smart city) images.
According to the intermediate module of the sixth aspect, by providing the intermediate module, image-level fusion of different types of models can be achieved by calling only general-purpose interfaces, avoiding the need to design a dedicated interface for fusing models generated by different types of three-dimensional visualization engines.
Drawings
Fig. 1 is a flowchart showing an image-level fusion method of a live-action three-dimensional model and a city information model according to an embodiment of the present invention.
Fig. 2 is a functional block diagram illustrating the intermediate module that implements the image-level fusion method shown in fig. 1.
Fig. 3 is a schematic diagram showing the structure of an electronic device for implementing the image-level fusion method of fig. 1.
Detailed Description
First, an image-level fusion method of a live-action three-dimensional model, serving as the second model, and a city information model, serving as the first model, according to an embodiment of the present invention will be described with reference to fig. 1. The city information model is merely an example of the first model, and the live-action three-dimensional model is merely an example of the second model; that is, the first model is not limited to a city information model, the second model is not limited to a live-action three-dimensional model, and either may be another kind of three-dimensional model. Specifically, either or both of the first model and the second model may be any one of an engineering information model, an engineering-object three-dimensional model, and a model created from data acquired in physical space. The term "engineering-object three-dimensional model" as used herein refers to a model that only expresses visual information of the engineering object, such as its geometry, in three dimensions, and is constructed with an engine such as SketchUp, 3ds Max, or Maya. The term "engineering information model" as used herein refers to a model that not only expresses visual information such as the geometry of the engineering object in three dimensions but is also rich in object properties, object relationships, and the like; it is constructed from an information perspective and enriches the language needed to express the characteristics and functions of an object. Examples include the building information model (i.e., BIM model), the geographic information model (i.e., GIS model), and the city information model (i.e., CIM model). The term "model created from data acquired in physical space" as used herein refers to a model constructed by acquiring image data from physical space by, for example, oblique photography, close-range photography, laser point cloud, or high-precision photography, and then processing those data.
Fig. 1 shows a flowchart of the image-level fusion method of the live-action three-dimensional model and the city information model according to this embodiment. The order of the steps in the flowchart is merely an example; as long as the target result is achieved, some steps may be exchanged and some steps may be executed in parallel.
First, in step ST1, a camera view angle is acquired from a city information model engine (i.e., CIM engine), which is an example of the first visualization engine (also simply called the first engine) serving as the host engine. More specifically, in step ST1, view angle information of the camera observing the city information model (i.e., CIM model), which is an example of the first model, is acquired from the city information model engine. Note that the camera view angle is obtained from the city information model engine for every frame. After the operation in step ST1 is completed, the process proceeds to step ST2.
In step ST2, city image information (i.e., first image information) of the city information model is acquired from the city information model engine based on the acquired camera view angle information. The city image information is obtained by processing the city information model with a graphics rendering pipeline technique. The "graphics rendering pipeline" is a real-time rendering technique whose function is to generate, or render, a two-dimensional image given a virtual camera, three-dimensional scene objects, and light sources. Specifically, the graphics rendering pipeline performs two main functions: first, transforming the three-dimensional coordinates of objects into two-dimensional screen-space coordinates; second, coloring each pixel of the screen. The general flow of the graphics rendering pipeline comprises, in order: input of vertex data, vertex-shader processing, tessellation (optional), geometry shading (optional), primitive assembly, clipping and culling, rasterization, fragment-shader processing, and blend testing. The city image information is a set of information corresponding to each pixel on the display device; it includes at least pixel color information (i.e., first color information) and pixel depth information (i.e., first depth information) corresponding to the city information model, and in special cases may also include transparency information. The pixel depth information records the distance between (each vertex of) each object and the camera. The per-pixel information above comes from the result of the spatial transformation of the model, and the pixel is the basic unit of operation for image-level fusion.
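A minimal sketch of this per-pixel image information as a data structure follows, assuming the buffers are stored as NumPy arrays; the class and field names are illustrative assumptions, not terms from the patent.

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class ImageInfo:
    """Per-pixel output of one engine's rendering pipeline for one frame."""
    color: np.ndarray                   # (H, W, 3) pixel color information
    depth: np.ndarray                   # (H, W) camera-to-fragment distance
    alpha: Optional[np.ndarray] = None  # (H, W) transparency, special cases only
```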
The action of step ST3 is performed after step ST1 is completed; relative to step ST2, it may be performed before, after, or simultaneously. Owing to the limitations of a flowchart, fig. 1 shows only the case where step ST3 follows step ST2, but the invention is not limited thereto: as described above, once step ST1 is completed, that is, once the camera view angle information has been acquired, the operation of step ST3 can be performed.
In step ST3, live-action image information (i.e., second image information) of the live-action three-dimensional model in a graphics engine, which is an example of the second visualization engine (also simply called the second engine), is acquired according to the acquired camera view angle information. The live-action image information is obtained by the second visualization engine processing the live-action three-dimensional model with the graphics rendering pipeline technique, and includes at least live-action color information (i.e., second color information) and live-action depth information (i.e., second depth information). The live-action three-dimensional model is, for example, an oblique photography model formed by the oblique photography technique. Specifically, after an image map is obtained by oblique photography, a specific graphics engine performs point cloud extraction, triangle mesh computation, and texture mapping, thereby generating the oblique photography model. Alternatively, to overcome shortcomings of the oblique photography model, such as missing detail texture and color, low precision, low fidelity, inability to express geometric scale relations accurately, and non-solid, non-continuous surfaces, a live-action three-dimensional model can be formed after point cloud extraction by filling gaps between points with a specific algorithm and adding the color data of digital photographs to the model. However, the live-action three-dimensional model is not limited to oblique photography; it may also be formed by modeling with any one of the data obtained by close-range photography, laser point cloud, high-precision photography, or the like, or by fusing a plurality of such data. After the operation of step ST3 is completed, the operation of step ST4 is performed.
In step ST4, an image-level fusion instruction is generated, and image-level fusion is performed according to the city image information and the live-action image information by means of that instruction. As to how the fusion is implemented: the image-level fusion instruction is, for example, a series of code or a program, corresponding to pseudocode such as that sketched below, that the GPU can recognize and process; by executing the image-level fusion instruction, the GPU completes the image-level fusion of the two models and thereby constructs the target image.
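The patent's own pseudocode is not reproduced in this text. The following is a minimal CPU-side sketch of what such a fusion instruction plausibly computes, assuming a simple per-pixel depth test over the color and depth textures of the two models; the function and variable names are illustrative assumptions.

```python
import numpy as np

def fuse_images(city_color, city_depth, live_color, live_depth):
    """Per-pixel depth test: for each pixel, keep the color of whichever
    model's fragment lies nearer to the camera, as a GPU shader would.

    city_color/live_color: (H, W, 3) color textures;
    city_depth/live_depth: (H, W) depth textures.
    """
    city_in_front = city_depth <= live_depth  # True where the CIM model is nearer
    target_color = np.where(city_in_front[..., None], city_color, live_color)
    target_depth = np.minimum(city_depth, live_depth)  # fused depth buffer
    return target_color, target_depth
```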
Note that the code described above shows the case where color textures (color information) and depth textures (depth information) are obtained from the image information, but the invention is not limited thereto; besides color and depth textures, other types of information, such as transparency information, may be obtained according to actual needs. After step ST4 is completed, step ST5 is performed.
In step ST5, the target image is transmitted from the GPU to the host engine, that is, the city information model engine, and the host engine outputs the target image to the display to form a target screen.
According to the image-level fusion method of this embodiment, camera view angle information is acquired from the first engine for every frame, and first image information of the first model is acquired from the first visualization engine according to that information. Likewise, second image information of the second model is acquired from the second visualization engine according to the same camera view angle information. Then, according to the image-level fusion instruction, image-level fusion of the city information model and the live-action three-dimensional model is completed using this information. Therefore, regardless of whether the several three-dimensional visualization engines are of the same type, image-level fusion of different kinds of three-dimensional models can be realized simply by calling each engine's general-purpose interfaces, avoiding the cost increase caused by designing a dedicated interface for a specific graphics engine.
The above description covers the case of a single second engine, but the present invention is not limited thereto. There may be two or more second engines, of the same or different types, and the first and second engines may likewise be of the same or different types. This is because, with the above image-level fusion method, image-level fusion can be achieved regardless of the number of three-dimensional visualization engines and regardless of whether their types match.
Likewise, the case where the first model is a city information model (CIM model) and the second model is a live-action three-dimensional model has been described, but the present invention is not limited thereto. The first model may be a model other than a CIM model, for example a building information model (BIM model) or a geographic information model (GIS model), and the second model may be a model other than a live-action three-dimensional model, for example a city information model, a building information model, or a geographic information model. Of course, the first model may also be an engineering information model other than those listed, an engineering-object three-dimensional model, or a live-action three-dimensional model, and the same holds for the second model.
Next, on the basis of the above-described image level fusion method, the main functional configuration of an intermediate module that implements the above-described fusion method will be described with reference to fig. 2.
Fig. 2 shows a functional block diagram of intermediate modules implementing the image level fusion method shown in fig. 1.
As shown in fig. 2, the intermediate module S mainly includes a camera view angle information acquisition section SA, an information extraction section SB, an instruction generation section SC, and a transmitting section SD. The camera view angle information acquisition section SA is the functional section that interacts with the city information model engine, i.e., the first visualization engine serving as the host engine; specifically, SA acquires camera view angle information from the city information model engine through that engine's specific application programming interface (API). The information extraction section SB is the functional section that interacts with the graphics engine serving as the second visualization engine; specifically, after the intermediate module S receives the camera view angle information, SB transmits it to the graphics engine through the graphics engine's specific API and acquires the second image information of the second model from the graphics engine. The instruction generation section SC is the functional section that generates the image-level fusion instruction; specifically, after the information extraction section SB has acquired the second image information, SC generates an image-level fusion instruction for fusing the first model and the second model at the image level. The transmitting section SD is the functional section that transmits information; specifically, once the instruction generation section SC has generated the image-level fusion instruction, SD transmits the second image information and the image-level fusion instruction. More specifically, SD transmits them to the city information model engine as the host engine. After receiving the second image information and the image-level fusion instruction, the city information model engine sends the city image information (i.e., the first image information) of the city information model, the received second image information, and the image-level fusion instruction to the graphics processing unit (i.e., the GPU, the hardware graphics card), and the GPU constructs the target image from the first image information and the second image information according to the received image-level fusion instruction. The destination of the second image information and the image-level fusion instruction sent by the transmitting section SD is not limited to the host engine; they may also be sent directly to the GPU.
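Put together, the four sections might be organized as in the following sketch; the class, method, and message names are illustrative assumptions, not an implementation disclosed by the patent.

```python
from typing import Any

class IntermediateModuleS:
    """Sketch of intermediate module S and its four functional sections."""

    def __init__(self, host_engine: Any, second_engine: Any):
        self.host = host_engine      # first visualization engine (e.g. CIM engine)
        self.second = second_engine  # second visualization engine (graphics engine)

    def run_frame(self) -> None:
        # SA: acquire the camera view angle via the host engine's specific API.
        view = self.host.get_camera_view()
        # SB: forward the view angle to the second engine and extract the
        #     second image information (color and depth) that it renders.
        second_info = self.second.render(view)
        # SC: generate the image-level fusion instruction (a GPU-executable command).
        fusion_cmd = {"op": "image_level_fusion", "view": view}
        # SD: transmit the second image information and the instruction to the
        #     host engine (or, alternatively, directly to the GPU).
        self.host.submit(second_info, fusion_cmd)
```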
Fig. 3 is a schematic structural diagram of an electronic device usable for the image-level fusion method of the present invention. Fig. 3 shows a computer system as an example of the electronic device, but the present invention is not limited thereto; any other type of electronic device may be used, and the example should not impose any limitation on the functions or scope of use of the embodiments of the present invention.
As shown in fig. 3, the computer system 100 includes a Central Processing Unit (CPU) 101 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 102 or a program loaded from a storage section 108 into a Random Access Memory (RAM) 103. In the RAM 103, various programs and data required for the operation of the computer system 100 are also stored. The CPU 101, ROM 102, and RAM 103 are connected to each other through a bus 104. An input/output (I/O) interface 105 is also connected to bus 104.
The following components are connected to the I/O interface 105: an input section 106 such as a keyboard and a mouse; an output section 107 such as a liquid crystal display (LCD) and a speaker; a storage section 108 such as a hard disk; and a communication section 109 such as a network interface card, for example a modem. The communication section 109 performs communication processing via a network such as the Internet. A drive 110 may also be connected to the I/O interface 105 as needed, and a removable medium 111 may be mounted on the drive 110 as needed, so that a computer program read out from the removable medium 111 can be installed into the storage section 108 as required.
In particular, the procedure described with reference to fig. 1, i.e., the image-level fusion method of an embodiment of the invention and its variants, may be implemented as a computer software program. For example, one embodiment of the invention includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the image-level fusion method shown in fig. 1. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 109, and/or installed from the removable medium 111. When the computer program is executed by the central processing unit (CPU) 101, the above-described functions defined in the system of the present invention are performed.
The computer readable medium shown in the present invention may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
As another aspect, the present invention also provides a computer-readable storage medium that may be included in the computer system described in the above embodiment or may exist alone without being assembled into the computer system. The computer-readable storage medium carries one or more programs that, when executed by a computer system, cause the computer system to implement the methods of the embodiments and variations thereof. For example, the computer system described above may implement the various steps shown in FIG. 1.
According to one aspect of the present invention, a computer program product is provided that includes computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the methods provided in the various alternative implementations of the above-described embodiments and modifications.
It is to be understood that the invention is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (11)

1. An image level fusion method based on models of different three-dimensional visualization engines is characterized in that:
obtaining a camera perspective from a first visualization engine;
acquiring first image information of a first model from the first visualization engine according to the camera view angle, wherein the first image information is information obtained by processing the first model by the first visualization engine through a rendering pipeline technology;
acquiring second image information of a second model from a second visualization engine according to the camera view angle, wherein the second image information is information obtained by processing the second model by the second visualization engine through a rendering pipeline technology; and
drawing the image in the first visualization engine according to the first image information and the second image information.
2. The method of image level fusion of models based on different three-dimensional visualization engines of claim 1,
the first image information includes at least first color information and first depth information,
the second image information includes at least second color information and second depth information.
3. The method of image level fusion of models based on different three-dimensional visualization engines according to claim 1 or 2,
the number of the second visualization engines is two or more,
the first visualization engine and the second visualization engine are the same or different in kind.
4. The method of image level fusion of models based on different three-dimensional visualization engines according to claim 1 or 2,
the first model and/or the second model is any one of an engineering information model, an engineering object three-dimensional model, and a model created by data acquired in a physical space.
5. The method of image level fusion of models based on different three-dimensional visualization engines of claim 4,
the first model is a city information model or a building information model or a geographic information model,
the second model is a live-action three-dimensional model, the live-action three-dimensional model being formed by modeling using any one of the data obtained by an oblique photography technique, a close-range photography technique, a laser point cloud technique, a high-precision photography technique, or the like, or by fusing a plurality of such data.
6. An intermediate module that interacts with a first visualization engine and a second visualization engine to implement the image level fusion method of any of claims 1-5, comprising:
a camera view angle information acquisition section that acquires a camera view angle from a first visualization engine;
an information extraction unit that acquires, when the camera view angle is received, second image information of a second model from the second visualization engine, the second image information being information obtained by the second visualization engine processing the second model using a rendering pipeline technique;
an instruction generation unit that generates, when the camera angle of view is received and the second image information is acquired, an image-level fusion instruction for causing the first visualization engine to draw an image from first image information of a first model and the second image information of the second model, the first image information being information obtained by the first visualization engine processing the first model using a rendering pipeline technique; and
and a transmitting unit configured to transmit the second image information and the image-level fusion instruction when the image-level fusion instruction is generated.
7. The intermediate module as recited in claim 6, wherein,
the first image information includes at least first color information and first depth information,
the second image information includes at least second color information and second depth information.
8. An intermediate module as claimed in claim 6 or 7, characterized in that,
the first model and/or the second model is any one of an engineering information model, an engineering object three-dimensional model, and a model created by data acquired in a physical space.
9. The intermediate module as recited in claim 8, wherein,
the first model is a city information model or a building information model or a geographic information model,
the second model is a live-action three-dimensional model, the live-action three-dimensional model being formed by modeling using any one of the data obtained by an oblique photography technique, a close-range photography technique, a laser point cloud technique, a high-precision photography technique, or the like, or by fusing a plurality of such data.
10. A computer-readable storage medium storing a computer program, characterized in that,
the computer program, when executed by a processor, implements the image level fusion method of any one of claims 1 to 5.
11. A computer program product comprising a computer program, characterized in that,
the computer program, when executed by a processor, implements the image level fusion method of any one of claims 1 to 5.
CN202310465702.6A, filed 2023-04-26 (priority date 2023-04-26), published 2023-07-25 as CN116485695A: Image-level fusion method, intermediate module, readable medium and program product based on models of different three-dimensional visualization engines. Legal status: Pending.



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination