CN113409473B - Method, device, electronic equipment and storage medium for realizing virtual-real fusion - Google Patents

Method, device, electronic equipment and storage medium for realizing virtual-real fusion

Info

Publication number
CN113409473B
CN113409473B (application number CN202110763864.9A)
Authority
CN
China
Prior art keywords
virtual
image data
fused
region
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110763864.9A
Other languages
Chinese (zh)
Other versions
CN113409473A (en)
Inventor
简艺
孙红亮
王子彬
李炳泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Shangtang Technology Development Co Ltd
Original Assignee
Zhejiang Shangtang Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Shangtang Technology Development Co Ltd filed Critical Zhejiang Shangtang Technology Development Co Ltd
Priority to CN202110763864.9A
Publication of CN113409473A
Application granted
Publication of CN113409473B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20: Finite element generation, e.g. wire-frame surface description, tessellation
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30244: Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a method, a device, electronic equipment and a storage medium for realizing virtual-real fusion. The method for realizing virtual-real fusion includes the following steps: acquiring image data of a region to be fused; dividing the region to be fused in the image data to obtain the target regions of the region to be fused; constructing a virtual model of the region to be fused based on the image data; determining the position information corresponding to each target region in the virtual model; and correspondingly adding virtual content to each target region in the virtual model based on the position information, so as to perform virtual-real fusion on each target region of the region to be fused. By means of the method, virtual-real fusion of each target region in the region to be fused can be achieved quickly and accurately.

Description

Method, device, electronic equipment and storage medium for realizing virtual-real fusion
Technical Field
The present application relates to the field of augmented reality technologies, and in particular, to a method, an apparatus, an electronic device, and a storage medium for implementing virtual-real fusion.
Background
Augmented Reality (AR) is a technology that skillfully fuses virtual information with the real world. It makes wide use of technical means such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, and sensing. Augmented reality applies computer-generated virtual information such as text, images, three-dimensional models, music and video to the real world after simulation, so that the two kinds of information complement each other and the real world is enhanced. With the development of these technologies, and especially with the deployment and maturation of high-precision map technology in the AR field, scientific concepts such as the digital twin and the parallel world are gradually becoming possible.
The existing methods for realizing virtual-real fusion include methods based on image recognition, methods based on three-dimensional object recognition, and SLAM (simultaneous localization and mapping) methods that achieve a shared virtual-real fusion effect by sharing a local map. In these ways of fusing the virtual and the real, the virtual-real fused content of image data, an object, a face or a human body is edited against a corresponding physical reference object, such as tracked image data, an object, a standard face, or a standard skeletal structure. Even a cloud anchor, which covers a somewhat larger range, is first edited and then saved online.
For virtual-real fusion in an actual scene area, there may also be a need to fuse virtual content with non-entity targets. Therefore, how to realize virtual-real fusion for non-entity targets in a scene area has become a problem to be solved urgently for augmented reality applications based on high-precision maps.
Disclosure of Invention
The application provides a method, a device, an electronic device and a storage medium for realizing virtual-real fusion, so as to solve the problem that the application range of the virtual-real fusion technology in the prior art is limited.
In order to solve the above technical problem, a first aspect of the present application provides a method for implementing virtual-real fusion, including: acquiring image data of a region to be fused; dividing regions to be fused in the image data to obtain target regions of the regions to be fused; constructing a virtual model of a region to be fused based on the image data; determining corresponding position information of each target area in the virtual model; and correspondingly adding virtual content in each target area in the virtual model based on the position information so as to perform virtual-real fusion on each target area of the area to be fused.
Therefore, the corresponding position information of each target area in the virtual model is determined, so that the virtual content can be accurately placed in the corresponding area based on the position information, the virtual-real fusion result is more accurate, and the efficiency of the process of realizing the virtual-real fusion is improved.
The method for constructing the virtual model of the region to be fused based on the image data comprises the following steps: performing feature extraction on the image data to obtain a virtual map of the area to be fused; and performing three-dimensional reconstruction on the region to be fused based on the image data and the virtual map to obtain a virtual model of the region to be fused.
Therefore, the virtual model of the region to be fused is constructed based on the image data of the region to be fused, so that the precision and the accuracy of the virtual model are improved, and the efficiency and the precision of the virtual model construction can be further improved by constructing the virtual map and then performing three-dimensional reconstruction based on the virtual map and the image data.
Performing feature extraction on the image data to obtain the virtual map of the region to be fused includes: performing feature extraction on the plurality of pieces of image data to obtain feature points of each piece of image data; performing feature point matching on the feature points of each piece of image data to obtain the corresponding relationship between the feature points of the pieces of image data; and performing triangularization on the feature points based on the pose and the corresponding relationship of each piece of image data to construct the virtual map.
Therefore, feature point matching is performed based on the feature points of each image data to obtain the corresponding relationship between the feature points of each image data, and triangularization is performed based on the pose and the corresponding relationship of the image data to construct a virtual map, so that the construction of a three-dimensional virtual map can be realized, the similarity between the virtual map and a real area to be fused is improved, and the subsequent construction of a virtual model is facilitated.
Performing three-dimensional reconstruction on the region to be fused based on the image data and the virtual map to obtain the virtual model of the region to be fused includes: performing depth estimation on the image data and the virtual map to generate an image depth map, and fusing the image depth map into a point cloud; and generating the virtual model based on the point cloud.
Therefore, the image depth map is generated by carrying out depth estimation on the image data and the virtual map, the image depth map is fused into the point cloud, and finally the virtual model is generated based on the point cloud, so that the precision and the three-dimensional degree of the virtual model can be improved through the point cloud, and the virtual model is closer to a real region to be fused.
The image data comprises at least two poses, depth estimation is carried out on the image data and the virtual map, an image depth map is generated, and the image depth map is fused into a point cloud, and the method comprises the following steps: respectively carrying out depth estimation on the image data of at least two poses and the virtual map to generate at least two image depth maps, and correspondingly fusing the at least two image depth maps into at least two point clouds; integrating at least two point clouds to obtain dense point clouds of a region to be fused; generating a virtual model based on the point cloud, comprising: a virtual model is generated based on the dense point cloud that includes the dense point cloud and the mesh.
Therefore, the accuracy and the characteristic quantity of the virtual model are further improved by generating the point clouds of at least two poses, fusing the point clouds to obtain dense point clouds and generating the virtual model comprising the dense point clouds and the grid on the basis of the dense point clouds, so that the similarity between the virtual model and the real region to be fused is improved, and the occurrence of inaccurate false-true fusion is reduced.
Dividing the region to be fused in the image data to obtain each target region of the region to be fused includes: dividing the region to be fused in the image data through a neural network to obtain each target region of the region to be fused and the coordinate information of the target region on the image data. Determining the position information corresponding to each target area in the virtual model includes: determining the position information corresponding to each target area in the virtual model based on each target area, the coordinate information of the target area on the image data, and the pose of the image data.
Therefore, the image data is divided through the neural network to obtain the target area and the coordinate information of the target area on the image data, and the corresponding position information of each target area in the virtual model is determined based on the coordinate information and the pose of the image data, so that the accuracy of the corresponding position information of each target area in the virtual model can be improved based on accurate coordinate information, and the phenomenon of fusion dislocation is reduced.
The image data includes at least two poses, and determining the position information corresponding to each target area in the virtual model based on each target area, the coordinate information of the target area on the image data and the pose of the image data includes: determining the matching relationship of each same target area on the image data corresponding to the at least two poses based on the coordinate information of each target area on the image data corresponding to the at least two poses; and determining the three-dimensional position information corresponding to each target area in the virtual model based on the matching relationship. Correspondingly adding virtual content to each target area in the virtual model based on the position information includes: correspondingly adding virtual content to each target area in the virtual model based on the three-dimensional position information.
Therefore, the matching relation of each same target area on the image data corresponding to the at least two poses is determined based on the coordinate information of each target area on the image data corresponding to the at least two poses; and determining the corresponding three-dimensional position information of each target area in the virtual model based on the matching relation, so that virtual content can be correspondingly added to each target area in the virtual model based on the three-dimensional position information, and the accurate division of each target area on the virtual model is further improved.
Correspondingly adding virtual content to each target area in the virtual model based on the three-dimensional position information includes: correspondingly adding a three-dimensional virtual special effect to each target area in the virtual model based on the three-dimensional position information.
Therefore, a three-dimensional virtual special effect is correspondingly added to each target area in the virtual model based on the three-dimensional position information, which improves the accuracy with which the three-dimensional virtual special effect is aligned.
Before acquiring the image data of the region to be fused, the method comprises the following steps: training the initial network through a training sample; and stopping training to obtain the neural network when the accuracy of the initial network for dividing each target area of the training sample meets the preset accuracy.
Therefore, before the virtual-real fusion, the neural network is trained on the initial network through the training sample, so that the accuracy of dividing each target area in the image data by the neural network can be improved, and the workload of manual division can be reduced.
In order to solve the technical problem, a second aspect of the present application provides a method for implementing virtual-real fusion, including acquiring image data of a region to be fused; dividing a region to be fused in image data to obtain a horizon, a sky region and a ground region of the region to be fused; constructing a virtual model of a region to be fused based on the image data; determining corresponding position information of a horizon, a sky area and a ground area in a virtual model; and correspondingly adding virtual contents in the horizon, the sky area and the ground area in the virtual model based on the position information so as to perform virtual-real fusion on the horizon, the sky area and the ground area of the area to be fused.
Correspondingly adding virtual content to the horizon, the sky area and the ground area in the virtual model based on the position information, so as to perform virtual-real fusion on the horizon, the sky area and the ground area of the region to be fused, includes: the augmented reality traffic device acquiring new image data of the region to be fused in real time during movement; dividing the new image data to obtain the horizon, the sky area and the ground area of the new image data; and correspondingly adding virtual content to the horizon, the sky area and the ground area respectively based on the virtual model.
Therefore, the augmented reality traffic device divides the new image data while moving in real time and then performs virtual-real fusion, so that virtual-real fusion of each region of the region to be fused can be carried out in real time during the movement of the augmented reality traffic device, realizing real-time virtual-real fusion in multiple scenes such as traffic and motion.
By the scheme, the virtual content can be accurately added into the corresponding horizon, sky and ground areas based on the position information, so that the virtual-real fusion result is more accurate, and the efficiency of the virtual-real fusion process is improved.
In order to solve the above technical problem, a third aspect of the present application provides a device for implementing virtual-real fusion, where the device for virtual-real fusion includes: the system comprises an acquisition module, a division module, a construction module, a determination module and a fusion module; the acquisition module is used for acquiring image data of a region to be fused; the dividing module is used for dividing the region to be fused in the image data to obtain each target region of the region to be fused; the construction module is used for constructing a virtual model of the region to be fused based on the image data; the determining module is used for determining the corresponding position information of each target area in the virtual model; the fusion module is used for correspondingly adding virtual content in each target area in the virtual model based on the position information so as to perform virtual-real fusion on each target area of the area to be fused.
In order to solve the above technical problem, a fourth aspect of the present application provides a device for implementing virtual-real fusion, where the device for virtual-real fusion includes: the system comprises an acquisition module, a division module, a construction module, a determination module and a fusion module; the acquisition module is used for acquiring image data of a region to be fused; the dividing module is used for dividing the region to be fused in the image data to obtain the horizon, the sky region and the ground region of the region to be fused; the construction module is used for constructing a virtual model of the region to be fused based on the image data; the determining module is used for determining corresponding position information of a horizon, a sky area and a ground area in the virtual model; the fusion module is used for correspondingly adding virtual contents to the horizon, the sky area and the ground area in the virtual model based on the position information so as to perform virtual-real fusion on the horizon, the sky area and the ground area of the area to be fused.
In order to solve the above technical problem, a fifth aspect of the present application provides an electronic device, including: a memory and a processor coupled to each other, the processor being configured to execute the program data stored in the memory to implement the method for implementing virtual-real fusion in any of the above aspects.
In order to solve the above technical problem, a sixth aspect of the present application provides a computer-readable storage medium storing program data, where the program data is executable to implement the method for implementing virtual-real fusion in any one of the above aspects.
According to the scheme, each target area of the area to be fused is obtained first, and then the corresponding position information of each target area in the virtual model can be determined, so that the virtual content can be accurately placed in the corresponding area based on the position information, the virtual-real fusion result is more accurate, and the efficiency of the process of realizing the virtual-real fusion is improved.
Drawings
FIG. 1 is a schematic flowchart illustrating an embodiment of a method for implementing virtual-real fusion according to the present application;
FIG. 2 is a schematic flowchart of another embodiment of a method for implementing virtual-real fusion according to the present application;
FIG. 3 is a schematic flowchart illustrating a method for implementing virtual-real fusion according to another embodiment of the present application;
FIG. 4 is a schematic structural diagram of an embodiment of an apparatus for implementing virtual-real fusion according to the present application;
FIG. 5 is a schematic structural diagram of another embodiment of an apparatus for implementing virtual-real fusion according to the present application;
FIG. 6 is a schematic structural diagram of an embodiment of an electronic device provided in the present application;
FIG. 7 is a schematic structural diagram of an embodiment of a computer-readable storage medium provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, and not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
Referring to fig. 1, fig. 1 is a schematic flow chart of an embodiment of a method for implementing virtual-real fusion according to the present application, where the method for implementing virtual-real fusion includes the following steps:
s11: and acquiring image data of the region to be fused.
The region to be fused is a region to be studied by a virtual-real fusion technique. Specifically, the region to be fused may include a solid region, such as a building, a street, or a mountain peak, or a non-solid region, such as a sky region, a vacuum region, or an air region, or may further include a fused region between a solid and a non-solid, and the like, which is not limited herein. The region to be fused is a region which actually exists in the real world.
The image data of the region to be fused may be acquired from a space angle by a global satellite inertial navigation system, from a ground angle by a vehicle-mounted camera on a ground AR vehicle, or from an aerial angle by an unmanned aerial vehicle photographing the region to be fused; it may also be acquired by any other photographing equipment or from other photographing angles. In this way, multi-pose, multi-angle image data of the region to be fused can be obtained, which facilitates the subsequent construction of the region to be fused based on the image data and improves the construction precision of the virtual model.
The image data of the present embodiment may include a plurality of image data at different angles.
S12: and dividing the region to be fused in the image data to obtain each target region of the region to be fused.
The region to be fused in each piece of image data is divided to obtain each target region of the region to be fused. In a specific application scenario, when the region to be fused includes target regions of the ground and a wall, the region to be fused in each piece of image data is divided to obtain the target regions of the ground and the wall respectively. In another specific application scenario, when the region to be fused includes target regions of the ground, the sky and the horizon, the region to be fused in each piece of image data is divided to obtain the target regions of the ground, the sky and the horizon respectively. Specifically, the region to be fused in each piece of image data may be divided by a neural network, manually, or by another classifier, which is not limited herein.
S13: and constructing a virtual model of the region to be fused based on the image data.
After the image data of the region to be fused is obtained, the image data is reconstructed to obtain the virtual model corresponding to the region to be fused. In a specific application scenario, a three-dimensional reconstruction algorithm may be applied to the image data stored in a computer-readable storage medium to reconstruct a virtual model, thereby constructing the three-dimensional model corresponding to the image data, namely the virtual three-dimensional model of the region to be fused. In another specific application scenario, feature analysis may be performed on the image data, and the virtual three-dimensional model of the region to be fused may be constructed point by point from the feature points of each piece of image data. The specific method for constructing the virtual model of the region to be fused is not limited herein.
The virtual model of the region to be fused is a virtual model corresponding to the actual scene of the region to be fused.
S14: and determining the corresponding position information of each target area in the virtual model.
After the virtual model of the region to be fused is established, the corresponding position information of each target region of the region to be fused in the virtual model is determined. In a specific application scenario, the corresponding position information of each target area in the virtual model may be determined based on the position of each target area in each image data. In another specific application scenario, manual division of each target area of the virtual model may also be accepted, so as to determine corresponding position information of each target area in the virtual model. In another specific application scenario, each target region in the virtual model can be determined through a trained deep neural network capable of dividing three-dimensional data.
S15: and correspondingly adding virtual content in each target area in the virtual model based on the position information so as to perform virtual-real fusion on each target area of the area to be fused.
And correspondingly adding virtual content to each target area in the virtual model based on the position information of each target area in the virtual model acquired in the step S14, so as to perform virtual-real fusion on each target area of the area to be fused.
The virtual content may be added according to the characteristics associated with each target area. For example, when a virtual vehicle needs to be added to the ground area of the virtual model of the region to be fused and a virtual bird needs to be added to the sky area, the ground area and the sky area in the virtual model need to be divided and their specific position information determined, so that the corresponding virtual content can be conveniently added at the corresponding position and misplaced virtual content is avoided.
After the virtual content is added, it can be displayed, so that the virtual content appears in each corresponding target area and the result of fusing the virtual content with the real content can be conveniently applied.
Through the above method, the region to be fused in the image data is divided to obtain the target areas of the region to be fused, a virtual model of the region to be fused is constructed based on the image data, the position information corresponding to each target area in the virtual model is determined, and virtual content is correspondingly added to each target area in the virtual model based on the position information, so as to perform virtual-real fusion on each target area of the region to be fused. Because the target areas of the region to be fused are obtained first and their corresponding position information in the virtual model is then determined, the virtual content can be accurately placed in the corresponding area based on the position information, which makes the virtual-real fusion result more accurate and improves the efficiency of the virtual-real fusion process.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating another embodiment of a method for implementing virtual-real fusion according to the present application.
S21: and acquiring image data of the region to be fused.
Wherein, the region to be fused is a region needing to be researched by a virtual-real fusion technology.
In this step, the region to be fused that requires virtual-real fusion is determined first, and the region is then photographed from multiple angles and multiple poses by a global satellite inertial navigation system, a ground AR vehicle, and an aerial unmanned aerial vehicle, so that multiple pieces of image data with multiple poses and angles are obtained, which facilitates the subsequent all-round, high-precision construction of the virtual model of the region to be fused.
S22: and dividing the region to be fused in the image data through a neural network to obtain each target region of the region to be fused and coordinate information of the target region on the image data.
And dividing the region to be fused in each image data through a neural network to obtain each target region of the region to be fused and coordinate information of each target region on each image data. The coordinate information may be only the coordinate information of the range of the entire target area on the image data.
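As an illustration only (not part of the patented method), the following Python sketch shows how the coordinate information of each target region on a piece of image data could be read off a per-pixel label map produced by a trained segmentation network; the class ids, class names and the bounding-box representation are assumptions.

```python
import numpy as np

# Hypothetical class ids that a trained segmentation network might output;
# the actual classes depend on how the training samples were labeled.
CLASS_NAMES = {0: "ground", 1: "sky", 2: "horizon"}

def region_coordinates(label_map: np.ndarray) -> dict:
    """Return the bounding-box coordinates of each target region
    found in a per-pixel label map of shape (H, W)."""
    regions = {}
    for class_id, name in CLASS_NAMES.items():
        ys, xs = np.nonzero(label_map == class_id)
        if xs.size == 0:
            continue  # this target region does not appear in the image
        # Coordinate information of the whole region on the image data:
        # (x_min, y_min, x_max, y_max) in pixel coordinates.
        regions[name] = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
    return regions

# Example on a dummy 4x6 label map (top rows sky, middle horizon, bottom ground)
dummy = np.array([[1]*6, [1]*6, [2]*6, [0]*6], dtype=np.int64)
print(region_coordinates(dummy))
```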
The neural network is obtained by training an initial network through a training sample before carrying out virtual-real fusion on a region to be fused. Wherein, the training sample is marked with each target area; wherein the training samples comprise image data related or similar to the region to be fused. During training, in response to the fact that the accuracy of the initial network in dividing each target area of the training sample meets the preset accuracy, stopping training to obtain the neural network. Specifically, when the difference between the division of each target area of the training sample by the initial network and the labeling division of each target area of the training sample labeled meets a preset difference, the training is stopped, and the neural network is obtained.
The neural network of this embodiment includes a deep neural network, a convolutional neural network, a recurrent neural network, or another neural network capable of performing classification.
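A minimal PyTorch-style sketch of the training criterion described above, namely stopping once the division accuracy on the training samples meets a preset accuracy; the optimizer, loss function and threshold value are placeholder assumptions rather than the patent's implementation.

```python
import torch
import torch.nn as nn

def train_until_accurate(model, loader, preset_accuracy=0.95, max_epochs=100, lr=1e-3):
    """Train an initial segmentation network and stop once its per-pixel
    accuracy on the training samples meets the preset accuracy."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for epoch in range(max_epochs):
        correct, total = 0, 0
        for images, labels in loader:          # labels: (N, H, W) long class ids
            logits = model(images)             # logits: (N, C, H, W)
            loss = criterion(logits, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            preds = logits.argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
        accuracy = correct / max(total, 1)
        if accuracy >= preset_accuracy:        # division accuracy meets preset accuracy
            break
    return model
```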
S23: and performing feature extraction on the image data to obtain a virtual map of the region to be fused, and performing three-dimensional reconstruction on the region to be fused based on the image data and the virtual map to obtain a virtual model of the region to be fused.
After image data are obtained, feature extraction is respectively carried out on the image data to obtain feature points of the image data; then, carrying out feature point matching on the feature points of each image data to obtain the corresponding relation between the feature points of each image data; therefore, triangularization is performed on the feature points based on the pose and the corresponding relation of each image data to construct a virtual map.
In a specific application scenario, a SIFT feature extraction operator may be used to extract SIFT feature points from each piece of image data, the extracted feature points are then matched against each other, and after mismatches are eliminated, the corresponding relationship between the feature points of the pieces of image data is obtained. The feature points are triangularized based on the pose and the corresponding relationship of each piece of image data to initially construct the three-dimensional virtual map of the region to be fused.
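For illustration, a sketch of the SIFT extraction, ratio-test matching and triangulation steps using OpenCV, assuming the camera intrinsics K and the poses (R, t) of the two pieces of image data are already known from the acquisition platform; it is only one possible realization of the step described above, not the patent's exact pipeline.

```python
import cv2
import numpy as np

def triangulate_pair(img1, img2, K, R1, t1, R2, t2):
    """Extract SIFT features, match them with a ratio test to remove
    mismatches, and triangulate the correspondences into 3D map points."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]           # Lowe ratio test

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Projection matrices built from the known poses of the two images
    P1 = K @ np.hstack([R1, t1.reshape(3, 1)])
    P2 = K @ np.hstack([R2, t2.reshape(3, 1)])
    X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)  # 4xN homogeneous points
    return (X_h[:3] / X_h[3]).T                          # Nx3 virtual-map points
```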
After the virtual map is obtained, depth estimation is carried out on the image data and the virtual map to generate an image depth map, the image depth map is fused into point cloud, and a virtual model is generated based on the point cloud. In a specific application scene, depth estimation is carried out on image data based on the pose of the image data to generate an image depth map, then the image depth map is fused into point cloud, and a virtual model is generated based on the point cloud. In a specific application scenario, when all image data have two poses of a ground AR vehicle and an aerial unmanned aerial vehicle, depth estimation is performed respectively based on image data acquired by the ground AR vehicle and the aerial unmanned aerial vehicle respectively to obtain two image depth maps corresponding to the two poses of the ground AR vehicle and the aerial unmanned aerial vehicle. And then respectively fusing the two image depth maps into two point clouds, and generating a virtual model based on the two point clouds. In practical application, the image data can have a plurality of poses, and the application scene does not limit the poses of the image data.
Depth estimation may be performed using a method such as COLMAP or OpenMVS.
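Assuming a pinhole camera model with intrinsics K and a known camera-to-world pose, the fusion of one estimated image depth map into a point cloud could be sketched as below; the depth map itself would be produced by a tool such as COLMAP or OpenMVS, and this back-projection is an illustrative simplification.

```python
import numpy as np

def depth_map_to_point_cloud(depth, K, R_cw, t_cw):
    """Back-project an image depth map of shape (H, W) into a point cloud
    in world coordinates, given intrinsics K and a camera-to-world pose."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))  # pixel coordinates
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - K[0, 2]) * z / K[0, 0]
    y = (v[valid] - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z], axis=1)           # Nx3 in the camera frame
    return pts_cam @ R_cw.T + t_cw                  # Nx3 in the world frame
```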
In a specific application scenario, depth estimation may be performed on the image data and the virtual map of at least two poses respectively to generate at least two image depth maps, and the at least two image depth maps are correspondingly fused into at least two point clouds. The at least two point clouds are then integrated to obtain a dense point cloud of the region to be fused, and finally a virtual model including the dense point cloud and a mesh is generated based on the dense point cloud. Specifically, whether each point is used as a mesh vertex can be judged according to two indicators, the visibility and the reprojection error of each point in the dense point cloud, so that Delaunay tetrahedra are constructed and a virtual model including the dense point cloud and the mesh is obtained.
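A sketch, under simplifying assumptions, of integrating the per-pose point clouds into a dense point cloud and constructing the Delaunay tetrahedra mentioned above; the visibility and reprojection-error test is reduced here to a single per-point error threshold, and scipy's Delaunay tetrahedralization stands in for the full mesh construction.

```python
import numpy as np
from scipy.spatial import Delaunay

def build_mesh(point_clouds, reproj_errors, max_error=2.0):
    """Integrate per-pose point clouds into a dense point cloud and
    construct a Delaunay tetrahedralization over the kept vertices."""
    dense = np.vstack(point_clouds)            # dense point cloud of the region
    errors = np.concatenate(reproj_errors)     # one reprojection error per point
    vertices = dense[errors < max_error]       # keep points with small error as mesh vertices
    tetra = Delaunay(vertices)                 # Delaunay tetrahedra over the 3D points
    return vertices, tetra.simplices           # mesh vertices and tetrahedron indices
```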
S24: and determining the corresponding position information of each target area in the virtual model based on the coordinate information of each target area and the target area on the image data and the pose of the image data.
After the virtual model of the region to be fused is obtained, the corresponding position information of each target region in the virtual model is determined based on each target region, the coordinate information of each target region on the image data and the pose of the image data.
In a specific application scenario, the matching relationship of each same target area on the image data corresponding to at least two poses can be determined based on the coordinate information of each target area on the image data corresponding to at least two poses, the three-dimensional position information of each target area corresponding to the virtual model is determined based on the matching relationship, and virtual content is correspondingly added to each target area in the virtual model based on the three-dimensional position information.
In another specific application scenario, a matching relationship between three-dimensional feature points on the virtual model and two-dimensional feature points on the image data can be established through 2D-3D re-projection, and the corresponding three-dimensional position information of each target area in the virtual model is determined based on the coordinate information of the matching relationship and the image data corresponding to each target area in at least two poses.
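One way such a 2D-3D association could look, sketched under the assumption that the virtual model is available as a point cloud and the divided target region as a binary image mask: project the model points into the image with the known pose, keep those whose projections fall inside the mask, and take their centroid as the region's three-dimensional position. The function below is illustrative, not the patent's implementation.

```python
import numpy as np

def region_position_in_model(model_points, mask, K, R_wc, t_wc):
    """Project virtual-model points into an image (world-to-camera pose
    R_wc, t_wc) and return the 3D centroid of the points whose
    projections fall inside the target region's 2D mask."""
    pts_cam = model_points @ R_wc.T + t_wc
    in_front = pts_cam[:, 2] > 0
    pts_cam = pts_cam[in_front]
    proj = (K @ pts_cam.T).T
    u = (proj[:, 0] / proj[:, 2]).astype(int)
    v = (proj[:, 1] / proj[:, 2]).astype(int)
    H, W = mask.shape
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    u, v = u[inside], v[inside]
    hits = mask[v, u] > 0                       # projections landing inside the region mask
    selected = model_points[in_front][inside][hits]
    return selected.mean(axis=0) if len(selected) else None   # 3D position information
```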
S25: and correspondingly adding virtual content in each target area in the virtual model based on the position information so as to perform virtual-real fusion on each target area of the area to be fused.
After the position information of each target area corresponding to the virtual model is acquired, virtual content is correspondingly added to each target area in the virtual model based on the position information, and therefore the virtual-real fusion of each target area of the area to be fused is completed. Specifically, a three-dimensional virtual special effect may be correspondingly added to each target region in the virtual model based on the three-dimensional position information.
In a specific application scenario, the center point of the three-dimensional virtual special effect may be aligned with the target point of each target area in the corresponding three-dimensional position information in the virtual model, so that the three-dimensional virtual special effect is added to the target point of each target area in the virtual model.
After the center point of the three-dimensional virtual special effect is aligned with the target point of each target area, the size and the direction of the three-dimensional virtual special effect can be adjusted, so that the target object and the target point can be aligned in six degrees of freedom. The six degrees of freedom refer to the degrees of freedom of movement in the directions of three orthogonal coordinate axes of x, y and z and the degrees of freedom of rotation around the three coordinate axes.
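A sketch of the alignment step: a 4x4 model matrix that moves the centre of the three-dimensional virtual special effect to the target point and then adjusts its rotation and scale, covering the six degrees of freedom described above; the Euler-angle convention and uniform scale parameter are illustrative assumptions.

```python
import numpy as np

def effect_model_matrix(target_point, yaw=0.0, pitch=0.0, roll=0.0, scale=1.0):
    """Build a 4x4 transform placing a virtual special effect at target_point
    with rotations about z (yaw), y (pitch), x (roll) and a uniform scale."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    M = np.eye(4)
    M[:3, :3] = (Rz @ Ry @ Rx) * scale      # rotation plus uniform scale
    M[:3, 3] = target_point                 # centre aligned with the target point
    return M
```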
Through the above manner, the method for realizing virtual-real fusion of this embodiment divides the region to be fused in the image data through the neural network to obtain each target area of the region to be fused and the coordinate information of the target area on the image data, performs feature extraction on the image data to obtain the virtual map of the region to be fused, and performs three-dimensional reconstruction on the region to be fused based on the image data and the virtual map to obtain the virtual model of the region to be fused. The position information corresponding to each target area in the virtual model is then determined based on each target area, its coordinate information on the image data and the pose of the image data, and virtual content is correspondingly added to each target area in the virtual model based on the position information, so as to perform virtual-real fusion on each target area of the region to be fused. In this way, the virtual content can be accurately placed in the corresponding area based on the position information, the virtual-real fusion result is more accurate, and the efficiency of the virtual-real fusion process is improved.
Referring to fig. 3, fig. 3 is a schematic flowchart illustrating a method for implementing virtual-real fusion according to another embodiment of the present application.
S31: and acquiring image data of the region to be fused.
The region to be fused of this embodiment is a region including the horizon, a sky region and a ground region. The image data of the region to be fused may be obtained by a global satellite inertial navigation system, by a vehicle-mounted camera on a ground AR vehicle, or by an unmanned aerial vehicle performing aerial photography of the region to be fused; it may also be obtained by any other photographing equipment. In this way, multi-pose, multi-angle image data of the region to be fused can be obtained, which facilitates the subsequent construction of the region to be fused based on the image data and improves the construction precision of the virtual model.
S32: and dividing the region to be fused in the image data to obtain the horizon, the sky region and the ground region of the region to be fused.
And dividing the region to be fused in each image data to obtain the horizon, the sky region and the ground region of the region to be fused.
S33: and constructing a virtual model of the region to be fused based on the image data.
This step is the same as step S13 of the previous embodiment, please refer to the foregoing, and will not be described herein again.
S34: and determining corresponding position information of the horizon, the sky area and the ground area in the virtual model.
After a virtual model of the region to be fused is established, the corresponding position information of the horizon, the sky region and the ground region of the region to be fused in the virtual model is determined. In a specific application scenario, the corresponding position information of the horizon, the sky region, and the ground region in the virtual model may be determined based on the positions of the horizon, the sky region, and the ground region in each image data. In another specific application scenario, the division of the horizon, the sky region and the ground region in the virtual model by an artificial method may also be accepted, so as to determine the corresponding position information of the horizon, the sky region and the ground region in the virtual model. In another specific application scenario, the horizon, the sky region and the ground region in the virtual model can be determined through a trained deep neural network capable of dividing three-dimensional data.
S35: and correspondingly adding virtual contents in the horizon, the sky area and the ground area in the virtual model based on the position information so as to perform virtual-real fusion on the horizon, the sky area and the ground area of the area to be fused.
Correspondingly adding virtual contents in the horizon, the sky area and the ground area in the virtual model based on the position information corresponding to the horizon, the sky area and the ground area in the virtual model so as to perform virtual-real fusion on each target area of the area to be fused.
In a specific application scenario, an AR orbit special effect may be added to a ground area in the virtual model, and an AR bird special effect may be added to a sky area. Specifically, the addition of the virtual content is not limited herein.
After the virtual model is built, the augmented reality traffic device can acquire new image data of the area to be fused in real time in the moving process; and dividing the new image data to obtain the horizon, the sky area and the ground area of the new image data, and finally correspondingly adding virtual contents in the horizon, the sky area and the ground area respectively based on the virtual model. The method for dividing each region of the new image data is the same as the foregoing embodiment, please refer to the foregoing text, and will not be described herein again.
In a specific application scenario, the augmented reality traffic device acquires new image data of the region to be fused in real time through a camera or sensor while moving. The new image data is then divided to obtain the horizon, the sky region, the ground region and their coordinate information in the new image data. Finally, virtual content is correspondingly added to the horizon, the sky region and the ground region based on the position information of each region in the virtual model and the divided coordinate information of each region in the new image data. In this way, real-time virtual-real fusion of the newly acquired image data can be achieved while the augmented reality traffic device moves, further improving the display effect of the virtual-real fusion of the region to be fused.
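A high-level sketch of the real-time loop on the augmented reality traffic device; every helper passed in (segment_frame, localize_in_virtual_map, render_effects) is a hypothetical placeholder standing for the steps described above, not an API of the patent or of any particular library.

```python
import cv2

def run_realtime_fusion(camera_index, virtual_model, region_effects,
                        segment_frame, localize_in_virtual_map, render_effects):
    """Per-frame loop: capture new image data, divide it into horizon /
    sky / ground regions, localize against the virtual model, and
    overlay the virtual content of each region."""
    cap = cv2.VideoCapture(camera_index)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        masks = segment_frame(frame)                     # e.g. {"sky": mask, "ground": mask, ...}
        pose = localize_in_virtual_map(frame, virtual_model)
        fused = render_effects(frame, masks, pose, region_effects)
        cv2.imshow("virtual-real fusion", fused)
        if cv2.waitKey(1) == 27:                         # press Esc to quit
            break
    cap.release()
```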
The augmented reality traffic device may include vehicles, boats, aircraft and other means of transport equipped with an AR camera or related AR technology, so that people aboard the augmented reality traffic device can experience virtual-real fusion in real time as the device moves.
In this way, the method for realizing virtual-real fusion enables the virtual content to be accurately placed in the corresponding horizon, sky area and ground area based on the position information, makes the virtual-real fusion result more accurate, improves the efficiency of the virtual-real fusion process, and realizes real-time virtual-real fusion of the region to be fused based on the augmented reality traffic device.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an embodiment of the apparatus for implementing virtual-real fusion according to the present application, and the apparatus 40 for implementing virtual-real fusion according to the present embodiment includes an obtaining module 41, a dividing module 42, a constructing module 43, a determining module 44, and a fusing module 45.
The obtaining module 41 is configured to obtain image data of a region to be fused; the dividing module 42 is configured to divide the region to be fused in the image data to obtain each target region of the region to be fused; the construction module 43 is configured to construct a virtual model of the region to be fused based on the image data; the determining module 44 is configured to determine corresponding position information of each target area in the virtual model; the fusion module 45 is configured to add virtual content to each target region in the virtual model based on the position information, so as to perform virtual-real fusion on each target region of the region to be fused.
According to the scheme, each target area of the area to be fused is obtained first, and then the corresponding position information of each target area in the virtual model is determined, so that the virtual content can be accurately placed in the corresponding area based on the position information, the virtual-real fusion result is more accurate, and the efficiency of the process of realizing the virtual-real fusion is improved.
In some embodiments, the building module 43 performs feature extraction on the image data to obtain a virtual map of the region to be fused; and performing three-dimensional reconstruction on the region to be fused based on the image data and the virtual map to obtain a virtual model of the region to be fused.
Different from the embodiment, the virtual model of the region to be fused is constructed based on the image data of the region to be fused, so that the precision and the accuracy of the virtual model are improved, and the efficiency and the precision of the virtual model construction can be further improved by constructing the virtual map and then performing three-dimensional reconstruction based on the virtual map and the image data.
In some embodiments, the construction module 43 performs feature extraction on a plurality of pieces of image data to obtain feature points of each piece of image data; carrying out feature point matching on the feature points of each image data to obtain the corresponding relation between the feature points of each image data; triangularization is performed on the feature points based on the pose and the corresponding relation of each image data to construct a virtual map.
Different from the embodiment, feature point matching is performed based on the feature points of each image data to obtain the corresponding relationship between the feature points of each image data, and triangularization is performed based on the pose and the corresponding relationship of the image data to construct a virtual map, so that the construction of a three-dimensional virtual map can be realized, the similarity between the virtual map and a real area to be fused is improved, and the construction of a subsequent virtual model is facilitated.
In some embodiments, the construction module 43 performs depth estimation on the image data and the virtual map, generates an image depth map, and fuses the image depth map into a point cloud; a virtual model is generated based on the point cloud.
Different from the embodiment, the image depth map is generated by performing depth estimation on the image data and the virtual map, the image depth map is fused into the point cloud, and finally the virtual model is generated based on the point cloud, so that the precision and the stereoscopy of the virtual model can be improved through the point cloud, and the virtual model is closer to a real region to be fused.
In some embodiments, the constructing module 43 performs depth estimation on the image data of the at least two poses and the virtual map, respectively, generates at least two image depth maps, and correspondingly fuses the at least two image depth maps into at least two point clouds, respectively; integrating at least two point clouds to obtain dense point clouds of a region to be fused; a virtual model is generated based on the dense point cloud that includes the dense point cloud and the mesh.
Different from the embodiment, the method comprises the steps of generating point clouds of at least two poses, fusing the point clouds to obtain dense point clouds, and generating a virtual model comprising the dense point clouds and a grid on the basis of the dense point clouds to further improve the precision and the characteristic quantity of the virtual model, so that the similarity between the virtual model and a real region to be fused is improved, and the occurrence of inaccurate virtual-real fusion is reduced.
In some embodiments, the dividing module 42 divides the region to be fused in the image data through a neural network to obtain each target region of the region to be fused and coordinate information of the target region on the image data; and determining the corresponding position information of each target area in the virtual model based on each target area, the coordinate information of each target area on the image data and the position and orientation of the image data.
Different from the embodiment, the image data is divided through the neural network to obtain the target area and the coordinate information of the target area on the image data, and the corresponding position information of each target area in the virtual model is determined based on the coordinate information and the pose of the image data, so that the accuracy of the corresponding position information of each target area in the virtual model can be improved based on accurate coordinate information, and the phenomenon of fusion dislocation is reduced.
In some embodiments, the image data includes at least two poses, and the determining module 44 determines a matching relationship of each same target region on the image data corresponding to the at least two poses based on the coordinate information of each target region on the image data corresponding to the at least two poses; determining corresponding three-dimensional position information of each target area in the virtual model based on the matching relation; and correspondingly adding virtual content in each target area in the virtual model based on the three-dimensional position information.
Different from the foregoing embodiment, the matching relationship of each same target area on the image data corresponding to at least two poses is determined based on the coordinate information of each target area on the image data corresponding to at least two poses; and determining the corresponding three-dimensional position information of each target area in the virtual model based on the matching relation, so that virtual content can be correspondingly added to each target area in the virtual model based on the three-dimensional position information, and the accurate division of each target area on the virtual model is further improved.
In some embodiments, the fusion module 45 adds a three-dimensional virtual special effect to each target region in the virtual model based on the three-dimensional position information.
Different from the foregoing embodiment, the three-dimensional virtual special effect is correspondingly added to each target region in the virtual model based on the three-dimensional position information, so as to improve the accuracy of three-dimensional virtual feature alignment.
In some embodiments, before acquiring the image data of the region to be fused, the method includes: training the initial network through a training sample; and stopping training to obtain the neural network when the accuracy of the initial network for dividing each target area of the training sample meets the preset accuracy.
Different from the embodiment, before the virtual-real fusion is performed, the neural network is obtained by training the initial network through the training sample, so that the accuracy of dividing each target area in the image data by the neural network can be improved, and the workload of manual division can be reduced.
Through the above manner, the virtual-real fusion device divides the region to be fused in the image data through the neural network to obtain each target area of the region to be fused, constructs the virtual model of the region to be fused based on the image data, determines the position information corresponding to each target area in the virtual model, and finally correspondingly adds virtual content to each target area in the virtual model based on the position information, so as to perform virtual-real fusion on each target area of the region to be fused. In this way, each target area of the region to be fused can be obtained through the neural network and its position information in the virtual model determined, so that the virtual content can be accurately placed in the corresponding area based on the position information, the virtual-real fusion result is more accurate, and the efficiency of the virtual-real fusion process is improved.
Referring to fig. 5, fig. 5 is a schematic structural diagram of another embodiment of the apparatus for implementing virtual-real fusion according to the present application, and the apparatus 50 for implementing virtual-real fusion according to the present embodiment includes an obtaining module 51, a dividing module 52, a constructing module 53, a determining module 54, and a fusing module 55.
The obtaining module 51 is configured to obtain image data of a region to be fused; the dividing module 52 is configured to divide the region to be fused in the image data to obtain a horizon, a sky region, and a ground region of the region to be fused; the construction module 53 is configured to construct a virtual model of the region to be fused based on the image data; the determining module 54 is configured to determine corresponding location information of the horizon, the sky region, and the ground region in the virtual model; the fusion module 55 is configured to add virtual content to the horizon, the sky area, and the ground area in the virtual model based on the position information, so as to perform virtual-real fusion on the horizon, the sky area, and the ground area of the area to be fused.
By the scheme, the virtual content can be accurately added into the corresponding horizon, sky and ground areas based on the position information, so that the virtual-real fusion result is more accurate, and the efficiency of the virtual-real fusion process is improved.
In some embodiments, correspondingly adding virtual content to the horizon, the sky region, and the ground region in the virtual model based on the position information to perform virtual-real fusion on the horizon, the sky region, and the ground region of the region to be fused, includes: acquiring new image data of a region to be fused in real time in the moving process of the augmented reality traffic device; dividing the new image data to obtain the horizon, the sky area and the ground area of the new image data; and correspondingly adding virtual contents on the horizon, the sky area and the ground area respectively based on the virtual model.
Different from the embodiment, the augmented reality traffic device divides new image data in real-time movement, and then virtual fusion is performed, so that each region of a region to be fused can be virtually fused in real time in the movement process of the augmented reality traffic device, and real-time virtual fusion of multiple scenes such as traffic and movement is realized.
Based on the same inventive concept, the present application further provides an electronic device that can execute the method for implementing virtual-real fusion of any of the above embodiments. Referring to fig. 6, fig. 6 is a schematic structural diagram of an embodiment of the electronic device provided in the present application. The electronic device includes a memory 62 and a processor 61 coupled to each other, and the processor 61 is configured to execute program data stored in the memory 62 to implement the steps of any of the above embodiments of the method for implementing virtual-real fusion. In one specific implementation scenario, the electronic device may include, but is not limited to, a mobile device such as a notebook computer or a tablet computer, which is not limited herein.
In particular, the processor 61 is configured to control itself and the memory 62 to implement the steps of any of the above embodiments of the method for implementing virtual-real fusion. The processor 61 may also be referred to as a CPU (Central Processing Unit). The processor 61 may be an integrated circuit chip with signal processing capability. The processor 61 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. In addition, the processor 61 may be jointly implemented by a plurality of integrated circuit chips.
With this scheme, virtual-real fusion of each target area in the area to be fused can be realized quickly and accurately.
Referring to fig. 7, fig. 7 is a block diagram illustrating an embodiment of a computer readable storage medium according to the present application. The computer readable storage medium 70 stores program data 71 capable of being executed by the processor, the program data 71 being used to implement the steps of any of the above-described method embodiments for implementing virtual-real fusion.
With this scheme, virtual-real fusion of each target area in the area to be fused can be realized quickly and accurately.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division into modules or units is only a logical functional division, and other divisions are possible in an actual implementation; for instance, units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description is only an embodiment of the present application and is not intended to limit the scope of the present application. Any equivalent structure or equivalent process transformation made by using the contents of the specification and drawings of the present application, or any direct or indirect application thereof in other related technical fields, shall likewise be included in the protection scope of the present application.

Claims (12)

1. A method for realizing virtual-real fusion is characterized by comprising the following steps:
acquiring image data of a region to be fused;
dividing the region to be fused in the image data to obtain each target region of the region to be fused;
constructing a virtual model of the region to be fused based on the image data;
determining corresponding position information of each target area in the virtual model;
correspondingly adding virtual content in each target area in the virtual model based on the position information so as to perform virtual-real fusion on each target area of the area to be fused;
wherein the constructing a virtual model of the region to be fused based on the image data comprises:
extracting the features of the image data to obtain a virtual map of the region to be fused; the image data includes at least two poses;
respectively carrying out depth estimation on the image data of at least two poses and the virtual map to generate at least two image depth maps, and correspondingly fusing the at least two image depth maps into at least two point clouds;
integrating at least two point clouds to obtain dense point clouds of the region to be fused;
generating the virtual model based on visibility of points in the dense point cloud and reprojection errors.
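For readers who want to see the point-cloud side of this construction in code, the following numpy sketch back-projects per-pose depth maps into point clouds and merges them into a dense cloud. It assumes the depth maps have already been estimated and that the camera intrinsics K and camera-to-world poses are known; the depth-estimation and visibility/reprojection filtering steps of the claim are not shown, and all names are illustrative.

```python
import numpy as np

def depth_to_points(depth: np.ndarray, K: np.ndarray, cam_to_world: np.ndarray) -> np.ndarray:
    """Back-project an H x W depth map into an N x 3 point cloud in world coordinates."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - K[0, 2]) * z / K[0, 0]
    y = (v[valid] - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=0)   # 4 x N homogeneous points
    pts_world = cam_to_world @ pts_cam
    return pts_world[:3].T

def merge_dense_cloud(depths, intrinsics, poses) -> np.ndarray:
    """Integrate the per-pose point clouds into one dense cloud of the region to be fused."""
    clouds = [depth_to_points(d, K, T) for d, K, T in zip(depths, intrinsics, poses)]
    return np.concatenate(clouds, axis=0)
```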
2. The method for implementing virtual-real fusion according to claim 1, wherein the extracting the features of the image data to obtain the virtual map of the region to be fused includes:
performing feature extraction on the plurality of pieces of image data to obtain feature points of each piece of image data;
carrying out feature point matching on the feature points of each image data to obtain the corresponding relation between the feature points of each image data;
triangulating the feature points based on the poses of the image data and the corresponding relations to construct the virtual map.
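As a non-authoritative illustration of these three steps, the sketch below uses OpenCV ORB features, brute-force matching, and two-view triangulation to build a sparse map from two posed images; the intrinsics K and the world-to-camera pose matrices are assumed inputs, and this is only one of many possible realizations of the claimed steps.

```python
import cv2
import numpy as np

def build_sparse_map(img1, img2, K, world_to_cam1, world_to_cam2):
    # Feature extraction: feature points and descriptors of each image.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    # Feature point matching: correspondence between the two sets of feature points.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches]).T   # 2 x N
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches]).T
    # Triangulation based on the poses and the correspondence.
    P1 = K @ world_to_cam1[:3, :]
    P2 = K @ world_to_cam2[:3, :]
    pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)
    return (pts4d[:3] / pts4d[3]).T                              # N x 3 sparse map points
```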
3. The method according to claim 1, wherein the dividing the region to be fused in the image data to obtain each target region of the region to be fused comprises:
dividing the region to be fused in the image data through a neural network to obtain each target region of the region to be fused and coordinate information of the target region on the image data;
the determining the position information corresponding to each target area in the virtual model includes:
and determining the corresponding position information of each target area in the virtual model based on each target area and the coordinate information of the target area on the image data, and the pose of the image data.
4. The method of claim 3, wherein the image data includes at least two poses,
the determining the corresponding position information of each target area in the virtual model based on each target area and the coordinate information of the target area on the image data, and the pose of the image data comprises:
determining the matching relation of each same target area on the image data corresponding to the at least two poses based on the coordinate information of each target area on the image data corresponding to the at least two poses;
determining corresponding three-dimensional position information of each target area in the virtual model based on the matching relation;
correspondingly adding virtual content in each target area in the virtual model based on the position information comprises the following steps:
and correspondingly adding virtual content in each target area in the virtual model based on the three-dimensional position information.
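One way (among others) to picture claim 4 is to match the same region across two posed views by name, take its 2-D centroid in each view from the divided coordinates, and triangulate that pair into a 3-D anchor where the virtual content can be placed. The helpers below are a hypothetical sketch under those assumptions, not the claimed method itself.

```python
import cv2
import numpy as np

def region_anchor_3d(centroid1_xy, centroid2_xy, K, world_to_cam1, world_to_cam2):
    """Triangulate one region centroid observed in two posed images into the virtual model."""
    P1 = K @ world_to_cam1[:3, :]
    P2 = K @ world_to_cam2[:3, :]
    p1 = np.asarray(centroid1_xy, dtype=np.float64).reshape(2, 1)
    p2 = np.asarray(centroid2_xy, dtype=np.float64).reshape(2, 1)
    pt4d = cv2.triangulatePoints(P1, P2, p1, p2)
    return (pt4d[:3] / pt4d[3]).ravel()            # three-dimensional position information

def anchors_for_regions(regions_view1, regions_view2, K, T1, T2):
    """regions_view*: dict mapping region name -> (cx, cy) centroid in that view."""
    return {
        name: region_anchor_3d(c1, regions_view2[name], K, T1, T2)
        for name, c1 in regions_view1.items()
        if name in regions_view2                   # matching relation between the same regions
    }
```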
5. The method for implementing virtual-real fusion as claimed in claim 4, wherein the adding virtual content in each target region in the virtual model based on the three-dimensional position information includes:
and correspondingly adding a three-dimensional virtual special effect in each target area in the virtual model based on the three-dimensional position information.
6. The method for implementing virtual-real fusion according to any one of claims 3-5, wherein before acquiring the image data of the region to be fused, the method comprises:
training the initial network through a training sample;
and stopping training to obtain the neural network when the accuracy of the initial network for dividing each target area of the training sample meets the preset accuracy.
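A hedged PyTorch-style reading of claim 6 is sketched below: keep training the initial segmentation network on the training samples, measure how accurately it divides the target regions on held-out samples, and stop once a preset accuracy is met. The network, data loaders, and the 0.95 threshold are assumptions for illustration only.

```python
import torch
import torch.nn as nn

def train_until_accurate(net: nn.Module, train_loader, val_loader,
                         preset_accuracy: float = 0.95, max_epochs: int = 100) -> nn.Module:
    optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()              # per-pixel target-region labels
    for _ in range(max_epochs):
        net.train()
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(net(images), labels)
            loss.backward()
            optimizer.step()
        # Accuracy of dividing each target region on the validation samples.
        net.eval()
        correct, total = 0, 0
        with torch.no_grad():
            for images, labels in val_loader:
                pred = net(images).argmax(dim=1)
                correct += (pred == labels).sum().item()
                total += labels.numel()
        if total and correct / total >= preset_accuracy:
            break                                  # preset accuracy met: stop training
    return net
```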
7. A method for realizing virtual-real fusion is characterized by comprising the following steps:
acquiring image data of a region to be fused;
dividing the region to be fused in the image data to obtain the horizon, the sky region and the ground region of the region to be fused;
constructing a virtual model of the region to be fused based on the image data;
determining corresponding position information of the horizon, the sky area and the ground area in the virtual model;
correspondingly adding virtual contents to the horizon, the sky area and the ground area in the virtual model based on the position information so as to perform virtual-real fusion on the horizon, the sky area and the ground area of the area to be fused;
wherein the constructing a virtual model of the region to be fused based on the image data comprises:
extracting the features of the image data to obtain a virtual map of the region to be fused; the image data includes at least two poses;
respectively carrying out depth estimation on the image data of at least two poses and the virtual map to generate at least two image depth maps, and correspondingly fusing the at least two image depth maps into at least two point clouds;
integrating at least two point clouds to obtain dense point clouds of the region to be fused;
generating the virtual model based on visibility of points in the dense point cloud and reprojection errors.
8. The method of claim 7, wherein the adding virtual content to the horizon, the sky area and the ground area in the virtual model based on the position information to virtually fuse the horizon, the sky area and the ground area of the area to be fused comprises:
the augmented reality traffic device acquires new image data of a region to be fused in real time in the moving process;
dividing the new image data to obtain a horizon, a sky area and a ground area of the new image data;
and correspondingly adding virtual contents in the horizon, the sky area and the ground area respectively based on the virtual model.
9. A device for realizing virtual-real fusion is characterized by comprising an acquisition module, a division module, a construction module, a determination module and a fusion module;
the acquisition module is used for acquiring image data of a region to be fused;
the dividing module is used for dividing the region to be fused in the image data to obtain each target region of the region to be fused;
the construction module is used for constructing a virtual model of the region to be fused based on the image data; wherein the constructing a virtual model of the region to be fused based on the image data comprises: extracting the features of the image data to obtain a virtual map of an area to be fused; the image data includes at least two poses; respectively carrying out depth estimation on the image data of at least two poses and the virtual map to generate at least two image depth maps, and correspondingly fusing the at least two image depth maps into at least two point clouds; integrating at least two point clouds to obtain dense point clouds of the region to be fused; generating the virtual model based on visibility of each point in the dense point cloud and a reprojection error;
the determining module is used for determining the corresponding position information of each target area in the virtual model;
the fusion module is used for correspondingly adding virtual content in each target area in the virtual model based on the position information so as to perform virtual-real fusion on each target area of the area to be fused.
10. A device for realizing virtual-real fusion is characterized by comprising an acquisition module, a division module, a construction module, a determination module and a fusion module;
the acquisition module is used for acquiring image data of a region to be fused;
the dividing module is used for dividing the region to be fused in the image data to obtain the horizon, the sky region and the ground region of the region to be fused;
the construction module is used for constructing a virtual model of the region to be fused based on the image data; wherein the constructing a virtual model of the region to be fused based on the image data comprises: extracting the characteristics of the image data to obtain a virtual map of the area to be fused; the image data includes at least two poses; respectively carrying out depth estimation on the image data of at least two poses and the virtual map to generate at least two image depth maps, and correspondingly fusing the at least two image depth maps into at least two point clouds; integrating at least two point clouds to obtain dense point clouds of the region to be fused; generating the virtual model based on visibility of each point in the dense point cloud and a reprojection error;
the determining module is used for determining corresponding position information of the horizon, the sky area and the ground area in the virtual model;
the fusion module is used for correspondingly adding virtual contents on the basis of the position information in the horizon, the sky area and the ground area in the virtual model so as to perform virtual-real fusion on the horizon, the sky area and the ground area of the area to be fused.
11. An electronic device, comprising a memory and a processor coupled to each other, wherein the processor is configured to execute program data stored in the memory to implement the method for implementing virtual-real fusion according to any one of claims 1 to 6 or the method for implementing virtual-real fusion according to claims 7 to 8.
12. A computer-readable storage medium, characterized in that the computer-readable storage medium stores program data that can be executed to implement the method of implementing virtual-real fusion of any one of claims 1 to 6 or the method of implementing virtual-real fusion of claims 7-8.
CN202110763864.9A 2021-07-06 2021-07-06 Method, device, electronic equipment and storage medium for realizing virtual-real fusion Active CN113409473B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110763864.9A CN113409473B (en) 2021-07-06 2021-07-06 Method, device, electronic equipment and storage medium for realizing virtual-real fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110763864.9A CN113409473B (en) 2021-07-06 2021-07-06 Method, device, electronic equipment and storage medium for realizing virtual-real fusion

Publications (2)

Publication Number Publication Date
CN113409473A CN113409473A (en) 2021-09-17
CN113409473B true CN113409473B (en) 2023-03-03

Family

ID=77685194

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110763864.9A Active CN113409473B (en) 2021-07-06 2021-07-06 Method, device, electronic equipment and storage medium for realizing virtual-real fusion

Country Status (1)

Country Link
CN (1) CN113409473B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116883627B (en) * 2023-06-19 2024-04-19 中铁第四勘察设计院集团有限公司 Unmanned aerial vehicle video augmented reality processing method and system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109146954A (en) * 2017-06-19 2019-01-04 苹果公司 Augmented reality interface for being interacted with shown map
CN108335365A (en) * 2018-02-01 2018-07-27 张涛 A kind of image-guided virtual reality fusion processing method and processing device
CN111724485A (en) * 2020-06-11 2020-09-29 浙江商汤科技开发有限公司 Method, device, electronic equipment and storage medium for realizing virtual-real fusion
CN112927349A (en) * 2021-02-22 2021-06-08 北京市商汤科技开发有限公司 Three-dimensional virtual special effect generation method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN113409473A (en) 2021-09-17

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant