CN115861576A - Method, system, equipment and medium for realizing augmented reality of live-action image


Info

Publication number
CN115861576A
CN115861576A
Authority
CN
China
Prior art keywords
live-action image
building
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211585242.2A
Other languages
Chinese (zh)
Inventor
孙建龙
丁丁
苏燕莹
陈开麟
王书龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Cubespace Technology Co ltd
Original Assignee
Shenzhen Cubespace Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Cubespace Technology Co ltd filed Critical Shenzhen Cubespace Technology Co ltd
Priority to CN202211585242.2A
Publication of CN115861576A
Legal status: Pending

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method, a system, equipment and a medium for realizing augmented reality of live-action images, relating to the technical field of augmented reality image recognition. The method comprises the following steps: acquiring a live-action image of a target area according to a set travelling route; determining the position coordinates of each building in the live-action image; acquiring the different pre-stored building images and the corresponding building names; matching and identifying the live-action image with the different building images to obtain a matching result; determining the name of each building in the live-action image according to the matching result and the stored building names, to obtain the matched building names; adding the matched building names and the position coordinates of each building into the live-action image, to obtain an added live-action image of the target area; and displaying the added live-action image by adopting a reality augmentation technology. The invention can form realistic images and improve the user experience.

Description

Method, system, equipment and medium for realizing augmented reality of live-action image
Technical Field
The present invention relates to the field of augmented reality image recognition technologies, and in particular, to a method, a system, a device, and a medium for implementing augmented reality of live-action images.
Background
In recent years, with the rapid development of virtual reality technologies, their applications in various fields have kept expanding. On the basis of three-dimensional design results, virtual simulation technology can be combined with a Building Information Model (BIM); the interactive and immersive characteristics of VR simulation provide a brand-new environment for engineering design, construction and management, realizing true what-you-see-is-what-you-get and real-time interaction. With this technology, designs can be displayed more intuitively and stereoscopically, schemes can be optimized more precisely, and operations can be guided more accurately.
In the existing technical scheme, the established project buildings and markers are presented through three-dimensional animation display, 3D structural modeling and actual three-dimensional structure fabrication, and recognition, display and interaction are essentially performed on the three-dimensional scene after it has been built.
The prior technical scheme has the following defects: the rendered animation differs from the real mountains, wind turbines and environment, so the result does not feel real to people and resembles an animated film; when a more refined and more intuitive animation effect is required, engineers generally have to perform more comprehensive and more detailed modeling, which greatly increases the modeling workload; in addition, people are placed in a closed VR animation environment, so the sense of real experience is weak.
Disclosure of Invention
The invention aims to provide a method, a system, equipment and a medium for realizing augmented reality of live-action images, which can form realistic images and improve the user experience.
In order to achieve the purpose, the invention provides the following scheme:
a method of implementing augmented reality of live-action images, the method comprising:
acquiring a live-action image of a target area according to a set travelling route;
determining the position coordinates of each building in the live-action image;
acquiring different pre-stored building images and corresponding building names;
matching and identifying the live-action image and different building images to obtain a matching result;
determining the names of buildings in the live-action image according to the matching result and the stored building names to obtain the matched building names;
adding the matched building name and the position coordinates of each building into the live-action image to obtain an added live-action image of the target area;
and displaying the added live-action image by adopting a reality augmentation technology.
Optionally, the matching and identifying the live-action image and different building images to obtain a matching result specifically includes:
setting a plurality of identification points at edge positions of the building outline, for the buildings in the live-action image and the buildings in the different building images; the plurality of identification points are not all collinear;
carrying out scaling processing on the live-action image to obtain a processed live-action image;
comparing the identification-point information in the processed live-action image with the identification-point information in the different building images; if the comparison result is within the set error range, the matching is successful, and the matching result is obtained; the identification-point information includes: the distance between any two identification points and the included angle formed by the lines connecting any three identification points.
Optionally, the method for determining the set travel route includes:
acquiring pre-stored position coordinates of the buildings in the target area;
and drawing the set travelling route according to the position coordinates of each building in the target area.
Optionally, the method for determining the set travel route further includes:
determining a starting point and an end point of the target area;
determining an initial travel route according to the starting point and the end point;
and determining the route passing through all buildings in the target area in the initial traveling route as the set traveling route.
A system for implementing augmented reality of live-action images, the system comprising:
the live-action image acquisition module is used for acquiring a live-action image of the target area according to the set travelling route;
the position coordinate determination module is used for determining the position coordinates of each building in the live-action image;
the data acquisition module is used for acquiring different pre-stored building images and corresponding building names;
the matching module is used for matching and identifying the live-action image and different building images to obtain a matching result;
the name matching module is used for determining the names of all buildings in the live-action image according to the matching result and the stored building names to obtain matched building names;
the adding module is used for adding the matched building names and the position coordinates of all buildings in the live-action image into the live-action image to obtain an added live-action image of the target area;
and the display module is used for displaying the added live-action image by adopting a reality augmentation technology.
Optionally, the matching module comprises:
the first processing submodule is used for setting a plurality of identification points at edge positions of the building outline, for the building in the live-action image and the buildings in the different building images; the plurality of identification points are not all collinear;
the scaling processing submodule is used for scaling the live-action image to obtain a processed live-action image;
the comparison sub-module is used for comparing the identification-point information in the processed live-action image with the identification-point information in the different building images; if the comparison result is within the set error range, the matching is successful, and the matching result is obtained; the identification-point information includes: the distance between any two identification points and the included angle formed by the lines connecting any three identification points.
An electronic device comprises a memory and a processor, wherein the memory is used for storing a computer program, and the processor runs the computer program to enable the electronic device to execute the method for realizing the augmented reality of the live-action image.
A computer-readable storage medium storing a computer program which, when executed by a processor, implements the method of implementing augmented reality of live-action images.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the embodiment of the invention provides a method, a system, equipment and a medium for realizing augmented reality of live-action images, wherein the live-action images of a target area are obtained according to a set travelling route, and the position coordinates of buildings in the live-action images are determined; then matching and identifying the live-action image and the acquired different pre-stored building images to obtain a matching result; determining the name of each building in the live-action image according to the matching result and the stored building name to obtain a matched building name; adding the matched building name and the position coordinates of each building in the live-action image into the live-action image to obtain an added live-action image of the target area; finally, displaying the added live-action image by adopting a reality augmentation technology; because the interaction can be carried out on the real environment information, namely the live-action image, the finally observed image sense is more real, and therefore, the real image can be formed, and the experience is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
Fig. 1 is a flowchart of a method for implementing augmented reality of live-action images according to an embodiment of the present invention;
fig. 2 is a structural diagram of a system for implementing augmented reality of live-action images according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of travel route determination factors provided by an embodiment of the present invention;
fig. 4 is a schematic diagram illustrating a method for determining a travel route according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of another method for determining a travel route according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating an information string of a travel route according to an embodiment of the present invention;
FIG. 7 is a schematic illustration of non-collinear object identification points on a building provided by an embodiment of the present invention;
FIG. 8 is a schematic diagram of equal-proportion image deformation according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of deformation in the transverse direction according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of deformation in the longitudinal direction provided by an embodiment of the present invention;
FIG. 11 is a schematic view of a deformation at a particular axial angle provided by an embodiment of the present invention;
FIG. 12 is a schematic diagram of deformation with different scaling ratios between identification points according to an embodiment of the present invention;
FIG. 13 is a schematic diagram of an image to be labeled in practice according to an embodiment of the present invention;
fig. 14 is a schematic image diagram after augmented reality object identification according to an embodiment of the present invention.
Description of the symbols:
the system comprises a live-action image acquisition module-1, a position coordinate determination module-2, a data acquisition module-3, a matching module-4, a name matching module-5, an adding module-6 and a display module-7.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, it is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a method, a system, equipment and a medium for realizing augmented reality of live-action images, which can form realistic images and improve the user experience.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Example 1
As shown in fig. 1, an embodiment of the present invention provides a method for implementing augmented reality of live-action images, where the method includes:
step 100: and acquiring the live-action image of the target area according to the set travelling route.
In one embodiment, a method for determining a set travel route includes:
acquiring the pre-stored position coordinates of the buildings in the target area; and drawing the set travelling route according to the position coordinates of each building in the target area.
In another embodiment, the method for determining the set travel route may further include:
determining a starting point and an end point of the target area; determining an initial traveling route according to the starting point and the end point; and determining the route passing through all buildings in the target area in the initial traveling route as the set traveling route.
In practical application, an unmanned aerial vehicle can be used to acquire the live-action image of the target area. Specifically, the travelling route of the unmanned aerial vehicle is formed from the position coordinate information of two or more position points, and images are acquired along it, as shown in fig. 3. A position coordinate consists of longitude, latitude and height. The unmanned aerial vehicle climbs to the specified height at the determined longitude and latitude, and then patrols forward along the travelling route point by point.
The image-acquisition path can be formed in two ways:
path formation method 1: the traveling route can be formed by coordinates above two position points in fig. 3, such as the east longitude 113.583423 and the north latitude 38.722114 as the position of a; the coordinates are not only plane two-dimensional coordinates, but are determined by two parameters, and because a real-scene image needs to be acquired, image information of another dimension, namely the height position, is needed. Such as a set height H of 34 meters.
When the longitude and latitude coordinates are set, the sixth position after the decimal point is required to be reached, so that the two-dimensional position precision can reach 10 centimeters; if the position is accurate to the fifth position after a decimal point, the position accuracy can be ensured to be within the range of 1 meter, and the accuracy is too coarse.
The parameter information of each point A, B, ..., N is input into the equipment of the unmanned aerial vehicle, so that the unmanned aerial vehicle patrols forward point by point and collects images.
Fig. 4 shows a travelling route formed by setting the position points; the specific route is laid out according to the actual environment and the form of the buildings.
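As an illustration of this waypoint format, the following Python sketch represents the travelling route as a list of (longitude, latitude, height) points rounded to six decimal places. Only point A's coordinates come from the example above; the second waypoint and the Waypoint structure itself are assumptions for illustration, not part of the original disclosure.

```python
from typing import NamedTuple

class Waypoint(NamedTuple):
    """One position point of the travelling route (a hypothetical structure)."""
    lon: float  # longitude in degrees; six decimal places ~ 10 cm precision
    lat: float  # latitude in degrees
    alt: float  # set height H in meters

def to_six_places(wp: Waypoint) -> Waypoint:
    """Round longitude/latitude to six decimal places, per the precision rule above."""
    return Waypoint(round(wp.lon, 6), round(wp.lat, 6), wp.alt)

# Point A uses the coordinates from the example (E113.583423, N38.722114, H = 34 m);
# the remaining points B..N are placeholders to be filled in for a real route.
route = [to_six_places(wp) for wp in [
    Waypoint(lon=113.583423, lat=38.722114, alt=34.0),  # A
    Waypoint(lon=113.584000, lat=38.723000, alt=34.0),  # B (assumed)
]]
```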
Path formation method 2: a drone pilot actually flies the unmanned aerial vehicle along the route once, the data string of position points recorded during the flight is saved, and this data string is afterwards input to realize image acquisition under the unmanned aerial vehicle's automatic cruise. Fig. 5 shows the travelling route in this case; the information string of the travelling route formed by this method takes the form of three columns of three-dimensional data, see fig. 6. In addition, the position step length of the data can be chosen according to the required data-string size, so as to achieve an optimal storage size.
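A minimal sketch of the step-length idea, assuming the recorded data string is simply a list of (longitude, latitude, height) rows: keeping every k-th sample trades positional density for storage size.

```python
def decimate(samples, step=5):
    """Thin the recorded three-column data string by a position step length.

    `samples` is assumed to be a list of (lon, lat, alt) rows recorded during
    the piloted flight; a larger `step` yields a smaller stored data string.
    """
    if step < 1:
        raise ValueError("step must be a positive integer")
    return samples[::step]

# Usage (hypothetical recorded path):
# cruise_route = decimate(recorded_samples, step=5)
```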
Step 200: determining the position coordinates of each building in the live-action image.
Step 300: acquiring the different pre-stored building images and the corresponding building names.
Step 400: matching and identifying the live-action image with the different building images to obtain a matching result.
Step 500: determining the names of all buildings in the live-action image according to the matching result and the stored building names to obtain the matched building names.
Wherein, step 400 specifically includes:
setting a plurality of identification points at the edge positions of the outline of the building for the building in the live-action image and the buildings in different building images; the plurality of identification points are not all collinear.
Scaling processing is carried out on the live-action image to obtain a processed live-action image.
The identification-point information in the processed live-action image is compared with the identification-point information in the different building images; if the comparison result is within the set error range, the matching is successful and the matching result is obtained. The identification-point information includes: the distance between any two identification points and the included angle formed by the lines connecting any three identification points.
The comparison result being within the set error range means that the difference between the distance between any two identification points in the live-action image and the distance between the corresponding two identification points in the stored building image is within the set distance error range, and the difference between the included angle formed by the lines connecting any three identification points in the live-action image and the included angle formed by the lines connecting the corresponding three identification points in the stored building image is within the set included-angle error range.
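The distance-and-angle comparison can be sketched as follows. The point lists are assumed to be the identification points as (x, y) pixel tuples in corresponding order, and the pixel and degree tolerances are illustrative values, not taken from the disclosure.

```python
import math
from itertools import combinations

def pair_distances(points):
    """Distance between every pair of identification points."""
    return {(i, j): math.dist(points[i], points[j])
            for i, j in combinations(range(len(points)), 2)}

def triple_angles(points):
    """Included angle (degrees) at vertex j for every point triple (i, j, k)."""
    angles = {}
    for i, j, k in combinations(range(len(points)), 3):
        ax, ay = points[i][0] - points[j][0], points[i][1] - points[j][1]
        bx, by = points[k][0] - points[j][0], points[k][1] - points[j][1]
        cos_t = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
        angles[(i, j, k)] = math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))
    return angles

def is_match(scene_pts, stored_pts, dist_tol=5.0, angle_tol=3.0):
    """True when every pairwise distance and every included angle agrees
    within the set error ranges (tolerances here are illustrative)."""
    d1, d2 = pair_distances(scene_pts), pair_distances(stored_pts)
    a1, a2 = triple_angles(scene_pts), triple_angles(stored_pts)
    return (all(abs(d1[k] - d2[k]) <= dist_tol for k in d1) and
            all(abs(a1[k] - a2[k]) <= angle_tol for k in a1))
```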
Step 600: adding the matched building names and the position coordinates of each building into the live-action image to obtain the added live-action image of the target area.
In short, in the process of generating the image, the actual landmark objects need to be identified and their object information labeled, so that automatic object identification and prompting can be realized in the AR display.
In the actual image-acquisition process, the marker objects are captured at different angles, and the object image deforms differently at different positions. It must be ensured that the landmark object can still be accurately identified, and its augmented-reality information labeled, throughout these changes.
Four non-collinear object identification points are determined on the building to be identified in the image, such as the marked points in fig. 7. The four identification points are determined by two principles: first, the four identification points are non-collinear; second, the four identification points are selected at edge positions of the outline structure of the landmark infrastructure building, namely the building, so that they represent the outline characteristics of the marker. The four identification points are evenly distributed on the building in the acquired live-action image.
Then, the image of the building is scaled horizontally, scaled vertically, scaled in equal proportion, or scaled along a specific axis to obtain scaled images, and the positions of the identification points in each scaled image are acquired.
All pictures or images are composed of an array of pixels; for example, 1280 x 720 pixels constitute an image. When scaling vertically, there are still 1280 pixels horizontally, but in the vertical direction the 720 pixels may, for example, be reduced to 600 pixels. The information of the original 720 pixels is still carried by the 600 pixels; only the density of the information is slightly reduced, and the characteristics of the image are preserved. Because of the scaling, the corresponding distances between marker points may become larger or smaller. Since the information is only compressed, a scaled marker point still lies on the marker position; the difference is that its coordinate position, and the corresponding distances, have changed.
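The effect on a marker point's coordinates can be written down directly; the sketch below assumes the 1280x720 to 1280x600 vertical scaling from the example.

```python
def scale_point(x, y, src=(1280, 720), dst=(1280, 600)):
    """Map a marker point from the original pixel grid to the scaled grid.

    Vertical-only scaling as in the 1280x720 -> 1280x600 example above: the
    x coordinate is unchanged and the y coordinate is compressed, while the
    marker stays on the same image feature because the information is only
    re-sampled, not removed.
    """
    return x * dst[0] / src[0], y * dst[1] / src[1]

# a marker at (640, 360) in the 1280x720 frame lands at (640, 300) after scaling
print(scale_point(640, 360))  # (640.0, 300.0)
```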
As shown in figs. 8-11, taking the letter A as an example, the general form changes include the following: on the left is the original object, and on the right is the deformed object as actually captured in the image. The deformations include equal-proportion deformation (fig. 8), horizontal deformation (fig. 9), vertical deformation (fig. 10), and deformation at a specific axial angle (fig. 11). In the actual augmented-reality labeling process, the object information should be labeled correctly in all these cases.
The position of each scaled identification point is compared with the positions of the four identification points in the original image; when the positions coincide, the images are judged to be the same object, and the augmented-reality information is labeled. Labeling information means that prompt text, the name identifying the object, or parameters such as a high-voltage rating appear on the specific object, for example at its upper right.
The method for matching the marker-point positions before and after scaling is as follows: the comparison covers the relative distance between any two points and the included angle formed by the lines connecting any three points; when the relative distances between identification points and the included angles of their connecting lines are within a certain percentage range of those of the original image's identification points, the scaled identification points are considered to match the identification points before scaling.
The four selected position points are a, b, c and d shown in fig. 12; assume that the scaling ratio at a and b is K2 and the scaling ratio at c and d is K1. When the scaling ratios differ, a specific image deformation is produced. The actual image-recognition process matches the marker-point positions of the original image through an inverse transformation of the deformation. When the identification-point positions match, the objects are regarded as the same object, and augmented-reality object identification is carried out. Fig. 13 shows an image to be labeled, and fig. 14 shows the image after augmented-reality object identification.
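One possible reading of the inverse-transformation step, as a sketch: estimate per-axis scaling ratios from the coordinate spans of the corresponding identification points, then undo them before comparing against the original marker layout. The span-based estimate below is an assumption for illustration, not the disclosure's prescribed computation.

```python
def estimate_axis_scales(orig_pts, scaled_pts):
    """Estimate horizontal/vertical scaling ratios (the K values) from the
    coordinate spans of corresponding identification points (x, y tuples)."""
    ox, oy = [p[0] for p in orig_pts], [p[1] for p in orig_pts]
    sx, sy = [p[0] for p in scaled_pts], [p[1] for p in scaled_pts]
    kx = (max(sx) - min(sx)) / (max(ox) - min(ox))
    ky = (max(sy) - min(sy)) / (max(oy) - min(oy))
    return kx, ky

def undo_scaling(scaled_pts, kx, ky):
    """Inverse transformation: map the scaled points back so they can be
    matched against the original marker positions within a percentage tolerance."""
    return [(x / kx, y / ky) for x, y in scaled_pts]
```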
In addition, corresponding scene information can be generated from the recognition of specific feature objects, such as a fire-alarm demonstration or the labeling of other electric-power parameters as augmented-reality information.
Step 700: displaying the added live-action image by adopting a reality augmentation technology.
After all the objects to be identified, namely all the buildings in the live-action image, have been identified, a vivid publicity image suitable for large-scale infrastructure is obtained.
Displaying the added live-action image by adopting a reality augmentation technology means, in short, having the far-end camera reproduce the images along the set travelling route. The difference here is that, when the captured live-action images were formed, the live-action material was recorded on the storage medium. When real-time interaction with the large-scale infrastructure scene is needed, the far-end camera is made to run again along the actual path; the real-time image data are collected through the tracking channel and synchronously transmitted to the AR end, accompanied by the identification of infrastructure objects and the labeling of augmented-reality information.
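A minimal sketch of that replay loop, assuming OpenCV for capture; `label_buildings` and `send_to_ar` are hypothetical callables standing in for the matching step and the transmission to the AR end.

```python
import cv2  # OpenCV capture; an assumption, the disclosure names no library

def replay_and_stream(camera_index, label_buildings, send_to_ar):
    """Re-run the set route, label recognised buildings on each frame, and
    push the annotated frames to the AR end in real time."""
    cap = cv2.VideoCapture(camera_index)
    try:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            annotated = label_buildings(frame)  # match + overlay names/coordinates
            send_to_ar(annotated)               # synchronous push to the AR end
    finally:
        cap.release()
```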
In addition, the invention supports real-time observation, monitoring the real-time condition of the infrastructure markers. For example, in practical application, when a wind turbine fails, the real-time state of the infrastructure scene can be observed through the AR terminal.
Example 2
As shown in fig. 2, an embodiment of the present invention provides a system for implementing augmented reality of live-action images, including: the system comprises a live-action image acquisition module 1, a position coordinate determination module 2, a data acquisition module 3, a matching module 4, a name matching module 5, an adding module 6 and a display module 7.
The live-action image acquisition module 1 is used for acquiring a live-action image of the target area according to the set travelling route.
The position coordinate determination module 2 is used for determining the position coordinates of each building in the live-action image.
The data acquisition module 3 is used for acquiring the different pre-stored building images and the corresponding building names.
The matching module 4 is used for matching and identifying the live-action image with the different building images to obtain a matching result.
The name matching module 5 is used for determining the names of all buildings in the live-action image according to the matching result and the stored building names to obtain the matched building names.
The adding module 6 is used for adding the matched building names and the position coordinates of each building into the live-action image to obtain the added live-action image of the target area.
The display module 7 is used for displaying the added live-action image by adopting a reality augmentation technology.
Wherein, the matching module includes: the device comprises a first processing submodule, a scaling processing submodule and a comparison submodule.
The first processing submodule is used for setting a plurality of identification points at the edge position of the outline of the building for the building in the live-action image and the building in different building images; the plurality of identification points are not all collinear.
The scaling processing submodule is used for scaling the live-action image to obtain a processed live-action image.
The comparison submodule is used for comparing the identification-point information in the processed live-action image with the identification-point information in the different building images; if the comparison result is within the set error range, the matching is successful and the matching result is obtained. The identification-point information includes: the distance between any two identification points and the included angle formed by the lines connecting any three identification points.
Example 3
The embodiment of the invention provides electronic equipment, which comprises a memory and a processor, wherein the memory is used for storing a computer program, and the processor runs the computer program so as to enable the electronic equipment to execute any one method for realizing augmented reality of live-action images in the embodiment 1.
Alternatively, the electronic device may be a server.
In addition, an embodiment of the present invention further provides a computer-readable storage medium, in which a computer program is stored, and when the computer program is executed by a processor, the method for implementing augmented reality of live-action images in embodiment 1 is implemented.
The method and the system can vividly display large-scale infrastructure and support interaction with the real environment. Unlike traditional images formed by virtual modeling, they let a user wearing an AR terminal experience impressive infrastructure achievements intuitively and in real time.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (8)

1. A method for realizing augmented reality of live-action images is characterized by comprising the following steps:
acquiring a live-action image of a target area according to a set travelling route;
determining the position coordinates of each building in the live-action image;
acquiring different pre-stored building images and corresponding building names;
matching and identifying the live-action image and different building images to obtain a matching result;
determining the names of all buildings in the live-action image according to the matching result and the stored building names to obtain the matched building names;
adding the matched building name and the position coordinates of each building into the live-action image to obtain an added live-action image of the target area;
and displaying the added live-action image by adopting a reality augmentation technology.
2. The method for realizing augmented reality of live-action images according to claim 1, wherein the matching and identifying the live-action images with different building images to obtain matching results specifically comprises:
setting a plurality of identification points at edge positions of the building outline, for the buildings in the live-action image and the buildings in the different building images; the plurality of identification points are not all collinear;
carrying out scaling processing on the live-action image to obtain a processed live-action image;
comparing the identification-point information in the processed live-action image with the identification-point information in the different building images; if the comparison result is within the set error range, the matching is successful, and the matching result is obtained; the identification-point information includes: the distance between any two identification points and the included angle formed by the lines connecting any three identification points.
3. The method for realizing augmented reality of live-action images according to claim 1, wherein the method for determining the set travel route comprises:
acquiring pre-stored position coordinates of buildings in the target area;
and drawing the set travelling route according to the position coordinates of each building in the target area.
4. The method for realizing augmented reality of live-action images according to claim 1, wherein the method for determining the set travel route further comprises:
determining a starting point and an end point of the target area;
determining an initial travel route according to the starting point and the end point;
and determining the route passing through all buildings in the target area in the initial traveling route as the set traveling route.
5. A system for implementing augmented reality of live-action images, the system comprising:
the live-action image acquisition module is used for acquiring a live-action image of the target area according to the set travelling route;
the position coordinate determination module is used for determining the position coordinates of each building in the live-action image;
the data acquisition module is used for acquiring different pre-stored building images and corresponding building names;
the matching module is used for matching and identifying the live-action image and different building images to obtain a matching result;
the name matching module is used for determining the names of all buildings in the live-action image according to the matching result and the stored building names to obtain the matched building names;
the adding module is used for adding the matched building names and the position coordinates of all buildings in the live-action image into the live-action image to obtain an added live-action image of the target area;
and the display module is used for displaying the added live-action image by adopting a reality augmentation technology.
6. The system for realizing augmented reality of live-action images according to claim 5, wherein the matching module comprises:
the first processing sub-module is used for setting a plurality of identification points at edge positions of the building outline, for the building in the live-action image and the buildings in the different building images; the plurality of identification points are not all collinear;
the scaling processing submodule is used for scaling the live-action image to obtain a processed live-action image;
the comparison submodule is used for comparing the identification-point information in the processed live-action image with the identification-point information in the different building images; if the comparison result is within the set error range, the matching is successful, and the matching result is obtained; the identification-point information includes: the distance between any two identification points and the included angle formed by the lines connecting any three identification points.
7. An electronic device, comprising a memory for storing a computer program and a processor for executing the computer program to make the electronic device execute the method for implementing augmented reality of live-action images according to any one of claims 1 to 4.
8. A computer-readable storage medium, storing a computer program, which when executed by a processor implements the method for implementing augmented reality of live-action images according to any one of claims 1 to 4.
CN202211585242.2A, filed 2022-12-09 (priority 2022-12-09): Method, system, equipment and medium for realizing augmented reality of live-action image. Status: Pending. Publication: CN115861576A.

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202211585242.2A | 2022-12-09 | 2022-12-09 | Method, system, equipment and medium for realizing augmented reality of live-action image

Publications (1)

Publication Number Publication Date
CN115861576A | 2023-03-28

Family

ID=85671915

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202211585242.2A (CN115861576A, Pending) | Method, system, equipment and medium for realizing augmented reality of live-action image | 2022-12-09 | 2022-12-09

Country Status (1)

Country Link
CN (1) CN115861576A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101770582A (en) * 2008-12-26 2010-07-07 鸿富锦精密工业(深圳)有限公司 Image matching system and method
CN102411778A (en) * 2011-07-28 2012-04-11 武汉大学 Automatic registration method of airborne laser point cloud and aerial image
CN107220726A (en) * 2017-04-26 2017-09-29 消检通(深圳)科技有限公司 Fire-fighting equipment localization method, mobile terminal and system based on augmented reality
CN110083720A (en) * 2019-04-03 2019-08-02 泰瑞数创科技(北京)有限公司 The construction method and device of outdoor scene semantic structure model
CN111510701A (en) * 2020-04-22 2020-08-07 Oppo广东移动通信有限公司 Virtual content display method and device, electronic equipment and computer readable medium
CN113178006A (en) * 2021-04-25 2021-07-27 深圳市慧鲤科技有限公司 Navigation map generation method and device, computer equipment and storage medium
CN114332648A (en) * 2022-03-07 2022-04-12 荣耀终端有限公司 Position identification method and electronic equipment
CN114491743A (en) * 2022-01-12 2022-05-13 武汉大学 Satellite image building height estimation method using roof contour matching


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination