CN109840951A - Method and device for performing augmented reality on a planar map - Google Patents
Method and device for performing augmented reality on a planar map Download PDF Info
- Publication number
- CN109840951A CN109840951A CN201811619174.0A CN201811619174A CN109840951A CN 109840951 A CN109840951 A CN 109840951A CN 201811619174 A CN201811619174 A CN 201811619174A CN 109840951 A CN109840951 A CN 109840951A
- Authority
- CN
- China
- Prior art keywords
- map
- augmented reality
- plane
- constituent element
- planar map
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 230000003190 augmentative effect Effects 0.000 title claims abstract description 67
- 238000000034 method Methods 0.000 title claims abstract description 28
- 238000004590 computer program Methods 0.000 claims description 13
- 238000000605 extraction Methods 0.000 claims description 4
- 239000000284 extract Substances 0.000 claims description 2
- 230000002708 enhancing effect Effects 0.000 description 6
- 239000011159 matrix material Substances 0.000 description 6
- 238000006243 chemical reaction Methods 0.000 description 5
- 230000009466 transformation Effects 0.000 description 5
- 238000010586 diagram Methods 0.000 description 4
- 238000012937 correction Methods 0.000 description 2
- 238000013500 data storage Methods 0.000 description 2
- 238000012545 processing Methods 0.000 description 2
- 238000011160 research Methods 0.000 description 2
- 230000011218 segmentation Effects 0.000 description 2
- 238000013519 translation Methods 0.000 description 2
- 230000005540 biological transmission Effects 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 238000001514 detection method Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000018109 developmental process Effects 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 230000004927 fusion Effects 0.000 description 1
- 238000009499 grossing Methods 0.000 description 1
- 238000005286 illumination Methods 0.000 description 1
- 239000000203 mixture Substances 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 230000007704 transition Effects 0.000 description 1
- 238000012800 visualization Methods 0.000 description 1
Landscapes
- Processing Or Creating Images (AREA)
Abstract
The present invention provides a method and device for performing augmented reality on a planar map. The method includes: capturing, in real time, a video image of the planar map; identifying, in the captured video image, preset markers associated with the map constituent elements in the planar map; obtaining, based on the identified preset markers, three-dimensional virtual models and semantic description information corresponding to the map constituent elements in the planar map; and tracking and registering the three-dimensional virtual models and semantic description information corresponding to the map constituent elements into the captured video image, so as to construct a virtual three-dimensional space map of the planar map within the captured video image.
Description
Technical field
This application relates to the field of cartographic information display, and in particular to a method and device for performing augmented reality on a planar map.
Background art
With the rapid development of augmented reality research and the continuous deepening of map and geographic information applications, augmented reality has become a research hotspot, and attempts are being made to apply it to the display enhancement and geographic information visualization of planar maps. However, current augmented reality applications for planar maps still face many problems: for example, the three-dimensional virtual models used for augmentation are relatively simple, lack corresponding semantic descriptions, and are not effectively fused into the real scene.
Therefore, there is an urgent need for a method and device for performing augmented reality on a planar map that solves the above problems and improves the expressiveness and recognizability of each map constituent element in a planar map.
Summary of the invention
An object of the present invention is to provide a method and device for performing augmented reality on a planar map.
According to one aspect of the present invention, a method for performing augmented reality on a planar map is provided. The method includes: capturing, in real time, a video image of the planar map; identifying, in the captured video image, preset markers associated with the map constituent elements in the planar map; obtaining, based on the identified preset markers, three-dimensional virtual models and semantic description information corresponding to the map constituent elements in the planar map; and tracking and registering the three-dimensional virtual models and semantic description information into the captured video image, so as to construct a virtual three-dimensional space map of the planar map within the captured video image.
Preferably, the preset marker is a two-dimensional code carrying augmented reality information corresponding to a map constituent element in the planar map.
Preferably, the coding region of the two-dimensional code includes a first coding region and a second coding region, wherein the first coding region records the coding of the augmented reality information corresponding to the map constituent element in the planar map, and the second coding region records the coding of the semantic description information corresponding to the map constituent element in the planar map.
Preferably, the step of identifying, in the captured video image, the preset markers associated with the map constituent elements in the planar map includes: decoding the first coding region of the two-dimensional code to obtain the augmented reality information corresponding to the map constituent element in the planar map, and, based on the obtained augmented reality information, extracting the three-dimensional virtual model corresponding to the map constituent element from a preset three-dimensional virtual model database, the database storing each piece of augmented reality information together with its corresponding three-dimensional virtual model; and decoding the second coding region of the two-dimensional code to obtain the semantic description information corresponding to the map constituent element in the planar map.
According to another aspect of the present invention, a device for performing augmented reality on a planar map is provided. The device includes: a video capture unit that captures, in real time, a video image of the planar map; a marker recognition unit that identifies, in the captured video image, preset markers associated with the map constituent elements in the planar map; a model acquisition unit that obtains, based on the identified preset markers, three-dimensional virtual models and semantic description information corresponding to the map constituent elements in the planar map; and an augmented reality unit that tracks and registers the three-dimensional virtual models and semantic description information into the captured video image, so as to construct a virtual three-dimensional space map of the planar map within the captured video image.
Preferably, the preset marker is a two-dimensional code carrying augmented reality information corresponding to a map constituent element in the planar map.
Preferably, the coding region of the two-dimensional code includes a first coding region and a second coding region, wherein the first coding region records the coding of the augmented reality information corresponding to the map constituent element in the planar map, and the second coding region records the coding of the semantic description information corresponding to the map constituent element in the planar map.
Preferably, the model acquisition unit includes: a model extraction subunit that decodes the first coding region of the two-dimensional code to obtain the augmented reality information corresponding to the map constituent element in the planar map and, based on the obtained augmented reality information, extracts the corresponding three-dimensional virtual model from a preset three-dimensional virtual model database, the database storing each piece of augmented reality information together with its corresponding three-dimensional virtual model; and a semantics acquisition subunit that decodes the second coding region of the two-dimensional code to obtain the semantic description information corresponding to the map constituent element in the planar map.
According to another aspect of the present invention, a computer-readable storage medium storing a computer program is provided. When executed by a processor, the computer program implements the method for performing augmented reality on a planar map as described above.
According to another aspect of the present invention, a computer device is provided. The computer device includes: a processor; and a memory storing a computer program which, when executed by the processor, implements the method for performing augmented reality on a planar map as described above.
The method and device provided by the present invention for performing augmented reality on a planar map can effectively improve the expressiveness and recognizability of each map constituent element in the planar map.
Brief description of the drawings
The objects and features of the present invention will become apparent from the following description taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flowchart showing a method for performing augmented reality on a planar map according to an exemplary embodiment of the present invention;
Fig. 2 is a structural block diagram showing a device for performing augmented reality on a planar map according to an exemplary embodiment of the present invention;
Fig. 3 is a schematic diagram showing a two-dimensional code (QR code) carrying augmented reality (AR) information corresponding to a map constituent element in a planar map, according to an exemplary embodiment of the present invention.
Detailed description of embodiments
The general inventive concept is to construct a virtual three-dimensional space map for a planar map using markers that contain augmented reality information and semantic description information related to the map constituent elements, thereby improving the expressiveness and recognizability of each map constituent element in the planar map.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Fig. 1 is a flowchart showing a method for performing augmented reality on a planar map according to an exemplary embodiment of the present invention.
In step 110, a video image of the planar map is captured in real time by a video capture unit (such as a camera).
Here, as an example, the planar map may be a traditional flat paper map, but it may also be any other map in a planar form that can be photographed.
Next, in step 120, preset markers associated with the map constituent elements in the planar map are identified in the captured video image.
In general, a planar map may contain various map constituent elements; as an example, a map constituent element here may be a landmark entity such as a park, building, restaurant, supermarket, or school.
In an alternative embodiment, a two-dimensional code carrying augmented reality information corresponding to a map constituent element in the planar map may be chosen as the preset marker associated with that element. In general, a two-dimensional code consists of two parts, a functional graphic and a coding region; in this embodiment, however, as shown in Fig. 3, the coding region of the two-dimensional code 300 may include a first coding region 301 and a second coding region 302, wherein the first coding region 301 may record the coding of the augmented reality information corresponding to the map constituent element in the planar map, and the second coding region 302 may record the coding of the corresponding semantic description information. The two-dimensional code 300 of this embodiment is introduced in more detail below in conjunction with Fig. 3.
Fig. 3 is a schematic diagram showing a two-dimensional code 300 carrying augmented reality information corresponding to a map constituent element in a planar map, according to an exemplary embodiment of the present invention.
In the example shown in Fig. 3, the coding region of the two-dimensional code 300 may include two regions: the first coding region 301, which records the coding of the augmented reality information corresponding to the map constituent element in the planar map, and the second coding region 302, which records the coding of the corresponding semantic description information. In a specific implementation, the first coding region 301 shown in Fig. 3 may be divided into a 7 × 7 grid, where a black cell represents 0 and a white cell represents 1, and the augmented reality information corresponding to the map constituent element may be encoded as binary data in the cells of the first coding region 301. Similarly, the second coding region 302 shown in Fig. 3 may be divided into a number of cells, where a black cell represents 0 and a white cell represents 1, and the semantic description information corresponding to the map constituent element may likewise be encoded as binary data in the cells of the second coding region 302.
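The cell layout described above can be illustrated with a small decoding sketch (an illustrative assumption, not the patent's actual bitstream format; the helper name `decode_region` and the row-major, MSB-first packing are choices made here):

```python
import numpy as np

def decode_region(cells: np.ndarray) -> bytes:
    """Decode a grid of code cells into bytes.

    `cells` is a 2D array sampled from the binarized code image:
    1/True for a white cell, 0/False for a black cell, matching the
    convention in the text (black = 0, white = 1). Cells are read row
    by row and packed into bytes, most significant bit first; any bits
    left over after the last whole byte are dropped.
    """
    bits = cells.astype(np.uint8).flatten()   # row-major bit stream
    usable = (len(bits) // 8) * 8             # keep whole bytes only
    return np.packbits(bits[:usable]).tobytes()

# Example: a 7 x 7 region carries 49 bits -> 6 whole bytes of payload.
region = np.zeros((7, 7), dtype=bool)
region[0, :] = True                           # first row all white
payload = decode_region(region)
```

A real reader would also need the sampling step that maps image pixels to grid cells; this sketch starts from the already-sampled grid.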
Here, the augmented reality information corresponding to a map constituent element in the planar map may include the identifier, name, type, style, spatial position coordinates, and so on, of the three-dimensional virtual model corresponding to that element. This information can be used to draw the three-dimensional virtual model in the captured video image.
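As a rough sketch, the decoded augmented reality information could be held in a record like the following (all field names and sample values here are hypothetical, chosen only to mirror the list above):

```python
from dataclasses import dataclass

@dataclass
class ARInfo:
    model_id: str          # identifier of the 3D virtual model
    name: str              # e.g. name of the landmark entity
    element_type: str      # e.g. "park", "building", "school"
    style: str             # pattern/style used when drawing the model
    position: tuple        # (x, y, z) spatial position coordinates

# A made-up landmark record for illustration only.
landmark = ARInfo("m-001", "Central Park", "park", "lowpoly", (12.5, 3.0, 0.0))
```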
Correspondingly, in the embodiment that uses the two-dimensional code 300 as the preset marker, the recognition of the preset marker in step 120 may include the following specific steps:
Step 1201: convert the captured video image from a color image to a grayscale image, and then from a grayscale image to a black-and-white image;
Step 1202: binarize the converted video image using methods such as Gaussian smoothing filtering and mean filtering for denoising, and then apply closing, erosion, and dilation operations to obtain a video image with square boundaries;
Step 1203: apply gradient correction (including angle correction and distortion correction) to the processed video image, so that position tracking and pattern detection can be performed on it;
Step 1204: segment the corrected video image, and record each segmented image as rectangle data;
Step 1205: extract the contours of the recorded rectangle data, to identify the images bearing the features of the two-dimensional code 300.
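Steps 1201–1202 above can be sketched in a few lines of NumPy (a minimal stand-in for the denoising and morphology a vision library would normally provide; the luma weights and the global threshold value are illustrative assumptions, not values from the patent):

```python
import numpy as np

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Step 1201: collapse an H x W x 3 color frame to grayscale
    using the common luma weights."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def binarize(gray: np.ndarray, threshold: float = 128.0) -> np.ndarray:
    """Step 1202 (simplified): global threshold to a black/white image.
    A real pipeline would first denoise (Gaussian/mean filtering) and
    then clean up the result with closing, erosion, and dilation."""
    return (gray >= threshold).astype(np.uint8)  # 1 = white, 0 = black

frame = np.zeros((4, 4, 3))
frame[0, 0] = [255, 255, 255]                    # a single white pixel
bw = binarize(to_grayscale(frame))
```

Contour extraction (step 1205) is omitted here; in practice it would be delegated to a library routine rather than written by hand.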
Next, in step 130, three-dimensional virtual models and semantic description information corresponding to the map constituent elements in the planar map are obtained based on the identified preset markers.
In the embodiment that uses the two-dimensional code 300 as the preset marker, the parsing of the preset marker in step 130 may include the following specific steps:
Step 1301: decode the first coding region 301 of the identified two-dimensional code 300 to obtain the augmented reality information corresponding to the map constituent element in the planar map, and, based on the obtained augmented reality information, extract the corresponding three-dimensional virtual model from a preset three-dimensional virtual model database, the database storing each piece of augmented reality information together with its corresponding three-dimensional virtual model;
Step 1302: decode the second coding region 302 of the identified two-dimensional code 300 to obtain the semantic description information corresponding to the map constituent element in the planar map.
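Step 1301's database lookup can be pictured with an in-memory stand-in (the keys, records, and helper below are invented for illustration; the patent does not specify the database schema):

```python
# Hypothetical stand-in for the preset 3D virtual model database:
# it maps a decoded augmented-reality key to a stored model record.
MODEL_DB = {
    "park-001": {"mesh": "park.obj", "scale": 1.0},
    "school-002": {"mesh": "school.obj", "scale": 0.8},
}

def lookup_model(ar_key: str) -> dict:
    """Fetch the 3D virtual model matching the augmented reality
    information decoded from the first coding region."""
    try:
        return MODEL_DB[ar_key]
    except KeyError:
        raise LookupError(f"no 3D model registered for AR info {ar_key!r}")

model = lookup_model("park-001")
```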
Next, in step 140, the three-dimensional virtual models and semantic description information corresponding to the map constituent elements in the planar map are tracked and registered into the captured video image, so as to construct a virtual three-dimensional space map of the planar map within the captured video image.
In a specific implementation, the three-dimensional virtual models corresponding to the map constituent elements in the planar map may be registered/aligned with the video image captured in real time by the video capture unit (also called three-dimensional tracking registration). In general, three-dimensional tracking registration of a virtual model mainly involves four coordinate systems: the real-space coordinate system, the virtual-space coordinate system, the camera coordinate system, and the image-plane coordinate system. The purpose of tracking registration is to unify these four coordinate systems, so that virtual objects are seamlessly fused with the real scene. Three-dimensional tracking registration based on computer vision determines, from one or several given video images, the relative position and orientation between the camera and each target in the real world.
In one example, an affine transformation may be used to implement the three-dimensional tracking registration. Let the real-space coordinates be [x, y, z, 1]ᵀ, the virtual-space coordinates be [x_v, y_v, z_v, 1]ᵀ, and the camera-space coordinates be [x′, y′, z′, 1]ᵀ. The real-space and virtual-space coordinate systems are related by the coordinate transformation shown in formula (1):
[x_v, y_v, z_v, 1]ᵀ = U4×4 · [x, y, z, 1]ᵀ (1)
The camera-space coordinate system is related to the real-space and virtual-space coordinate systems by the coordinate transformation shown in formula (2):
[x′, y′, z′, 1]ᵀ = V4×4 · [x_v, y_v, z_v, 1]ᵀ = V4×4 · U4×4 · [x, y, z, 1]ᵀ (2)
The coordinate transformation matrix U4×4 between the real-space and virtual-space coordinate systems is known, and the coordinate transformation matrix V4×4 between the camera-space and virtual-space coordinate systems can be solved from formula (3):
V4×4 = [ R3×3, T3×1 ; 0 1×3, 1 ] (3)
where R3×3 is the rotation matrix and T3×1 is the translation matrix, and each component parameter of R3×3 and T3×1 can be computed from the position, orientation, and attitude of the camera-space coordinate system relative to the three-dimensional virtual model. Formula (3) can therefore be further expanded into formula (4):
[x′, y′, z′, 1]ᵀ = [ R3×3, T3×1 ; 0 1×3, 1 ] · [x_v, y_v, z_v, 1]ᵀ (4)
Further, formula (5) can be derived from formulas (2), (3), and (4), determining the coordinate transformation between the camera-space coordinate system and the real-space coordinate system:
[x′, y′, z′, 1]ᵀ = [ R3×3, T3×1 ; 0 1×3, 1 ] · U4×4 · [x, y, z, 1]ᵀ (5)
Similarly, the coordinate transformation between the camera-space coordinate system and the virtual-space coordinate system can be determined.
Therefore, the three-dimensional tracking registration in step 140 can be realized by the formulas of the above affine transformation, thereby constructing the virtual three-dimensional space map of the planar map in the captured video image and fusing the three-dimensional virtual models with the real scene (that is, overlaying the three-dimensional virtual models and their information directly on the real scene).
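The transform chain of formulas (1)–(5) can be checked numerically with homogeneous coordinates (a sketch; the sample rotation, translation, and U matrix are made-up values, not calibration results):

```python
import numpy as np

def make_V(R: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Assemble the 4x4 camera transform V = [[R, T], [0, 1]]
    from a 3x3 rotation R and a 3x1 translation T (formula (3))."""
    V = np.eye(4)
    V[:3, :3] = R
    V[:3, 3] = T
    return V

# Sample values: rotate 90 degrees about z, translate by (1, 2, 3).
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
T = np.array([1.0, 2.0, 3.0])
U = np.eye(4)              # real -> virtual (identity for this check)
V = make_V(R, T)           # virtual -> camera

p_real = np.array([1.0, 0.0, 0.0, 1.0])   # homogeneous real-space point
p_cam = V @ U @ p_real                    # formula (5): camera-space point
```

Composing V with U in one matrix product is exactly what lets the four coordinate systems be "unified" as described above.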
In addition, when fusing a three-dimensional virtual model into the real scene, the occlusion relationship between the virtual model and the real scene, the plausibility of the display, and illumination consistency should also be considered; the display position, angle, and size of the virtual model are set accordingly to achieve the enhanced-expression effect.
Fig. 2 is a structural block diagram showing a device for performing augmented reality on a planar map according to an exemplary embodiment of the present invention.
Referring to Fig. 2, the device shown in Fig. 2 may include a video capture unit 210, a marker recognition unit 220, a model acquisition unit 230, and an augmented reality unit 240. Specifically, the video capture unit 210 (such as a camera) may capture, in real time, a video image of the planar map; the marker recognition unit 220 may identify, in the captured video image, preset markers associated with the map constituent elements in the planar map; the model acquisition unit 230 may obtain, based on the identified preset markers, three-dimensional virtual models and semantic description information corresponding to the map constituent elements in the planar map; and the augmented reality unit 240 may track and register the three-dimensional virtual models and semantic description information into the captured video image, so as to construct a virtual three-dimensional space map of the planar map within the captured video image.
As described above, in the device shown in Fig. 2, a two-dimensional code 300 carrying augmented reality information corresponding to a map constituent element in the planar map may be chosen as the preset marker associated with that element.
As described above, in the device shown in Fig. 2, the coding region of this two-dimensional code 300 may include the first coding region 301 and the second coding region 302, wherein the first coding region 301 records the coding of the augmented reality information corresponding to the map constituent element in the planar map, and the second coding region 302 records the coding of the corresponding semantic description information.
As described above, in the device shown in Fig. 2, the model acquisition unit 230 may include a model extraction subunit and a semantics acquisition subunit (not shown). Specifically, the model extraction subunit may decode the first coding region 301 of the above two-dimensional code 300 to obtain the augmented reality information corresponding to the map constituent element in the planar map and, based on the obtained augmented reality information, extract the corresponding three-dimensional virtual model from a preset three-dimensional virtual model database, the database storing each piece of augmented reality information together with its corresponding three-dimensional virtual model; the semantics acquisition subunit may decode the second coding region 302 of the above two-dimensional code 300 to obtain the semantic description information corresponding to the map constituent element in the planar map.
With the above implementation, not only can augmented reality be applied to the enhanced expression of urban maps, but the expressiveness and recognizability of each map constituent element of a planar map can also be improved. Moreover, this way of constructing a three-dimensional space map for a planar map, using markers that contain augmented reality information and semantic description information related to the map constituent elements, is not only computationally fast but also effectively meets the real-time requirements of augmented reality.
It should also be noted that, although the embodiments described here mainly take a two-dimensional code carrying augmentation identification information as an example to describe the process of fusing three-dimensional virtual objects for a planar map into the real scene, it should be understood that the embodiments described here may also use preset markers of other forms (for example, barcodes) to realize the above process of fusing three-dimensional virtual objects into the planar map in the real scene.
An exemplary embodiment of the present invention also provides a computer-readable storage medium storing a computer program. The computer-readable storage medium stores a computer program that, when executed by a processor, causes the processor to perform the method for performing augmented reality on a planar map according to the present invention. The computer-readable recording medium may be any data storage device that can store data readable by a computer system. Examples of the computer-readable recording medium include: read-only memory, random access memory, CD-ROM, magnetic tape, floppy disks, optical data storage devices, and carrier waves (such as data transmission over wired or wireless transmission paths via the Internet).
An exemplary embodiment of the present invention also provides a computer device. The computer device includes a processor and a memory. The memory stores a computer program. The computer program is executed by the processor so that the processor performs the method for performing augmented reality on a planar map according to the present invention.
Although the present application has been shown and described with reference to preferred embodiments, those skilled in the art will understand that various modifications and variations may be made to these embodiments without departing from the spirit and scope defined by the claims.
Claims (10)
1. A method for performing augmented reality on a planar map, characterized in that the method includes:
capturing, in real time, a video image of the planar map;
identifying, in the captured video image, preset markers associated with the map constituent elements in the planar map;
obtaining, based on the identified preset markers, three-dimensional virtual models and semantic description information corresponding to the map constituent elements in the planar map; and
tracking and registering the three-dimensional virtual models and semantic description information corresponding to the map constituent elements in the planar map into the captured video image, so as to construct a virtual three-dimensional space map of the planar map within the captured video image.
2. The method according to claim 1, characterized in that the preset marker is a two-dimensional code carrying augmented reality information corresponding to a map constituent element in the planar map.
3. The method according to claim 2, characterized in that the coding region of the two-dimensional code includes a first coding region and a second coding region, wherein the first coding region records the coding of the augmented reality information corresponding to the map constituent element in the planar map, and the second coding region records the coding of the semantic description information corresponding to the map constituent element in the planar map.
4. The method according to claim 3, characterized in that the step of identifying, in the captured video image, the preset markers associated with the map constituent elements in the planar map includes:
decoding the first coding region of the two-dimensional code to obtain the augmented reality information corresponding to the map constituent element in the planar map, and, based on the obtained augmented reality information, extracting the three-dimensional virtual model corresponding to the map constituent element from a preset three-dimensional virtual model database, the three-dimensional virtual model database storing each piece of augmented reality information and the three-dimensional virtual model corresponding to it; and
decoding the second coding region of the two-dimensional code to obtain the semantic description information corresponding to the map constituent element in the planar map.
5. A device for performing augmented reality on a planar map, characterized in that the device includes:
a video capture unit that captures, in real time, a video image of the planar map;
a marker recognition unit that identifies, in the captured video image, preset markers associated with the map constituent elements in the planar map;
a model acquisition unit that obtains, based on the identified preset markers, three-dimensional virtual models and semantic description information corresponding to the map constituent elements in the planar map; and
an augmented reality unit that tracks and registers the three-dimensional virtual models and semantic description information corresponding to the map constituent elements in the planar map into the captured video image, so as to construct a virtual three-dimensional space map of the planar map within the captured video image.
6. The device according to claim 5, characterized in that the preset marker is a two-dimensional code carrying augmented reality information corresponding to a map constituent element in the planar map.
7. The device according to claim 6, characterized in that the coding region of the two-dimensional code includes a first coding region and a second coding region, wherein the first coding region records the coding of the augmented reality information corresponding to the map constituent element in the planar map, and the second coding region records the coding of the semantic description information corresponding to the map constituent element in the planar map.
8. The device according to claim 7, wherein the model acquisition unit comprises:
a model extraction subunit, configured to decode the first coding region of the two-dimensional code to obtain the augmented reality information corresponding to the map constituent element in the plane map, and to extract, based on the obtained augmented reality information, the three-dimensional virtual model corresponding to the map constituent element in the plane map from a preset three-dimensional virtual model database, the three-dimensional virtual model database storing each item of augmented reality information together with the three-dimensional model corresponding to it; and
a semantics acquisition subunit, configured to decode the second coding region of the two-dimensional code to obtain the semantic description information corresponding to the map constituent element in the plane map.
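The two subunits of claim 8 can be sketched as lookups over the decoded regions. A minimal sketch assuming the regions have already been decoded into strings; the database contents and the `"semantic:"` code convention are made up for illustration.

```python
# Sketch of the claim-8 subunits. MODEL_DATABASE and the semantic-code
# convention are assumptions, not from the patent.

MODEL_DATABASE = {
    # augmented reality information code -> 3D virtual model (file name here)
    "AR:0042": "city_hall.obj",
}

def model_extraction_subunit(ar_info: str) -> str:
    """Extract the 3D virtual model keyed by the decoded AR information."""
    return MODEL_DATABASE[ar_info]

def semantics_acquisition_subunit(semantic_code: str) -> str:
    """Turn the second region's code into a readable description
    (assumed convention: strip the prefix, dashes become spaces)."""
    if semantic_code.startswith("semantic:"):
        semantic_code = semantic_code[len("semantic:"):]
    return semantic_code.replace("-", " ")
```

The claim only requires that the database map each item of augmented reality information to its model; a key-value store, as here, is the simplest structure that satisfies it.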
9. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method for performing augmented reality on a plane map according to any one of claims 1 to 4.
10. A computer device, comprising:
a processor; and
a memory storing a computer program which, when executed by the processor, implements the method for performing augmented reality on a plane map according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811619174.0A CN109840951A (en) | 2018-12-28 | 2018-12-28 | Method and device for performing augmented reality on a plane map |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109840951A true CN109840951A (en) | 2019-06-04 |
Family
ID=66883415
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811619174.0A Pending CN109840951A (en) | 2018-12-28 | 2018-12-28 | Method and device for performing augmented reality on a plane map
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109840951A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6097840A (en) * | 1996-03-29 | 2000-08-01 | Fujitsu Limited | Profile extracting method and system |
CN102768731A (en) * | 2012-06-29 | 2012-11-07 | 陕西省交通规划设计研究院 | Method and system for automatic positioning and identifying target based on high definition video images |
CN103049729A (en) * | 2012-12-30 | 2013-04-17 | 成都理想境界科技有限公司 | Method, system and terminal for augmenting reality based on two-dimension code |
CN103413160A (en) * | 2013-08-30 | 2013-11-27 | 北京慧眼智行科技有限公司 | Method, device and system for encoding and decoding |
CN104751211A (en) * | 2013-12-25 | 2015-07-01 | 再发现(北京)科技有限公司 | Two-dimension code method of integrated brand promotion and product verification |
US20160275443A1 (en) * | 2012-11-13 | 2016-09-22 | Kyodo Printing Co., Ltd. | Two-dimensional code, system for creation of two-dimensional code, and analysis program |
CN106816077A (en) * | 2015-12-08 | 2017-06-09 | 张涛 | Interactive sandbox methods of exhibiting based on Quick Response Code and augmented reality |
- 2018-12-28: Chinese patent application CN201811619174.0A filed (published as CN109840951A), legal status: Pending
Non-Patent Citations (1)
Title |
---|
Wang Zhangang, Zhu Xi'an: "Design and Implementation of a Mobile Map Augmented Reality System", Journal of Beijing Information Science and Technology University (Natural Science Edition), vol. 31, no. 5, 15 October 2016 (2016-10-15), pages 36-40 *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110910497A (en) * | 2019-11-15 | 2020-03-24 | 北京信息科技大学 | Method and system for realizing augmented reality map |
CN110910497B (en) * | 2019-11-15 | 2024-04-19 | 北京信息科技大学 | Method and system for realizing augmented reality map |
FR3107134A1 (en) | 2020-02-06 | 2021-08-13 | Resomedia | permanent geographic map device in paper format connected to a mobile application in order to locate points of interest (also known as "POIs") |
CN112037264A (en) * | 2020-11-03 | 2020-12-04 | 浙江清鹤科技有限公司 | Video fusion system and method |
CN112037264B (en) * | 2020-11-03 | 2021-02-05 | 浙江清鹤科技有限公司 | Video fusion system and method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106897648B (en) | Method and system for identifying position of two-dimensional code | |
US8411986B2 (en) | Systems and methods for segmentation by removal of monochromatic background with limited intensity variations | |
US20200380711A1 (en) | Method and device for joint segmentation and 3d reconstruction of a scene | |
JP5036084B2 (en) | Video processing apparatus, video processing method, and program | |
CN109840951A (en) | Method and device for performing augmented reality on a plane map | |
US20150138193A1 (en) | Method and device for panorama-based inter-viewpoint walkthrough, and machine readable medium | |
Urban et al. | Finding a good feature detector-descriptor combination for the 2D keypoint-based registration of TLS point clouds | |
CN107689050A (en) | Depth image upsampling method based on color image edge guidance | |
CN100375124C (en) | Skeleton-based object reconstruction method | |
Ma et al. | Remote sensing image registration based on multifeature and region division | |
Wang et al. | Image-based building regularization using structural linear features | |
CN110390228A (en) | Neural-network-based traffic sign image recognition method, device and storage medium | |
CN109064533A (en) | 3D roaming method and system | |
Hu et al. | Building modeling from LiDAR and aerial imagery | |
CN109166172B (en) | Clothing model construction method and device, server and storage medium | |
Hua et al. | Background extraction using random walk image fusion | |
Hu et al. | Boundary shape-preserving model for building mapping from high-resolution remote sensing images | |
CN113763438B (en) | Point cloud registration method, device, equipment and storage medium | |
CN113537187A (en) | Text recognition method and device, electronic equipment and readable storage medium | |
JP5767887B2 (en) | Image processing apparatus, image processing method, and image processing program | |
WO2023098635A1 (en) | Image processing | |
Sun et al. | Complex building roof detection and strict description from LIDAR data and orthorectified aerial imagery | |
TWI771932B (en) | Image conversion method for developing tactile learning material | |
CN201374082Y (en) | Augmented reality system based on image feature point extraction and random tree classification | |
CN115273080A (en) | Lightweight visual semantic odometry method for dynamic scenes |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20190604 ||