CN105138763A - Method for real scene and reality information superposition in augmented reality


Info

Publication number
CN105138763A
Authority
CN
China
Prior art keywords
entity
information
server
terminal device
real scene
Prior art date
2015-08-19
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510511764.1A
Other languages
Chinese (zh)
Inventor
林谋广
林格
陈泽伟
陈志斌
邓鑫亮
樊蔚萌
陈颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat-sen University
Original Assignee
Sun Yat-sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2015-08-19
Filing date
2015-08-19
Publication date
2015-12-09
Application filed by Sun Yat-sen University
Priority to CN201510511764.1A
Publication of CN105138763A
Legal status: Pending


Abstract

The invention provides a method for superimposing reality information on a real scene in augmented reality, comprising the following steps: a terminal device sends a captured real scene and a selected entity to a server, abstracts the entity according to the information returned by the server, and displays information related to the entity; the server extracts an entity list from the real scene and retrieves related entity information for the selected entity. In the embodiment of the invention, multiple entity recognition methods are combined to make entity recognition more accurate, and a friendlier way of displaying information is provided by techniques such as entity abstraction and analysis of user behavior habits. The entity-related information that users want is displayed directly in the real scene; by abstracting entities and incorporating user behavior habits, the information users want can be displayed better, improving the user experience.

Description

A method for superimposing reality information on a real scene in augmented reality
Technical field
The present invention relates to the field of computer technology, and in particular to a method for superimposing reality information on a real scene in augmented reality.
Background technology
In the information age, people need to obtain useful information at any time, yet this information usually exists in virtual form on networks or other storage devices, and existing methods require the user to retrieve it through the isolated virtual display of a PC or mobile terminal. For example, people query the information they need through Google or Baidu, obtain their geographic position or navigate with Amap (Gaode Maps), learn basic facts about strangers through Facebook, and get weather forecasts through weather software. This way of obtaining information has two problems. First, it is inconvenient: people must deliberately stop what they are doing to perform a query. Second, it is detached from reality: the information obtained is not combined with the actual situation and cannot give the user an intuitive experience; people see only cold data and text. Existing navigation, for example, leaves a poor impression precisely because it is not combined with reality, so users often end up on the wrong path in actual use.
To strengthen the interaction between the real scene and the displayed information in augmented reality, and to let people obtain the information they need more intuitively and conveniently, the first problem to solve is how to improve the method of superimposing the two, so that the information the user needs is displayed in a friendly way within the real scene.
A widely known product is Google Glass, developed by Google, which drew on individual research that an Indian scientist generously made public.
The main structure of Google Project Glass comprises a camera suspended in front of the glasses and a computer processing unit housed in a wide strip on the right side of the frame. The camera has a 5-megapixel sensor and can record 720p video. The lenses are fitted with a head-mounted micro display that projects data onto a small screen above the user's right eye; the display effect is equivalent to viewing a 25-inch high-definition screen from 2.4 meters away.
These glasses integrate a smartphone, GPS, and camera to present real-time information in front of the user: blink to take a photo and upload it, send and receive text messages, query weather and road conditions, and so on. Users can browse the web or handle text messages and email without lifting a hand; wearing these "augmented reality" glasses, users can take photos, make video calls, and find their bearings by voice control. However, these glasses superimpose reality information on the real scene in an unfriendly way: Google Glass displays entity information wherever the eyes happen to look, so when multiple entities are present the information becomes chaotic, and displayed information may not clearly correspond to any particular entity in the real scene. Its practicality is also poor: Google Glass relies on voice control, so if the user has an accent, commands may not be recognized smoothly, making the device very inconvenient to use. It also supports motion-sensing operations such as blinking, which is very unfriendly to users with smaller eyes.
BaiduEye is a device that integrates the abilities to see, listen, and speak, with no glasses screen. The wearer only needs to draw a circle in the air around an object with a finger, pick the object up, or rest their gaze on it; BaiduEye takes these gestures and head movements as instructions, locks onto the object, and performs recognition, analysis, and processing. By applying deep-learning-based image analysis to the visual information from the wearer's first-person view, and combining Baidu's powerful big-data analysis capabilities with natural human-computer interaction, BaiduEye provides the user with information about what they are looking at, together with related services. It uses voice output, so although it achieves augmented reality to some extent, it has no display device and therefore cannot offer an intuitive visual experience.
Summary of the invention
The invention provides a method for superimposing reality information on a real scene in augmented reality, which captures and recognizes entities in a real scene and displays entity-related information in a friendly way, achieving the goal of augmented reality.
To solve the above problems, the present invention proposes a method for superimposing reality information on a real scene in augmented reality, comprising the following steps:
The terminal device sends the captured real scene and the selected entity to the server, abstracts the entity according to the information returned by the server, and displays entity-related information;
The server is responsible for extracting an entity list from the real scene and retrieving related entity information for the selected entity.
The method further comprises:
The terminal device sends the real scene together with the corresponding GPS position information and infrared information to the server;
The server obtains an entity list from the real scene and its related information, and returns it to the terminal device;
The terminal device selects an entity from the entity list and sends the selected entity to the server;
The server obtains the related information of the entity and returns it to the terminal device;
The terminal device abstracts the entity and displays the entity-related information;
The terminal device screens the entity information and sends the screening conditions to the server;
The server obtains the entity information again according to the screening conditions and returns it to the terminal device;
The terminal device displays the entity-related information.
The method further comprises:
The terminal device sends the real scene together with the corresponding GPS position information and infrared information to the server;
The server passes the real scene to the illumination model;
The illumination model models the real scene and obtains entity model parameters;
The illumination model returns the entity model parameters to the server;
The server passes the real scene and the corresponding GPS position information and infrared information to the auxiliary processing module;
The auxiliary processing module recognizes or extracts auxiliary parameters;
The auxiliary processing module returns the auxiliary parameters to the server;
The server sends all the entity parameters to the entity library;
The entity library obtains the most probable entity list according to the entity parameters;
The entity library returns the entity list to the server;
The server returns the entity list to the terminal device;
The terminal device displays the information on the captured 2D image, distinguishing near entities from far ones by font size.
The method further comprises:
The terminal device sends the selected entity to the server;
The server sends the selected entity to the entity library;
The entity library preliminarily sorts the entity information according to mass behavior;
The entity library further sorts the entity information according to the individual user's behavior;
The entity library returns the sorted entity information to the server;
The server returns the entity information to the terminal device;
The terminal device displays a 3D model of the entity and shows the details of each part at the corresponding part of the 3D model.
In the method provided by the embodiment of the present invention, multiple entity recognition methods are combined to make entity recognition more accurate, and techniques such as entity abstraction and analysis of user behavior habits provide a friendlier way of displaying information. The entity-related information the user wants is displayed in the real scene; by abstracting entities and incorporating user behavior habits, the information the user wants can be displayed better, improving the user experience.
Brief description of the drawings
To illustrate the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of the system architecture for superimposing reality information on a real scene in augmented reality in an embodiment of the present invention;
Fig. 2 is a flowchart of the interaction between the handheld device and the server in an embodiment of the present invention;
Fig. 3 is a flowchart of sending a real scene from the handheld device to the server and returning an entity list in an embodiment of the present invention;
Fig. 4 is a flowchart of sending a selected entity from the handheld device to the server and returning the entity's specific information in an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the present invention.
The method captures and recognizes entities in a real scene and displays entity-related information in a friendly way, achieving the goal of augmented reality.
The invention provides a scheme for displaying entity information on a terminal device in a friendly way according to the real scene. The scheme combines multiple entity recognition methods to make entity recognition more accurate, and adopts techniques such as entity abstraction and analysis of user behavior habits to provide a friendlier way of displaying information.
The composition of the system is described below with reference to Fig. 1. The system consists of a terminal device and a server; the functions of each part are as follows, with a sketch of the data they exchange after the list:
The terminal device is responsible for capturing the real scene, sending the selected entity to the server, abstracting the entity according to the information returned by the server, and displaying entity-related information.
The server is responsible for extracting an entity list from the real scene and retrieving related entity information for the selected entity.
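The functional split above implies a small message vocabulary between the two components. The patent does not specify a data or wire format, so the following is a minimal sketch in Python; every name in it (SceneCapture, Entity, EntityInfo, the distance field) is a hypothetical stand-in introduced for illustration.

```python
from dataclasses import dataclass

@dataclass
class SceneCapture:
    """A captured real scene plus the sensor readings sent along with it."""
    image: bytes      # the captured 2D real-scene image
    gps: tuple        # (latitude, longitude) from the GPS module
    infrared: bytes   # the accompanying infrared information

@dataclass
class Entity:
    """One recognized entity from the server's entity list."""
    entity_id: str
    name: str
    distance_m: float  # assumed distance estimate, later used for font sizing

@dataclass
class EntityInfo:
    """One piece of entity-related information, with a ranking score."""
    entity_id: str
    text: str
    score: float = 0.0
```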
Fig. 2 describes the brief flow of the interaction between the handheld device and the server; a code sketch of this flow follows the steps.
Step 1: the terminal device sends the real scene together with the corresponding GPS position information and infrared information to the server;
Step 2: the server obtains an entity list from the real scene and its related information, and returns it to the terminal device;
Step 3: the terminal device selects an entity from the entity list and sends the selected entity to the server;
Step 4: the server obtains the related information of the entity and returns it to the terminal device;
Step 5: the terminal device abstracts the entity and displays the entity-related information;
Step 6: the terminal device screens the entity information and sends the screening conditions to the server;
Step 7: the server obtains the entity information again according to the screening conditions and returns it to the terminal device;
Step 8: the terminal device displays the entity-related information.
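A minimal sketch of these eight steps from the terminal side, reusing the dataclasses above. The `server` object and its three methods are assumptions standing in for whatever transport and API the system actually uses; `pick` is the user's selection action (for example, tapping an entity name on the screen).

```python
def show(entity: Entity, infos: list) -> None:
    """Stand-in for the terminal's overlay rendering of entity information."""
    print(entity.name, "->", [info.text for info in infos])

def terminal_session(server, capture: SceneCapture, pick, conditions=None):
    entities = server.recognize_entities(capture)  # steps 1-2: scene out, entity list back
    chosen = pick(entities)                        # step 3: the user selects an entity
    infos = server.get_entity_info(chosen)         # step 4: related information returned
    show(chosen, infos)                            # step 5: abstract and display
    if conditions is not None:                     # steps 6-7: optional screening round trip
        infos = server.filter_entity_info(chosen, conditions)
        show(chosen, infos)                        # step 8: display the screened information
    return infos
```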
Fig. 3 describes the specific flow of sending a real scene from the handheld device to the server and returning an entity list; a code sketch follows the steps.
1. The terminal device sends the real scene and the corresponding GPS position information and infrared information to the server;
2. The server passes the real scene to the illumination model;
3. The illumination model models the real scene and obtains entity model parameters;
4. The illumination model returns the entity model parameters to the server;
5. The server passes the real scene and the corresponding GPS position information and infrared information to the auxiliary processing module;
6. The auxiliary processing module recognizes or extracts auxiliary parameters;
7. The auxiliary processing module returns the auxiliary parameters to the server;
8. The server sends all the entity parameters to the entity library;
9. The entity library obtains the most probable entity list according to the entity parameters;
10. The entity library returns the entity list to the server;
11. The server returns the entity list to the terminal device;
12. The terminal device displays the information (entity names) on the captured 2D image, distinguishing near entities from far ones by font size.
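A sketch of the server-side portion of this flow (steps 2 through 11) and of the terminal's font-size distance cue (step 12). The illumination model, auxiliary processing module, and entity library are the modules named in the flow; their method names, and the linear font falloff with an assumed 50 m far limit, are illustrative assumptions.

```python
def build_entity_list(capture: SceneCapture, illumination_model,
                      auxiliary_module, entity_library) -> list:
    # Steps 2-4: model the scene and obtain entity model parameters.
    model_params = illumination_model.model_scene(capture.image)
    # Steps 5-7: recognize or extract auxiliary parameters from GPS and infrared data.
    aux_params = auxiliary_module.extract(capture.gps, capture.infrared)
    # Steps 8-10: the entity library returns the most probable entity list.
    return entity_library.most_probable(model_params, aux_params)

def label_font_size(entity: Entity, near_px: int = 32, far_px: int = 12,
                    far_limit_m: float = 50.0) -> int:
    """Step 12: nearer entities get larger name labels on the 2D image."""
    t = min(entity.distance_m / far_limit_m, 1.0)
    return round(near_px + (far_px - near_px) * t)
```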
Fig. 4 describes the specific flow of sending a selected entity from the handheld device to the server and returning the entity's specific information; a code sketch follows the steps.
1. The terminal device sends the selected entity to the server;
2. The server sends the selected entity to the entity library;
3. The entity library preliminarily sorts the entity information according to mass behavior;
4. The entity library further sorts the entity information according to the individual user's behavior;
5. The entity library returns the sorted entity information to the server;
6. The server returns the entity information to the terminal device;
7. The terminal device displays a 3D model of the entity and shows the details of each part at the corresponding part of the 3D model.
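A sketch of the two-pass sort in steps 3 and 4. Python's sort is stable, so after the second pass the individual user's habits dominate while the mass-behavior ranking survives as a tie-breaker. Both scoring functions are assumptions, since the patent does not say how behavior is measured.

```python
def rank_entity_info(infos: list, mass_score, personal_score, user_id: str) -> list:
    # Step 3: preliminary sort by aggregate (mass) behavior.
    infos.sort(key=mass_score, reverse=True)
    # Step 4: further sort by the individual user's behavior.
    infos.sort(key=lambda info: personal_score(user_id, info), reverse=True)
    return infos
```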
The method provided by the embodiment of the present invention combines multiple entity recognition methods to make entity recognition more accurate, and adopts techniques such as entity abstraction and analysis of user behavior habits to provide a friendlier way of displaying information. The entity-related information the user wants is displayed in the real scene; by abstracting entities and incorporating user behavior habits, the information the user wants can be displayed better, improving the user experience.
Those of ordinary skill in the art will appreciate that all or some of the steps in the various methods of the above embodiments can be completed by hardware instructed by a program, and the program can be stored in a computer-readable storage medium. The storage medium may include read-only memory (ROM), random access memory (RAM), a magnetic disk, an optical disc, and the like.
The method for superimposing reality information on a real scene in augmented reality provided by the embodiment of the present invention has been described in detail above. Specific examples are used herein to explain the principles and implementation of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, those of ordinary skill in the art will make changes to the specific implementation and application scope in accordance with the idea of the present invention. In summary, the contents of this description should not be construed as limiting the present invention.

Claims (4)

1. the method for outdoor scene and real information superposition in augmented reality, is characterized in that, comprise the steps:
Terminal device by getting outdoor scene, the entity chosen is sent to server, and take out picture according to the information that server returns and go out entity, display entity relevant information;
Network in charge obtains list of entities from outdoor scene, obtains related entities information according to entity.
2. the method for outdoor scene and real information superposition in augmented reality as claimed in claim 1, it is characterized in that, described method comprises:
Terminal device is by outdoor scene and GPS position information and infrared information are sent to server accordingly;
Server obtains list of entities according to outdoor scene and relevant information thereof, returns to terminal device;
Terminal device selects entity according to list of entities, and the entity chosen is sent to server;
Server obtains the relevant information of entity, returns to terminal device;
Terminal device takes out entity, shows entity relevant information;
Terminal device screens entity information, and screening conditions are sent to server;
Server obtains entity information again according to screening conditions, and returns to terminal device;
Entity relevant information shown by terminal device.
3. the method for outdoor scene and real information superposition in augmented reality as claimed in claim 2, it is characterized in that, described method comprises:
Terminal device is by outdoor scene and GPS position information and infrared information are sent to server accordingly;
Outdoor scene is passed to illumination model by server;
Illumination model, according to outdoor scene modeling, obtains solid model parameter;
Solid model parameter is returned to server by illumination model;
Server by outdoor scene and corresponding GPS position information and infrared information to supplementary transaction module;
Supplementary transaction module identification or extract auxiliary parameter;
Auxiliary parameter is returned to server by supplementary transaction module;
Server sends all parameters of entity to entity storehouse;
Entity storehouse obtains most probable list of entities according to substance parameter;
List of entities is returned to server by entity storehouse;
List of entities is returned to terminal device by server;
Terminal device carries out information displaying on captured 2D image, by the mode of the size that sets font, realizes far and near differentiation.
4. the method for outdoor scene and real information superposition in augmented reality as claimed in claim 3, it is characterized in that, described method comprises:
The entity chosen is sent to server by terminal device;
The entity chosen is sent to entity storehouse by server;
Tentatively sort to entity information according to the behavior of masses in entity storehouse;
Sort to entity information further according to individual subscriber behavior in entity storehouse;
The entity information sorted is returned to server by entity storehouse;
Entity information is returned to terminal device by server;
In terminal device, show the 3D model of this entity, and show the details at this position at the different parts of this 3D model.




Legal Events

C06 / PB01: Publication
C10 / SE01: Entry into force of request for substantive examination
WD01: Invention patent application deemed withdrawn after publication (application publication date: 2015-12-09)