WO2018036113A1 - Augmented reality method and system - Google Patents

Augmented reality method and system

Info

Publication number
WO2018036113A1
Authority
WO
WIPO (PCT)
Prior art keywords
display
target object
image
information
position
Prior art date
Application number
PCT/CN2017/073980
Other languages
French (fr)
Chinese (zh)
Inventor
孙其民
胡治国
李炜
Original Assignee
深圳市掌网科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to CN201610708864.8A priority Critical patent/CN107765842A/en
Priority to CN201610708864.8 priority
Application filed by 深圳市掌网科技股份有限公司 filed Critical 深圳市掌网科技股份有限公司
Publication of WO2018036113A1 publication Critical patent/WO2018036113A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer

Abstract

An augmented reality method and system, relating to the technical field of virtual reality. The augmented reality method comprises the following steps: S1: dividing a real image into several display areas; S2: pre-testing and recording each display area and the corresponding instantaneous eyeball rotation state; S3: forming a mapping relationship between the instantaneous eyeball rotation state and the display areas; S4: determining a gaze region; S5: acquiring the real image to obtain the associated information of the target objects; S6: determining the display position on a display screen of the associated information of the target objects; and S7: superimposing the associated information on the target objects in the real image and displaying them. The method and system can accurately retrieve the display area in which the wearer's current focus of attention is located, rapidly respond to obtain the target object the user is focusing on and its associated information, and combine them with the real image for real-time display on a display screen; the search and display of unrelated information is reduced, and the experience of interacting with useful information is enhanced.

Description

Method and system for augmented reality

FIELD

[0001] The present invention relates to the field of virtual reality technology, and in particular to a method and system for augmented reality.

BACKGROUND

[0002] Augmented reality uses computer graphics and visualization techniques to generate virtual objects that do not exist in the real environment and to "embed" them accurately into that environment. Through a display device, the virtual objects are integrated with the real environment, supplementing the real world with virtual information and presenting the user with a new environment having realistic sensory effects, thereby achieving an enhancement of reality.

[0003] To achieve augmented reality, an augmented reality system must analyze a large amount of positioning data and scene information to ensure that the computer-generated virtual objects can be positioned accurately in the real scene. In known augmented reality technology, a typical implementation includes: obtaining real-scene information; analyzing the real-scene information and camera position information; generating virtual objects; and drawing the virtual objects on the visualization plane according to the camera position information so that they are displayed together with the real-scene information. Because image features are matched against the entire recognition target, the corresponding processing requires a large amount of computation. Moreover, there are requirements on the size of the target image collected by the camera: if the target is far from the camera, the collected target image is too small, too few target features are detected in the image, and the required number of feature matches cannot be reached, so the target object cannot be detected and the superimposition of virtual objects onto the video cannot be completed.

[0004] In addition, hardware devices for realizing augmented reality usually have flat display screens, as in hand-held electronic devices and the like. For new head-mounted display products, however, the requirements of low-latency interaction and high-speed graphics rendering differ from those of general hand-held electronics and place additional design demands on the device. For example, a wearer's spontaneous interest in an object in the real scene (goods, a building, etc.) may be short-lived; how a head-mounted display can, within the short interval during which interest is sustained, superimpose virtual information or virtual objects at the real position of the object on the display is a technical problem to be solved. Furthermore, for complex external object information, a virtual-information presentation mechanism must be established that retrieves and displays only the objects the wearer is actually attending to at the moment; for a head-mounted device shooting the foreground without interruption, this reduces the amount of data processing and effectively enhances the experience of interacting with useful information. These problems currently remain to be solved.

Technical problem

[0005] To solve the above problems, the present invention provides an augmented reality method and system that reduce the amount of computation on the head-mounted device, retrieve in real time the associated information of the object in the real image that the user is attending to at the moment, and display the virtual information superimposed with the real-world information on the screen.

Solution to the problem

[0006] In one aspect, the present invention provides an augmented reality method, comprising the following steps:

[0007] S1: dividing the real image exhibited by the head-mounted display device into a plurality of display areas;

[0008] S2: pre-testing and recording the instantaneous eyeball rotation state of the wearer of the head-mounted display device when staring at each display area and at specific pixel positions within a display area;

[0009] S3: constructing, from the pre-tested records, a mapping relationship between the instantaneous eyeball rotation state and the display areas of the display screen and the specific pixel positions within a display area;

[0010] S4: determining the gaze region on the display screen based on the acquired instantaneous eyeball rotation state;

[0011] S5: acquiring the real image exhibited in the gaze region and uploading it to a cloud database for processing, to obtain the associated information of each target object in the real image;

[0012] S6: determining the position of each target object and, based on the position information of each target object, determining the display position of the associated information of each target object on the display screen;

[0013] S7: displaying the associated information of each target object superimposed with the corresponding target object in the real image.

[0014] According to the augmented reality method of the present invention, step S5 further comprises:

[0015] S51: detecting global/local features of the acquired real image and identifying each target object.

[0016] According to the augmented reality method of the present invention, step S51 further comprises:

[0017] S511: matching the global/local features of each target object against possibly matching global/local features of the entire real image, retaining the reasonable matches according to the relative geometric position relationships of the global/local features, and identifying each target object from the reasonable matches.

[0018] According to the augmented reality method of the present invention, in step S5 the associated information comprises product information directly related to the attributes of the target object.

[0019] The present invention also provides an augmented reality system, including a head-mounted device, and further comprising:

[0020] an infrared camera, mounted on the head-mounted device facing the wearer's eyes, for acquiring images of the instantaneous eyeball rotation state in real time;

[0021] an eyeball state determination unit for analyzing the images of the instantaneous eyeball rotation state to determine the gaze region of the eyes;

[0022] an eyeball identification unit for looking up the gaze region in a key-value mapping table to determine the real image the wearer is attending to at the moment;

[0023] a screen control unit for controlling the displayed content according to the real image the wearer is attending to at the moment.

[0024] According to the augmented reality system of the present invention, the key-value mapping table is a table of the mapping relationship, pre-tested and recorded, between the instantaneous eyeball rotation state and the display areas anywhere on the display screen and the specific pixel positions within a display area.

[0025] According to the augmented reality system of the present invention, the screen control unit is further configured to upload the real image the wearer is attending to at the moment to a cloud database for processing, to obtain the associated information of each target object in the real image.

[0026] According to the augmented reality system of the present invention, the screen control unit is further configured to determine the position of each target object and, based on the position information of each target object, to determine the display position of the associated information of each target object on the display screen.

[0027] According to the augmented reality system of the present invention, the screen control unit is further configured to display the associated information of each target object superimposed with the corresponding target object in the real image.

Advantageous effects

[0028] Compared with the prior art, the augmented reality method and system of the present invention use the mapping relationship between the wearer's instantaneous eyeball rotation state and the display areas of the display screen to retrieve accurately the display area on which the wearer is currently focusing, respond rapidly to obtain the target object the user is attending to and its associated information, and display that information combined with the real image on the screen in real time. This reduces the search and display of irrelevant information and enhances the experience of interacting with useful information. At the same time, there is no need to parse and process the entire exhibited real image, which reduces the data-processing load of the head-mounted device and allows considerable improvement in the size and configuration of the device.

BRIEF DESCRIPTION OF THE DRAWINGS

[0029] The present invention is further illustrated by the accompanying drawings and the following embodiments, in which:

[0030] FIG. 1 is a flowchart of the augmented reality method provided by an embodiment of the present invention;

[0031] FIG. 2 is a flowchart of the sub-steps of step S5 of the augmented reality method provided by an embodiment of the present invention.

EMBODIMENTS OF THE INVENTION

[0032] To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are intended only to illustrate the present invention and not to limit it.

[0033] FIG. 1 shows a flowchart of the augmented reality method provided by an embodiment of the present invention. As shown in FIG. 1, this embodiment provides an augmented reality method comprising the following steps.

[0034] S1: dividing the real image exhibited by the head-mounted display device into a plurality of display areas.

[0035] In this step, the head-mounted device may be an externally connected headset, an all-in-one headset, or any headset formed with a mobile terminal device. The real image may be exhibited on the display of a head-mounted device with a transparent display, which shows the real image of the external environment directly; alternatively, a camera attached to the head-mounted display may shoot the external environment, and the captured real image is then shown on the display screen, achieving what-you-shoot-is-what-you-see display. In this embodiment, the display screen exhibiting the real image is divided into four display areas: upper left, upper right, lower left, and lower right. It will be appreciated that the present invention does not limit the number or spatial shape of the divided display areas. Regardless of how the head rotates, the viewing angle of the real image exhibited in this embodiment extends in real time.
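The four-quadrant division described above can be sketched as follows. This is an illustrative sketch only, not part of the disclosed embodiment; the function name and the (x, y, w, h) tuple layout are assumptions:

```python
def divide_display(width, height):
    """Divide a display of the given pixel size into four equal quadrant
    regions, as in step S1. Returns a dict mapping a region name to its
    (x, y, w, h) bounds in screen pixels."""
    hw, hh = width // 2, height // 2
    return {
        "upper_left":  (0, 0, hw, hh),
        "upper_right": (hw, 0, width - hw, hh),
        "lower_left":  (0, hh, hw, height - hh),
        "lower_right": (hw, hh, width - hw, height - hh),
    }

# e.g. divide_display(1920, 1080)["upper_right"] -> (960, 0, 960, 540)
```

The same dict could hold any number of regions, consistent with the statement that the number and shape of the areas are not limited.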

[0036] S2: pre-testing and recording the instantaneous eyeball rotation state of the wearer of the head-mounted display device when staring at each display area and at specific pixel positions within a display area.

[0037] Since the eye shape and visual range differ from user to user, in order to improve the accuracy of augmented reality, before the wearer formally uses the device this embodiment verifies precisely the field of view the wearer's eyes can see, together with the instantaneous eyeball rotation state when staring at each display area and at specific pixel positions within a display area. This establishes a link between the wearer's eyeball rotation state and the display areas or positions on the display screen, accurate to the pixel level, ensuring higher recognition accuracy in the later steps.

[0038] S3: constructing, from the pre-tested records, a mapping relationship between the instantaneous eyeball rotation state and the display areas of the display screen and the specific pixel positions within a display area.

[0039] In this step, a one-to-one correspondence is established between the wearer's instantaneous eyeball rotation state and the display areas on the display screen, so that the later detection or identification steps can respond quickly, reducing latency and improving the user experience.
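The calibration-then-lookup idea of steps S2 and S3 can be sketched as a nearest-neighbor lookup over the recorded samples. The (yaw, pitch) parameterization of the eyeball rotation state and the function names are assumptions for illustration; the patent does not specify how the rotation state is encoded:

```python
import math

def build_mapping(calibration_samples):
    """calibration_samples: list of ((yaw, pitch), region_name) pairs
    recorded during the per-wearer pre-test (step S2). The returned list
    is the key-value mapping of step S3."""
    return list(calibration_samples)

def gaze_region(mapping, yaw, pitch):
    """Return the region of the calibrated eye-rotation state nearest to
    the currently measured (yaw, pitch) -- the lookup of step S4."""
    return min(mapping,
               key=lambda s: math.hypot(s[0][0] - yaw, s[0][1] - pitch))[1]
```

A denser set of calibration samples (down to individual pixel positions, as the text describes) would make the same lookup pixel-accurate.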

[0040] S4: determining the gaze region on the display screen based on the acquired instantaneous eyeball rotation state.

[0041] The eyes are photographed in real time by the infrared camera to obtain the instantaneous eyeball rotation state; through the mapping relationship constructed in step S3, the display area on the display screen at which the eyes are gazing, that is, the gaze region on the display screen, can be obtained with a particularly fast response.

[0042] S5: acquiring the real image exhibited in the gaze region and uploading it to a cloud database for processing, to obtain the associated information of each target object in the real image.

[0043] The wearer's gaze region on the display screen corresponds to the real scene the wearer is attending to at the moment. In this embodiment, the device parses and processes only the real scene the wearer is currently attending to, that is, it searches and recognizes only the real image exhibited in the gaze region. On the one hand, this keeps information retrieval and display focused on what the user actually cares about, meeting the user's real need while reducing the display of information the user does not need; on the other hand, compared with the prior art, which parses and processes the entire exhibited real image, it greatly reduces the amount of data the device must process and its response delay, enhancing the experience of interacting with useful information. The cloud database has image recognition and search functionality: it can identify the target objects of interest to the user in the real image and search for their associated information. The associated information includes product information directly related to the attributes of the target object, such as product specifications, product descriptions, features, company profiles, pricing information, and contact information.
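Restricting processing to the gaze region amounts to cropping the frame before it is uploaded. The sketch below shows only that cropping step; the cloud-side recognition and search are left as an unspecified service, since the patent does not describe their interface:

```python
def crop_gaze_region(frame, region):
    """Extract the gaze region from a captured frame before upload (step S5).

    frame  -- the real image as a list of pixel rows
    region -- (x, y, w, h) bounds of the gaze region, e.g. one entry of
              the display-area division from step S1
    Only this crop, rather than the whole frame, is sent for recognition,
    which is what reduces the data-processing load described above."""
    x, y, w, h = region
    return [row[x:x + w] for row in frame[y:y + h]]
```

For a 1920x1080 frame and a quadrant gaze region, only a quarter of the pixels would be uploaded and analyzed.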

[0044] Preferably, as shown in FIG. 2, step S5 includes:

[0045] S51: detecting global/local features of the acquired real image and identifying each target object.

[0046] In this step, global/local features of the real image are detected and, for example by matching graphic features, looked up in the cloud database to identify the target objects.

[0047] Preferably, as shown in FIG. 2, step S51 includes:

[0048] S511: matching the global/local features of each target object against possibly matching global/local features of the entire real image, retaining the reasonable matches according to the relative geometric position relationships of the global/local features, and identifying each target object from the reasonable matches.

[0049] By the method of this step, important objects can be screened out and feature matching against irrelevant objects in the real image reduced, lowering the amount of computation and yielding fast matching results.
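The patent does not specify which geometric-consistency test step S511 uses; a common, simple stand-in is to keep only the tentative matches whose displacement agrees with the median displacement, as sketched below (the tolerance value and data layout are assumptions):

```python
def filter_matches(matches, tol=10.0):
    """Keep geometrically consistent feature matches, in the spirit of
    step S511. matches is a list of ((x1, y1), (x2, y2)) tentative
    correspondences between target-object features and real-image
    features. A match is 'reasonable' here when its displacement is
    within tol pixels of the median displacement, so isolated outliers
    (spurious matches on irrelevant objects) are discarded."""
    if not matches:
        return []
    dxs = sorted(p2[0] - p1[0] for p1, p2 in matches)
    dys = sorted(p2[1] - p1[1] for p1, p2 in matches)
    mdx, mdy = dxs[len(dxs) // 2], dys[len(dys) // 2]
    return [m for m in matches
            if abs(m[1][0] - m[0][0] - mdx) <= tol
            and abs(m[1][1] - m[0][1] - mdy) <= tol]
```

In practice a robust model fit such as RANSAC over a homography plays the same role; the median-displacement filter is merely the shortest illustration of "keep matches whose relative geometry is consistent."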

[0050] S6: determining the position of each target object and, based on the position information of each target object, determining the display position of the associated information of each target object on the display screen.

[0051] The target objects obtained from the cloud-database processing are to be displayed with emphasis. The device calculates in real time the position of each target object on the display screen and, based on the position information of each target object, determines the display position of the associated information of each target object on the display screen.
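Step S6 leaves the placement policy open; one minimal heuristic, offered purely as an illustrative sketch (all names and the margin value are assumptions), is to place the information label beside the target's bounding box, falling back to the other side when it would run off-screen:

```python
def label_position(target_box, screen_w, screen_h, label_w, label_h, margin=8):
    """Choose a display position for a target's associated-information
    label (step S6). target_box is the target's (x, y, w, h) on screen.
    The label is placed to the right of the box; if that would leave the
    screen, it is placed to the left, and its y is clamped on-screen so
    the label never occludes nothing or overflows the display."""
    x, y, w, h = target_box
    lx = x + w + margin
    if lx + label_w > screen_w:            # would run off the right edge
        lx = max(0, x - margin - label_w)  # fall back to the left side
    ly = min(max(0, y), screen_h - label_h)
    return (lx, ly)
```

A real implementation would also resolve overlaps between the labels of multiple target objects, which this sketch ignores.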

[0052] S7: displaying the associated information of each target object superimposed with the corresponding target object in the real image.

[0053] The associated information of each target object the wearer is directly attending to is displayed on the screen combined with the corresponding target object in the real image according to the positions determined in step S6, enabling seamless connection and integration of the virtual information and the real-world information.

[0054] An embodiment of the present invention further provides an augmented reality system, including a head-mounted device, and further comprising an infrared camera mounted on the head-mounted device, an eyeball state determination unit, an eyeball identification unit, and a screen control unit. The infrared camera is mounted on the head-mounted device facing the wearer's eyes and acquires images of the instantaneous eyeball rotation state in real time. The eyeball state determination unit analyzes the images of the instantaneous eyeball rotation state to determine the gaze region of the eyes. The eyeball identification unit looks up the gaze region in a key-value mapping table to determine the real image the wearer is attending to at the moment; the key-value mapping table is a table of the mapping relationship, pre-tested and recorded, between the instantaneous eyeball rotation state and the display areas anywhere on the display screen and the specific pixel positions within a display area. The screen control unit controls the displayed content according to the real image the wearer is attending to at the moment. Specifically, the screen control unit includes a network transmission module and an information matching module: the network transmission module uploads the real image the wearer is attending to at the moment to the cloud database for processing, and downloads the target objects and their associated information obtained from the cloud-database processing to the headset; the information matching module locates the target objects in the exhibited real image, determines the position of each target object and, based on the position information of each target object, determines the display position of the associated information of each target object on the display screen. Finally, the associated information of each target object is displayed superimposed with the corresponding target object in the real image.

The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the specific embodiments described; the specific embodiments are merely illustrative and not restrictive. Under the teaching of the present invention, those of ordinary skill in the art can devise many further forms without departing from the spirit of the present invention and the scope of protection of the claims, and all of these fall within the protection of the present invention.

Claims

[Claim 1] An augmented reality method, characterized in that it comprises the following steps:
S1: dividing the real image exhibited by the head-mounted display device into a plurality of display areas;
S2: pre-testing and recording the instantaneous eyeball rotation state of the wearer of the head-mounted display device when staring at each display area and at specific pixel positions within a display area;
S3: constructing, from the pre-tested records, a mapping relationship between the instantaneous eyeball rotation state and the display areas of the display screen and the specific pixel positions within a display area;
S4: determining the gaze region on the display screen based on the acquired instantaneous eyeball rotation state;
S5: acquiring the real image exhibited in the gaze region and uploading it to a cloud database for processing, to obtain the associated information of each target object in the real image;
S6: determining the position of each target object and, based on the position information of each target object, determining the display position of the associated information of each target object on the display screen;
S7: displaying the associated information of each target object superimposed with the corresponding target object in the real image.
[Claim 2] The augmented reality method as claimed in claim 1, wherein step S5 further comprises:
S51: detecting global/local features of the acquired real image and identifying each target object.
[Claim 3] The augmented reality method as claimed in claim 2, wherein step S51 further comprises:
S511: matching the global/local features of each target object against possibly matching global/local features of the entire real image, retaining the reasonable matches according to the relative geometric position relationships of the global/local features, and identifying each target object from the reasonable matches.
[Claim 4] The augmented reality method as claimed in claim 1, wherein, in step S5, the associated information comprises product information directly related to the attributes of the target object.
[Claim 5] An augmented reality system, including a head-mounted device, characterized in that it further comprises:
an infrared camera, mounted on the head-mounted device facing the wearer's eyes, for acquiring images of the instantaneous eyeball rotation state in real time;
an eyeball state determination unit for analyzing the images of the instantaneous eyeball rotation state to determine the gaze region of the eyes;
an eyeball identification unit for looking up the gaze region in a key-value mapping table to determine the real image the wearer is attending to at the moment;
a screen control unit for controlling the displayed content according to the real image the wearer is attending to at the moment.
[Claim 6] The augmented reality system as claimed in claim 5, wherein the key-value mapping table is a table of the mapping relationship, pre-tested and recorded, between the instantaneous eyeball rotation state and the display areas anywhere on the display screen and the specific pixel positions within a display area.
[Claim 7] The augmented reality system as claimed in claim 5, wherein the screen control unit is further configured to upload the real image the wearer is attending to at the moment to a cloud database for processing, to obtain the associated information of each target object in the real image.
[Claim 8] The augmented reality system as claimed in claim 5, wherein the screen control unit is further configured to determine the position of each target object and, based on the position information of each target object, to determine the display position of the associated information of each target object on the display screen.
[Claim 9] The augmented reality system as claimed in claim 5, wherein the screen control unit is further configured to display the associated information of each target object superimposed with the corresponding target object in the real image.
PCT/CN2017/073980 2016-08-23 2017-02-17 Augmented reality method and system WO2018036113A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201610708864.8A CN107765842A (en) 2016-08-23 2016-08-23 Augmented-reality method and system
CN201610708864.8 2016-08-23

Publications (1)

Publication Number Publication Date
WO2018036113A1 true WO2018036113A1 (en) 2018-03-01

Family

ID=61246375

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/073980 WO2018036113A1 (en) 2016-08-23 2017-02-17 Augmented reality method and system

Country Status (2)

Country Link
CN (1) CN107765842A (en)
WO (1) WO2018036113A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101067716A (en) * 2007-05-29 2007-11-07 南京航空航天大学 Enhanced real natural interactive helmet with sight line follow-up function
CN102981616A (en) * 2012-11-06 2013-03-20 中兴通讯股份有限公司 Identification method and identification system and computer capable of enhancing reality objects
CN103051942A (en) * 2011-10-14 2013-04-17 中国科学院计算技术研究所 Smart television human-computer interaction method, device and system based on remote controller
US20150212576A1 (en) * 2014-01-28 2015-07-30 Anthony J. Ambrus Radial selection by vestibulo-ocular reflex fixation
CN104823152A (en) * 2012-12-19 2015-08-05 高通股份有限公司 Enabling augmented reality using eye gaze tracking
CN105323539A (en) * 2014-07-17 2016-02-10 原相科技股份有限公司 Automotive safety system and operating method thereof

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8408706B2 (en) * 2010-12-13 2013-04-02 Microsoft Corporation 3D gaze tracker
JPWO2014192103A1 (en) * 2013-05-29 2017-02-23 三菱電機株式会社 Information display device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101067716A (en) * 2007-05-29 2007-11-07 南京航空航天大学 Enhanced real natural interactive helmet with sight line follow-up function
CN103051942A (en) * 2011-10-14 2013-04-17 中国科学院计算技术研究所 Smart television human-computer interaction method, device and system based on remote controller
CN102981616A (en) * 2012-11-06 2013-03-20 中兴通讯股份有限公司 Identification method and identification system and computer capable of enhancing reality objects
CN104823152A (en) * 2012-12-19 2015-08-05 高通股份有限公司 Enabling augmented reality using eye gaze tracking
US20150212576A1 (en) * 2014-01-28 2015-07-30 Anthony J. Ambrus Radial selection by vestibulo-ocular reflex fixation
CN105323539A (en) * 2014-07-17 2016-02-10 原相科技股份有限公司 Automotive safety system and operating method thereof

Also Published As

Publication number Publication date
CN107765842A (en) 2018-03-06

Similar Documents

Publication Publication Date Title
US9749619B2 (en) Systems and methods for generating stereoscopic images
US6633304B2 (en) Mixed reality presentation apparatus and control method thereof
JP4181037B2 (en) Eye-tracking system
JP4537104B2 (en) Marker detection method, the marker detection apparatus, position and orientation estimation method, and the MR space presentation method
CN103140879B (en) Information presentation device, a digital camera, a head-mounted display, a projector, an information presentation method and an information presentation program
US9710973B2 (en) Low-latency fusing of virtual and real content
US20160189384A1 (en) Method for determining the pose of a camera and for recognizing an object of a real environment
KR101320134B1 (en) Method and device for the real time imbedding of virtual objects in an image stream using data from a real scene represented by said images
EP1431798A2 (en) Arbitrary object tracking in augmented reality applications
US20120162384A1 (en) Three-Dimensional Collaboration
US10182720B2 (en) System and method for interacting with and analyzing media on a display using eye gaze tracking
US20160004907A1 (en) Shape recognition device, shape recognition program, and shape recognition method
CN102122392B (en) Information processing apparatus, information processing system, and information processing method
US9165381B2 (en) Augmented books in a mixed reality environment
US8872853B2 (en) Virtual light in augmented reality
US9891435B2 (en) Apparatus, systems and methods for providing motion tracking using a personal viewing device
US9595127B2 (en) Three-dimensional collaboration
US8976160B2 (en) User interface and authentication for a virtual mirror
US8982110B2 (en) Method for image transformation, augmented reality, and teleperence
KR20130108643A (en) Systems and methods for a gaze and gesture interface
JP2006249618A (en) Virtual try-on device
JP2013541747A (en) Interaction reality expansion for natural interaction
US9369638B2 (en) Methods for extracting objects from digital images and for performing color change on the object
JP5028389B2 Method and apparatus for extending the mirror function using information related to the contents and operation of the mirror
US20110210970A1 (en) Digital mirror apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17842539

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase in:

Ref country code: DE