CN112287949A - AR information display method and AR display device based on multiple feature information

AR information display method and AR display device based on multiple feature information

Info

Publication number
CN112287949A
CN112287949A
Authority
CN
China
Prior art keywords
information, relationship, targets, feature information, pieces
Prior art date
Legal status
Pending
Application number
CN202011203237.1A
Other languages
Chinese (zh)
Inventor
赵维奇
Current Assignee
Hangzhou Companion Technology Co ltd
Original Assignee
Hangzhou Companion Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Companion Technology Co ltd
Priority to CN202011203237.1A
Publication of CN112287949A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/23 - Clustering techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/006 - Mixed reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 - Salient features, e.g. scale invariant feature transforms [SIFT]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/757 - Matching configurations of points or features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure relates to an AR information display method and apparatus based on multiple pieces of feature information. The method comprises the following steps: identifying multiple pieces of feature information corresponding to multiple targets in a user's field of view, where the targets have a specific relationship between them and the feature information reflects that relationship; acquiring, according to the multiple pieces of feature information, the multiple pieces of AR information corresponding to them; and displaying the multiple pieces of AR information through an AR display device based on the specific relationship, so that the displayed AR information presents the specific relationship to the user.

Description

AR information display method and AR display device based on multiple feature information
Technical Field
The present disclosure relates to the field of augmented reality technologies, and in particular, to an AR information display method and apparatus based on multiple feature information.
Background
In a conventional AR information display method, matching AR information is displayed for a target by recognizing the target's feature information. In some scenarios, a user sees multiple targets in the field of view at the same time, and the targets have a certain relationship with one another (such as a size relationship) that may change in some cases: for example, targets originally displayed at a consistent scale, one of which is then enlarged for some reason. If the displayed AR information can present neither the consistent-scale relationship between the targets nor the new relationship after one target is enlarged, the user may be confused about how the AR information relates to the targets, and the user experience is reduced.
Disclosure of Invention
An object of the present disclosure is to provide an AR information display method and an AR display apparatus based on a plurality of feature information.
The purpose of the present disclosure is achieved by the following technical means. The AR information display method based on multiple pieces of feature information according to the present disclosure comprises the following steps: identifying multiple pieces of feature information corresponding to multiple targets in a user's field of view, where the targets have a specific relationship between them and the feature information reflects that relationship; acquiring, according to the multiple pieces of feature information, the multiple pieces of AR information corresponding to them; and displaying the multiple pieces of AR information through an AR display device based on the specific relationship, so that the displayed AR information presents the specific relationship to the user.
The object of the present disclosure can be further achieved by the following technical measures.
In the above AR information display method based on multiple pieces of feature information, identifying the multiple pieces of feature information corresponding to the multiple targets in the user's field of view is implemented with a computer vision (CV) algorithm.
In the above AR information display method based on multiple pieces of feature information, the specific relationship includes a positional relationship, a size relationship, and a state relationship between the multiple targets.
In the above AR information display method based on multiple pieces of feature information, the state relationship includes a color state relationship and an additive state relationship.
In the above AR information display method based on multiple pieces of feature information, displaying the multiple pieces of AR information through an AR display device based on the specific relationship, so that the displayed AR information presents the specific relationship to the user, includes: correspondingly adjusting the specific relationship presented on the AR information based on changes in the specific relationship among the multiple targets.
In the above AR information display method based on multiple pieces of feature information, when the size relationship between the multiple targets changes, the change is reflected in the multiple pieces of feature information, so that the size relationship on the displayed AR information also changes correspondingly.
In the above AR information display method based on multiple pieces of feature information, when the positional relationship among the multiple pieces of feature information changes, the positional relationship on the AR information also changes correspondingly.
In the above AR information display method based on multiple pieces of feature information, when the state relationship among the multiple pieces of feature information changes, the state relationship on the AR information also changes correspondingly.
The object of the present disclosure can be further achieved by the following technical measures.
The purpose of the present disclosure is also achieved by the following technical solutions. An AR display device according to the present disclosure includes a processor and a memory, the memory storing a computer program that, when executed by the processor, performs the AR information display method based on a plurality of feature information.
The beneficial effects of the invention include at least the following: by identifying multiple pieces of feature information corresponding to multiple targets in the field of view, where the targets have specific relationships such as size, position, and state relationships, acquiring the AR information corresponding to those pieces of feature information, and displaying the multiple pieces of AR information through the AR display device based on the specific relationships, the displayed AR information presents the specific relationships to the user, which helps the user intuitively understand the relationships between the targets through the AR information and improves the user experience.
The foregoing is a summary of the present disclosure. The present disclosure may be embodied in other specific forms without departing from its spirit or essential attributes.
Drawings
Fig. 1 is a schematic flowchart of an AR information display method based on multiple pieces of feature information according to an embodiment of the present disclosure.
Detailed Description
To further explain the technical means adopted by the present disclosure to achieve the intended purpose of the invention and their effects, specific embodiments, structures, features, and effects of the AR information display method and device based on multiple pieces of feature information according to the present disclosure are described in detail below with reference to the accompanying drawings and preferred embodiments.
Fig. 1 is a schematic flowchart of an AR information display method based on multiple pieces of feature information according to an embodiment of the present disclosure. Referring to fig. 1, an example of an AR information display method based on multiple feature information of the present disclosure mainly includes the following steps:
step S11, identifying a plurality of feature information corresponding to a plurality of targets in the user field of view, where the plurality of targets have a specific relationship therebetween, and the plurality of feature information reflects the specific relationship.
Specifically, a CV algorithm is used to identify multiple pieces of feature information corresponding to multiple targets in the user's field of view. The feature information may be information for distinguishing various targets in the field of view (e.g., people, houses, rivers, flowers, trees, bridges), may be target features extracted by a machine learning method, or may be image information such as a target's size, position, color, brightness, or saturation in the image. For ease of understanding, the targets are mainly described as houses, but a target in this application is not limited to houses; it may be any target that a CV algorithm can identify. After the user puts on the AR display device and a target in the field of view is recognized, the AR display device displays the AR information corresponding to that target. For example, if there is a house model in the user's field of view, then once the preset house-model features are recognized, one or more pieces of AR information such as the house's floor plan, price, real photos, and user reviews can be presented. In the present invention, AR information is augmented reality information, i.e., additional information presented to the user beyond what the user can already see. When displaying the AR information, preferably, it may be displayed around the corresponding real target, or a connecting line may be used to indicate to the user the relationship between the AR information and the real target.
The CV algorithm may extract features of the real-time image using a trained convolutional neural network, input those features into a classifier to obtain a classification result, and determine the targets in the real-time image from that result. The CV algorithm may also extract image features through algorithms such as SIFT (Scale-Invariant Feature Transform), HOG (Histogram of Oriented Gradients), SURF (Speeded-Up Robust Features), or LBP (Local Binary Pattern), and perform feature matching with a clustering algorithm to determine the recognition result for the image. In addition, target recognition on the real-time image may be performed locally or on a cloud server; the choice of CV algorithm is not limited, as long as it can extract feature points from the image and recognize the image.
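The patent leaves the CV pipeline open. As a minimal, hypothetical sketch of only the matching step, descriptor vectors for candidate targets are compared against a small reference database by cosine similarity; the database contents, target names, and the 0.9 threshold are all invented for illustration and stand in for the SIFT/clustering pipelines named above:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def match_target(descriptor, reference_db, threshold=0.9):
    """Return the best-matching reference target name, or None if no
    reference descriptor reaches the similarity threshold."""
    best_name, best_score = None, threshold
    for name, ref in reference_db.items():
        score = cosine_similarity(descriptor, ref)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical reference descriptors for two preset house models.
reference_db = {
    "house_model_A": [0.9, 0.1, 0.3, 0.7],
    "house_model_B": [0.1, 0.8, 0.6, 0.2],
}
```

A descriptor close to a stored one matches; an unrelated descriptor falls below the threshold and yields no target.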
In the embodiment of the present invention, each target corresponds to one piece of feature information, and one piece of feature information may include one or more features of the target. For example, for a house, the feature information may be a vector extracted by the CV algorithm that corresponds to the house, where the vector includes one or more features of the house, such as color, shape, size, orientation, and area.
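The one-target-to-one-piece-of-feature-information mapping described above could be represented as a small container holding several features per target; the field names here are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class FeatureInfo:
    """One piece of feature information per target; may hold several features."""
    target_id: str
    color: str
    size: float            # e.g. apparent area of the target in the image
    position: tuple        # (x, y) in image coordinates
    state: dict = field(default_factory=dict)  # e.g. {"lights": "on"}

# Example: a recognized house with its lights on.
house = FeatureInfo("house_1", color="red", size=200.0,
                    position=(40, 60), state={"lights": "on"})
```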
Besides being identified by a CV algorithm, the feature information of a target can also be determined by two-dimensional code recognition, electronic tags, and the like. For example, two-dimensional codes may be placed beside targets in the user's field of view, and recognizing each two-dimensional code yields one corresponding piece of AR information; the AR information corresponding to a target can then be determined simply by scanning its two-dimensional code. Specifically, the specific relationship between the multiple targets includes a positional relationship, a size relationship, and a state relationship, on the premise that a specific relationship exists among the targets; whether the targets are associated can be determined by a system preset or by big-data analysis. Taking two houses in the field of view as an example, the two houses may be located in the same residential complex and have the same floor plan, and the system may preset that the two houses are associated, so that a specific relationship exists between them. The positional relationship may refer to the relative positions of the houses, i.e., they may be adjacent, separated by a certain distance, or in the same row and column; it should be understood that these are only examples, and the positional relationship in fact includes all positional relationships that a CV algorithm can identify. The size relationship refers to the actual size relationship between targets; taking houses as an example, the size ratio between three houses may be 1:1:1, 2:1:1, 3:1:1, and so on, and it should be understood that the size relationship in fact includes all size relationships that a CV algorithm can identify.
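The size and position relationships above can be computed mechanically from per-target feature information. A small sketch follows; the reduced-integer-ratio convention and the x-coordinate adjacency rule are both assumptions made for illustration:

```python
from functools import reduce
from math import gcd

def size_ratio(sizes):
    """Express target sizes as a reduced integer ratio string, e.g. '2:1:1'."""
    ints = [round(s) for s in sizes]
    g = reduce(gcd, ints)
    return ":".join(str(i // g) for i in ints)

def are_adjacent(pos_a, pos_b, max_gap=1.5):
    """Crude adjacency test on x-coordinates (hypothetical rule)."""
    return abs(pos_a[0] - pos_b[0]) <= max_gap
```

For three houses with apparent areas 200, 100, and 100, this yields the 2:1:1 ratio used in the examples below.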
The state relationship refers to the color state and the additive state of a target. The color state is, for example, a house's color with its lights on versus with its lights off. The additive state refers to the addition of some other element that correspondingly changes the target's feature information, for example a house chimney in a smoking state. It should be understood that the state relationship in fact includes all state relationships that a CV algorithm can identify. It should also be understood that each target may have multiple specific relationships with other targets; for example, two houses adjacent in position may have a size ratio of 2:1, with one house's lights on and the other's off. Thereafter, the process proceeds to step S12.
Step S12, obtaining, according to the plurality of pieces of feature information, a plurality of pieces of AR information corresponding to the plurality of pieces of feature information.
Specifically, multiple pieces of AR information corresponding to the multiple pieces of feature information are acquired according to those pieces of feature information. For example, if there are three different houses in the field of view, each house corresponds to one piece of AR information, and after the respective feature information of the three houses is detected, the respective AR information is displayed for each of the three houses. The AR information may take multimedia forms such as text, graphics, and video. Thereafter, the process proceeds to step S13.
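Step S12 amounts to a lookup from recognized targets to their stored AR information. A minimal sketch, in which the store, its keys, and the payload fields are all hypothetical:

```python
# Hypothetical AR-info store keyed by recognized target identifier.
AR_INFO_DB = {
    "house_1": {"floor_plan": "plan_1.png", "price": "1.2M", "reviews": 4.6},
    "house_2": {"floor_plan": "plan_2.png", "price": "0.9M", "reviews": 4.2},
}

def fetch_ar_info(target_ids):
    """Return the AR information for each recognized target, in order,
    skipping targets with no stored AR information."""
    return [AR_INFO_DB[t] for t in target_ids if t in AR_INFO_DB]
```

Targets without a stored entry are simply skipped; in practice a real system might instead query a cloud server for them.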
And step S13, displaying the plurality of pieces of AR information through an AR display device based on the specific relationship, wherein the displayed plurality of pieces of AR information present the specific relationship for the user.
Specifically, the AR display device includes any form of device that can implement AR display, such as AR glasses, an AR head ring, and an AR helmet.
For example, for the positional relationship: if three houses in the field of view are adjacent, the positions of the corresponding AR information are also adjacent; if the three houses are spaced at equal distances, the corresponding AR information is also spaced at equal distances. For the size relationship: if the size ratio of the three houses is 2:1:1, the size ratio of the corresponding AR information is also 2:1:1. For the state relationship: if a CV algorithm detects that a house's chimney is in a smoking state, a special effect (e.g., smoke) can be added to the corresponding AR information to match the recognized smoking state; if the house's lights are on, a special effect (e.g., a halo or highlighting) can be added to match the lights-on state; if the house's lights are off, the corresponding AR information may have the lights-on effect removed and/or a special effect (e.g., dimming) added to match the lights-off state. It can be understood that if a target has multiple specific relationships with other targets, the AR information correspondingly exhibits all of them: for example, if two adjacent houses in the user's field of view have a size ratio of 2:1, one with its lights on and the other with its lights off, then the two pieces of AR information are displayed adjacent with a size ratio of 2:1, one showing the lights-on effect and the other the lights-off effect.
Preferably, when the size relationship within the specific relationship changes, the size relationship in the AR information changes correspondingly. For example, if the size ratio of house 1, house 2, and house 3 in the user's field of view is 2:1:1, the size ratio of the three houses' AR information is also 2:1:1; after house 1 is enlarged so that the ratio of the three houses becomes 10:2:1, the size ratio of their AR information is correspondingly adjusted to 10:2:1.
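One way to realize this proportional adjustment (a sketch, not the patent's prescribed method) is to rescale each AR panel from a base size whenever a new size ratio is detected; the convention that the smallest target gets the base panel size is an assumption:

```python
def scale_ar_panels(base_panel_size, sizes):
    """Scale each AR panel so the panel sizes keep the targets' size ratio.
    The smallest target's panel gets base_panel_size (hypothetical rule)."""
    smallest = min(sizes)
    return [base_panel_size * s / smallest for s in sizes]
```

With a base panel size of 50, targets at 2:1:1 get panels of 100, 50, 50; after the first target is enlarged to make the ratio 10:2:1, re-running the function yields 500, 100, 50.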
Preferably, when the positional relationship within the specific relationship changes, the positional relationship in the AR information changes correspondingly. For example, if three houses in the user's field of view are adjacent to each other, their AR information is also adjacent; if one of the three houses then moves away from the other two, the AR information of that house correspondingly moves away from the AR information of the other two. As another example, if the positional relationship of three targets (A, B, C) in the user's field of view is the adjacency order A-B-C, then when it changes to the adjacency order A-C-B, the AR information of the three targets also changes from the order A-B-C to the order A-C-B.
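The A-B-C to A-C-B reordering can be sketched by sorting AR panels by their targets' current x positions; the positions and payloads here are invented for illustration:

```python
def order_ar_info(target_positions, ar_info):
    """Display AR information in the targets' left-to-right order.
    target_positions maps target name -> x coordinate;
    ar_info maps target name -> AR payload."""
    order = sorted(target_positions, key=lambda name: target_positions[name])
    return [(name, ar_info[name]) for name in order]

# Hypothetical payloads for three targets.
DEMO_INFO = {"A": "info_A", "B": "info_B", "C": "info_C"}
```

When B moves from x=5 to x=12, re-running the sort changes the display order from A-B-C to A-C-B.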
Preferably, when the state relationship within the specific relationship changes, the state relationship in the AR information changes correspondingly. For example, if a house chimney in the user's field of view is in a smoking state, a special effect (e.g., smoke) may be added to the corresponding AR information to match the recognized smoking state, and if the chimney stops smoking at some moment, the effect may be removed from the corresponding AR information. If a house in the user's field of view has its lights on, a special effect (e.g., a halo or highlighting) can be added to the AR information to match the lights-on state, and if the lights are turned off at some moment, the corresponding AR information may have the lights-on effect removed and/or a special effect (e.g., dimming) added to match the lights-off state. Likewise, if a house is in the lights-off state, a corresponding effect is shown on its AR information, and if the lights are turned on at some moment, the effect matching the lights-on state is added instead.
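The add/remove behavior of state-dependent special effects described above can be sketched as a set difference between the effects implied by the old and new recognized states; the state keys and effect names are assumptions for illustration:

```python
# Hypothetical mapping from recognized target states to display effects.
STATE_EFFECTS = {
    ("chimney", "smoking"): "smoke",
    ("lights", "on"): "halo",
    ("lights", "off"): "dim",
}

def effects_for(state):
    """Return the set of special effects matching the target's current state."""
    return {STATE_EFFECTS[(k, v)] for k, v in state.items() if (k, v) in STATE_EFFECTS}

def update_effects(old_state, new_state):
    """Return (effects to remove, effects to add) for a state change."""
    old, new = effects_for(old_state), effects_for(new_state)
    return old - new, new - old
```

Turning the lights off removes the halo and adds dimming; a chimney that stops smoking just has its smoke effect removed.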
It is understood that when the position relationship, the size relationship, and the state relationship in the specific relationship are changed at the same time, the position relationship, the size relationship, and the state relationship in the AR information are also changed at the same time. And will not be described in detail herein.
In another aspect of the present invention, one or more embodiments of the present invention also provide an AR display apparatus including a processor and a memory, the memory storing a computer program which, when executed by the processor, performs the steps of:
identifying a plurality of feature information corresponding to a plurality of targets in a user field of view, wherein the targets have a specific relationship therebetween, and the feature information reflects the specific relationship;
acquiring a plurality of pieces of AR information corresponding to the plurality of pieces of feature information according to the plurality of pieces of feature information;
and displaying the plurality of pieces of AR information through an AR display device based on the specific relationship, wherein the displayed plurality of pieces of AR information present the specific relationship for the user.
It is understood that the AR display device may further implement one or more steps described above, and will not be described herein again.
In summary, according to the AR information display method based on multiple pieces of feature information of the embodiment of the present disclosure, by identifying multiple pieces of feature information of multiple targets together with the specific relationships between those targets, such as size, position, and state relationships, presenting the specific relationships on the AR information, and adjusting the AR information correspondingly when the specific relationships change, differentiated display is achieved, the user is helped to understand the relationships between the AR information and the targets, and the user experience is improved.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present disclosure are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the disclosure is not intended to be limited to the specific details so described.
The block diagrams of devices and apparatuses referred to in this disclosure are only used as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," and "having" are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, "and/or," unless the context clearly dictates otherwise. The phrase "such as" is used herein to mean, and is used interchangeably with, "such as but not limited to."
Also, as used herein, "or" in a list of items beginning with "at least one of" indicates a disjunctive list, such that, for example, "at least one of A, B, or C" means A or B or C, or AB or AC or BC, or ABC (i.e., A and B and C). Furthermore, the word "exemplary" does not mean that the described example is preferred or better than other examples.
It is also noted that in the systems and methods of the present disclosure, components or steps may be decomposed and/or re-combined. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.
Various changes, substitutions and alterations to the techniques described herein may be made without departing from the techniques of the teachings as defined by the appended claims. Moreover, the scope of the claims of the present disclosure is not limited to the particular aspects of the process, machine, manufacture, composition of matter, means, methods and acts described above. Processes, machines, manufacture, compositions of matter, means, methods, or acts, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or acts.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (9)

1. An AR information display method based on a plurality of feature information, the method comprising:
identifying a plurality of feature information corresponding to a plurality of targets in a user field of view, wherein the targets have a specific relationship therebetween, and the feature information reflects the specific relationship;
acquiring a plurality of pieces of AR information corresponding to the plurality of pieces of feature information according to the plurality of pieces of feature information;
and displaying the plurality of pieces of AR information through an AR display device based on the specific relationship, wherein the displayed plurality of pieces of AR information present the specific relationship for the user.
2. The AR information display method based on multiple pieces of feature information according to claim 1, wherein identifying the multiple pieces of feature information corresponding to the multiple targets in the user field of view is performed by a CV algorithm.
3. The AR information display method based on multiple pieces of feature information according to claim 1, wherein the specific relationship includes a positional relationship, a size relationship, and a state relationship between the multiple targets.
4. The AR information display method based on multiple pieces of feature information according to claim 3, wherein the state relationship includes a color state relationship and an additive state relationship.
5. The AR information display method based on multiple pieces of feature information according to claim 4, wherein displaying the multiple pieces of AR information through an AR display device based on the specific relationship comprises: correspondingly adjusting the specific relationship presented on the AR information based on changes in the specific relationship among the multiple targets.
6. The method according to claim 5, wherein when the size relationship between the plurality of targets changes, the size relationship presented by the displayed AR information changes correspondingly.
7. The method according to claim 5, wherein when the positional relationship between the plurality of targets changes, the positional relationship presented by the displayed AR information changes correspondingly.
8. The method according to claim 5, wherein when the state relationship between the plurality of targets changes, the state relationship presented by the displayed AR information changes correspondingly.
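Claims 5 through 8 describe tracking a change in the targets' relationship and mirroring it onto the displayed AR information. A minimal sketch of that update loop, with all names (`relationship`, `update_display`, the dict layout) hypothetical and only the positional and size relationships modeled:

```python
def relationship(targets):
    """Derive a simple positional and size relationship from target observations."""
    leftmost = min(targets, key=lambda t: t["position"][0])
    larger = max(targets, key=lambda t: t["size"])
    return {"leftmost": leftmost["name"], "larger": larger["name"]}

def update_display(displayed, targets):
    """Re-derive the relationship between the targets and adjust the
    relationship presented by the displayed AR information to match."""
    displayed["relationship"] = relationship(targets)
    return displayed

targets = [{"name": "cup", "position": (320, 100), "size": 1.0},
           {"name": "kettle", "position": (80, 110), "size": 2.5}]
display_state = update_display({"overlays": ["cup", "kettle"]}, targets)

# The kettle moves to the right of the cup; per claim 7, the positional
# relationship presented by the displayed AR information follows the change.
targets[1]["position"] = (400, 110)
display_state = update_display(display_state, targets)
print(display_state["relationship"]["leftmost"])  # now "cup"
```

A size or state change would be handled identically: re-derive the relationship from fresh observations, then re-render the overlays.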
9. An AR display apparatus comprising a processor and a memory, the memory storing a computer program which, when executed by the processor, performs the AR information display method based on a plurality of pieces of feature information according to any one of claims 1 to 8.
CN202011203237.1A 2020-11-02 2020-11-02 AR information display method and AR display device based on multiple feature information Pending CN112287949A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011203237.1A CN112287949A (en) 2020-11-02 2020-11-02 AR information display method and AR display device based on multiple feature information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011203237.1A CN112287949A (en) 2020-11-02 2020-11-02 AR information display method and AR display device based on multiple feature information

Publications (1)

Publication Number Publication Date
CN112287949A true CN112287949A (en) 2021-01-29

Family

ID=74353444

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011203237.1A Pending CN112287949A (en) 2020-11-02 2020-11-02 AR information display method and AR display device based on multiple feature information

Country Status (1)

Country Link
CN (1) CN112287949A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120038671A1 (en) * 2010-08-12 2012-02-16 Pantech Co., Ltd. User equipment and method for displaying augmented reality window
CN102377873A (en) * 2010-08-16 2012-03-14 Lg电子株式会社 Method and displaying information and mobile terminal using the same
CN103207728A (en) * 2012-01-12 2013-07-17 三星电子株式会社 Method Of Providing Augmented Reality And Terminal Supporting The Same
CN106254848A (en) * 2016-07-29 2016-12-21 宇龙计算机通信科技(深圳)有限公司 A kind of learning method based on augmented reality and terminal
CN107219926A (en) * 2017-06-01 2017-09-29 福州市极化律网络科技有限公司 Virtual reality method of interaction experience and device
CN107229393A (en) * 2017-06-02 2017-10-03 三星电子(中国)研发中心 Real-time edition method, device, system and the client of virtual reality scenario
CN108648276A (en) * 2018-05-17 2018-10-12 上海宝冶集团有限公司 A kind of construction and decoration design method, device, equipment and mixed reality equipment
US20180308287A1 (en) * 2015-10-16 2018-10-25 Bent Image Lab, Llc Augmented reality platform
CN111258423A (en) * 2020-01-15 2020-06-09 惠州Tcl移动通信有限公司 Component display method and device, storage medium and augmented reality display equipment
CN111580679A (en) * 2020-06-07 2020-08-25 浙江商汤科技开发有限公司 Space capsule display method and device, electronic equipment and storage medium
US20200312035A1 (en) * 2019-03-26 2020-10-01 Siemens Healthcare Gmbh Transferring a state between vr environments

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JULIE CARMIGNIANI et al.: "Augmented reality technologies, systems and applications", DOI 10.1007/s11042-010-0660-6 *
XIONG Jingying et al.: "Target tracker adapted to mobile smart devices", Optics and Precision Engineering *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination