CN113867528A - Display method, device, equipment and computer readable storage medium - Google Patents


Info

Publication number
CN113867528A
CN113867528A
Authority
CN
China
Prior art keywords
augmented reality
reality environment
target
target object
identification code
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111134428.1A
Other languages
Chinese (zh)
Inventor
欧华富
文一凡
李斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202111134428.1A priority Critical patent/CN113867528A/en
Publication of CN113867528A publication Critical patent/CN113867528A/en
Priority to PCT/CN2022/120170 priority patent/WO2023045964A1/en
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K 7/10 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K 7/14 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K 7/1404 Methods for optical code recognition
    • G06K 7/1408 Methods for optical code recognition the method being specifically adapted for the type of code
    • G06K 7/1417 2D bar codes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/01 Indexing scheme relating to G06F3/01
    • G06F 2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Electromagnetism (AREA)
  • General Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the disclosure discloses a display method, a display device, display equipment and a computer-readable storage medium. The method comprises the following steps: in response to a scanning operation performed on a first target identification code in an augmented reality environment, determining a first target object and first virtual effect data based on the first target identification code; in response to identifying the first target object in the augmented reality environment, a first augmented reality effect corresponding to the first target object is presented in the augmented reality environment based on the first virtual effect data. With the method and apparatus, augmented reality effects can be displayed in a wider range of applications, the display efficiency of the augmented reality effect can be improved, and the user's viewing experience is enhanced.

Description

Display method, device, equipment and computer readable storage medium
Technical Field
The present disclosure relates to terminal technologies, and in particular, to a display method, device, apparatus, and computer-readable storage medium.
Background
In the related art, in a display scheme of an Augmented Reality (AR) effect, after a specific identification code is scanned by a browser or a third-party identification-code scanner, at least one jump is required to enter an AR environment and display the AR effect corresponding to the current identification code. The extra waiting time affects the user's viewing experience, and the reliance on a third-party identification-code scanner limits the application range of AR effect display.
Disclosure of Invention
The embodiment of the disclosure provides a display method, a display device, display equipment and a computer-readable storage medium.
The technical scheme of the embodiment of the disclosure is realized as follows:
the disclosed embodiment provides a display method, which includes:
in response to a scanning operation performed on a first target identification code in an augmented reality environment, determining a first target object and first virtual effect data based on the first target identification code;
in response to identifying the first target object in the augmented reality environment, a first augmented reality effect corresponding to the first target object is presented in the augmented reality environment based on the first virtual effect data.
In some embodiments, the determining the first target object and the first virtual effect data based on the first target identification code includes: analyzing the first target identification code to obtain target identification information; determining a first target object corresponding to the target identification information based on a mapping relation between preset identification information and the target object; and determining first virtual effect data corresponding to the target identification information based on a mapping relation between preset identification information and virtual effect data.
In some embodiments, the method further comprises: under the condition that the first augmented reality effect display is finished, in response to a scanning operation of a second target identification code in the augmented reality environment, determining a second target object and second virtual effect data based on the second target identification code; in response to identifying the second target object in the augmented reality environment, based on the second virtual effect data, a second augmented reality effect corresponding to the second target object is presented in the augmented reality environment.
In some embodiments, the method further comprises: entering the augmented reality environment in response to a start operation of the augmented reality environment.
In some embodiments, the scanning operation performed on the first target identification code in the augmented reality environment includes: scanning the first target identification code by using an image acquisition device of the electronic equipment in an augmented reality environment; the method further comprises the following steps: in response to entering the augmented reality environment, activating the image capture device in the augmented reality environment.
In some embodiments, the method further comprises: acquiring an image to be identified in real time by using the image acquisition device in the augmented reality environment; identifying a first target object in the image to be identified; the presenting, in response to identifying the first target object in the augmented reality environment, a first augmented reality effect corresponding to the first target object in the augmented reality environment based on the first virtual effect data, comprising: in response to identifying the first target object in the image to be identified, a first augmented reality effect corresponding to the first target object is presented in the augmented reality environment based on the first virtual effect data.
In some embodiments, said entering the augmented reality environment in response to the initiating operation of the augmented reality environment comprises: entering the augmented reality environment in response to an access operation to an entry address of the augmented reality environment.
In some embodiments, the augmented reality environment may include at least one interactive interface in an application platform, application program, or applet running on the electronic device for presenting augmented reality effects.
In some embodiments, the first target object may comprise a two-dimensional object and/or a three-dimensional object associated with the first target identifier.
In some embodiments, the two-dimensional object may include at least one of: photographs of exhibits, portraits of persons, automotive posters; the three-dimensional object may include at least one of: exhibits, people, buildings, vehicles in real scenes.
An embodiment of the present disclosure provides a display device, including: a first determination module, configured to determine, in response to a scan operation performed on a first target identification code in an augmented reality environment, a first target object and first virtual effect data based on the first target identification code; a first display module to display a first augmented reality effect corresponding to the first target object in the augmented reality environment based on the first virtual effect data in response to identifying the first target object in the augmented reality environment.
An embodiment of the present disclosure provides an electronic device, including: a display screen; a memory for storing an executable computer program; and the processor is used for combining the display screen to realize the display method when executing the executable computer program stored in the memory.
The embodiment of the present disclosure provides a computer-readable storage medium storing a computer program for causing a processor to execute the display method described above.
In the embodiment of the present disclosure, in response to a scanning operation performed on a first target identification code in an augmented reality environment, a first target object and first virtual effect data are determined based on the first target identification code; and in response to identifying the first target object in the augmented reality environment, a first augmented reality effect corresponding to the first target object is displayed in the augmented reality environment based on the first virtual effect data. Therefore, on the one hand, the first target identification code can be scanned within the augmented reality environment without relying on a third-party identification-code scanner, so that augmented reality effect display can be applied more widely; on the other hand, the first target object can be identified directly after the first target identification code is scanned, and the first augmented reality effect is displayed based on the first virtual effect data, so that the display efficiency of the augmented reality effect can be improved and the user's viewing experience is enhanced.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1A is a schematic diagram of an implementation architecture of a display system according to an embodiment of the present disclosure;
fig. 1B is a schematic diagram illustrating an implementation flow of a display method according to an embodiment of the disclosure;
fig. 2 is a schematic flow chart illustrating an implementation of a display method according to an embodiment of the present disclosure;
fig. 3 is a schematic flow chart illustrating an implementation of a display method according to an embodiment of the present disclosure;
fig. 4 is a schematic flow chart illustrating an implementation of a display method according to an embodiment of the present disclosure;
fig. 5 is a schematic flow chart illustrating an implementation of a display method according to an embodiment of the present disclosure;
fig. 6A is a schematic flow chart illustrating an implementation of an AR effect displaying method in a single AR effect viewing scene according to an embodiment of the present disclosure;
fig. 6B is a schematic flow chart illustrating an implementation of an AR effect displaying method in a multi-AR effect viewing scene according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a display device according to an embodiment of the disclosure;
fig. 8 is a hardware entity diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
To make the objectives, technical solutions, and advantages of the present disclosure clearer, the present disclosure is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present disclosure, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present disclosure.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. The terminology used herein is for the purpose of describing embodiments of the disclosure only and is not intended to be limiting of the disclosure.
Before further detailed description of the embodiments of the present disclosure, terms and expressions referred to in the embodiments of the present disclosure are explained, and the terms and expressions referred to in the embodiments of the present disclosure are applied to the following explanations.
1) A Mini Program (also called a Web Program) is a program developed with a front-end language (e.g., JavaScript) that implements a service within a HyperText Markup Language (HTML) page. It is downloaded by a client (e.g., a browser, or any client with an embedded browser core) over a network (e.g., the Internet) and is interpreted and executed in the client's browser environment, saving an installation step on the client. For example, an applet implementing a singing service may be downloaded and run in a social-network client.
2) Augmented Reality (AR) is a technology that promotes the integration of real-world information and virtual-world content. Entity information that would otherwise be difficult to experience within the spatial range of the real world is simulated on the basis of computer science and other technologies, and the virtual information content is superimposed on the real world, where it can be perceived by human senses, thereby achieving a sensory experience beyond reality. After the real environment and the virtual object are superimposed, they can exist in the same picture and space at the same time.
3) A web view (WebView) is a web-page browsing control that can be embedded in a client to enable hybrid front-end development; it is used for handling requests, loading and rendering web pages, web-page interaction, and the like.
In the related art, in the display scheme of the AR effect, after a specific identification code is scanned by a browser or a third-party identification-code scanner, at least one jump is required to enter the AR environment and display the AR effect corresponding to the current identification code. On the one hand, this relies on a third-party identification-code scanner, which limits the application range of AR effect display. On the other hand, because at least one jump is needed, after scanning a specific identification code the user can view the corresponding AR effect only once the jump completes, which requires extra waiting time and affects the viewing experience. In a scene with multiple AR effects in particular, the user must repeatedly scan, jump, and view in order to experience the AR effects corresponding to different identification codes; the interactive operation is cumbersome and further degrades the user experience.
The embodiment of the disclosure provides a display method, which can broaden the application of augmented reality effect display, improve the display efficiency of the augmented reality effect, and improve the user's viewing experience. The display method provided by the embodiment of the disclosure can be applied to electronic equipment. The electronic device provided by the embodiments of the present disclosure may be implemented as various types of terminals such as AR glasses, notebook computers, tablet computers, desktop computers, set-top boxes, and mobile devices (e.g., mobile phones, portable music players, personal digital assistants, dedicated messaging devices, portable game devices). In some embodiments, the display method provided by the embodiments of the present disclosure may be applied to a client application platform of an electronic device; the client application platform may be a network (Web) application platform or an applet. In some embodiments, the display method provided by the embodiment of the present disclosure may also be applied to an application program of an electronic device.
Referring to fig. 1A, fig. 1A is a schematic diagram of an implementation architecture of a display system provided in the embodiment of the present disclosure. To implement and support a client application platform, in the display system 100, electronic devices (a terminal 400-1 and a terminal 400-2 are shown as examples) are connected to a server 200 through a network 300, where the network 300 may be a wide area network, a local area network, or a combination of the two. The electronic device responds to the scanning operation performed on the first target identification code in the augmented reality environment and sends the first target identification code to the server 200. The server 200 determines a first target object and first virtual effect data based on the first target identification code and returns them to the electronic device. After receiving the first target object and the first virtual effect data, in response to identifying the first target object in the augmented reality environment, the electronic device displays, in a display interface 410, a first augmented reality effect corresponding to the first target object based on the first virtual effect data. In this way, the AR effect is presented on the electronic device.
In some embodiments, the server 200 may be an independent physical server, may also be a server cluster or a distributed system formed by a plurality of physical servers, and may also be a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, a middleware service, a domain name service, a security service, a CDN, and a big data and artificial intelligence platform. The terminal and the server may be directly or indirectly connected through wired or wireless communication, and the embodiment of the present disclosure is not limited thereto.
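The client-server exchange of fig. 1A can be sketched as follows. This is a minimal illustrative sketch, assuming an in-process stand-in for the networked server; the database contents, function names, and code payloads are hypothetical and not part of the disclosure.

```python
# Hypothetical server-side table mapping an identification code to the
# target object and virtual effect data (illustrative contents only).
SERVER_DB = {
    "code-001": {"object": "exhibit-photo", "effect": {"asset": "exhibit.glb"}},
}

def server_resolve(identification_code):
    """Server side: resolve (first target object, first virtual effect data)."""
    record = SERVER_DB[identification_code]
    return record["object"], record["effect"]

def terminal_scan_and_request(identification_code):
    """Terminal side: forward the scanned code and receive the resolution.

    A direct call stands in for the network round trip over network 300.
    """
    return server_resolve(identification_code)

obj, effect = terminal_scan_and_request("code-001")
```

Once the terminal holds `obj` and `effect`, it only needs to recognize `obj` in the camera feed before rendering, matching the division of labor the paragraph above describes.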
In the following, the display method provided by the embodiment of the present disclosure will be described in conjunction with exemplary applications and implementations of the electronic device provided by the embodiment of the present disclosure.
The embodiment of the present disclosure provides a display method, and fig. 1B is a schematic diagram illustrating an implementation flow of the display method provided by the embodiment of the present disclosure, as shown in fig. 1B, the method includes:
step S101, in response to a scanning operation performed on a first target identification code in an augmented reality environment, determining a first target object and first virtual effect data based on the first target identification code.
Here, the augmented reality environment may be any suitable interactive interface for presenting augmented reality effects, and may be implemented based on native augmented reality technology or on web-based augmented reality technology, which is not limited here. For example, the augmented reality environment may be an interactive interface, for presenting augmented reality effects, of an application platform, application, or applet running on the electronic device. The electronic device may scan or identify any object in the real scene in the augmented reality environment, or may scan or identify an object in a pre-acquired image.
The first target identification code may be a two-dimensional code, a barcode, or other codes that can be scanned, which is not limited in this disclosure.
The scanning operation performed on the first target identification code may be an operation of scanning the first target identification code in a real scene by using a camera of the electronic device in an augmented reality environment, or an operation of scanning the first target identification code in a pre-acquired image in the augmented reality environment, which is not limited in this disclosure.
The first target object may be any suitable object associated with the first target identification code, and may be a two-dimensional image, such as a photograph of an exhibit, a figure of a person, a car poster, or the like, or a three-dimensional object, such as an exhibit, a person, a building, a vehicle, or the like in a real scene, without limitation.
The first virtual effect data may be virtual special effect data for exhibiting an augmented reality effect corresponding to the first target object in the augmented reality environment. In some embodiments, the first virtual effect data may include at least one of: virtual stickers, virtual animations, and virtual items. The virtual sticker can be two-dimensional or three-dimensional virtual additional information added to a real scene image acquired by the electronic device, for example, the virtual sticker can be a virtual calendar added to the real scene image in an augmented reality environment; the virtual animation may be a two-dimensional or three-dimensional virtual object added to the real scene image and moving according to a preset action, and the virtual object may include a virtual character, a virtual plant, a virtual animal, and the like, for example, the virtual animation may be a virtual interpreter guiding a navigation route in a map navigation application; the virtual object may be a two-dimensional or three-dimensional decoration that is decorated in an image of a real scene captured by the electronic device, for example, the virtual object may be virtual glasses that are added to a portrait in the image of the real scene in an augmented reality environment.
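The three kinds of virtual effect data described above (virtual stickers, virtual animations, and virtual items) might be modeled as a small record type, as in the sketch below. All field names and asset paths are illustrative assumptions, not from the disclosure.

```python
from dataclasses import dataclass
from typing import Literal, Tuple

@dataclass
class VirtualEffectData:
    """Hypothetical container for one piece of virtual effect data."""
    kind: Literal["sticker", "animation", "item"]  # the three kinds in the text
    dimensions: Literal[2, 3]                      # 2D or 3D content
    asset_uri: str                                 # where the effect asset lives
    anchor: Tuple[float, float, float] = (0.0, 0.0, 0.0)  # placement in the scene

# A virtual calendar sticker added to the real-scene image (2D).
calendar_sticker = VirtualEffectData(kind="sticker", dimensions=2,
                                     asset_uri="assets/calendar.png")
# A virtual interpreter guiding a navigation route (3D animation).
guide_animation = VirtualEffectData(kind="animation", dimensions=3,
                                    asset_uri="assets/interpreter.glb")
```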
In some embodiments, the first target object and the first virtual effect data may be directly carried in the first target identifier, and the first target object and the first virtual effect data may be obtained by parsing the first target identifier. In other embodiments, a mapping relationship between the identification code, the object, and the virtual effect data may be preset, and the first target object and the first virtual effect data corresponding to the first target identification code may be determined by referring to the mapping relationship.
Step S102, in response to identifying the first target object in the augmented reality environment, displaying a first augmented reality effect corresponding to the first target object in the augmented reality environment based on the first virtual effect data.
In some embodiments, in an augmented reality environment, the electronic device may turn on its own image capture device (e.g., a camera) for image capture, and recognize the captured image. Upon identifying the first target object, a first augmented reality effect corresponding to the first target object may be presented in an augmented reality environment based on the first virtual effect data. In implementation, the first augmented reality effect corresponding to the first target object may be exhibited by rendering the first virtual effect data in an augmented reality environment.
In some embodiments, the electronic device may identify a pre-acquired image in an augmented reality environment, and, if a first target object is identified, present a first augmented reality effect corresponding to the first target object in the augmented reality environment based on the first virtual effect data.
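Step S102, rendering the first augmented reality effect once the first target object is identified in captured frames, can be sketched roughly as follows. The frame representation, function names, and asset names are illustrative assumptions rather than the disclosure's implementation.

```python
def present_effect(frames, target_object, virtual_effect_data):
    """Check successive captured frames; render the effect on first identification.

    `frames` is a hypothetical sequence of per-frame recognition results.
    """
    for frame in frames:
        if target_object in frame["detected_objects"]:
            # Stand-in for rendering the virtual effect data in the AR environment.
            return f"render {virtual_effect_data['asset']} over {target_object}"
    return None  # target object was never identified

frames = [
    {"detected_objects": set()},                  # target not yet in view
    {"detected_objects": {"exhibit-photo"}},      # target identified here
]
result = present_effect(frames, "exhibit-photo", {"asset": "exhibit.glb"})
```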
In some embodiments, the determining the first target object and the first virtual effect data based on the first target identification code in step S101 may include the following steps S111 to S113:
and step S111, analyzing the first target identification code to obtain target identification information.
Here, the target identification information may be identification information carried in the first target identification code for characterizing the first target identification code. In implementation, a person skilled in the art may carry appropriate target identification information in the first target identification code according to the actual situation, which is not limited here.
In some embodiments, the target identification information is the encoded information of the first target identification code, and different identification codes have different encoded information.
Step S112, determining a first target object corresponding to the target identification information based on a mapping relationship between preset identification information and the target object.
Here, the mapping relationship between the identification information and the target object may be set in advance according to actual conditions, and is not limited here. And inquiring the mapping relation between the identification information and the target object by using the target identification information to obtain the first target object corresponding to the target identification information.
Step S113, determining first virtual effect data corresponding to the target identification information based on a mapping relationship between preset identification information and virtual effect data.
Here, the mapping relationship between the identification information and the virtual effect data may be set in advance according to actual conditions, and is not limited herein. The mapping relationship between the identification information and the virtual effect data is inquired by using the target identification information, and the first virtual effect data corresponding to the target identification information can be obtained.
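Steps S111 to S113 can be sketched as a parse followed by two table lookups, one per preset mapping relation. The parsing rule and the table contents below are illustrative assumptions, not the disclosure's actual encoding.

```python
# Hypothetical preset mappings: identification information -> target object,
# and identification information -> virtual effect data.
ID_TO_OBJECT = {"exhibit-42": "exhibit-photo", "car-07": "car-poster"}
ID_TO_EFFECT = {"exhibit-42": "sticker:calendar", "car-07": "animation:tour"}

def parse_identification_code(code_payload):
    """Step S111: extract the target identification information.

    Here the payload is simply normalized; a real code would be decoded.
    """
    return code_payload.strip().lower()

def resolve(code_payload):
    """Steps S112-S113: query both mappings with the parsed information."""
    info = parse_identification_code(code_payload)
    return ID_TO_OBJECT[info], ID_TO_EFFECT[info]

obj, effect = resolve("Exhibit-42 ")
```

Keeping the two mappings separate, as the steps do, lets several identification codes share one target object while carrying different virtual effect data.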
In some embodiments, the augmented reality environment may include at least one interactive interface in an application platform, application program, or applet running on the electronic device for presenting augmented reality effects.
In some embodiments, the first target object may comprise a two-dimensional object and/or a three-dimensional object associated with the first target identifier.
In some embodiments, the two-dimensional object may include at least one of: photographs of exhibits, portraits of persons, automotive posters; the three-dimensional object may include at least one of: exhibits, people, buildings, vehicles in real scenes.
In the embodiment of the present disclosure, in response to a scanning operation performed on a first target identification code in an augmented reality environment, a first target object and first virtual effect data are determined based on the first target identification code; and in response to identifying the first target object in the augmented reality environment, a first augmented reality effect corresponding to the first target object is displayed in the augmented reality environment based on the first virtual effect data. Therefore, on the one hand, the first target identification code can be scanned within the augmented reality environment without relying on a third-party identification-code scanner, so that augmented reality effect display can be applied more widely; on the other hand, the first target object can be identified directly after the first target identification code is scanned, and the first augmented reality effect is displayed based on the first virtual effect data, so that the display efficiency of the augmented reality effect can be improved and the user's viewing experience is enhanced.
The embodiment of the present disclosure provides a display method, and fig. 2 is a schematic diagram illustrating an implementation flow of the display method provided by the embodiment of the present disclosure, and as shown in fig. 2, the method includes:
step S201, in response to a scanning operation performed on a first target identification code in an augmented reality environment, determining a first target object and first virtual effect data based on the first target identification code.
Step S202, in response to identifying the first target object in the augmented reality environment, displaying a first augmented reality effect corresponding to the first target object in the augmented reality environment based on the first virtual effect data.
Here, the above steps S201 to S202 correspond to the above steps S101 to S102; for implementation details, reference may be made to the specific embodiments of steps S101 to S102 described above.
Step S203, in a case that the first augmented reality effect display is finished, in response to a scanning operation performed on a second target identification code in the augmented reality environment, determining a second target object and second virtual effect data based on the second target identification code.
Here, the first augmented reality effect may be presented in the form of an augmented reality dynamic graph or an augmented reality video, which is not limited in this disclosure.
The user may continue to scan the second target identification code in the augmented reality environment after the presentation of the first augmented reality effect ends. The second target identification code may be the same as or different from the first target identification code, which is not limited herein.
The second target object may be any suitable object associated with the second target identification code, and may be a two-dimensional image, such as a photograph of an exhibit, a figure of a person, a car poster, or the like, or a three-dimensional object, such as an exhibit, a person, a building, a car, or the like in a real scene, without limitation. The second virtual effect data may be virtual special effect data, such as a virtual sticker, a virtual animation, a virtual article, etc., for displaying an augmented reality effect corresponding to the second target object in the augmented reality environment.
In some embodiments, the second target object and the second virtual effect data may be carried directly in the second target identification code, and obtained by parsing the second target identification code. In other embodiments, a mapping relationship among identification codes, objects, and virtual effect data may be preset, and the second target object and the second virtual effect data corresponding to the second target identification code may be determined by querying the mapping relationship.
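As an illustration, the two resolution strategies described above can be sketched as follows. This is a hypothetical sketch in Python; the JSON payload schema, the identifier names, and the mapping contents are all assumptions for illustration and are not fixed by the present disclosure.

```python
import json

# Assumed preset mappings: identification info -> object, identification info -> effect data.
ID_TO_OBJECT = {"exhibit-01": "exhibit_photo.png"}
ID_TO_EFFECT = {"exhibit-01": {"type": "virtual_animation", "asset": "glow.anim"}}

def resolve(code_payload: str):
    """Return (target_object, virtual_effect_data) for a scanned code payload.

    Strategy 1: the identification code directly carries the object and effect data.
    Strategy 2: the code carries only an id that is looked up in the preset mappings.
    """
    data = json.loads(code_payload)
    if "object" in data and "effect" in data:   # strategy 1: self-contained code
        return data["object"], data["effect"]
    target_id = data["id"]                      # strategy 2: mapping lookup
    return ID_TO_OBJECT[target_id], ID_TO_EFFECT[target_id]
```

For example, `resolve('{"id": "exhibit-01"}')` returns the bound image and effect data under strategy 2, while a payload that embeds both fields is resolved without any lookup.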
Step S204, in response to identifying the second target object in the augmented reality environment, displaying a second augmented reality effect corresponding to the second target object in the augmented reality environment based on the second virtual effect data.
Here, the step S204 corresponds to the step S202, and when implemented, reference may be made to a specific embodiment of the step S202.
In the embodiment of the present disclosure, in a case where the presentation of the first augmented reality effect has ended, in response to a scanning operation performed on a second target identification code in the augmented reality environment, a second target object and second virtual effect data are determined based on the second target identification code; and in response to the second target object being identified in the augmented reality environment, a second augmented reality effect corresponding to the second target object is presented in the augmented reality environment based on the second virtual effect data. In this way, after viewing the first augmented reality effect, the user can directly scan the second target identification code in the same augmented reality environment and, by identifying the second target object, view the second augmented reality effect in that environment. This simplifies the interaction when viewing multiple augmented reality effects and improves their display efficiency, thereby further improving the viewing experience of the user.
The embodiment of the present disclosure provides a display method, and fig. 3 is a schematic diagram illustrating an implementation flow of the display method provided by the embodiment of the present disclosure, and as shown in fig. 3, the method includes:
Step S301, in response to a start operation on the augmented reality environment, entering the augmented reality environment.
Here, the start operation on the augmented reality environment may be any suitable operation that triggers the electronic device to display an interactive interface for presenting augmented reality effects, including but not limited to starting an applet for presenting augmented reality effects, opening an entry link of the augmented reality environment in a browser, and the like. In implementation, the user may start and enter the augmented reality environment on the electronic device with any suitable start operation according to actual conditions, which is not limited herein.
Step S302, in response to a scanning operation performed on a first target identification code in an augmented reality environment, determining a first target object and first virtual effect data based on the first target identification code.
Step S303, in response to identifying the first target object in the augmented reality environment, displaying a first augmented reality effect corresponding to the first target object in the augmented reality environment based on the first virtual effect data.
Here, the steps S302 to S303 correspond to the steps S101 to S102; for specific implementations, reference may be made to the embodiments of steps S101 to S102.
In some embodiments, the step S301 may include:
step S311, entering the augmented reality environment in response to an access operation to the entry address of the augmented reality environment.
Here, the entry address of the augmented reality environment may include, but is not limited to, one or more of an entry button in an application, an applet in an application platform, an entry link, and the like. For example, the augmented reality environment may be entered by clicking an entry button of the augmented reality environment in an application, by clicking an applet of the augmented reality environment in an application platform, or by opening an entry link of the augmented reality environment in a browser.
In the embodiment of the present disclosure, the user enters the augmented reality environment in response to a start operation on the augmented reality environment before scanning the first target identification code. Therefore, the first target object can be identified directly after the first target identification code is scanned, reducing the time spent waiting for a jump into the augmented reality environment after scanning, which improves the display efficiency of the augmented reality effect and the viewing experience of the user.
The embodiment of the present disclosure provides a display method, and fig. 4 is a schematic diagram illustrating an implementation flow of the display method provided by the embodiment of the present disclosure, and as shown in fig. 4, the method includes:
Step S401, in response to a start operation on the augmented reality environment, entering the augmented reality environment.
Here, the step S401 corresponds to the step S301, and in implementation, reference may be made to a specific embodiment of the step S301.
Step S402, in response to entering the augmented reality environment, starting an image capture device of the electronic device in the augmented reality environment.
In some embodiments, the image capturing device may be a camera installed at any suitable position on the electronic device, and may be a front-facing camera, a rear-facing camera, a built-in camera, or an external camera, which is not limited herein. In implementation, any suitable instruction may be executed to start the image capturing device of the electronic device after the electronic device enters the augmented reality environment.
Step S403, in response to a scanning operation performed on a first target identification code by an image capture device of the electronic device in an augmented reality environment, determining a first target object and first virtual effect data based on the first target identification code.
Step S404, in response to identifying the first target object in the augmented reality environment, displaying a first augmented reality effect corresponding to the first target object in the augmented reality environment based on the first virtual effect data.
Here, the steps S403 to S404 correspond to the steps S101 to S102, and when implemented, reference may be made to specific embodiments of the steps S101 to S102.
In an embodiment of the present disclosure, in response to entering an augmented reality environment, an image capture device of the electronic device is started in the augmented reality environment, and in response to a scanning operation performed on a first target identification code by the image capture device in the augmented reality environment, a first target object and first virtual effect data are determined based on the first target identification code. In this way, the image capture device can be started automatically after entering the augmented reality environment and used to scan the first target identification code, which further simplifies the user's operations, reduces the user's waiting time, and thereby further improves the user experience.
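The auto-start behaviour of steps S401 to S402 can be sketched as follows. The `Camera` and `AREnvironment` classes are hypothetical stand-ins invented for illustration; a real web-based implementation would call the platform's camera API (for example `getUserMedia` inside a WebView) rather than this toy interface.

```python
class Camera:
    """Stand-in for a platform camera (front, rear, built-in, or external)."""
    def __init__(self):
        self.running = False

    def start(self):
        self.running = True

class AREnvironment:
    """On entry, the environment starts image capture by itself, so the user
    never has to trigger the scanner manually (steps S401-S402 above)."""
    def __init__(self, camera: Camera):
        self.camera = camera
        self.entered = False

    def enter(self):
        self.entered = True
        self.camera.start()   # automatic start: no extra user operation needed

cam = Camera()
env = AREnvironment(cam)
env.enter()
```

After `enter()` returns, the camera is already capturing, so the scanning operation of step S403 can begin immediately.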
The embodiment of the present disclosure provides a display method, and fig. 5 is a schematic view illustrating an implementation flow of the display method provided by the embodiment of the present disclosure, and as shown in fig. 5, the method includes:
Step S501, in response to a start operation on the augmented reality environment, entering the augmented reality environment.
Step S502, in response to entering the augmented reality environment, starting the image capture device in the augmented reality environment.
Step S503, in response to a scanning operation performed on a first target identification code by an image capture device of the electronic device in an augmented reality environment, determining a first target object and first virtual effect data based on the first target identification code.
Here, the steps S501 to S503 correspond to the steps S401 to S403, and when implemented, reference may be made to specific embodiments of the steps S401 to S403.
Step S504, acquiring an image to be identified in real time by using the image acquisition device in the augmented reality environment.
Here, the image to be recognized may be an image captured in a real scene by an image capturing apparatus.
In some embodiments, after determining the first target object, the electronic device may prompt the user for the first target object to be recognized by displaying a prompt text on the display screen or emitting a prompt voice through the speaker. The user can aim the image acquisition device at the first target object according to the prompt text or the prompt voice, and the electronic equipment can acquire the image of the area where the first target object is located in the real scene in real time by using the image acquisition device in the augmented reality environment and take the image as the image to be recognized.
Step S505, identifying a first target object in the image to be identified.
Here, any suitable target recognition algorithm may be used to recognize the first target object in the image to be recognized, such as, but not limited to, a key point detection algorithm, a sliding window algorithm, a candidate region algorithm, and the like.
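For illustration, the sliding-window approach named above can be sketched as a naive exact-match search over a small grid. This toy only shows the search pattern; production systems would use keypoint detectors or learned models over real image data, and the grid representation here is an assumption for the sketch.

```python
def find_target(frame, template):
    """Naive sliding-window search.

    Returns the (row, col) of the top-left corner where `template` exactly
    matches a region of `frame`, or None if the target is not present.
    """
    fh, fw = len(frame), len(frame[0])
    th, tw = len(template), len(template[0])
    for r in range(fh - th + 1):
        for c in range(fw - tw + 1):
            if all(frame[r + i][c + j] == template[i][j]
                   for i in range(th) for j in range(tw)):
                return (r, c)
    return None  # target object not identified in this frame
```

In the method above, each image to be identified captured in step S504 would be searched in this manner, and a hit triggers the display of the bound augmented reality effect.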
Step S506, in response to identifying the first target object in the image to be identified, displaying a first augmented reality effect corresponding to the first target object in the augmented reality environment based on the first virtual effect data.
In the embodiment of the present disclosure, after the first target identification code is scanned, in an augmented reality environment, an image to be recognized is collected in real time by using an image collection device, a first target object in the image to be recognized is recognized, and in response to the recognition of the first target object in the image to be recognized, a first augmented reality effect corresponding to the first target object is displayed in the augmented reality environment based on the first virtual effect data. Therefore, the first target object in the real scene can be directly identified by the image acquisition device after the first target identification code is scanned, so that the flexibility of identifying the first target object can be improved, and the watching experience of a user is further improved.
An exemplary application of the embodiments of the present disclosure in a practical application scenario will be described below. The description will be given by taking the target identification code as the target two-dimensional code, the target object as the target image, and the AR effect as the AR effect based on the web page as an example.
With the widespread adoption of two-dimensional code technology and the development of webpage-based AR technology, webpage-based AR effect display combining two-dimensional codes with image recognition has attracted attention for its light weight and interactivity, and has been applied in various fields.
In the related art, the flow of the AR effect display scheme includes the following steps:
step S601, a user scans the two-dimensional code through a browser running on the electronic equipment, an application platform or a third-party two-dimensional code scanner in the applet;
step S602, the electronic equipment jumps to an AR environment according to the link identified from the two-dimensional code, and displays a specific AR effect under the condition that a specific image is identified;
step S603, the user exits the AR environment after viewing the AR effect, and repeats the above steps S601 and S602 to view the next AR effect.
In the AR effect display scheme of the related art, two-dimensional code scanning and image recognition are implemented in separate, independent stages. In a display scenario with multiple AR effects, the scanning, jumping, and viewing operations must be repeated, making the interaction cumbersome and degrading the user experience. In addition, such schemes mostly depend on a third-party application or a two-dimensional code scanner in an application platform, which adds extra jump interactions and limits the application range of the AR effect.
The embodiment of the present disclosure provides a seamless ("no-perception") AR effect display method based on two-dimensional codes and image recognition. A user can enter an AR environment through various entry addresses to view AR effects, where the entry addresses may include, but are not limited to, addresses of WebView environments provided by official accounts on social platforms, applets, browsers, or other applications. After the electronic device enters the AR environment, the camera can be opened automatically to perform two-dimensional code scanning and image recognition, and the AR effect bound to the two-dimensional code is displayed. The user can thus switch seamlessly among the display scenes of multiple AR effects within one AR environment. For example, at an exhibition where an exhibitor needs to display multiple products with AR effects and the display positions of the products are relatively concentrated, a user adopting the AR effect display method provided by the embodiment of the present disclosure can experience multiple AR effects seamlessly, without repeating the scan-jump-view process, which greatly improves the viewing experience of the user.
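The seamless multi-effect flow can be summarised as a single loop that never leaves the AR session. The sketch below is illustrative only: `resolve`, `recognize`, and `show` are injected placeholders for the code-parsing, image-recognition, and rendering steps, not APIs defined by the disclosure.

```python
def presentation_loop(scanned_payloads, resolve, recognize, show):
    """Stay in one AR environment: after each effect ends, go straight back
    to scanning -- no exit, re-scan, and page-jump between effects."""
    shown = []
    for payload in scanned_payloads:      # each payload = one scanned QR code
        target, effect = resolve(payload)
        if recognize(target):             # camera keeps running between effects
            show(effect)
            shown.append(effect)
    return shown

# Toy usage: two payloads bound to two effects play in one session.
effects = {"a": "fireworks", "b": "sticker"}
displayed = []
result = presentation_loop(
    ["a", "b"],
    resolve=lambda p: (p.upper(), effects[p]),  # stand-in for parsing + lookup
    recognize=lambda target: True,              # stand-in for image recognition
    show=displayed.append,                      # stand-in for rendering
)
```

One call plays both effects without re-entering the environment, which is the contrast with the related-art flow of steps S601 to S603.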
In some embodiments, as shown in fig. 6A, in a viewing scene of a single AR effect, the AR effect display method provided by the embodiments of the present disclosure may include the following steps S611 to S616:
Step S611, acquiring a produced first AR effect, and generating a first target two-dimensional code corresponding to the first AR effect.
Step S612, determining a first target image corresponding to the first target two-dimensional code, and binding the first target two-dimensional code with the first target image; here, the first target two-dimensional code may be bound with the first target image by establishing a mapping relationship between the target identification information in the first target two-dimensional code and the first target image.
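The authoring-time binding of steps S611 to S612 can be sketched with an in-memory registry. The `ar://` payload scheme and the registry layout are invented for illustration; a real system would persist the mapping server-side and encode only the payload into the printed QR code.

```python
REGISTRY = {}

def bind_code(target_id, target_image, effect_data):
    """Bind a two-dimensional code's identification info to a target image
    and its AR effect, and return the payload to encode into the QR code."""
    REGISTRY[target_id] = {"image": target_image, "effect": effect_data}
    return f"ar://scan?id={target_id}"

def lookup(payload):
    """Resolve a scanned payload back to (target_image, effect_data)."""
    target_id = payload.split("id=", 1)[1]
    entry = REGISTRY[target_id]
    return entry["image"], entry["effect"]

payload = bind_code("expo-1", "exhibit.png", {"type": "animation"})
```

Scanning the code in step S615 then only needs `lookup(payload)` to recover the bound target image and effect.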
In step S613, an entry link of the AR environment is generated.
Step S614, accessing the entry link through a browser containing a webpage view (WebView) control, entering the AR environment, and opening the camera.
Step S615, in the current AR environment, scanning the first target two-dimensional code, parsing the target identification information in the first target two-dimensional code, and determining the first target image according to the mapping relationship between the target identification information and the first target image.
Step S616, under the current AR environment, identifying a first target image in the real scene, and displaying a first AR effect corresponding to the first target two-dimensional code under the condition that the first target image is identified.
After the user finishes watching the first AR effect, the browser can be closed.
In some embodiments, as shown in fig. 6B, in a viewing scene of a multi-AR effect, an AR effect presentation method provided by an embodiment of the present disclosure may include the following steps S621 to S628:
Step S621, acquiring the produced first AR effect and second AR effect, and generating a first target two-dimensional code corresponding to the first AR effect and a second target two-dimensional code corresponding to the second AR effect.
Step S622, determining a first target image corresponding to the first target two-dimensional code and a second target image corresponding to the second target two-dimensional code, binding the first target two-dimensional code with the first target image, and binding the second target two-dimensional code with the second target image; here, the first target two-dimensional code may be bound with the first target image by establishing a mapping relationship between the target identification information in the first target two-dimensional code and the first target image, and the second target two-dimensional code may be bound with the second target image by establishing a mapping relationship between the target identification information in the second target two-dimensional code and the second target image.
In step S623, an entry link of the AR environment is generated.
Step S624, accessing the entry link through the browser containing the WebView control, entering the AR environment, and opening the camera.
Step S625, in the current AR environment, scanning the first target two-dimensional code, parsing the target identification information in the first target two-dimensional code, and determining the first target image according to the mapping relationship between the target identification information and the first target image.
Step S626, in the current AR environment, identifying the first target image in the real scene, and displaying the first AR effect corresponding to the first target two-dimensional code in a case that the first target image is identified.
Step S627, after the display of the first AR effect ends, in the current AR environment, scanning the second target two-dimensional code, parsing the target identification information in the second target two-dimensional code, and determining the second target image according to the mapping relationship between the target identification information and the second target image.
Step S628, in the current AR environment, identifying the second target image in the real scene, and displaying the second AR effect corresponding to the second target two-dimensional code in a case that the second target image is identified.
After the user finishes watching the second AR effect, the browser can be closed, or the next target two-dimensional code can be scanned to view the next AR effect.
In addition, in implementation, the first target two-dimensional code and the second target two-dimensional code may correspond to the first target identification code and the second target identification code, respectively, and the first target image and the second target image may correspond to the first target object and the second target object, respectively.
Based on the foregoing embodiments, an embodiment of the present disclosure provides a display device, which includes units and the modules included in those units, and can be implemented by a processor in an electronic device or, of course, by specific logic circuits. In implementation, the processor may be a Central Processing Unit (CPU), a Microprocessor (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
Fig. 7 is a schematic view illustrating a structure of a display device according to an embodiment of the disclosure, and as shown in fig. 7, the display device 700 includes: a first determination module 710 and a first display module 720, wherein:
a first determining module 710, configured to determine, in response to a scanning operation performed on a first target identification code in an augmented reality environment, a first target object and first virtual effect data based on the first target identification code;
a first display module 720, configured to, in response to identifying the first target object in the augmented reality environment, display a first augmented reality effect corresponding to the first target object in the augmented reality environment based on the first virtual effect data.
In some embodiments, the first determining module is further configured to: parse the first target identification code to obtain target identification information; determine a first target object corresponding to the target identification information based on a preset mapping relationship between identification information and objects; and determine first virtual effect data corresponding to the target identification information based on a preset mapping relationship between identification information and virtual effect data.
In some embodiments, the apparatus further comprises: a second determining module, configured to determine, in a case that the first augmented reality effect display is finished, a second target object and second virtual effect data based on a second target identification code in response to a scanning operation performed on the second target identification code in the augmented reality environment; a second display module to, in response to identifying the second target object in the augmented reality environment, present a second augmented reality effect corresponding to the second target object in the augmented reality environment based on the second virtual effect data.
In some embodiments, the apparatus further comprises: a first initiation module to enter the augmented reality environment in response to an initiation operation of the augmented reality environment.
In some embodiments, the first determining module is further configured to determine, in response to a scanning operation of a first target identification code with an image acquisition apparatus of the electronic device in the augmented reality environment, a first target object and first virtual effect data based on the first target identification code; the device further comprises: and the second starting module is used for responding to the entering of the augmented reality environment, and starting the image acquisition device in the augmented reality environment.
In some embodiments, the apparatus further comprises: the acquisition module is used for acquiring an image to be identified in real time by using the image acquisition device in the augmented reality environment; the identification module is used for identifying a first target object in the image to be identified; the display module is further configured to: in response to identifying the first target object in the image to be identified, a first augmented reality effect corresponding to the first target object is presented in the augmented reality environment based on the first virtual effect data.
In some embodiments, the first initiating module is further configured to enter the augmented reality environment in response to an access operation to an entry address of the augmented reality environment.
In some embodiments, the augmented reality environment may include at least one interactive interface in an application platform, application program, or applet running on the electronic device for presenting augmented reality effects.
In some embodiments, the first target object may comprise a two-dimensional object and/or a three-dimensional object associated with the first target identifier.
In some embodiments, the two-dimensional object may include at least one of: photographs of exhibits, portraits of persons, automotive posters; the three-dimensional object may include at least one of: exhibits, people, buildings, vehicles in real scenes.
The above description of the apparatus embodiments is similar to the above description of the method embodiments and has similar beneficial effects. For technical details not disclosed in the apparatus embodiments of the present disclosure, reference is made to the description of the method embodiments of the present disclosure.
The disclosure relates to the field of augmented reality, and aims to detect or identify relevant features, states and attributes of a target object by means of various visual correlation algorithms by acquiring image information of the target object in a real environment, so as to obtain an AR effect combining virtual and reality matched with specific applications. For example, the target object may relate to a face, a limb, a gesture, an action, etc. associated with a human body, or a marker, a marker associated with an object, or a sand table, a display area, a display item, etc. associated with a venue or a place. The vision-related algorithms may involve visual localization, SLAM, three-dimensional reconstruction, image registration, background segmentation, key point extraction and tracking of objects, pose or depth detection of objects, and the like. The specific application can not only relate to interactive scenes such as navigation, explanation, reconstruction, virtual effect superposition display and the like related to real scenes or articles, but also relate to special effect treatment related to people, such as interactive scenes such as makeup beautification, limb beautification, special effect display, virtual model display and the like. The detection or identification processing of the relevant characteristics, states and attributes of the target object can be realized through the convolutional neural network. The convolutional neural network is a network model obtained by performing model training based on a deep learning framework.
It should be noted that, in the embodiment of the present disclosure, if the display method is implemented in the form of a software functional module and sold or used as a standalone product, the display method may also be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing an electronic device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present disclosure are not limited to any specific combination of hardware and software.
Correspondingly, the embodiment of the present disclosure provides an electronic device, which includes a display screen; a memory for storing an executable computer program; and the processor is used for combining the display screen to realize the steps in the display method when the processor executes the executable computer program stored in the memory.
Correspondingly, the disclosed embodiments provide a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps in the above-described method.
Here, it should be noted that: the above description of the storage medium and device embodiments is similar to the description of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and apparatus of the present disclosure, reference is made to the description of the embodiments of the method of the present disclosure.
It should be noted that fig. 8 is a schematic diagram of a hardware entity of an electronic device in an embodiment of the present disclosure, and as shown in fig. 8, the hardware entity of the electronic device 800 includes: a display 801, a memory 802 and a processor 803, wherein the display 801, the memory 802 and the processor 803 are connected by a communication bus 804; a memory 802 for storing an executable computer program; the processor 803 is configured to implement the method provided by the embodiment of the present disclosure, for example, the display method provided by the embodiment of the present disclosure, in conjunction with the display screen 801 when executing the executable computer program stored in the memory 802.
The Memory 802 may be configured to store instructions and applications executable by the processor 803, and may also cache data (e.g., image data, audio data, voice communication data, and video communication data) to be processed or already processed by the processor 803 and modules in the electronic device 800, which may be implemented by a FLASH Memory (FLASH) or a Random Access Memory (RAM).
An embodiment of the present disclosure provides a computer-readable storage medium having stored thereon a computer program which, when executed by the processor 803, implements the method provided by the embodiments of the present disclosure, for example, the display method provided by the embodiments of the present disclosure.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in various embodiments of the present disclosure, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present disclosure. The above-mentioned serial numbers of the embodiments of the present disclosure are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present disclosure, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division into units is only a logical functional division, and there may be other divisions in actual implementation, such as: multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the couplings, direct couplings, or communication connections between the components shown or discussed may be realized through some interfaces, and the indirect couplings or communication connections between devices or units may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present disclosure may all be integrated into one processing unit, or each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that all or part of the steps for implementing the above method embodiments may be completed by hardware related to program instructions. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disk.
Alternatively, if the integrated unit of the present disclosure is implemented in the form of a software functional module and sold or used as a separate product, it may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the present disclosure may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing an electronic device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes a removable storage device, a ROM, a magnetic disk, an optical disk, or other media that can store program code.
The above description covers only embodiments of the present disclosure, but the scope of protection of the present disclosure is not limited thereto. Any changes or substitutions that a person skilled in the art could readily conceive of within the technical scope of the present disclosure shall be covered by the scope of protection of the present disclosure.

Claims (13)

1. A display method, comprising:
in response to a scanning operation performed on a first target identification code in an augmented reality environment, determining a first target object and first virtual effect data based on the first target identification code;
in response to identifying the first target object in the augmented reality environment, presenting, based on the first virtual effect data, a first augmented reality effect corresponding to the first target object in the augmented reality environment.
2. The method of claim 1, wherein determining the first target object and the first virtual effect data based on the first target identification code comprises:
analyzing the first target identification code to obtain target identification information;
determining a first target object corresponding to the target identification information based on a mapping relation between preset identification information and the target object;
and determining first virtual effect data corresponding to the target identification information based on a mapping relation between preset identification information and virtual effect data.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
under the condition that the first augmented reality effect display is finished, in response to a scanning operation of a second target identification code in the augmented reality environment, determining a second target object and second virtual effect data based on the second target identification code;
in response to identifying the second target object in the augmented reality environment, presenting, based on the second virtual effect data, a second augmented reality effect corresponding to the second target object in the augmented reality environment.
4. The method according to any one of claims 1 to 3, further comprising:
entering the augmented reality environment in response to a start operation of the augmented reality environment.
5. The method of claim 4, wherein the scanning operation performed on the first target identification code in the augmented reality environment comprises: scanning the first target identification code by using an image acquisition device of the electronic device in the augmented reality environment;
and the method further comprises:
in response to entering the augmented reality environment, activating the image acquisition device in the augmented reality environment.
6. The method of claim 5, further comprising:
acquiring an image to be identified in real time by using the image acquisition device in the augmented reality environment;
identifying a first target object in the image to be identified;
the presenting, in response to identifying the first target object in the augmented reality environment, a first augmented reality effect corresponding to the first target object in the augmented reality environment based on the first virtual effect data, comprising:
in response to identifying the first target object in the image to be identified, a first augmented reality effect corresponding to the first target object is presented in the augmented reality environment based on the first virtual effect data.
7. The method of any of claims 4 to 6, wherein entering the augmented reality environment in response to the initiating operation of the augmented reality environment comprises:
entering the augmented reality environment in response to an access operation to an entry address of the augmented reality environment.
8. The method of any of claims 1-7, wherein the augmented reality environment comprises at least one interactive interface of an application platform, application program, or applet running on an electronic device for presenting augmented reality effects.
9. The method according to any one of claims 1 to 8, wherein the first target object comprises a two-dimensional object and/or a three-dimensional object associated with the first target identification code.
10. The method of claim 9, wherein:
the two-dimensional object includes at least one of: photographs of exhibits, portraits of persons, automotive posters;
the three-dimensional object includes at least one of: exhibits, people, buildings, vehicles in real scenes.
11. A display device, comprising:
a first determination module, configured to determine, in response to a scan operation performed on a first target identification code in an augmented reality environment, a first target object and first virtual effect data based on the first target identification code;
a first display module, configured to present, in response to identifying the first target object in the augmented reality environment, a first augmented reality effect corresponding to the first target object in the augmented reality environment based on the first virtual effect data.
12. An electronic device, comprising:
a display screen; a memory for storing an executable computer program;
a processor for implementing the method of any one of claims 1 to 10 in conjunction with the display screen when executing an executable computer program stored in the memory.
13. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, causes the processor to carry out the method of any one of claims 1 to 10.
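The method of claims 1, 2 and 6 — parsing the scanned target identification code to obtain identification information, resolving the first target object and first virtual effect data through preset mapping relations, and then presenting the effect once the object is recognized in a captured image — can be sketched roughly as below. All names, the mapping tables and the recognition stub are illustrative assumptions, not part of the claims.

```python
# Illustrative sketch of the display method of claims 1, 2 and 6.
# The mapping tables and the recognition/render stand-ins are assumptions.

# Preset mapping relations (claim 2): identification information -> object / effect data.
ID_TO_OBJECT = {"exhibit-001": "bronze vase"}
ID_TO_EFFECT = {"exhibit-001": {"animation": "rotate", "overlay": "history card"}}


def parse_identification_code(code: str) -> str:
    """Analyze the scanned target identification code to obtain identification info."""
    return code.strip()


def determine_object_and_effect(code: str):
    """Claim 2: resolve the target object and virtual effect data via preset mappings."""
    info = parse_identification_code(code)
    return ID_TO_OBJECT[info], ID_TO_EFFECT[info]


def recognize(frame: str, target: str) -> bool:
    """Stand-in for the image-recognition step of claim 6 (matching on text here)."""
    return target in frame


def display_flow(scanned_code: str, frames):
    """Claims 1 and 6: scan -> determine -> recognize in captured frames -> present."""
    target, effect = determine_object_and_effect(scanned_code)
    for frame in frames:  # images acquired in real time by the image acquisition device
        if recognize(frame, target):  # first target object identified
            return f"render {effect['animation']} effect on {target}"
    return None  # object never recognized; no effect presented


print(display_flow("exhibit-001", ["empty hall", "a bronze vase on a plinth"]))
# -> render rotate effect on bronze vase
```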
CN202111134428.1A 2021-09-27 2021-09-27 Display method, device, equipment and computer readable storage medium Pending CN113867528A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111134428.1A CN113867528A (en) 2021-09-27 2021-09-27 Display method, device, equipment and computer readable storage medium
PCT/CN2022/120170 WO2023045964A1 (en) 2021-09-27 2022-09-21 Display method and apparatus, device, computer readable storage medium, computer program product, and computer program

Publications (1)

Publication Number Publication Date
CN113867528A (en) 2021-12-31

Family

ID=78991006

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111134428.1A Pending CN113867528A (en) 2021-09-27 2021-09-27 Display method, device, equipment and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN113867528A (en)
WO (1) WO2023045964A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023045964A1 (en) * 2021-09-27 2023-03-30 上海商汤智能科技有限公司 Display method and apparatus, device, computer readable storage medium, computer program product, and computer program
CN117131888A (en) * 2023-04-10 2023-11-28 荣耀终端有限公司 Method, electronic equipment and system for automatically scanning virtual space two-dimensional code

Citations (8)

Publication number Priority date Publication date Assignee Title
CN102902710A (en) * 2012-08-08 2013-01-30 成都理想境界科技有限公司 Bar code-based augmented reality method and system, and mobile terminal
CN108269307A (en) * 2018-01-15 2018-07-10 歌尔科技有限公司 A kind of augmented reality exchange method and equipment
CN109360275A (en) * 2018-09-30 2019-02-19 北京观动科技有限公司 A kind of methods of exhibiting of article, mobile terminal and storage medium
CN110716645A (en) * 2019-10-15 2020-01-21 北京市商汤科技开发有限公司 Augmented reality data presentation method and device, electronic equipment and storage medium
CN113127126A (en) * 2021-04-30 2021-07-16 上海哔哩哔哩科技有限公司 Object display method and device
CN113326709A (en) * 2021-06-17 2021-08-31 北京市商汤科技开发有限公司 Display method, device, equipment and computer readable storage medium
CN113359985A (en) * 2021-06-03 2021-09-07 北京市商汤科技开发有限公司 Data display method and device, computer equipment and storage medium
CN113409474A (en) * 2021-07-09 2021-09-17 上海哔哩哔哩科技有限公司 Augmented reality-based object display method and device

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US11301681B2 (en) * 2019-12-26 2022-04-12 Paypal, Inc. Securing virtual objects tracked in an augmented reality experience between multiple devices
CN111626183B (en) * 2020-05-25 2024-07-16 深圳市商汤科技有限公司 Target object display method and device, electronic equipment and storage medium
CN111918114A (en) * 2020-07-31 2020-11-10 北京市商汤科技开发有限公司 Image display method, image display device, display equipment and computer readable storage medium
CN112148197A (en) * 2020-09-23 2020-12-29 北京市商汤科技开发有限公司 Augmented reality AR interaction method and device, electronic equipment and storage medium
CN113867528A (en) * 2021-09-27 2021-12-31 北京市商汤科技开发有限公司 Display method, device, equipment and computer readable storage medium



Also Published As

Publication number Publication date
WO2023045964A1 (en) 2023-03-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40057510
Country of ref document: HK