CN114942713A - Augmented reality-based display method, apparatus, device, storage medium, and program


Info

Publication number
CN114942713A
CN114942713A (application CN202210346731.6A)
Authority
CN
China
Prior art keywords
rendering
augmented reality
virtual object
article
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202210346731.6A
Other languages
Chinese (zh)
Inventor
田真
李斌
张天慧
欧华富
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Datianmian White Sugar Technology Co ltd
Original Assignee
Beijing Datianmian White Sugar Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Datianmian White Sugar Technology Co ltd
Priority to CN202210346731.6A
Publication of CN114942713A
Legal status: Withdrawn

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/24Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the present disclosure disclose an augmented reality-based display method, apparatus, device, storage medium, and program product. The method includes: acquiring a real scene image through an augmented reality program, and, in response to the acquired real scene image including a marker corresponding to an entity article, rendering a first virtual object effect corresponding to the entity article in the real scene image through a first target rendering engine to obtain and display a first augmented reality image; acquiring a first identifier corresponding to the entity article through a wireless communication component, and setting the activation state of the entity article to an activated state based on the first identifier; and, in response to the activation state of the entity article being set to the activated state, rendering a second virtual object effect corresponding to the entity article in the real scene image through a second target rendering engine to obtain and display a second augmented reality image.

Description

Augmented reality-based display method, apparatus, device, storage medium, and program
Technical Field
The present disclosure relates to, but not limited to, the field of Augmented Reality (AR) technologies, and in particular, to a display method, apparatus, device, storage medium, and program product based on Augmented Reality.
Background
With social progress, people's material needs are largely met, and their demand for a richer spiritual life keeps growing. Activities such as sports, reading, and collecting have become increasingly popular, with collecting in particular gaining momentum. Collecting refers to the act of gathering items of value or interest.
At present, collecting mainly targets real objects, such as physical tickets and vouchers (for example, postcards, tickets, and admission tickets). However, such paper products are difficult to preserve, require manual maintenance, and, when a collection grows large, its valuable content becomes hard to organize. The existing manual way of collecting articles therefore suffers from low collection efficiency and a poor collection experience.
Disclosure of Invention
In view of this, the embodiments of the present disclosure at least provide an augmented reality-based display method, apparatus, device, storage medium, and program product.
The technical scheme of the embodiment of the disclosure is realized as follows:
in one aspect, an embodiment of the present disclosure provides a display method based on augmented reality, where the method includes: acquiring a real scene image through an augmented reality program, and rendering a first virtual object effect corresponding to an entity article in the real scene image through a first target rendering engine in response to the acquired real scene image including a marker corresponding to the entity article to obtain and display a first augmented reality image; acquiring a first identifier corresponding to the entity article through a wireless communication component, and setting the activation state of the entity article to be an activated state based on the first identifier; and in response to the fact that the activation state of the entity article is set to be the activated state, rendering a second virtual object effect corresponding to the entity article in the real scene image through a second target rendering engine to obtain and display a second augmented reality image.
In some embodiments, the first and second target rendering engines are augmented reality engines of a plurality of augmented reality engines included in the augmented reality program that match a system type of a terminal.
In some embodiments, said rendering, by a first object rendering engine, a first virtual object effect corresponding to the physical object in the real scene image comprises: rendering, by a first target rendering engine, a virtual object corresponding to the physical object in the real scene image in a first rendering state; the first rendering state is a state showing a partial feature of the virtual object; the rendering, by a second object rendering engine, a second virtual object effect corresponding to the physical object in the real scene image includes: rendering, by a second target rendering engine, a virtual object corresponding to the physical object in the real scene image in a second rendering state; the second rendering state is a state that shows all features of the virtual object.
In the embodiment of the present disclosure, since the rendering state of the virtual object is determined based on the activation state of the physical object, and the rendering state is used for determining the feature display degree of the virtual object, rich and various augmented reality effects can be provided for the user through the virtual objects in different rendering states.
In some embodiments, the rendering, by the first object rendering engine, the first virtual object effect corresponding to the physical object in the real scene image includes: rendering, by a first target rendering engine, a first virtual object corresponding to the physical object in the real scene image; the rendering, by a second object rendering engine, a second virtual object effect corresponding to the physical object in the real scene image includes: rendering, by a second target rendering engine, a second virtual object corresponding to the physical object in the real scene image; the first virtual object is different from the second virtual object.
In the embodiment of the disclosure, because the model material of the virtual object is determined based on the activation state of the entity article, different model materials can be preset for the entity article through the activation state, and then different virtual objects can be obtained through rendering, so that rich and diverse augmented reality effects are provided for users.
In some embodiments, the method for determining the first target rendering engine includes: obtaining a plurality of augmented reality engines included in the augmented reality program; determining the first target rendering engine among the plurality of augmented reality engines based on a system type of the terminal; the method for determining the second target rendering engine comprises the following steps: obtaining a plurality of augmented reality engines included in the augmented reality program; determining the second target rendering engine among the plurality of augmented reality engines based on a system type of the terminal and a presentation style of the virtual object.
In some embodiments, the setting the activation status of the physical item to an activated status based on the first identifier includes: sending an activation state query request carrying the first identifier to a first server; the activation status inquiry request is used for determining whether the entity item is activated by other users; receiving activation state query feedback sent by the first server; setting the activation status of the physical item to an activated status if the activation status query feedback characterizes that the physical item is activated by a current user.
In the embodiment of the disclosure, by acquiring the first identifier carried in the entity article and verifying the activation state of the entity article with the first server, the authenticity of the activation state can be ensured; meanwhile, in an actual scenario, the user may verify a purchased entity article based on the activation status query feedback sent by the first server.
In some embodiments, the method further comprises: responding to the activated state of the entity article being set to be an activated state, and sending a transfer request carrying the first identifier to a second server; the transfer request is used for transferring the first digital object corresponding to the entity object to the account of the current user.
In some embodiments, the method further comprises: receiving transfer feedback corresponding to the first digitalized object sent by the server; determining the article state of the first digital article based on the transfer feedback corresponding to the first digital article; displaying at least one type of interaction information based on the item state in the process of displaying the second augmented reality image; the type quantity of the interactive information is related to the state of the article.
In the embodiment of the disclosure, the article state of the first digital article is adjusted based on the asset transfer condition corresponding to the first digital article, so that the interactive information with different types and quantities is displayed, the problem of information leakage caused by unsuccessful transfer is avoided, and the safety of the digital article is improved.
In some embodiments, the rendering, by the first object rendering engine, the first virtual object effect corresponding to the physical object in the real scene image includes: rendering a first virtual object effect corresponding to the entity article in the real scene image through a first target rendering engine under the condition that the augmented reality application is started for the first time; under the condition that the augmented reality application is not started for the first time, acquiring account information of a current user; under the condition that the account information indicates that the current user does not own a second digital object, rendering a first virtual object effect corresponding to the entity object in the real scene image through a first target rendering engine; and the second digital article has an association relation with the first digital article corresponding to the entity article.
In the embodiment of the disclosure, because whether the augmented reality application is started for the first time is considered, the augmented reality effect corresponding to the entity article can be directly displayed for a user who uses the program for the first time, while for a user who has used the program before, an owned second digital article can be displayed instead; and, in the case that the account information indicates that the current user does not own a second digital article, the first virtual object effect corresponding to the current entity article is displayed to complete activation of the current entity article. Thereby, different user experiences may be provided for different users.
In some embodiments, the method further comprises: under the condition that the account information represents that the current user has at least one second digital article, acquiring the system type of the terminal; rendering and displaying a target virtual object based on a rendering engine corresponding to a first system type under the condition that the system type of the terminal is the first system type; the target virtual object is a virtual object corresponding to a target second digital item in the at least one second digital item; under the condition that the system type of the terminal is a second system type, displaying each second digital article and the corresponding detail viewing control through an interface; receiving a triggering operation aiming at a target detail viewing control, rendering the target virtual object through a presentation mode of the target virtual object and a rendering engine determined by the second system type, and displaying the target virtual object; the target virtual object is a virtual object corresponding to a target second digital article corresponding to the target detail viewing control.
In some embodiments, the method further comprises: in the process of displaying the virtual object corresponding to the target second digital article, displaying a newly added article control; and responding to the triggering operation aiming at the newly added article control, and rendering a first virtual object effect corresponding to the entity article in the real scene image through a first target rendering engine to obtain and display a first augmented reality image.
In the embodiment of the disclosure, in the process of displaying at least one second digital article owned by the current user, different display flows are set for different system types, so that the augmented reality effect corresponding to the second digital article can be displayed promptly when the system type is the iOS type, and an optimal augmented reality engine is selected based on the presentation mode of the virtual object of the second digital article when the system type is the Android type. This not only improves the augmented reality display effect but also ensures the response rate.
In another aspect, an embodiment of the present disclosure provides a display device based on augmented reality, where the display device includes:
a collection module, configured to collect a real scene image through an augmented reality program and, in response to the collected real scene image including a marker corresponding to an entity article, render a first virtual object effect corresponding to the entity article in the real scene image through a first target rendering engine to obtain and display a first augmented reality image;
an acquisition module, configured to acquire a first identifier corresponding to the entity article through a wireless communication component and set the activation state of the entity article to an activated state based on the first identifier;
and a rendering module, configured to render, in response to the activation state of the entity article being set to the activated state, a second virtual object effect corresponding to the entity article in the real scene image through a second target rendering engine to obtain and display a second augmented reality image.
In yet another aspect, the present disclosure provides a computer device, including a memory and a processor, where the memory stores a computer program executable on the processor, and the processor implements some or all of the steps of the above method when executing the program.
In yet another aspect, the disclosed embodiments provide a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements some or all of the steps of the above-described method.
In yet another aspect, the disclosed embodiments provide a computer program including computer-readable code which, when run on a computer device, causes a processor in the computer device to execute some or all of the steps of the above method.
In yet another aspect, the disclosed embodiments provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program, which when read and executed by a computer, implements some or all of the steps of the above method.
In the embodiment of the disclosure, the augmented reality engine corresponding to each system type is embedded in the application programs of the different system types, and in the process of realizing the augmented reality effect, the target rendering engine matched with the system type of the terminal is selected. This reduces the development difficulty and cost of the augmented reality program: applications for different system platforms can be completed through the same development platform, so that a single development effort is shared across multiple system platforms.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the technical aspects of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 is an optional schematic flow chart of an augmented reality-based display method provided by an embodiment of the present disclosure;
fig. 2 is an optional schematic flow chart of an augmented reality-based display method provided by an embodiment of the present disclosure;
fig. 3 is an optional schematic flow chart of an augmented reality-based display method provided by an embodiment of the present disclosure;
fig. 4 is an alternative flow chart of an augmented reality-based display method provided by an embodiment of the present disclosure;
fig. 5 is an alternative flow chart of an augmented reality-based display method provided by an embodiment of the present disclosure;
fig. 6 is an alternative flow chart of an augmented reality-based display method provided by an embodiment of the present disclosure;
fig. 7 is an alternative flow chart of an augmented reality-based display method provided by an embodiment of the present disclosure;
fig. 8 is a hardware entity diagram of an augmented reality display apparatus according to an embodiment of the present disclosure;
fig. 9 is a hardware entity diagram of a computer device according to an embodiment of the present disclosure.
Detailed Description
To make the objectives, technical solutions, and advantages of the present disclosure clearer, the technical solutions of the present disclosure are further elaborated below with reference to the drawings and embodiments. The described embodiments should not be construed as limiting the present disclosure, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present disclosure.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict. Reference to the terms "first/second/third" merely distinguishes similar objects and does not denote a particular ordering with respect to the objects, it being understood that "first/second/third" may, where permissible, be interchanged in a particular order or sequence so that embodiments of the disclosure described herein can be practiced in other than the order shown or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. The terminology used herein is for the purpose of describing the disclosure only and is not intended to be limiting of the disclosure.
Before further detailing the embodiments of the present disclosure, the terms and expressions involved in the embodiments are explained; the following interpretations apply to these terms and expressions.
1) A digital voucher is an on-chain issued code that uniquely corresponds to an off-chain physical item; it is a mapping of the off-chain physical item onto the chain. In some embodiments, a common digital voucher protocol (ERC-721) may be employed to issue a digital voucher known as a Non-Fungible Token (NFT). Of course, other digital voucher protocols can also be used to issue digital vouchers.
2) Blockchains are generally divided into three types: public blockchains (Public Blockchain), private blockchains (Private Blockchain), and consortium blockchains (Consortium Blockchain). There are also various combinations, such as private chain + consortium chain or consortium chain + public chain. The most decentralized is the public chain, represented by Bitcoin and Ethereum; participants who join a public chain can read the data records on the chain, participate in transactions, and compete for the accounting rights of new blocks. Furthermore, each participant (i.e., node) is free to join and leave the network and perform related operations. A private chain is the opposite: the write permission of the network is controlled by an organization, and data read permissions are specified by that organization. In brief, a private chain can be a weakly centralized system with a strictly limited and small set of participating nodes; this type of blockchain is more suitable for internal use within a particular institution. A consortium chain sits between the public chain and the private chain and can achieve "partial decentralization". Each node in a consortium chain typically corresponds to a physical institution or organization; participants join the network through authorization and form a stakeholder alliance that jointly maintains blockchain operation.
3) Smart contracts. Whether a public chain, a private chain, or a consortium chain, a blockchain may provide smart contract functionality. A smart contract on a blockchain is a contract that can be triggered for execution by a transaction (typically client-initiated) on the blockchain system; a smart contract may be defined in the form of code.
4) Functional applications, referred to as applications for short, implement specified functions through computer programs. An application may be a native (Native) application (APP), that is, a program that must be installed in the operating system to run, such as a game APP, a live-streaming APP, or an instant messaging APP; it may be a web applet, i.e., a program that only needs to be downloaded into a browser environment to run; it may also be an applet that can be embedded into any APP. In general, the computer programs described above may be any form of application, module, or plug-in. In the embodiments of the present application, the relevant application is an augmented reality application used for rendering augmented reality effects.
5) Augmented Reality (AR) is a technology that promotes the integration of real-world information and virtual-world information. Entity information that is otherwise difficult to experience within the spatial range of the real world is simulated on the basis of computing and other technologies, and the virtual information content is superimposed onto the real world for effective application, where it can be perceived by the human senses, thereby providing a sensory experience that goes beyond reality. After the real environment and the virtual object are superimposed, they can exist in the same picture and space at the same time. Augmented reality technology can not only effectively present real-world content but also display virtual information content.
6) Non-Fungible Tokens (NFT). An NFT is an entry on a blockchain, a decentralized digital ledger technology similar to that underlying cryptocurrencies such as Bitcoin. Because an NFT is non-fungible, it can be used to represent a unique thing, such as the original Mona Lisa painting in a museum or the ownership of a piece of land. Although mainstream crypto assets such as Bitcoin (BTC) and Ether (ETH) are also recorded on blockchains, NFTs differ from them in that any one NFT token is neither replaceable nor divisible. Buying an NFT token means acquiring an ownership record and real usage rights to the asset that cannot be erased. Each NFT maps to a unique serial number on a particular blockchain and cannot be tampered with, divided, or substituted for another. These traits make NFTs carriers of digital art: each NFT represents a particular digital artwork, or a single copy thereof issued in limited distribution, and records its tamper-proof on-chain rights.
7) Near Field Communication (NFC) evolved from contactless Radio Frequency Identification (RFID) technology combined with wireless interconnection technology, and provides a very safe and fast communication mode for the electronic products that are increasingly common in daily life. The "near field" in the name refers to the near-field region of an electromagnetic wave, i.e., radio communication in the immediate vicinity of the electromagnetic field.
8) Simultaneous Localization and Mapping (SLAM) computes the camera pose while scanning the three-dimensional structure of the environment. SLAM application scenarios, including AR and robot control, generally require real-time performance, so the scanned environmental structure is usually relatively coarse and is mainly used to assist self-localization. SLAM differs from marker tracking in that no three-dimensional spatial information is known in advance: a three-dimensional structure is first recovered from two-dimensional images (with a certain parallax), and the map is then continuously tracked and expanded.
9) Marker-based augmented reality (Marker-based AR) requires a marker (an AR identification mark, for example a template card, an image, or an identification code drawn to a certain specification and shape). The marker is placed at a position in reality, which is equivalent to determining a plane in the real scene. The marker is then identified through the camera and its pose is estimated (Pose Estimation), so that the position of the marker is determined. A coordinate system with the marker center as its origin is called the template coordinate system (Marker Coordinates), and a mapping relationship between the template coordinate system and the screen coordinate system is established: the transformation from the template coordinate system to the screen coordinate system first rotates and translates points into the camera coordinate system, and then projects them from the camera coordinate system onto the screen coordinate system. Based on this mapping, the virtual object rendered on the screen appears to be attached to the marker.
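As a compact sketch of the chain of transforms just described, with notation assumed here rather than taken from the disclosure, the template-to-screen mapping can be written as:

```latex
% p_m  : homogeneous point in the template (marker) coordinate system
% [R|t]: rotation and translation from the template into the camera coordinate system
% K    : intrinsic projection from the camera coordinate system to screen coordinates
p_{screen} \sim K \, [\, R \mid t \,] \, p_{m}
```

Estimating [R|t] is the pose-estimation step; once it is known, any point of the virtual object defined in the template coordinate system lands at a consistent screen position, which is what makes the object appear attached to the marker.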
An exemplary application of the electronic device provided by the embodiment of the present disclosure is described below, and the electronic device provided by the embodiment of the present disclosure may be implemented as various types of user terminals (hereinafter, referred to as terminals) such as an AR device, a notebook computer, a tablet computer, a desktop computer, a set-top box, a mobile device (for example, a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, and a portable game device), and may also be implemented as a server. The AR equipment is equipment for acquiring a real scene image, rendering a preset virtual object into the real scene image and displaying an obtained augmented reality image. Illustratively, the AR device may be AR glasses, a terminal in which an AR client is installed, or the like. The following embodiments collectively refer to the electronic devices as terminals.
Fig. 1 is a schematic view of an implementation flow of a display method based on augmented reality according to an embodiment of the present disclosure, as shown in fig. 1, the method includes the following steps S101 to S103:
step S101, a real scene image is collected through an augmented reality program, in response to the fact that the collected real scene image comprises a marker corresponding to an entity article, a first virtual object effect corresponding to the entity article is rendered in the real scene image through a first target rendering engine, and a first augmented reality image is obtained and displayed.
In some embodiments, the terminal may include the augmented reality program. The augmented reality program may be a native program or a software module in the operating system: a native (Native) application (APP), that is, a program that must be installed in the operating system to run, such as a game APP, a live-streaming APP, or an instant messaging APP; a web applet, i.e., a program that only needs to be downloaded into a browser environment to run; or an applet that can be embedded into any APP. In general, the computer programs described above may be any form of application, module, or plug-in.
Step S102, a first identification corresponding to the entity article is obtained through a wireless communication component, and the activation state of the entity article is set to be an activated state based on the first identification.
In some embodiments, the wireless communication component may establish a communication connection with the entity article by way of wireless communication and obtain the first identifier corresponding to the entity article over the established wireless channel. The wireless communication mode may include at least one of the following near-field connections: a wireless fidelity (Wi-Fi) connection, a Bluetooth connection, an infrared connection, an NFC connection, a ZigBee connection, and the like.
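As an illustration of how a terminal might obtain such an identifier over NFC, the following is a minimal Android sketch assuming the tag-dispatch mechanism; using the tag UID as the first identifier is an assumption made here for brevity (a deployment could equally read an NDEF record written onto the tag):

```kotlin
import android.content.Intent
import android.nfc.NfcAdapter
import android.nfc.Tag

// Hypothetical helper: extract a first identifier from an NFC tag delivered
// via Android's tag-dispatch Intent. The hex-encoded tag UID stands in for
// the first identifier carried by the entity article.
fun readFirstIdentifier(intent: Intent): String? {
    if (intent.action != NfcAdapter.ACTION_TAG_DISCOVERED &&
        intent.action != NfcAdapter.ACTION_NDEF_DISCOVERED
    ) return null
    val tag: Tag = intent.getParcelableExtra(NfcAdapter.EXTRA_TAG) ?: return null
    return tag.id.joinToString("") { "%02x".format(it) }  // e.g. "04a2241b..."
}
```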
In some embodiments, after the terminal acquires the first identifier corresponding to the entity article, the activation state of the entity article may be set to the activated state in a local configuration file based on the first identifier. The activated state indicates that the current user logged into the augmented reality application is the user who activated the entity article.
In some embodiments, after the terminal obtains the first identifier corresponding to the entity article, the terminal may further query, based on the first identifier, the activation state corresponding to the first identifier in a local configuration file. If neither the first identifier nor a corresponding activation state exists in the local configuration file, the terminal creates the first identifier and sets its activation state to the activated state. If the first identifier exists in the local configuration file but its activation state is the inactivated state, which indicates that the current user logged into the augmented reality application is not the user who activated the entity article, the first augmented reality image continues to be displayed.
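A minimal sketch of this local-configuration logic, assuming Android SharedPreferences as the configuration store (the class and key names are illustrative):

```kotlin
import android.content.Context

// Hypothetical store mirroring the logic above: create the identifier entry
// if absent and activate it; otherwise report the recorded state.
class ActivationStore(context: Context) {
    private val prefs = context.getSharedPreferences("ar_activation", Context.MODE_PRIVATE)

    /** Returns true if the entity article counts as activated by the current user. */
    fun activateIfAbsent(firstIdentifier: String): Boolean {
        val key = "state_$firstIdentifier"
        return when (prefs.getString(key, null)) {
            null -> {                      // not seen before: create and activate
                prefs.edit().putString(key, "ACTIVATED").apply()
                true
            }
            "ACTIVATED" -> true            // already activated by the current user
            else -> false                  // recorded but inactive: keep showing the first AR image
        }
    }
}
```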
Step S103, in response to the activated state of the entity article being set as an activated state, rendering a second virtual object effect corresponding to the entity article in the real scene image through a second target rendering engine, and obtaining and displaying a second augmented reality image.
In some embodiments, the first and second target rendering engines are augmented reality engines of a plurality of augmented reality engines included in the augmented reality program that match a system type of a terminal.
For example, during each process in which the augmented reality program renders a virtual object into the real scene image to obtain an augmented reality image, the augmented reality engine matched with the system type of the terminal can be selected from the multiple augmented reality engines based on the device type of the terminal. In a specific implementation, the engine may be selected based on the device type only when the terminal is started for the first time, or it may be re-selected each time the terminal is started.
The augmented reality program is implemented for different system types based on the same development platform, and the multiple preset augmented reality engines run on that development platform. Therefore, in the process of developing the augmented reality program for different system types, the same engine package including the multiple preset augmented reality engines can be used on the development platform to complete the development of applications for the different system types. Likewise, while each system runs the augmented reality program built for its system type, the same engine package including the multiple preset augmented reality engines can be invoked. The different system types are, for example, the Android type or the iOS type.
In some implementation scenarios, the multiple preset rendering engines running on the development platform may include an A engine, a B engine, a C engine, a D engine, and so on. Accordingly, Android-type devices may correspond to the A engine, the B engine, and the C engine, and iOS-type devices may correspond to the D engine.
For example, the preset rendering engines may include a B engine and a D engine. In the process of developing a first type of application for the Android type and a second type of application for the iOS type, both applications are developed based on the development platform, and the B engine and the D engine can be added to both as an engine package. Correspondingly, while an Android-type device runs the first type of application and an AR effect needs to be achieved, the engine package including the B engine and the D engine is invoked and the B engine is selected as the target rendering engine; while an iOS-type device runs the second type of application and an AR effect needs to be achieved, the engine package including the B engine and the D engine is invoked and the D engine is selected as the target rendering engine.
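A minimal sketch of this engine-package dispatch, following the B/D example above; all class and function names are illustrative, not part of the disclosure:

```kotlin
// Both engines ship in every build of the application; only the one matching
// the terminal's system type is selected at run time.
enum class SystemType { ANDROID, IOS }

interface RenderingEngine {
    fun render(sceneImage: ByteArray, virtualObjectId: String)
}

class BEngine : RenderingEngine {
    override fun render(sceneImage: ByteArray, virtualObjectId: String) {
        // Android-side rendering pipeline would run here.
    }
}

class DEngine : RenderingEngine {
    override fun render(sceneImage: ByteArray, virtualObjectId: String) {
        // iOS-side rendering pipeline would run here.
    }
}

object EnginePackage {
    fun targetEngine(systemType: SystemType): RenderingEngine = when (systemType) {
        SystemType.ANDROID -> BEngine()
        SystemType.IOS -> DEngine()
    }
}
```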
In the embodiment of the disclosure, the augmented reality engine corresponding to each system type is embedded in the application programs of the different system types, and in the process of realizing the augmented reality effect, the target rendering engine matched with the system type of the terminal is selected. This reduces the development difficulty and cost of the augmented reality program: applications for different system platforms can be completed through the same development platform, so that a single development effort is shared across multiple system platforms.
Fig. 2 is an alternative flow chart of an augmented reality-based presentation method provided by an embodiment of the present disclosure, which may be executed by a processor of a computer device. Based on fig. 1, S101 in fig. 1 may be updated to S201, and S103 may be updated to S202, which will be described in conjunction with the steps shown in fig. 2.
Step S201, rendering a virtual object corresponding to the entity article in the real scene image in a first rendering state through a first target rendering engine; the first rendering state is a state that shows a partial feature of the virtual object.
In some embodiments, the method for determining the first target rendering engine includes: obtaining a plurality of augmented reality engines included in the augmented reality program; determining the first target rendering engine among the plurality of augmented reality engines based on a system type of the terminal.
Step S202, rendering a virtual object corresponding to the entity article in the real scene image in a second rendering state through a second target rendering engine; the second rendering state is a state that shows all features of the virtual object.
In some embodiments, the method for determining the second target rendering engine includes: obtaining a plurality of augmented reality engines included in the augmented reality program; determining the second target rendering engine among the plurality of augmented reality engines based on a system type of the terminal and a presentation style of the virtual object.
In the process of displaying the virtual objects of all the features, the second target rendering engine may be determined in at least one augmented reality engine corresponding to the system type of the terminal based on the presentation mode corresponding to the virtual object.
That is, at least one augmented reality engine corresponding to the system type is determined in the plurality of augmented reality engines based on the system type of the terminal, and then a second target rendering engine is determined in the at least one augmented reality engine corresponding to the system type based on the presentation mode corresponding to the virtual object.
For example, taking the system type of the terminal as the Android type, the at least one augmented reality engine corresponding to the system type, determined from the plurality of augmented reality engines, may include the A engine, the B engine, and the C engine. Then, in the case that the presentation mode corresponding to the virtual object is marker-based presentation, the B engine is determined as the second target rendering engine; in the case that the presentation mode corresponding to the virtual object is SLAM-based presentation, the A engine or the C engine is determined as the second target rendering engine.
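The two-step narrowing can be sketched as follows, reusing the A/B/C/D engine names from the example; the function and enum names are assumptions:

```kotlin
enum class SystemType { ANDROID, IOS }
enum class PresentationMode { MARKER_BASED, SLAM_BASED }

// First filter by system type, then pick by the virtual object's presentation mode.
fun secondTargetEngine(systemType: SystemType, mode: PresentationMode): String =
    when (systemType) {
        SystemType.IOS -> "D"                     // single candidate on this platform
        SystemType.ANDROID -> when (mode) {
            PresentationMode.MARKER_BASED -> "B"  // marker-tracking engine
            PresentationMode.SLAM_BASED -> "A"    // SLAM-capable engine (C would also qualify)
        }
    }
```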
In some embodiments, different rendering states are used to expose some or all of the features of the virtual object. The features of the virtual object may include features of at least one dimension, such as shape features, color features, and texture features.
The feature of the at least one dimension may include a shape feature. When the rendering state corresponding to the virtual object is the first rendering state of partial feature display, the shape of the virtual object may be displayed after blurring processing; accordingly, the obtained augmented reality image shows only the approximate shape of the virtual object, that is, partial shape features. When the rendering state corresponding to the virtual object is the second rendering state of full feature display, the shape of the virtual object is rendered and displayed directly; accordingly, the obtained augmented reality image shows the complete shape of the virtual object, that is, all shape features.
In this embodiment, the blurring process may be a filter-based blurring process, or a process of adding a virtual covering to the virtual object, where the virtual covering is attached to the surface of the virtual object to hide part of its shape features. Illustratively, the virtual covering may be a red cloth, colored gauze, or the like.
The feature of the at least one dimension may include a color feature. When the rendering state corresponding to the virtual object is the first rendering state of partial feature display, the color of the virtual object may be displayed after color-saturation reduction; accordingly, the obtained augmented reality image shows a virtual object with low color saturation, that is, partial color features. When the rendering state corresponding to the virtual object is the second rendering state of full feature display, the color of the virtual object is rendered and displayed directly; accordingly, the obtained augmented reality image shows all colors of the virtual object, that is, all color features.
The feature of the at least one dimension may include a texture feature. When the rendering state corresponding to the virtual object is the first rendering state of partial feature display, the virtual object may be rendered and displayed based on partial texture features; accordingly, the obtained augmented reality image shows a virtual object with only part of its texture. When the rendering state corresponding to the virtual object is the second rendering state of full feature display, the texture features of the virtual object are rendered and displayed directly; accordingly, the obtained augmented reality image shows all texture features of the virtual object.
In other embodiments, the features of the virtual object may include features of multiple dimensions, which may include shape features, color features, and texture features. The rendering state may further comprise a plurality of third rendering states, each third rendering state showing features of at least one dimension, and any two of the plurality of third rendering states differing in at least one dimension. For example, a third rendering state may include features of only one dimension, such as displaying only shape features, color features, or texture features; it may include features of some dimensions, such as shape features and color features; or it may include features of all dimensions.
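One way to model these states, assuming the three feature dimensions named above (shape, color, texture); all names are illustrative:

```kotlin
// Each flag controls whether one feature dimension is shown in full.
data class RenderingState(
    val fullShape: Boolean,    // false: blur the shape or add a virtual covering
    val fullColor: Boolean,    // false: render with reduced color saturation
    val fullTexture: Boolean   // false: render only part of the texture detail
)

// First rendering state: partial features only (before activation).
val firstRenderingState = RenderingState(fullShape = false, fullColor = false, fullTexture = false)

// Second rendering state: all features (after activation).
val secondRenderingState = RenderingState(fullShape = true, fullColor = true, fullTexture = true)

// A third rendering state may reveal any subset of dimensions, e.g. shape and color only.
val exampleThirdRenderingState = RenderingState(fullShape = true, fullColor = true, fullTexture = false)
```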
In the embodiment of the present disclosure, since the rendering state of the virtual object is determined based on the activation state of the physical object, and the rendering state is used for determining the feature display degree of the virtual object, rich and various augmented reality effects can be provided for the user through the virtual objects in different rendering states.
In some embodiments, S101 in fig. 1 may be further updated to S203, and S103 may be updated to S204, which will be described in conjunction with the steps shown in fig. 2.
Step S203, rendering, by a first target rendering engine, a first virtual object corresponding to the entity article in the real scene image.
Step S204, rendering a second virtual object corresponding to the entity article in the real scene image through a second target rendering engine; the first virtual object is different from the second virtual object.
In some embodiments, the virtual object corresponding to the physical object in the inactive state and the virtual object corresponding to the physical object in the active state are different for the same physical object. That is, when the physical object is in an inactivated state, the virtual object corresponding to the physical object is the first virtual object; and under the condition that the entity article is in the activated state, the virtual object corresponding to the entity article is the second virtual object. The model materials corresponding to the first virtual object and the second virtual object are different.
For example, if there is an egg model as one physical object, a first virtual object corresponding to the egg model may be set as an egg, and a second virtual object corresponding to the egg model may be set as a chicken. That is, through the activation process described above for the physical object, different virtual objects may be displayed. Correspondingly, the model material corresponding to the first virtual object is used for rendering eggs, and the model material corresponding to the second virtual object is used for rendering chickens.
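A one-line sketch of this activation-driven material selection for the egg example; the asset paths are hypothetical:

```kotlin
// Select the model material for the entity article according to its activation state.
fun modelMaterial(activated: Boolean): String =
    if (activated) "models/chicken.glb"  // second virtual object, shown once activated
    else "models/egg.glb"                // first virtual object, shown before activation
```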
In the embodiment of the disclosure, because the model material of the virtual object is determined based on the activation state of the entity article, different model materials can be preset for the entity article through the activation state, and then different virtual objects can be rendered, so that rich and various augmented reality effects are provided for a user.
Fig. 3 is an alternative flow chart of an augmented reality-based presentation method provided by an embodiment of the present disclosure, which may be executed by a processor of a computer device. Based on any of the above embodiments, taking fig. 1 as an example, S102 in fig. 1 may be updated to S301 to S303, which will be described with reference to the steps shown in fig. 3.
Step S301, sending an activation state query request carrying the first identifier to a first server; the activation status query request is used to determine whether the physical item is activated by another user.
Step S302, receiving an activation status query feedback sent by the first server.
Step S303, setting the activation state of the entity article to be the activated state under the condition that the activation state query feedback represents that the entity article is activated by the current user.
In some embodiments, in the case that the activation status query feedback indicates that the entity article has been activated by another user, the activation state of the entity article is set to the inactivated state, and an alert message is displayed. The alert message is used to inform the current user that the entity article has already been activated by someone else.
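A sketch of steps S301 to S303 against a hypothetical first server; the URL and the plain-text feedback values are assumptions (a real service would likely return structured JSON):

```kotlin
import java.net.HttpURLConnection
import java.net.URL

fun queryAndSetActivation(
    firstIdentifier: String,
    setActivated: (String, Boolean) -> Unit  // writes the state to the local configuration
): Boolean {
    val url = URL("https://first-server.example.com/activation?id=$firstIdentifier")
    val connection = url.openConnection() as HttpURLConnection
    return try {
        connection.requestMethod = "GET"                     // S301: activation state query request
        val feedback = connection.inputStream
            .bufferedReader().readText().trim()              // S302: activation state query feedback
        when (feedback) {
            "CURRENT_USER" -> { setActivated(firstIdentifier, true); true }  // S303: set activated
            "OTHER_USER" -> { setActivated(firstIdentifier, false); false }  // display alert message
            else -> false
        }
    } finally {
        connection.disconnect()
    }
}
```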
In the embodiment of the disclosure, by acquiring the first identifier carried in the entity article and verifying the activation state of the entity article with the first server, the authenticity of the activation state can be ensured; meanwhile, in an actual scenario, the user may verify a purchased entity article based on the activation status query feedback sent by the first server.
Fig. 4 is an alternative flowchart of an augmented reality-based presentation method provided by an embodiment of the present disclosure, which may be executed by a processor of a computer device. Based on any of the above embodiments, taking fig. 1 as an example, fig. 1 may further include S401, which will be described with reference to the steps shown in fig. 4.
Step S401, in response to the activated state of the entity article being set to the activated state, sending a transfer request carrying the first identifier to a second server; the transfer request is used for transferring the first digital object corresponding to the entity object to the account of the current user.
In some embodiments, the S103 may further include S402 to S404.
Step S402, receiving the transfer feedback corresponding to the first digitalized object sent by the server.
Step S403, determining an article status of the first digitized article based on the transfer feedback corresponding to the first digitized article.
In some embodiments, the item status of the first digital item is set to the first status if the transfer feedback corresponding to the first digital item indicates that the first digital item has been transferred to the account of the current user.
In some embodiments, if the transfer feedback corresponding to the first digital article indicates that the first digital article has not been transferred to the account of the current user, or the transfer feedback corresponding to the first digital article has not been received, the article status of the first digital article is set to the second status.
Step S404, in the process of displaying the second augmented reality image, displaying at least one type of interactive information based on the article state; the type quantity of the interactive information is related to the state of the article.
In some embodiments, the interactive information may be digitized information corresponding to the first digitized item. The above S404 may be implemented by: displaying at least one type of digital information corresponding to the first digital article, wherein the type and the quantity of the digital information are related to the state of the first digital article.
In some embodiments, the interaction information may be an operation control corresponding to the first digital object.
The above S404 may be implemented by: and displaying at least one type of operation control corresponding to the first digital article, wherein the type quantity of the operation control is related to the state of the first digital article.
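The status rule above (S402 and S403) together with S404's type count can be sketched as follows; the enum values and the concrete counts are illustrative assumptions:

```kotlin
enum class ArticleStatus { FIRST, SECOND }

// FIRST: transfer feedback confirms the first digital article reached the
// current user's account; SECOND: transfer failed or no feedback received yet.
fun articleStatus(transferConfirmed: Boolean?): ArticleStatus =
    if (transferConfirmed == true) ArticleStatus.FIRST else ArticleStatus.SECOND

// S404: how many types of interaction information accompany the second
// augmented reality image, e.g. full details only after a confirmed transfer.
fun interactionTypeCount(status: ArticleStatus): Int = when (status) {
    ArticleStatus.FIRST -> 2   // e.g. digitized information plus operation controls
    ArticleStatus.SECOND -> 1  // a reduced set, avoiding information leakage
}
```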
In the embodiment of the disclosure, the article state of the first digital article is adjusted based on the asset transfer condition corresponding to the first digital article, so that the interactive information with different types and quantities is displayed, the problem of information leakage caused by unsuccessful transfer is avoided, and the safety of the digital article is improved.
Fig. 5 is an alternative flow chart of an augmented reality-based presentation method provided by an embodiment of the present disclosure, which may be executed by a processor of a computer device. Based on fig. 1, S101 in fig. 1 may be updated to S501 or S502, which will be described in conjunction with the steps shown in fig. 5.
S501, under the condition that the augmented reality application is started for the first time, a first virtual object effect corresponding to the entity article is rendered in the real scene image through a first target rendering engine.
In some embodiments, in order to enable a user to intuitively experience the augmented reality effect when the augmented reality application is started for the first time, and to provide feedback for the operation of scanning the entity article, the first virtual object effect corresponding to the entity article needs to be displayed directly.
S502, under the condition that the augmented reality application is not started for the first time, acquiring account information of a current user; under the condition that the account information indicates that the current user does not own a second digital article, rendering a first virtual object effect corresponding to the entity article in the real scene image through a first target rendering engine; and the second digital article has an association relation with the first digital article corresponding to the entity article.
The account information of the current user may include a plurality of activated digital articles, and before the first virtual object effect corresponding to the entity article is rendered in the real scene image by the first target rendering engine, a second digital article having an association relationship with the first digital article corresponding to the entity article may be queried from the plurality of activated digital articles. In some embodiments, the association relationship may include at least one of: a category association, a series association, and the like. The category association indicates that the first digital article and the second digital article belong to the same category; the series association indicates that they belong to the same series.
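A minimal sketch of this association check, covering the category and series relations; the data class and its fields are hypothetical:

```kotlin
data class DigitalArticle(val id: String, val category: String, val series: String)

// True when the candidate second digital article is associated with the
// first digital article by category or by series.
fun isAssociated(first: DigitalArticle, candidate: DigitalArticle): Boolean =
    first.category == candidate.category ||  // category association
        first.series == candidate.series     // series association
```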
In some embodiments, in a case that the augmented reality application is not first started, considering that the user may already own other second digital items related to the first digital item corresponding to the physical item, the second digital items owned by the current user may be displayed before displaying the first virtual object effect corresponding to the current physical item; and under the condition that the account information indicates that the current user does not own the second digital object, displaying the first virtual object effect corresponding to the current entity object so as to complete the activation of the current entity object.
In some embodiments, before step S501 or S502 above is performed, the method further comprises: starting the augmented reality application and displaying a guide interface comprising an AR display control; receiving, through the guide interface, a trigger operation for the AR display control, and waking up the image acquisition device; acquiring the real scene image through the image acquisition device and identifying markers in the real scene image; and, in response to the acquired real scene image including the marker corresponding to the entity article, rendering the virtual object corresponding to the entity article in the real scene image through the first target rendering engine, so as to obtain and display the first augmented reality image.
In some implementation scenarios, the entity article is provided with an identification code carrying a download address of the augmented reality application, and the identification code may be a two-dimensional code or the like. The user can use the terminal to scan the identification code to obtain the download address of the augmented reality application, and download and install the augmented reality application based on the download address. In other implementation scenarios, the augmented reality application is already installed in the terminal, and after the user scans the identification code corresponding to the physical object, the user can jump to the augmented reality application directly.
In some embodiments, a guide interface including an augmented reality display control may be presented by the augmented reality application. The guide interface is at least used for guiding the user to photograph the physical object in the real scene and to aim at the marker arranged on the physical object.
In the embodiments of the present disclosure, because whether the augmented reality application is started for the first time is taken into account, the augmented reality effect corresponding to the entity article can be displayed directly for a first-time user, while for a returning user the second digital articles already owned can be displayed adaptively; and in a case that the account information indicates that the current user owns no second digital article, the first virtual object effect corresponding to the current entity article is displayed to complete the activation of the current entity article. Different user experiences can thus be provided for different users.
Fig. 6 is an alternative flowchart of an augmented reality-based presentation method provided by an embodiment of the present disclosure, which may be executed by a processor of a computer device. Based on fig. 5, the method may include S601 to S603, which will be described in conjunction with the steps shown in fig. 6.
Step S601, under the condition that the account information represents that the current user has at least one second digital article, acquiring the system type of the terminal.
In some embodiments, the system types include at least two system types, such as an Android type and an IOS type.
Step S602, under the condition that the system type of the terminal is a first system type, rendering and displaying a target virtual object based on a rendering engine corresponding to the first system type; the target virtual object is a virtual object corresponding to a target second digital item in the at least one second digital item.
In some embodiments, rendering the target virtual object based on the rendering engine corresponding to the first system type may refer to the rendering process in the above embodiments.
In some embodiments, the first system type is an IOS type. In a case that the system type of the terminal is the IOS type, the digital certificate corresponding to each second digital article may be acquired, and a display order for the digital articles is generated based on those certificates, where an article's position in the order is related to the numeric size of its digital certificate. Taking an NFT as the digital certificate, a smaller certificate value indicates an earlier release time of the digital article, and that article is therefore placed earlier in the display order. The target second digital article is determined based on the display order of the second digital articles.
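For illustration, a brief Kotlin sketch of this ordering step follows; the OwnedItem type and the use of a plain Long as the digital certificate value are assumptions made for the example.

```kotlin
// Order owned second digital articles by certificate size (e.g. an NFT token id):
// a smaller id means an earlier release, which is shown first.
data class OwnedItem(val name: String, val certificate: Long)

fun displayOrder(items: List<OwnedItem>): List<OwnedItem> =
    items.sortedBy { it.certificate }

// The target second digital article is simply the first one in the display order.
fun targetItem(items: List<OwnedItem>): OwnedItem? =
    displayOrder(items).firstOrNull()
```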
Step S603, under the condition that the terminal system type is a second system type, displaying each second digital article and a corresponding detail viewing control through an interface; receiving a triggering operation aiming at a target detail viewing control, rendering the target virtual object through a presentation mode of the target virtual object and a rendering engine determined by the second system type, and displaying the target virtual object; the target virtual object is a virtual object corresponding to a target second digital article corresponding to the target detail viewing control.
In some embodiments, the second system type is an Android type. In a case that the system type of the terminal is the Android type, at least one augmented reality engine corresponding to that system type is determined from among the multiple augmented reality engines; the at least one augmented reality engine may include engines such as an A engine, a B engine, and a C engine. Then, in a case that the presentation mode corresponding to the virtual object is marker-based presentation, the B engine is determined as the second target rendering engine; and in a case that the presentation mode is SLAM-based presentation, the A engine or the C engine is determined as the second target rendering engine.
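A minimal Kotlin sketch of this engine choice follows; the PresentationMode and Engine names mirror the text above, while the selection function itself is hypothetical.

```kotlin
// Choose the second target rendering engine from the Android-side candidates
// based on the virtual object's presentation mode.
enum class PresentationMode { MARKER, SLAM }
enum class Engine { A, B, C }

fun selectSecondTargetEngine(mode: PresentationMode): Engine = when (mode) {
    PresentationMode.MARKER -> Engine.B
    PresentationMode.SLAM -> Engine.A  // the C engine would be an equally valid choice
}
```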
In some embodiments, the method may further include S604.
Step S604, in the process of displaying the virtual object corresponding to the target second digital article, a newly added article control is displayed; and responding to the triggering operation aiming at the newly added article control, and rendering a first virtual object effect corresponding to the entity article in the real scene image through a first target rendering engine to obtain and display a first augmented reality image.
In the embodiments of the present disclosure, in the process of displaying the at least one second digital article owned by the current user, different display flows are set for different system types. In a case that the system type is the IOS type, the augmented reality effect corresponding to the second digital article can be displayed promptly; in a case that the system type is the Android type, the most suitable augmented reality engine is selected based on the presentation mode of the virtual object of the second digital article. The display effect of augmented reality can thus be improved while the response rate is ensured.
In the above embodiments, the augmented reality program may include a plurality of augmented reality engines. In a case that a virtual object corresponding to the physical object needs to be rendered, a target rendering engine may be determined among the plurality of augmented reality engines based on the system type of the current terminal, and the virtual object of the physical object is rendered based on the target rendering engine. The same physical object may correspond to different virtual objects, which present the physical object to the user in terms of augmented reality effect; however, the same physical object corresponds to only one digital article, and that digital article corresponds to only one digital certificate. That is, physical objects and digital articles are in one-to-one correspondence, while one physical object (or digital article) may correspond to one or more virtual objects.
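For illustration only, the correspondences just described might be modeled as the following Kotlin data structures; all names are hypothetical.

```kotlin
// One physical article <-> one digital article <-> one digital certificate,
// while a digital article may carry several virtual objects.
data class VirtualObj(val meshId: String)

data class DigitalArticle(
    val certificate: Long,              // exactly one digital certificate
    val virtualObjects: List<VirtualObj>
)

data class PhysicalArticle(
    val firstIdentifier: String,
    val digitalArticle: DigitalArticle  // exactly one digital article
)
```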
The following describes an application of the augmented reality-based display method provided by the embodiment of the present disclosure in an actual scene.
Fig. 7 is an alternative flowchart of an augmented reality-based presentation method provided by an embodiment of the present disclosure, which may be executed by a processor of a computer device.
S701, in response to a trigger event for an application (augmented reality program), determining whether the application is started for the first time.
In some embodiments, in the case where the application is first started, S702 is performed. If the application is not first started, S703 is executed.
S702, displaying a guide interface, where the guide interface includes guidance copy and an AR display control.
In some embodiments, the guide interface may include the guidance copy and the AR display control. The guidance copy is at least used for guiding the user to trigger the AR display control; the AR display control is used to wake up the shooting device when triggered. The shooting device is used to obtain a real scene image and, in a case that an AR effect needs to be displayed, to capture the marker corresponding to a physical object in the real scene.
S704, receiving a trigger operation for the preview control, and capturing a real scene image by a shooting device (corresponding to the image acquisition device in the foregoing embodiments). In response to the real scene image including the marker corresponding to the physical object, a virtual object corresponding to the physical object is rendered in the real scene image in a first rendering state based on a target rendering engine (corresponding to the first target rendering engine in the foregoing embodiments), so as to obtain and display an augmented reality image.
The application program is implemented for different system types on the basis of the same development platform, and the multiple preset rendering engines run on that development platform. Therefore, when developing the application program for different system types, the same engine package including the plurality of preset rendering engines can be used, and development for all system types is completed on the one platform. In some implementation scenarios, Android-type devices may correspond to the B engine, the A engine, and the C engine, and IOS-type devices may correspond to the D engine.
For example, the preset rendering engines may include a B engine and a D engine. In the process of developing the first type of application program of the Android type and the second type of application program of the IOS type, both the first type of application program and the second type of application program are developed based on the development platform, and at this time, the B engine and the D engine can be added to the first type of application program and the second type of application program as engine packages. Correspondingly, in the process that the Android-type device runs the first type of application program, under the condition that an AR effect needs to be achieved, an engine package comprising a B engine and a D engine in the first type of application program is called, and the B engine is selected as a target rendering engine; in the process that the IOS type device runs the second type of application program, under the condition that the AR effect needs to be achieved, an engine package comprising a B engine and a D engine in the second type of application program is called, and the D engine is selected as a target rendering engine.
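For illustration, the shared engine package idea might be sketched in Kotlin as below; the SystemType enum and the string engine names are assumptions made for the example.

```kotlin
// The same engine package ships in both application types; the running
// system type selects the target rendering engine from it.
enum class SystemType { ANDROID, IOS }

val enginePackage = listOf("B", "D")  // preset rendering engines in both builds

fun targetRenderingEngine(system: SystemType): String = when (system) {
    SystemType.ANDROID -> "B"  // first type of application program
    SystemType.IOS -> "D"      // second type of application program
}
```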
In some embodiments, the first rendering state is a state that exposes a portion of the features of the virtual object.
S705, showing an activation guidance interface and an activation control, and waking up an NFC component (corresponding to the wireless communication component in the foregoing embodiment) in response to a trigger operation for the activation control.
S706, acquiring a digital code (the first identifier) of the entity object in the real scene through the NFC component, and setting the activation state of the entity object to the activated state based on the digital code.
S707, acquiring a real scene image through the shooting device. In response to the real scene image including the marker corresponding to the entity object, the virtual object corresponding to the entity object is rendered in the real scene image in a second rendering state based on the target rendering engine, so as to obtain and display the augmented reality image.
In some embodiments, the second rendering state is a state that exposes all features of the virtual object.
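For illustration only, the two rendering states might be modeled in Kotlin as follows. Treating the "partial features" of the first rendering state as the first half of a feature list is an arbitrary stand-in, since the disclosure does not specify which features are shown before activation.

```kotlin
// First rendering state: a subset of the virtual object's features.
// Second rendering state: all of its features.
data class VirtualObject(val features: List<String>)

fun featuresToRender(obj: VirtualObject, activated: Boolean): List<String> =
    if (activated) obj.features                     // second rendering state
    else obj.features.take(obj.features.size / 2)   // first rendering state (illustrative subset)
```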
S708, initiating a digital article transfer request to the server, and setting the state of the digital article to be a first state (article state of the first digital article).
In some embodiments, the server is a blockchain server. Upon receiving a digital article transfer request carrying the digital code, the blockchain server transfers the digital certificate corresponding to the digital article to the account of the current user through a pre-deployed smart contract, and returns a hash value corresponding to the digital article.
S709, receiving a transfer feedback sent by the server, where the transfer feedback includes a hash value corresponding to the digitized item, and setting the state of the digitized item to a second state (an item state of the first digitized item).
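A hedged Kotlin sketch of this client-side state transition (S708/S709) follows; the record type and callback names are hypothetical, and the server/smart-contract side is out of scope here.

```kotlin
// The digital article enters the first state when the transfer request is sent,
// and the second state once the hash value comes back from the server.
enum class ItemState { FIRST, SECOND }

data class TransferFeedback(val hash: String)

class DigitalItemRecord(val digitalCode: String) {
    var state: ItemState = ItemState.FIRST
        private set
    var hash: String? = null
        private set

    fun onTransferRequested() { state = ItemState.FIRST }  // S708
    fun onTransferFeedback(fb: TransferFeedback) {         // S709
        hash = fb.hash
        state = ItemState.SECOND
    }
}
```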
S710, displaying at least one type of digital information (interactive information) corresponding to the digital article, where the number of information types displayed is related to the state of the digital article.
In some embodiments, where the status of the digitized item is a first status, the details interface may display a first quantity of types of digitized information; in the event the status of the digitized item is a second status, the details interface may display a second number of types of digitized information; the second number is greater than or equal to the first number.
For example, when the state of the digital article is the first state, two types of digital information, namely a holder and a collection model corresponding to the digital article, can be displayed; and under the condition that the state of the digital article is the second state, four types of digital information of the digital article corresponding to the holder, the collection model, the collection hash value and the collection circulation certificate can be displayed.
S711, displaying at least one type of operation control (interactive information) corresponding to the digital article, where the number of control types displayed is related to the state of the digital article.
In some embodiments, where the state of the digitized item is a first state, the details interface may display a first number of types of operational controls; in the case that the state of the digital item is a second state, the details interface may display a second number of types of operational controls; the second number is greater than or equal to the first number.
For example, when the state of the digital article is the first state, three types of operation controls, namely an AR viewing control, a media viewing control and a model viewing control, corresponding to the digital article may be displayed; and under the condition that the state of the digital article is the second state, displaying six types of operation controls including an AR viewing control, a media viewing control, a model viewing control, a referral control, a certificate viewing control and a sharing control corresponding to the digital article.
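For illustration, the mapping from article state to the displayed information and controls in the two examples above might be written as follows; the string labels simply mirror the text.

```kotlin
// State-dependent detail interface contents, mirroring the examples above.
enum class ArticleState { FIRST, SECOND }

fun digitizedInfo(state: ArticleState): List<String> = when (state) {
    ArticleState.FIRST -> listOf("holder", "collection model")
    ArticleState.SECOND -> listOf(
        "holder", "collection model", "collection hash value", "collection circulation certificate"
    )
}

fun operationControls(state: ArticleState): List<String> = when (state) {
    ArticleState.FIRST -> listOf("AR viewing", "media viewing", "model viewing")
    ArticleState.SECOND -> listOf(
        "AR viewing", "media viewing", "model viewing", "referral", "certificate viewing", "sharing"
    )
}
```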
S703, receiving a trigger operation for the AR display control, and acquiring a real scene image through the shooting device. In response to the real scene image including a marker corresponding to an entity object, an identifier of the digital article corresponding to the entity object is determined based on the marker.
S712, in a case that the system type is the first system type, determining whether the user owns other articles related to the digital article.
If the user owns other articles related to the digital article, S713 is executed; otherwise, S714 is executed.
S713, determining whether the number of other articles owned by the user is one.
If the number of other articles owned by the user is one, S715 is performed; if the number is more than one, S716 is executed.
and S715, rendering the AR effect corresponding to the other article based on the rendering engine corresponding to the first system type.
S716, rendering the AR effect corresponding to the target other item in the other items based on the rendering engine corresponding to the first system type.
The target other article is the other article with the smallest digital certificate among the plurality of other articles.
S714, receiving a trigger operation for the preview control.
S717, in a case that the system type is the second system type, displaying a collection interface that presents the other articles and an activation control to the user.
S718, in response to a viewing operation for the other articles, rendering the other articles based on the rendering engine corresponding to the second system type.
A target second rendering engine is determined, among the rendering engines corresponding to the second system type, based on the presentation mode of the other articles, and the other articles are rendered based on the target second rendering engine.
It should be noted that, before waking up the NFC component in response to the trigger operation for activating the control, the embodiment of the present disclosure may further include a step of performing real-name verification on the current user.
In some embodiments, prior to real-name authentication of a current user, the method provided by the present disclosure further comprises a login step for determining account information of the current user, based on which a digitized item associated with the current user may be determined.
It should be noted that, in implementation, the XXX1 may correspond to XXX1 in the foregoing embodiment, and XXX2 may correspond to XXX2 in the foregoing embodiment.
Based on the foregoing embodiments, the embodiments of the present disclosure provide an augmented reality-based display apparatus, which includes units and the modules included in the units, and can be implemented by a processor in a computer device; of course, the implementation can also be realized through a specific logic circuit; in the implementation process, the Processor may be a Central Processing Unit (CPU), a Microprocessor Unit (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
Fig. 8 is a schematic structural diagram illustrating a composition of an augmented reality-based display device according to an embodiment of the present disclosure. As shown in fig. 8, the augmented reality-based display device 800 includes: an acquisition module 801, an obtaining module 802, and a rendering module 803, wherein:
the acquiring module 801 is configured to acquire a real scene image through an augmented reality program, and in response to the acquired real scene image including a marker corresponding to an entity article, render a first virtual object effect corresponding to the entity article in the real scene image through a first target rendering engine, to obtain and display a first augmented reality image;
an obtaining module 802, configured to obtain, through a wireless communication component, a first identifier corresponding to the entity item, and set an activation state of the entity item to an activated state based on the first identifier;
and a rendering module 803, configured to render, by using a second target rendering engine, a second virtual object effect corresponding to the entity article in the real scene image in response to the activation state of the entity article being set to the activated state, so as to obtain and display a second augmented reality image.
In some embodiments, the first and second target rendering engines are augmented reality engines of a plurality of augmented reality engines included in the augmented reality program that match a system type of a terminal.
In some embodiments, the acquiring module 801 is further configured to render, by the first target rendering engine, the virtual object corresponding to the physical object in the real scene image in the first rendering state; the first rendering state is a state showing a partial feature of the virtual object;
the rendering module 803 is further configured to: rendering, by a second target rendering engine, a virtual object corresponding to the physical object in the real scene image in a second rendering state; the second rendering state is a state that shows all features of the virtual object.
In some embodiments, the acquisition module 801 is further configured to: rendering a first virtual object corresponding to the entity article in the real scene image through a first target rendering engine;
the rendering module 803 is further configured to: rendering, by a second target rendering engine, a second virtual object corresponding to the physical object in the real scene image; the first virtual object is different from the second virtual object.
In some embodiments, the acquisition module 801 is further configured to: obtaining a plurality of augmented reality engines included in the augmented reality program; determining the first target rendering engine among the plurality of augmented reality engines based on a system type of the terminal;
the rendering module 803 is further configured to: obtaining a plurality of augmented reality engines included in the augmented reality program; determining the second target rendering engine among the plurality of augmented reality engines based on a system type of the terminal and a presentation mode of the virtual object.
In some embodiments, the obtaining module 802 is further configured to:
sending an activation state query request carrying the first identifier to a first server; the activation status inquiry request is used for determining whether the entity item is activated by other users;
receiving activation state query feedback sent by the first server;
setting the activation status of the physical item to an activated status if the activation status query feedback characterizes that the physical item is activated by a current user.
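For illustration only, the activation-state query exchange might look as sketched below in Kotlin; the request/feedback shapes and the sendToFirstServer call are assumptions for the example, not a real API.

```kotlin
// Query the first server with the first identifier; set the activation state
// to "activated" only if the feedback says the current user activated the article.
data class ActivationQuery(val firstIdentifier: String)
data class ActivationFeedback(val activatedByCurrentUser: Boolean)

fun sendToFirstServer(query: ActivationQuery): ActivationFeedback =
    ActivationFeedback(activatedByCurrentUser = true)  // stub standing in for the network call

fun resolveActivation(firstIdentifier: String): Boolean {
    val feedback = sendToFirstServer(ActivationQuery(firstIdentifier))
    return feedback.activatedByCurrentUser  // true -> set the activation state to activated
}
```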
In some embodiments, the obtaining module 802 is further configured to:
responding to the activated state of the entity article to be set as an activated state, and sending a transfer request carrying the first identifier to a second server; the transfer request is used for transferring the first digital object corresponding to the entity object to the account of the current user.
In some embodiments, the obtaining module 802 is further configured to:
receiving transfer feedback, sent by the second server, corresponding to the first digital article;
determining the article state of the first digital article based on the transfer feedback corresponding to the first digital article;
displaying at least one type of interaction information based on the item state in the process of displaying the second augmented reality image; the type quantity of the interactive information is related to the state of the article.
In some embodiments, the acquisition module 801 is further configured to:
rendering a first virtual object effect corresponding to the entity article in the real scene image through a first target rendering engine under the condition that the augmented reality application is started for the first time;
under the condition that the augmented reality application is not started for the first time, acquiring account information of a current user; under the condition that the account information indicates that the current user does not own a second digital article, rendering a first virtual object effect corresponding to the entity article in the real scene image through a first target rendering engine; and the second digital article has an association relation with the first digital article corresponding to the entity article.
In some embodiments, the acquisition module 801 is further configured to:
under the condition that the account information represents that the current user has at least one second digital article, acquiring the system type of the terminal;
rendering and displaying a target virtual object based on a rendering engine corresponding to a first system type under the condition that the system type of the terminal is the first system type; the target virtual object is a virtual object corresponding to a target second digital item in the at least one second digital item;
under the condition that the system type of the terminal is a second system type, displaying each second digital article and the corresponding detail viewing control through an interface; receiving a triggering operation aiming at a target detail viewing control, rendering the target virtual object through a presentation mode of the target virtual object and a rendering engine determined by the second system type, and displaying the target virtual object; the target virtual object is a virtual object corresponding to a target second digital article corresponding to the target detail viewing control.
In some embodiments, the acquisition module 801 is further configured to:
in the process of displaying the virtual object corresponding to the target second digital article, displaying a newly added article control;
and responding to the triggering operation aiming at the newly added article control, and rendering a first virtual object effect corresponding to the entity article in the real scene image through a first target rendering engine to obtain and display a first augmented reality image.
The above description of the apparatus embodiments, similar to the above description of the method embodiments, has similar beneficial effects as the method embodiments. In some embodiments, functions of or modules included in the apparatuses provided in the embodiments of the present disclosure may be used to perform the methods described in the above method embodiments, and for technical details not disclosed in the embodiments of the apparatuses of the present disclosure, please refer to the description of the method embodiments of the present disclosure for understanding.
It should be noted that, in the embodiments of the present disclosure, if the augmented reality-based display method is implemented in the form of a software functional module and is sold or used as a standalone product, it may also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes: various media capable of storing program code, such as a USB disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present disclosure are not limited to any specific hardware, software, or firmware, or any combination thereof.
The embodiment of the present disclosure provides a computer device, which includes a memory and a processor, where the memory stores a computer program that can be run on the processor, and the processor implements some or all of the steps of the above method when executing the program.
Embodiments of the present disclosure provide a computer-readable storage medium on which a computer program is stored, the computer program implementing some or all of the steps of the above method when executed by a processor. The computer readable storage medium may be transitory or non-transitory.
The disclosed embodiments provide a computer program comprising computer readable code, where the computer readable code runs in a computer device, a processor in the computer device executes some or all of the steps for implementing the above method.
The disclosed embodiments provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program that, when read and executed by a computer, performs some or all of the steps of the above method. The computer program product may be embodied in hardware, software, or a combination thereof. In some embodiments, the computer program product is embodied in a computer storage medium, and in other embodiments, the computer program product is embodied in a software product, such as a Software Development Kit (SDK), or the like.
Here, it should be noted that the above descriptions of the embodiments tend to emphasize the differences between them; for what they have in common, the embodiments may be referred to one another. The above description of the apparatus, storage medium, computer program, and computer program product embodiments is similar to the description of the method embodiments above, with similar advantageous effects. For technical details not disclosed in the embodiments of the disclosed apparatus, storage medium, computer program, and computer program product, reference is made to the description of the method embodiments of the present disclosure.
Fig. 9 is a schematic diagram of a hardware entity of an augmented reality-based display device provided in an embodiment of the present disclosure, and as shown in fig. 9, the hardware entity of the augmented reality-based display device 900 includes: a processor 901 and a memory 902, wherein the memory 902 stores a computer program operable on the processor 901, and the processor 901 implements the steps in the method of any of the above embodiments when executing the program.
The Memory 902 stores a computer program that can be executed on the processor, and the Memory 902 is configured to store instructions and applications that can be executed by the processor 901, and can also buffer data (e.g., image data, audio data, voice communication data, and video communication data) to be processed or already processed by each module in the augmented reality-based presentation device 900, which can be implemented by a FLASH Memory (FLASH) or a Random Access Memory (RAM).
The processor 901, when executing the program, implements the steps of any of the above-described augmented reality based presentation methods. The processor 901 generally controls the overall operation of the augmented reality based presentation device 900.
The embodiments of the present disclosure provide a computer storage medium storing one or more programs, which are executable by one or more processors to implement the steps of the augmented reality based presentation method according to any one of the above embodiments.
The disclosure relates to the field of augmented reality, and aims to detect or identify relevant features, states and attributes of a target object by means of various visual correlation algorithms by acquiring image information of the target object in a real environment, so as to obtain an AR effect combining virtual and reality matched with specific applications. For example, the target object may relate to a face, a limb, a gesture, an action, etc. associated with a human body, or an identifier, a marker associated with an item, or a sand table, a display area, a display item, etc. associated with a venue or a place. The vision-related algorithms may involve visual localization, SLAM, three-dimensional reconstruction, image registration, background segmentation, key point extraction and tracking of objects, pose or depth detection of objects, and the like. The specific application can not only relate to interactive scenes such as navigation, explanation, reconstruction, virtual effect superposition display and the like related to real scenes or articles, but also relate to special effect treatment related to people, such as interactive scenes such as makeup beautification, limb beautification, special effect display, virtual model display and the like. The detection or identification processing of the relevant characteristics, states and attributes of the target object can be realized through the convolutional neural network. The convolutional neural network is a network model obtained by performing model training based on a deep learning framework.
Here, it should be noted that: the above description of the storage medium and device embodiments is similar to the description of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and the apparatus of the present disclosure, reference is made to the description of the embodiments of the method of the present disclosure.
The Processor may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a controller, a microcontroller, and a microprocessor. It is understood that the electronic device implementing the above processor function may be another electronic device, and the embodiments of the present disclosure are not specifically limited in this regard.
The computer storage medium/Memory may be a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Ferroelectric Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, a Compact Disc Read-Only Memory (CD-ROM), or the like; it may also be various terminals such as mobile phones, computers, tablet devices, and personal digital assistants that include one of the above-mentioned memories or any combination thereof.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in various embodiments of the present disclosure, the sequence numbers of the above steps/processes do not mean the execution sequence, and the execution sequence of each step/process should be determined by the function and the inherent logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present disclosure. The above-mentioned serial numbers of the embodiments of the present disclosure are merely for description, and do not represent the advantages or disadvantages of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present disclosure, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program codes, such as a removable Memory device, a Read Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, the integrated unit of the present disclosure may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present disclosure. And the aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
The above description is only an embodiment of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present disclosure, and all the changes or substitutions should be covered by the scope of the present disclosure.

Claims (15)

1. An augmented reality-based display method is applied to an augmented reality program of a terminal, and the method comprises the following steps:
acquiring a real scene image through an augmented reality program, and rendering a first virtual object effect corresponding to an entity article in the real scene image through a first target rendering engine in response to the acquired real scene image comprising a marker corresponding to the entity article to obtain and display a first augmented reality image;
acquiring a first identifier corresponding to the entity article through a wireless communication component, and setting the activation state of the entity article to be an activated state based on the first identifier;
and in response to the fact that the activation state of the entity article is set to be the activated state, rendering a second virtual object effect corresponding to the entity article in the real scene image through a second target rendering engine to obtain and display a second augmented reality image.
2. The method of claim 1, wherein the first target rendering engine and the second target rendering engine are augmented reality engines, among a plurality of augmented reality engines included in the augmented reality program, that match a system type of the terminal.
3. The method according to claim 1 or 2, wherein the rendering, by the first target rendering engine, the first virtual object effect corresponding to the physical object in the real scene image comprises: rendering, by the first target rendering engine, a virtual object corresponding to the physical object in the real scene image in a first rendering state; the first rendering state is a state showing a partial feature of the virtual object;
the rendering, by the second target rendering engine, a second virtual object effect corresponding to the physical object in the real scene image comprises: rendering, by the second target rendering engine, a virtual object corresponding to the physical object in the real scene image in a second rendering state; the second rendering state is a state showing all features of the virtual object.
4. The method according to claim 1 or 2, wherein the rendering, by the first target rendering engine, the first virtual object effect corresponding to the physical object in the real scene image comprises: rendering, by the first target rendering engine, a first virtual object corresponding to the physical object in the real scene image;
the rendering, by the second target rendering engine, a second virtual object effect corresponding to the physical object in the real scene image comprises: rendering, by the second target rendering engine, a second virtual object corresponding to the physical object in the real scene image; the first virtual object is different from the second virtual object.
5. The method of any one of claims 1 to 4, wherein determining the first target rendering engine comprises: obtaining a plurality of augmented reality engines included in the augmented reality program; and determining the first target rendering engine among the plurality of augmented reality engines based on a system type of the terminal;
and determining the second target rendering engine comprises: obtaining a plurality of augmented reality engines included in the augmented reality program; and determining the second target rendering engine among the plurality of augmented reality engines based on the system type of the terminal and a presentation mode of the virtual object.
6. The method of any one of claims 1 to 5, wherein the setting the activation state of the entity article to an activated state based on the first identifier comprises:
sending an activation state query request carrying the first identifier to a first server; the activation status inquiry request is used for determining whether the entity item is activated by other users;
receiving activation state query feedback sent by the first server;
setting the activation status of the physical item to an activated status if the activation status query feedback characterizes that the physical item is activated by a current user.
7. The method of any one of claims 1 to 6, further comprising:
responding to the activated state of the entity article to be set as an activated state, and sending a transfer request carrying the first identifier to a second server; the transfer request is used for transferring the first digital object corresponding to the entity object to the account of the current user.
8. The method of claim 7, further comprising:
receiving transfer feedback, sent by the second server, corresponding to the first digital article;
determining the article state of the first digital article based on the transfer feedback corresponding to the first digital article;
displaying at least one type of interaction information based on the item state in the process of displaying the second augmented reality image; the type quantity of the interactive information is related to the state of the article.
9. The method of claim 1, wherein the rendering, by the first object rendering engine, the first virtual object effect corresponding to the physical object in the real scene image comprises:
rendering a first virtual object effect corresponding to the entity article in the real scene image through a first target rendering engine under the condition that the augmented reality application is started for the first time;
under the condition that the augmented reality application is not started for the first time, acquiring account information of a current user; under the condition that the account information indicates that the current user does not own a second digital article, rendering a first virtual object effect corresponding to the entity article in the real scene image through a first target rendering engine; and the second digital article has an association relation with the first digital article corresponding to the entity article.
10. The method of claim 9, further comprising:
under the condition that the account information represents that the current user has at least one second digital article, acquiring the system type of the terminal;
rendering and displaying a target virtual object based on a rendering engine corresponding to a first system type under the condition that the system type of the terminal is the first system type; the target virtual object is a virtual object corresponding to a target second digital item in the at least one second digital item;
under the condition that the system type of the terminal is a second system type, displaying each second digital article and the corresponding detail viewing control through an interface; receiving a triggering operation aiming at a target detail viewing control, rendering the target virtual object through a presentation mode of the target virtual object and a rendering engine determined by the second system type, and displaying the target virtual object; the target virtual object is a virtual object corresponding to a target second digital article corresponding to the target detail viewing control.
11. The method of claim 10, further comprising:
in the process of displaying the virtual object corresponding to the target second digital article, displaying a newly added article control;
and responding to the triggering operation aiming at the newly added article control, and rendering a first virtual object effect corresponding to the entity article in the real scene image through a first target rendering engine to obtain and display a first augmented reality image.
12. A display device based on augmented reality, comprising:
the real scene image acquisition module is used for acquiring a real scene image through an augmented reality program, responding to the acquired real scene image including a marker corresponding to an entity article, rendering a first virtual object effect corresponding to the entity article in the real scene image through a first target rendering engine, and obtaining and displaying a first augmented reality image;
the acquisition module is used for acquiring a first identifier corresponding to the entity article through a wireless communication component and setting the activation state of the entity article to be an activated state based on the first identifier;
and the rendering module is used for rendering a second virtual object effect corresponding to the entity article in the real scene image through a second target rendering engine in response to the activation state of the entity article being set to the activated state, so as to obtain and display a second augmented reality image.
13. A computer device comprising a memory and a processor, said memory storing a computer program operable on the processor, wherein the processor implements the steps of the method of any one of claims 1 to 11 when executing said program.
14. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 11.
15. A computer program product comprising a non-transitory computer readable storage medium storing a computer program which, when read and executed by a computer, implements the steps of the method of any one of claims 1 to 11.
CN202210346731.6A 2022-03-31 2022-03-31 Augmented reality-based display method, apparatus, device, storage medium, and program Withdrawn CN114942713A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210346731.6A CN114942713A (en) 2022-03-31 2022-03-31 Augmented reality-based display method, apparatus, device, storage medium, and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210346731.6A CN114942713A (en) 2022-03-31 2022-03-31 Augmented reality-based display method, apparatus, device, storage medium, and program

Publications (1)

Publication Number Publication Date
CN114942713A true CN114942713A (en) 2022-08-26

Family

ID=82907578

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210346731.6A Withdrawn CN114942713A (en) 2022-03-31 2022-03-31 Augmented reality-based display method, apparatus, device, storage medium, and program

Country Status (1)

Country Link
CN (1) CN114942713A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115604506A (en) * 2022-12-01 2023-01-13 北京数原数字化城市研究中心(Cn) Cloud rendering data synchronous processing method, device and equipment
CN115604506B (en) * 2022-12-01 2023-02-17 北京数原数字化城市研究中心 Cloud rendering data synchronous processing method, device and equipment
CN115937430A (en) * 2022-12-21 2023-04-07 北京百度网讯科技有限公司 Method, device, equipment and medium for displaying virtual object
CN115937430B (en) * 2022-12-21 2023-10-10 北京百度网讯科技有限公司 Method, device, equipment and medium for displaying virtual object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220826