CN113721804A - Display method, display device, electronic equipment and computer readable storage medium - Google Patents

Display method, display device, electronic equipment and computer readable storage medium

Info

Publication number
CN113721804A
CN113721804A (application CN202110961689.4A)
Authority
CN
China
Prior art keywords
image
target
real
virtual
preset
Prior art date
Legal status
Withdrawn
Application number
CN202110961689.4A
Other languages
Chinese (zh)
Inventor
田真
李斌
欧华富
Current Assignee
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd
Priority to CN202110961689.4A
Publication of CN113721804A
Priority to PCT/CN2022/113746 (WO2023020622A1)
Legal status: Withdrawn


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality

Abstract

The embodiments of the disclosure disclose a display method, a display apparatus, an electronic device, and a computer-readable storage medium. The method includes: acquiring a target image of a real object in a real scene in response to a trigger operation on a target function entrance; identifying the target image, determining a virtual object corresponding to the target image, and displaying, in a target function page, a virtual effect image obtained by rendering the virtual object, where the target function page includes at least one interactive control and the virtual effect image is used for displaying an augmented reality effect corresponding to the target image; and in response to an interactive operation on a target interactive control in the at least one interactive control, determining response effect data corresponding to the virtual object and updating the displayed virtual effect image with the response effect data. The display method and the display apparatus can improve the richness and display effect of the display content.

Description

Display method, display device, electronic equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of terminal technologies, and in particular, to a display method and apparatus, an electronic device, and a computer-readable storage medium.
Background
At present, some merchants bundle virtual service products with the physical goods they sell for users to enjoy. For example, exclusive activities or services are provided in official applications for users to participate in. However, the interaction between such virtual service products and users is usually limited to a single mode, which reduces the richness and display effect of the display content of virtual service products, such as some display service applications.
Disclosure of Invention
Embodiments of the present disclosure are intended to provide a display method, a display apparatus, an electronic device, and a computer-readable storage medium, which can improve richness and display effect of display content.
The technical solutions of the present disclosure are implemented as follows:
the disclosed embodiment provides a display method, which includes:
acquiring a target image of a real object in a real scene in response to a trigger operation on a target function entrance;
identifying the target image, determining a virtual object corresponding to the target image, and displaying a virtual effect image obtained by rendering the virtual object in a target function page; the target function page comprises at least one interaction control; the virtual effect image is used for displaying an augmented reality effect corresponding to the target image;
and responding to the interactive operation aiming at the target interactive control in the at least one interactive control, determining response effect data corresponding to the virtual object, and updating the displayed virtual effect image through the response effect data.
In the above method, the method further comprises:
responding to a trigger operation aiming at a display applet entrance, and displaying a display function menu page, wherein the display function menu page is used for displaying at least one function entrance;
the acquiring a target image of a real object in a real scene in response to a trigger operation on a target function entrance comprises:
and responding to the triggering operation aiming at the target function entrance in the at least one function entrance, entering an image acquisition page corresponding to the target function, and acquiring a target image of a real object in a real scene.
In the above method, the entering of the image capture page corresponding to the target function includes:
jumping from the display function menu page to a target function loading interface, and synchronously loading target function background data in the display process of the target function loading interface; the target function background data comprises at least one preset rendering data;
and entering an image acquisition interface under the condition that the background data of the target function is loaded.
In the above method, the real object includes: a real person; the target function entrance includes: a composite photographing entrance; the acquiring a target image of a real object in a real scene in response to a trigger operation on a target function entrance comprises:
jumping from the display function menu page to a composite photographing interface, and displaying a photographing control on the composite photographing interface;
and under the condition of receiving the interactive operation aiming at the photographing control, carrying out image acquisition on the real person in the real scene to obtain a real person image as the target image.
In the above method, after jumping from the display function menu page to the composite photographing interface, the method further includes:
and loading a preset special effect paster from the target function background data, and displaying the preset special effect paster on the synthetic photographing interface.
In the above method, the at least one interactive control includes: rotating the control; the determining response effect data corresponding to the virtual object in response to the interactive operation aiming at the target interactive control in the at least one interactive control comprises:
displaying a spin guide icon under the condition that the target interaction control is the spin control;
receiving a rotation operation aiming at the rotation guide icon, and determining the rotation direction and the rotation angle of the virtual object according to the rotation operation;
and rotating the virtual object according to the rotating direction and the rotating angle, and acquiring preset rendering data corresponding to the rotated virtual object as the response effect data.
In the above method, the at least one interactive control includes: an information display control; the determining response effect data corresponding to the virtual object in response to the interactive operation aiming at the target interactive control in the at least one interactive control comprises:
acquiring preset introduction data corresponding to the virtual object as the response effect data under the condition that the target interaction control is the information display control; the preset introduction data is used for displaying introduction information of the target real object on a preset display position.
In the above method, the at least one interactive control includes: an image acquisition control; the determining response effect data corresponding to the virtual object in response to the interactive operation aiming at the target interactive control in the at least one interactive control comprises:
under the condition that the target interaction control is the image acquisition control, acquiring a current virtual effect image corresponding to the virtual object to obtain a current object image;
acquiring a first preset template, generating a template object image by combining the first preset template and the current object image, and taking the template object image as the response effect data; the first preset template is used for providing a preset image template for displaying the current object image.
Through the interactive mode of image acquisition, rotation and information introduction, the richness of the display content is improved, and the diversity of the interactive mode of the display method is improved.
In the above method, the first preset template further includes: a preset identification code; the preset identification code is used for providing a link to a preset function.
In the method, the preset identification code is an applet identification code, and the applet identification code is used for providing a link for displaying an applet entry.
In the above method, the method further comprises:
under the condition that the template object image is generated, jumping to a sharing page, and displaying the template object image on the sharing page, wherein the sharing page comprises a first sharing control;
and under the condition that the sharing operation aiming at the first sharing control is received, generating a first applet link corresponding to the template object image, and sharing the first applet link to the target equipment.
By sharing the virtual effect image to the target device, the diversity of the interaction mode of the display method is further improved, and the popularization of virtual service products is facilitated.
In the above method, the sharing page further includes a first save control, and the method further includes:
and under the condition that the saving operation aiming at the first saving control is received, saving the template object image to a local storage space.
In the above method, the acquiring a target image of a real object in a real scene includes:
displaying acquisition range guide information on an image acquisition interface under the condition of entering the image acquisition interface; the acquisition range guiding information comprises a preset acquisition range boundary;
and carrying out image acquisition on the real object through the image acquisition interface, and taking the image in the boundary of the preset acquisition range as the target image.
In the above method, the real object includes: a target real article, the virtual object being a target virtual model corresponding to the target real article;
the determining the virtual object corresponding to the target image by identifying the target image includes:
performing target recognition of at least one preset real article on the target image to obtain a detection result of the target real article;
and acquiring a target virtual model corresponding to the target real article as the virtual object based on the detection result.
In the above method, the method further comprises:
and under the condition that a preset real article is not identified in the target image, displaying guide information for prompting the adjustment of the position of the boundary of the preset acquisition range on the image acquisition interface.
In the above method, the method further comprises:
and under the condition of displaying the virtual effect image, playing multimedia audio data corresponding to the virtual effect image.
The dynamic virtual effect image is combined with the multimedia audio data, so that the display effect of the display function is further enriched, and the diversity of the interaction mode is improved.
In the above method, the displaying a virtual effect image obtained by rendering the virtual object in the target function page includes:
acquiring corresponding position information of the target image in a preset screen coordinate system, and taking the position information as screen position information displayed by the target real article on the target function page;
performing pose estimation based on the target image to obtain real position information of the target real article in the real scene;
and obtaining preset target rendering data corresponding to the virtual object from target function background data, and rendering the virtual object by using the preset target rendering data based on the mapping relation between the real position information and the screen position information to obtain the virtual effect image.
The virtual object is rendered to obtain the virtual effect image with the augmented reality effect, so that the real object and the display content in the virtual world can be seamlessly combined, the diversity and experience of the display content are enhanced, and the richness of the display content is improved.
In the above method, the target image includes: an image of a real person, the virtual object comprising: a preset special effect sticker;
the displaying of the virtual effect image obtained by rendering the virtual object in the target function page includes:
and superposing the real character image and the preset special effect paster, and generating and displaying a superposed current synthetic image as the virtual effect image.
The image effect of the composite image is displayed by superposing the real character image and the preset special effect paster, so that the diversity of the interaction mode is improved, and the richness of the display content is further improved.
In the above method, the method further comprises:
the display method is implemented through an applet or a web client.
Implementing the display method through an applet page or a web client improves the convenience of the display method and facilitates the popularization of virtual service products.
An embodiment of the present disclosure provides a display device, including:
the acquisition unit is used for responding to the trigger operation of the target function entrance and acquiring a target image of a real object in a real scene;
the rendering unit is used for identifying the target image, determining a virtual object corresponding to the target image, and displaying a virtual effect image obtained by rendering the virtual object in a target function page; the target function page comprises at least one interaction control; the virtual effect image is used for displaying an augmented reality effect corresponding to the target image;
and the operation unit is used for responding to the interactive operation aiming at the target interactive control in the at least one interactive control, determining response effect data corresponding to the virtual object, and updating the displayed virtual effect image through the response effect data.
An embodiment of the present disclosure provides an electronic device, including:
a display screen; a memory for storing an executable computer program;
a processor for implementing any one of the display methods described above in conjunction with the display screen when executing the executable computer program stored in the memory.
The embodiments of the present disclosure provide a computer-readable storage medium, on which a computer program is stored, for causing a processor to implement any one of the display methods as described above when executed.
The disclosed embodiment provides a display method, a display device, an electronic device and a computer-readable storage medium; the virtual object is determined by acquiring the target image of the real object in the real scene, and the virtual object is rendered to obtain the virtual effect image, so that the augmented reality effect of the virtual effect image is utilized to improve and enrich the display effect of the display application. Moreover, the electronic equipment can receive the interactive operation of the user through at least one interactive control, and perform corresponding operation and update display on the virtual object, so that the richness of the display content is improved.
Drawings
Fig. 1 is a schematic diagram of a display system according to an embodiment of the present disclosure;
fig. 2 is an alternative flow chart of a display method provided by the embodiment of the present disclosure;
fig. 3 is an alternative flow chart of a display method provided by the embodiment of the present disclosure;
fig. 4 is a schematic diagram illustrating an optional effect of a target function loading interface according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram illustrating an optional effect of an image capture interface according to an embodiment of the present disclosure;
fig. 6 is an alternative flow chart of a display method provided by the embodiment of the present disclosure;
fig. 7 is a schematic diagram illustrating an optional effect of a target function page according to an embodiment of the present disclosure;
fig. 8 is a schematic diagram illustrating an optional effect of guidance information provided by an embodiment of the present disclosure;
FIG. 9 is an alternative diagram illustrating response effects of a rotation control provided by an embodiment of the present disclosure;
FIG. 10 is an alternative diagram illustrating the response effect of an information profile control provided by an embodiment of the present disclosure;
fig. 11 is an alternative schematic diagram of a response effect of an image capture control according to an embodiment of the present disclosure;
FIG. 12 is an alternative diagram illustrating response effects of at least one interaction control provided by an embodiment of the present disclosure;
fig. 13 is a schematic diagram illustrating an optional effect of a target function loading interface according to an embodiment of the present disclosure;
fig. 14 is a schematic diagram illustrating an optional effect of an image capture interface according to an embodiment of the present disclosure;
fig. 15 is a schematic diagram illustrating an optional effect of a target function page according to an embodiment of the present disclosure;
FIG. 16 is an alternative diagram illustrating the response effect of an information profile control provided by an embodiment of the present disclosure;
fig. 17 is an alternative schematic diagram of a response effect of an image capture control according to an embodiment of the present disclosure;
FIG. 18 is an alternative flow chart of a display method provided by an embodiment of the present disclosure;
fig. 19 is a schematic diagram illustrating an alternative interaction effect of a composite photographing function according to an embodiment of the disclosure;
fig. 20 is an alternative structural schematic diagram of a display device provided in an embodiment of the present disclosure;
fig. 21 is an alternative structural schematic diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure.
To make the objectives, technical solutions, and advantages of the present disclosure clearer, the present disclosure is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present disclosure, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present disclosure.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. The terminology used herein is for the purpose of describing embodiments of the disclosure only and is not intended to be limiting of the disclosure.
Before the embodiments of the present disclosure are described in further detail, the terms and expressions referred to in the embodiments of the present disclosure are explained; these terms and expressions apply to the following explanations.
1) A mini program (also called a web program) is a program developed based on a front-end-oriented language (e.g., JavaScript) that implements a service within a HyperText Markup Language (HTML) page. It is software that is downloaded by an application (e.g., a browser, or any client with an embedded browser core) via a network (e.g., the Internet) and interpreted and executed in the browser environment of that application, thereby saving the step of downloading and installing a separate application.
2) A web client, also called a web page client, refers to a program that corresponds to a server and provides local services to the user. A web client is usually installed on an ordinary client machine and needs to cooperate with a server side to operate. Common web clients include the web browsers used for the World Wide Web, email clients for sending and receiving emails, client software for instant messaging, and the like.
3) Augmented Reality (AR) is a relatively new technology that promotes the integration of real-world information and virtual-world information. On the basis of computer science and other technologies, it simulates physical information that is otherwise difficult to experience within the spatial range of the real world, superimposes the virtual information content onto the real world for effective application, and allows this content to be perceived by human senses, thereby achieving a sensory experience beyond reality. After the real environment and the virtual object are superimposed, they can exist in the same picture and space at the same time.
Augmented reality technology can not only effectively present real-world content, but also promote the display of virtual information content, with the two kinds of content supplementing and superimposing each other. In visual augmented reality, the real world and computer graphics are superimposed so that the real world can be fully seen together with the computer graphics. Augmented reality mainly involves technologies and means such as multimedia, three-dimensional modeling, and scene fusion, and the information content it provides is clearly different from the information content that humans can perceive directly.
At present, some merchants bundle virtual service products with the physical goods they sell for users to enjoy. For example, exclusive activities or services are provided in official applications for users to participate in. The user can obtain the application corresponding to the commodity by scanning a two-dimensional code or by downloading and installing it from an application store, and then log in to the application with a purchase certificate to claim rights and experience the virtual service. However, this operation process is cumbersome, and the interaction between the virtual service product and the user is usually limited to a single mode, which is not conducive to the popularization of virtual service products; this reduces the richness and display effect of the display content of virtual service products, such as some display service applications, and also reduces the convenience of using them.
The embodiment of the disclosure provides a display method, which can improve convenience and richness of display application. The display method provided by the embodiment of the disclosure is applied to electronic equipment.
An exemplary application of the electronic device provided by the embodiment of the present disclosure is described below, and the electronic device provided by the embodiment of the present disclosure may be implemented as various types of user terminals (hereinafter, referred to as terminals) such as Augmented Reality (AR) glasses, a notebook computer, a tablet computer, a desktop computer, a set-top box, a mobile device (for example, a mobile phone, a portable music player, a smart watch, a personal digital assistant, a dedicated information device, and a portable game device).
Referring to fig. 1, fig. 1 is an alternative architecture diagram of a display system provided by an embodiment of the present disclosure. In the display system 100, a terminal (electronic device) 400 is connected to a server 200 through a network 300, and the network 300 may be a wide area network, a local area network, or a combination of the two. The terminal 400 is configured to, when receiving an operation of a user for starting a display applet, for example, a trigger operation of the user on a display applet entry, obtain the display applet from the server 200 and load and run the display applet in the terminal; the home page of the applet, such as a display function menu page, is displayed through a built-in or external display screen 401. At least one function entry can be displayed on the display function menu page, so that the operation of the user can be received through the at least one function entry.
The terminal 400 is further configured to, in a case that a trigger operation of a user for a target function entry in the at least one function entry is received, enter an image capture page corresponding to the target function in response to the trigger operation of the target function entry, and capture a target image 501 of a real object 500 in a real scene through a built-in or external image capture device 402. Illustratively, the real object may be a book or a calendar, and the target image may be an image on the book or the calendar, such as an exhibit image, or the like.
The terminal 400 is further configured to identify the target image, determine a virtual object corresponding to the target image, and display a virtual effect image 600 obtained by rendering the virtual object in the target function page through the display screen 401; the target function page comprises at least one interactive control; the virtual effect image is used for displaying an augmented reality effect corresponding to the target image; and responding to the interactive operation aiming at the target interactive control in at least one interactive control, determining response effect data corresponding to the virtual object, updating the virtual effect image through the response effect data, and displaying the response effect corresponding to the interactive operation. Illustratively, the virtual object may be an AR model corresponding to an image on a book or calendar.
In some embodiments of the present disclosure, the terminal 400 may pre-store the virtual object locally in the terminal, and the terminal 400 may also generate or determine the virtual object corresponding to the target image by using a server, such as the server 200 or another server with image processing functions, and then load the virtual object from the server to the terminal 400 to perform rendering and subsequent interactive operations on the virtual object, and the like. The server side may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, a cloud database, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms. The terminal and the server may be directly or indirectly connected through wired or wireless communication, and the embodiment of the present disclosure is not limited thereto.
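As a rough illustration of the "pre-store locally or load from a server" choice described above, the following TypeScript sketch resolves a virtual object from a local cache first and falls back to a server fetch. The names VirtualObject, fetchModelFromServer, localModelCache, and the example URL are assumptions made for illustration and do not appear in the disclosure.

```typescript
interface VirtualObject {
  id: string;
  meshUrl: string;          // location of the 3D asset to render
  renderingPreset: string;  // key into the preset rendering data
}

const localModelCache = new Map<string, VirtualObject>();

async function fetchModelFromServer(imageLabel: string): Promise<VirtualObject> {
  // Hypothetical endpoint standing in for the server 200 (or another
  // image-processing server) mentioned in the description.
  const res = await fetch(`https://example.com/models/${imageLabel}`);
  return (await res.json()) as VirtualObject;
}

async function resolveVirtualObject(imageLabel: string): Promise<VirtualObject> {
  const cached = localModelCache.get(imageLabel);
  if (cached) return cached;                              // pre-stored locally on the terminal
  const remote = await fetchModelFromServer(imageLabel);  // generated or stored on the server side
  localModelCache.set(imageLabel, remote);                // loaded to the terminal for rendering
  return remote;
}
```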
Next, an exemplary application in the case where the electronic device is implemented as a terminal will be explained.
Fig. 2 is an alternative flow chart of a display method provided by the embodiment of the present disclosure, which will be described with reference to the steps shown in fig. 2.
S101, responding to a trigger operation on a target function entrance, and acquiring a target image of a real object in a real scene.
The display method in the embodiment of the present disclosure is suitable for scenes in which a user experiences, through an applet or a web client, a virtual service product attached to a real article, such as an AR display experience for an exhibit image in a paper calendar, or a multimedia interaction experience for an exhibit picture on a movable display board, and so on. Relying on an applet or a web client, the display method in the embodiment of the present disclosure can provide, in a lightweight manner, a user experience in which a physical object and a virtual object are displayed in combination, together with rich interaction modes.
In the embodiment of the present disclosure, the target function entry may be displayed on an applet page or a web client, and is configured to receive a trigger operation of a user to start a corresponding target function. Here, the object function is used to provide an augmented reality effect or an image synthesis effect of a real scene combined with a virtual object by acquiring an image of the real scene and determining the virtual object, such as an AR model or a special effect image, corresponding to the image of the real scene.
In some embodiments, a display applet entry may be presented on a preset page, for example, a display applet icon entry presented in an applet list of an application pull-down page, or a display applet link entry presented on a web page, and so on. In response to a trigger operation received from a user for the display applet entry, the electronic device may present the display function menu page, and display, through the display function menu page, at least one function entry corresponding to at least one function contained in the display applet. In this way, the electronic device may enter the target function corresponding to the target function entry in response to a trigger operation for the target function entry in the at least one function entry.
In some embodiments, the electronic device may also directly display the target function entry on a preset page, such as an applet page or a web page client, or design a menu hierarchy according to actual needs to layout the target function entry, specifically select the target function entry according to actual situations, which is not limited in the embodiments of the present disclosure.
In the embodiment of the disclosure, the electronic device may enter the image acquisition page corresponding to the target function in response to the trigger operation for the target function entry, and call the image acquisition device to acquire the image of the real object in the real scene to obtain the target image.
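A minimal sketch of this step, assuming a generic camera abstraction rather than any particular applet API: triggering the target function entry opens the image acquisition page, starts the capture device, and returns one frame as the target image. The Frame and CameraDevice types and the openPage callback are hypothetical names introduced only for this sketch.

```typescript
interface Frame { width: number; height: number; data: Uint8ClampedArray; }

interface CameraDevice {
  start(): Promise<void>;
  captureFrame(): Promise<Frame>;
}

async function onTargetFunctionEntryTriggered(
  camera: CameraDevice,
  openPage: (name: string) => void,
): Promise<Frame> {
  openPage("image-acquisition");  // enter the image acquisition page of the target function
  await camera.start();           // call the image acquisition device
  return camera.captureFrame();   // the captured frame serves as the target image
}
```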
In the embodiment of the disclosure, the real scene is an actual scene in which a real object exists. In some embodiments, the real scene may include at least one real object, and the real object may be one of the at least one real object; in some embodiments, the real object may also include a real person. Illustratively, the real scene may include a physical object, such as a paper calendar, which contains at least one exhibit image, and the real object may be a target exhibit image in the at least one exhibit image, such as an incense burner image, an art portrait, or the like. The electronic device can perform image acquisition on the target exhibit to obtain the target image.
In some embodiments, the real object in the real scene may also be an exhibit or a display board displayed in a science and technology museum or an art museum, or a commodity album displayed in a shopping mall, which is specifically selected according to the actual situation, and the embodiment of the disclosure is not limited.
S102, identifying the target image, determining a virtual object corresponding to the target image, and displaying a virtual effect image obtained by rendering the virtual object in a target function page; the target function page comprises at least one interactive control; the virtual effect image is used for displaying the augmented reality effect corresponding to the target image.
In the embodiment of the disclosure, when the target image is obtained, the electronic device may perform feature extraction and image analysis on the target image to identify the target image. For the case that the real object is a target real article, the electronic device may identify the preset real article corresponding to the target image, and further determine a virtual object corresponding to the preset real article, such as a virtual AR model corresponding to the preset real article. For the case that the real object is a real person, the electronic device may perform human body recognition on the target image, recognize the human body parts included in the target image, such as the facial features, the head, or the hands, and further determine a virtual object corresponding to the human body part, such as a special effect sticker corresponding to a specific human body part.
In some embodiments, the virtual object may be a pre-modeled image model, including specific visual parameters or action logic. The electronic device may generate and store in advance a preset virtual model corresponding to each preset real article according to the type of the preset real article, and acquire the preset virtual model corresponding to the preset real article as the virtual object when the corresponding preset real article is recognized from the target image. In some embodiments, the electronic device may also generate the virtual object corresponding to the target object in real time by recognizing the target image; for example, the electronic device may perform real-time AR modeling according to an exhibit image collected from a calendar to obtain an AR model of the exhibit as the virtual object. The specific choice is made according to the actual situation, and the embodiment of the disclosure is not limited.
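The branching described in the two paragraphs above can be sketched as a small lookup, assuming hypothetical registries of preset models and stickers; the labels and asset identifiers below are illustrative only and not part of the disclosure.

```typescript
type VirtualObjectKind = "ar-model" | "effect-sticker";

interface ResolvedVirtualObject { kind: VirtualObjectKind; assetId: string; }

// Illustrative registries: recognized preset articles map to pre-stored AR models,
// recognized body parts map to preset special effect stickers.
const presetModels: Record<string, string> = { "incense-burner": "model.incense-burner.v1" };
const presetStickers: Record<string, string> = { head: "sticker.crown", hand: "sticker.glove" };

function resolveFromRecognition(
  result: { article?: string; bodyPart?: string },
): ResolvedVirtualObject | undefined {
  if (result.article && presetModels[result.article]) {
    return { kind: "ar-model", assetId: presetModels[result.article] };
  }
  if (result.bodyPart && presetStickers[result.bodyPart]) {
    return { kind: "effect-sticker", assetId: presetStickers[result.bodyPart] };
  }
  return undefined; // nothing recognized; the UI can show guidance instead
}
```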
In the embodiment of the disclosure, the electronic device may enter the target function page when determining the virtual object, estimate the real position information of the real object in the real scene by using a computer vision method based on the target image, acquire the screen position information of the target image in the display screen of the electronic device, and further establish the mapping relationship between the real world and the screen. The electronic equipment can render the virtual object by using preset rendering data corresponding to the virtual object according to the mapping relation between the real world and the screen in the target function page, and the rendering result of the virtual object is added to the object image of the real scene, so that the enhanced display effect of combining the virtual and the real scenes is displayed.
In some embodiments, the real object may be a planar image in a real scene, such as an exhibit image in a calendar. The electronic device can take the real object as a marker plane in the real scene, recognize and evaluate the marker plane through a camera (pose estimation), and determine the position information of the marker plane in the real scene. The electronic device maps the marker plane in the three-dimensional real scene to the two-dimensional screen page of the electronic device, and renders and draws the virtual effect image corresponding to the virtual object on the two-dimensional screen page obtained through the mapping.
S103, responding to the interactive operation of the target interactive control in the at least one interactive control, determining response effect data corresponding to the virtual object, and updating the displayed virtual effect image through the response effect data.
In the embodiment of the disclosure, the target function page includes at least one interactive control, and the electronic device may receive an interactive operation of a user through a target interactive control of the at least one interactive control, and further, in response to the interactive operation, operate the virtual object according to a control function of the target interactive control, so that the interactive operation of the user is applied to the virtual object, and interaction between the user and the virtual object is achieved.
In the embodiment of the disclosure, the electronic device may determine the operation mode of the virtual object through the interactive operation on the target interactive control, and further determine the response effect data corresponding to the operation mode. The electronic device updates the currently displayed virtual effect image by using the response effect data, so that the virtual object embodies the interaction effect in response to the interactive operation.
In some embodiments, based on the interaction function corresponding to each interaction control in the at least one interaction control, the response effect data may be rendering data for re-rendering the virtual object, or may also be text or image effect data displayed in combination with the virtual object, which is specifically selected according to an actual situation, and the embodiment of the present disclosure is not limited.
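A sketch of this step as a dispatcher over the interaction controls discussed later in the description (rotation, information display, image acquisition). The union type for response effect data and all field names are assumptions, chosen only to mirror the distinction above between rendering data and text or image effect data.

```typescript
type ResponseEffectData =
  | { kind: "rendering"; presetRenderingKey: string }            // e.g. data for the rotated model
  | { kind: "info"; text: string }                               // introduction information
  | { kind: "image"; templateId: string; snapshot: Uint8Array }; // captured image plus template

function handleInteraction(
  control: "rotate" | "info" | "capture",
  virtualObjectId: string,
): ResponseEffectData {
  switch (control) {
    case "rotate":
      return { kind: "rendering", presetRenderingKey: `${virtualObjectId}.rotated` };
    case "info":
      return { kind: "info", text: `Introduction of ${virtualObjectId}` };
    case "capture":
      return { kind: "image", templateId: "first-preset-template", snapshot: new Uint8Array() };
  }
}
```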
It can be understood that, in the embodiment of the present disclosure, the virtual object is determined by collecting the target image of the real object in the real scene, and the virtual object is rendered to obtain the virtual effect image, so that the augmented reality effect of the virtual effect image is utilized to improve and enrich the display effect of the display application. Moreover, the electronic equipment can receive the interactive operation of the user through at least one interactive control, and perform corresponding operation and update display on the virtual object, so that the richness of the display content is improved.
In some embodiments, based on fig. 2, as shown in fig. 3, S101 may be implemented by S1011-S1012 as follows:
S1011, responding to the trigger operation of the target function entrance, jumping from the display function menu page to the target function loading interface, and synchronously loading the background data of the target function in the display process of the target function loading interface.
In the embodiment of the disclosure, in response to a trigger operation on a target function entry, the electronic device may jump from a display function menu page to a target function loading interface before entering an image acquisition interface, and display a loading progress of background data of a target function and introduction information of the target function through the target function loading interface.
In the embodiment of the disclosure, the electronic device may load the target function background data synchronously in the display process of the target function loading interface, so as to render the virtual object in the subsequent steps. Here, the target function background data includes at least one preset rendering data; the target function background data can be stored in the server and loaded to the local by the electronic equipment; or the memory may be stored locally in the electronic device, and loaded to the memory in the running process, specifically, the selection is performed according to the actual situation, and the embodiment of the present disclosure is not limited.
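A sketch of this synchronous loading, assuming the background data is described by a simple manifest of URLs; the manifest format, the fetch endpoints, and the progress callback are illustrative assumptions rather than details specified in the disclosure.

```typescript
interface TargetFunctionBackgroundData { presetRendering: Map<string, ArrayBuffer>; }

async function loadBackgroundData(
  manifest: string[],                                    // URLs of the preset rendering data
  onProgress: (loaded: number, total: number) => void,   // drives the loading progress bar
): Promise<TargetFunctionBackgroundData> {
  const presetRendering = new Map<string, ArrayBuffer>();
  let loaded = 0;
  await Promise.all(manifest.map(async (url) => {
    const res = await fetch(url);                        // from the server, or a local cache
    presetRendering.set(url, await res.arrayBuffer());
    onProgress(++loaded, manifest.length);               // update the loading interface
  }));
  return { presetRendering };                            // done: the image acquisition interface can be entered
}
```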
And S1012, entering an image acquisition interface to acquire a target image of a real object in a real scene under the condition that the background data of the target function is loaded.
In some embodiments, the target function may be, for example, a "bring the exhibit home" function in a virtual service product attached to a calendar, which may provide user interaction with the exhibit AR model, and its corresponding target function loading interface may be as shown in fig. 4. The target function loading interface in fig. 4 shows information such as the name 11 of the target function, the loading progress 12 of the background data of the target function, and the introduction information 13 of the target function.
In some embodiments, the process of capturing the target image of the real object in the real scene by the electronic device upon entering the image capturing interface may be implemented by performing S201-S202 as follows:
S201, displaying acquisition range guidance information on an image acquisition interface under the condition of entering the image acquisition interface; the acquisition range guidance information includes a preset acquisition range boundary.
In the embodiment of the disclosure, under the condition that the electronic device enters the image acquisition interface, the acquisition range guide information can be displayed on the image acquisition interface, so that the image acquisition range of the electronic device to the real scene is focused on the real object in a targeted manner, the interference of irrelevant information in the real scene is reduced, and the accuracy of identifying the target image in the subsequent steps is improved.
In some embodiments, the acquisition range guidance information may include a preset acquisition range boundary, which may be, for example, a graphical boundary such as a scan frame, a sight frame, or the like. In some embodiments, the acquisition range guidance information may also include textual guidance information for providing textual guidance for image acquisition.
Illustratively, based on fig. 4, the image acquisition interface may be as shown in fig. 5, where the preset acquisition range boundary 21 is shown in fig. 5.
S202, image acquisition is carried out on a real object in a real scene through an image acquisition interface, and an image in a preset acquisition range boundary is used as a target image.
In the embodiment of the disclosure, the electronic device may perform image acquisition on a real object in a real scene through an image acquisition interface, discard image information outside a preset acquisition range boundary, and use an image within the preset acquisition range boundary as a target image.
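A sketch of this cropping step, assuming the captured frame is a flat RGBA8 buffer and the preset acquisition range boundary is an axis-aligned rectangle in pixel coordinates: pixels outside the boundary are discarded and the cropped region becomes the target image. The types and layout are illustrative assumptions.

```typescript
interface RgbaFrame { width: number; height: number; data: Uint8ClampedArray; } // RGBA8 pixels
interface Boundary { x: number; y: number; width: number; height: number; }     // scan-frame rectangle

// The boundary is assumed to lie fully inside the frame.
function cropToBoundary(frame: RgbaFrame, b: Boundary): RgbaFrame {
  const out = new Uint8ClampedArray(b.width * b.height * 4);
  for (let y = 0; y < b.height; y++) {
    const srcStart = ((b.y + y) * frame.width + b.x) * 4;
    out.set(frame.data.subarray(srcStart, srcStart + b.width * 4), y * b.width * 4);
  }
  return { width: b.width, height: b.height, data: out }; // this becomes the target image
}
```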
In some embodiments, the mode of image acquisition of the real object by the electronic device may include a scanning or photographing mode, and may also be other image acquisition modes, which are specifically selected according to actual situations, and the embodiment of the present disclosure is not limited.
In some embodiments, the real object includes: a target real article, and the virtual object is a target virtual model corresponding to the target real article. The target real article may be, for example, an exhibit image in a calendar, such as an incense burner image; the virtual object can be a virtual model corresponding to the exhibit image, such as an AR model of the incense burner. The following describes the display method in the embodiment of the present disclosure by taking the case where the real object is a target real article as an example.
Based on fig. 2 and 3, as shown in fig. 6, the process of determining the virtual object corresponding to the target image by identifying the target image in S102 may be implemented by performing the processes of S1021-S1022; the process of displaying the virtual effect image obtained by rendering the virtual object in the target function page in S102 can be implemented by the processes of S1023 to S1025, which will be described with reference to each step.
And S1021, performing target recognition of at least one preset real article on the target image to obtain a detection result of the target real article.
In the embodiment of the disclosure, the electronic device may obtain a multi-target classification neural network in a machine learning manner, which is used for identifying the classification probability that an input image corresponds to each of at least one preset real article. The electronic device can perform feature extraction and classification prediction on the target image through the multi-target classification neural network to obtain the classification probability that the target image belongs to each preset real article in the at least one preset real article, and obtain the detection result of the target real article corresponding to the target image based on the classification probability that the target image belongs to each preset real article.
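A sketch of turning the classification probabilities into a detection result: the network is treated as a black box that outputs one probability per preset real article, and the highest probability above a threshold (an assumed value, not specified in the disclosure) is kept; otherwise no preset article is considered recognized, which corresponds to the guidance case of S1026 below.

```typescript
interface DetectionResult { label: string; score: number; }

function pickDetection(
  probabilities: Record<string, number>, // one classification probability per preset real article
  threshold = 0.5,                       // assumed cut-off
): DetectionResult | null {
  let best: DetectionResult | null = null;
  for (const [label, score] of Object.entries(probabilities)) {
    if (!best || score > best.score) best = { label, score };
  }
  return best && best.score >= threshold ? best : null; // null -> no preset article recognized
}

// Example: pickDetection({ "incense-burner": 0.93, "portrait": 0.04 })
// returns { label: "incense-burner", score: 0.93 }.
```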
In some embodiments, based on the image acquisition interface shown in fig. 5, the electronic device may acquire the target image within the boundary of the preset acquisition range in a scanning manner, perform real-time identification on the target image obtained by real-time scanning, and predict a preset real article corresponding to the target image as a detection result of the target real article.
And S1022, acquiring a target virtual model corresponding to the target real article as a virtual object based on the detection result.
In the embodiment of the disclosure, the electronic device may generate, in real time, a target virtual model corresponding to the target real article as a virtual object using preset modeling data based on a detection result of the target real article, and the electronic device may also pre-store at least one preset virtual model corresponding to at least one preset real article in target function background data of the display applet, so that the target virtual model corresponding to the target real article may be obtained as the virtual object from the pre-stored at least one preset virtual model based on the detection result of the target real article. The specific choice is made according to the actual situation, and the embodiment of the disclosure is not limited.
Exemplarily, the electronic device identifies the target image to obtain a detection result that the target real object is the incense burner image. The electronic equipment can perform three-dimensional modeling under a virtual scene aiming at the incense burner image to obtain a three-dimensional virtual model of the incense burner as a virtual object corresponding to the incense burner image.
And S1023, acquiring corresponding position information of the target image in a preset screen coordinate system, and taking the position information as the screen position information displayed by the target real object on the target function page.
In the embodiment of the disclosure, since the target image is an image displayed in a target function page currently displayed on the display screen by the target real article, the electronic device may obtain the corresponding position information of each pixel point in the target image in the preset screen coordinate system of the display screen, as the screen position information displayed on the target function page by the target real article.
And S1024, performing pose estimation based on the target image to obtain the real position information of the target real article in the real scene.
In the embodiment of the disclosure, since the target image also reflects the position and posture information of the target real article in the real scene, the electronic device may perform pose estimation based on the target image, in combination with the calibration parameters of the image acquisition device, such as the internal reference matrix of the camera, to obtain the real position information of the target real article in the real scene.
In some embodiments, the electronic device may use a coordinate system with the center of the target real object as an origin as a real coordinate system, rotate and translate the real coordinate system to a camera coordinate system corresponding to the image capturing device by using a transformation relationship of 3D projective geometry, and then map the real coordinate system to the screen coordinate system from the camera coordinate system, thereby obtaining a mapping relationship between the real position information and the screen position information.
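This mapping can be sketched as a standard pinhole projection: a point in the real (marker-centered) coordinate system is rotated and translated into the camera coordinate system and then projected with the intrinsic matrix onto the screen. The rotation R, translation t, and intrinsics K are assumed to come from the pose-estimation step, whose concrete method the disclosure does not specify.

```typescript
type Vec3 = [number, number, number];
type Mat3 = [Vec3, Vec3, Vec3];

function projectToScreen(p: Vec3, R: Mat3, t: Vec3, K: Mat3): [number, number] {
  // Camera coordinates: Xc = R * Xm + t
  const cam: Vec3 = [
    R[0][0] * p[0] + R[0][1] * p[1] + R[0][2] * p[2] + t[0],
    R[1][0] * p[0] + R[1][1] * p[1] + R[1][2] * p[2] + t[1],
    R[2][0] * p[0] + R[2][1] * p[1] + R[2][2] * p[2] + t[2],
  ];
  // Homogeneous image coordinates: x = K * Xc, then divide by depth
  const u = K[0][0] * cam[0] + K[0][1] * cam[1] + K[0][2] * cam[2];
  const v = K[1][0] * cam[0] + K[1][1] * cam[1] + K[1][2] * cam[2];
  const w = K[2][0] * cam[0] + K[2][1] * cam[1] + K[2][2] * cam[2];
  return [u / w, v / w]; // pixel position in the preset screen coordinate system
}
```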
S1025, obtaining target preset rendering data corresponding to the virtual object from the target function background data, and rendering the virtual object by using the target preset rendering data based on the mapping relation between the real position information and the screen position information to obtain a virtual effect image.
In the embodiment of the disclosure, when the electronic device establishes the mapping relationship between the real coordinate system and the screen coordinate system, the electronic device may render the virtual object on the display screen according to the mapping relationship between the real position information and the screen position information, so as to obtain a virtual effect image in which the rendered virtual object image is attached to the article image of the real scene.
In the embodiment of the disclosure, the electronic device may obtain the preset target rendering data corresponding to the virtual object from the background data of the target function when the background data of the target function is obtained through the loading process. Here, the at least one preset rendering data included in the target function background data may be preset rendering data corresponding to the at least one preset virtual model. The electronic device may obtain preset rendering data corresponding to the target virtual model in the at least one preset rendering data as target preset rendering data.
In some embodiments, in the case that the electronic device identifies the detection result of the target real article from the target image, the electronic device may jump from the image capture interface to the target function page, and render the virtual object and display the virtual effect image on the target function page. For example, in the case that the electronic device recognizes the incense burner image from the target image, an incense burner AR model corresponding to the incense burner image may be acquired, and a virtual object image of the incense burner AR model may be rendered on the target function page through the preset rendering data, as shown in part 31 in fig. 7. Here, the virtual object image may be a three-dimensional incense burner model image, and the electronic device superimposes the virtual object image on the article image of the calendar in the real scene to obtain the virtual effect image, and displays the augmented reality effect combining the virtual and the real through the target function page.
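A rough sketch of the superposition that produces the virtual effect image, assuming both the real-scene frame and the rendered virtual-object image are flat RGBA8 buffers and using a simple alpha-over blend; an actual implementation would rely on the rendering pipeline of the applet or web runtime rather than per-pixel blending.

```typescript
interface RgbaFrame { width: number; height: number; data: Uint8ClampedArray; } // RGBA8 pixels

// Draws the rendered virtual-object image over the real-scene frame at screen
// position (x0, y0) using an alpha-over blend.
function composeVirtualEffectImage(
  realFrame: RgbaFrame,
  rendered: RgbaFrame,
  x0: number,
  y0: number,
): RgbaFrame {
  const out: RgbaFrame = { ...realFrame, data: new Uint8ClampedArray(realFrame.data) };
  for (let y = 0; y < rendered.height; y++) {
    for (let x = 0; x < rendered.width; x++) {
      const dstX = x0 + x;
      const dstY = y0 + y;
      if (dstX < 0 || dstY < 0 || dstX >= out.width || dstY >= out.height) continue;
      const s = (y * rendered.width + x) * 4;
      const d = (dstY * out.width + dstX) * 4;
      const a = rendered.data[s + 3] / 255; // alpha of the rendered model pixel
      for (let c = 0; c < 3; c++) {
        out.data[d + c] = Math.round(rendered.data[s + c] * a + out.data[d + c] * (1 - a));
      }
    }
  }
  return out; // the virtual effect image shown on the target function page
}
```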
It can be understood that, in the embodiment of the present disclosure, the virtual effect image with the augmented reality effect is obtained by rendering the virtual object, the real object and the display content in the virtual world can be seamlessly combined, the diversity and experience of the display content are enhanced, and the richness of the display content is improved.
In some embodiments, based on fig. 6, after S1021, S1026 may also be performed, as follows:
and S1026, under the condition that the preset article object is not identified in the target image, displaying guide information for prompting the adjustment of the position of the boundary of the preset acquisition range on the image acquisition interface.
In the embodiment of the disclosure, the electronic device performs target recognition of at least one preset real article on a target image, and when the preset real article is not recognized, the electronic device determines that the preset acquisition range boundary does not contain any preset real article, and displays guidance information for prompting to adjust the position of the preset acquisition range boundary on an image acquisition interface so as to prompt a user to align the preset acquisition range boundary with a real object for image acquisition.
Illustratively, based on fig. 5, in the case that the preset collection range boundary, i.e. the scanning frame, is not aligned with the exhibit in the calendar, such as the incense burner image, as shown in fig. 8, the electronic device cannot normally recognize the preset exhibit object according to the target image in the scanning frame. The electronic device displays guidance information 41 on the image capture interface to prompt the user to align the scan frame with the display on the calendar.
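The recognize-or-guide decision in S1021/S1026 can be illustrated with the following sketch; the detector output shape, the threshold, and the function names are assumptions for the example, not part of the disclosure.

```typescript
interface Detection { label: string; score: number }

// Decide whether a preset real article was found inside the scanning frame.
function resolveTargetItem(detections: Detection[], threshold = 0.6): string | null {
  const best = detections
    .filter(d => d.score >= threshold)
    .sort((a, b) => b.score - a.score)[0];
  // No preset real article recognized: the caller shows the
  // "align the scanning frame with the exhibit" guidance information.
  if (!best) return null;
  // Otherwise the label (e.g. "incense_burner") selects the matching virtual model.
  return best.label;
}
```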
In some embodiments, based on fig. 2 and fig. 6, the at least one interaction control displayed by the electronic device on the target function page may include a rotation control. In this case, the process in S103 of determining the response effect data corresponding to the virtual object in response to the interactive operation for the target interaction control in the at least one interaction control may be implemented through S1031 to S1033, as follows:
S1031, displaying a rotation guide icon in a case that the target interaction control is the rotation control.
In the embodiment of the disclosure, the rotation control is used for rotating the virtual object so as to display the target real object from all angles through the rotation of the virtual object. When the target interaction control corresponding to the interactive operation triggered by the user is the rotation control, the electronic device may display a rotation guide icon, which guides the user to rotate the virtual object through the operation indicated by the icon.
In some embodiments, based on the target function page shown in fig. 7, the rotation control may be as shown by control 51 in fig. 9. When control 51 receives an interactive operation, for example, the user clicks control 51, the electronic device may display a rotation guide icon 52 near control 51 in response. The rotation guide icon 52 instructs the user to rotate the virtual object by sliding left and right.
S1032, receiving a rotation operation for the rotation guide icon, and determining a rotation direction and a rotation angle of the virtual object according to the rotation operation.
In the embodiment of the disclosure, the electronic device receives a rotation operation through the rotation guide icon, and determines a rotation direction and a rotation angle of the virtual object according to operation information of the rotation operation, such as an operation direction or an operation distance.
In some embodiments, the rotation operation may be a left-right sliding operation, and the electronic device may determine a clockwise or counterclockwise rotation direction of the virtual object according to the left-right sliding direction and determine a rotation angle of the virtual object according to the left-right sliding distance.
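The mapping from a left-right slide to a rotation direction and angle can be sketched as below; the sensitivity constant is an assumed tuning value, not a disclosed parameter.

```typescript
const DEGREES_PER_PIXEL = 0.5; // assumed sensitivity: slide distance to rotation angle

// Convert a horizontal slide (start/end x in pixels) into a rotation of the virtual object.
function slideToRotation(startX: number, endX: number) {
  const deltaX = endX - startX;
  const direction = deltaX >= 0 ? "clockwise" : "counterclockwise";
  const angle = Math.abs(deltaX) * DEGREES_PER_PIXEL; // rotation angle in degrees
  return { direction, angle };
}
```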
S1033, rotating the virtual object according to the rotation direction and the rotation angle, and acquiring preset rendering data corresponding to the rotated virtual object as response effect data.
In the embodiment of the present disclosure, the electronic device may rotate the virtual object according to the rotation direction and the rotation angle indicated by the rotation operation, so as to obtain the rotated virtual object. The electronic device then acquires the preset rendering data corresponding to the rotated virtual object as the response effect data, updates the virtual effect image with the response effect data, and displays the virtual effect image obtained after the virtual object is rotated.
It should be noted that, in some embodiments, the electronic device may set the rotation control to an open state when receiving an interactive operation, such as a click, for the rotation control; while the rotation control is in the open state, the rotation guide icon is displayed and the user can rotate the virtual object through rotation operations. When the rotation of the virtual object is finished, the user may switch the rotation control to a closed state by clicking it again, so that the virtual object exits the rotation state and the rotation guide icon is closed; while the rotation control is in the closed state, rotation operations are not responded to.
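The open/closed behaviour of the rotation control described above can be sketched as a small stateful component; the class, callback, and sensitivity value are illustrative assumptions.

```typescript
class RotationControl {
  private open = false;

  // A click on the control toggles its state; the guide icon follows the state.
  onClick(showGuideIcon: (visible: boolean) => void) {
    this.open = !this.open;
    showGuideIcon(this.open);
  }

  // Slide events are ignored while the control is closed.
  onSlide(deltaX: number): number | null {
    if (!this.open) return null;
    return deltaX * 0.5; // rotation angle in degrees, same assumed sensitivity as above
  }
}
```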
In some embodiments, the at least one interaction control on the target function page may include an information display control. In this case, S103 may be implemented through S1034, as follows:
S1034, in a case that the target interaction control is the information display control, acquiring preset introduction data corresponding to the virtual object as the response effect data; the preset introduction data is used for displaying introduction information of the target real object at a preset display position.
In the embodiment of the present disclosure, the information display control is configured to display the preset introduction data of the virtual object, and the preset introduction data is used to display introduction information of the target real object at the preset display position; the introduction information may include at least one of text introduction information and image introduction information. When receiving an interactive operation for the information display control, the electronic device acquires the preset introduction data corresponding to the virtual object as the response effect data, updates the virtual effect image with the response effect data, and displays the preset introduction data on the current target function page.
In some embodiments, the electronic device may display the preset introduction data at the preset display position through a pop-up window or a pop-up interface, and update the virtual effect image accordingly. Illustratively, based on the target function page shown in fig. 7, the information display control may be as shown by control 61 in fig. 10. When control 61 receives an interactive operation, for example, the user clicks control 61, the electronic device may, in response, acquire the introduction information of the virtual object, for example, the introduction of the incense burner corresponding to the incense burner AR model, as the preset introduction data, pop up an introduction window 62 of the incense burner at the preset display position of the current target function page, and display the preset introduction data through the introduction window 62.
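A minimal sketch of S1034 follows; the data map, the object identifier, and the injected pop-up callback are placeholder names, not the disclosed data model.

```typescript
interface IntroData { text: string; imageUrl?: string }

// Introduction data keyed by virtual object identifier; the entries are placeholders.
const INTRO_DATA: Record<string, IntroData> = {
  incense_burner: { text: "Introduction text for the incense burner exhibit." },
};

// showWindow stands in for whatever pop-up component the target function page uses.
function onInfoControlTapped(objectId: string, showWindow: (data: IntroData) => void) {
  const intro = INTRO_DATA[objectId];
  if (intro) {
    showWindow(intro); // pop up the introduction window at the preset display position
  }
}
```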
In some embodiments, the at least one interaction control on the target function page may include an image acquisition control. In this case, S103 may be implemented through S1035 to S1036, as follows:
S1035, in a case that the target interaction control is the image acquisition control, acquiring the current virtual effect image corresponding to the virtual object to obtain a current object image.
In the embodiment of the present disclosure, the image acquisition control is configured to acquire the virtual effect image currently presented on the target function page. When the target interaction control is the image acquisition control, the electronic device acquires the current virtual effect image corresponding to the virtual object; illustratively, the current virtual effect image may be acquired by means of a screen capture or screenshot to obtain the current object image.
S1036, acquiring a first preset template, generating a template object image by combining the first preset template and the current object image, and taking the template object image as response effect data; the first preset template is used for providing a preset image template for displaying the current object image.
In the embodiment of the disclosure, the electronic device acquires a first preset template, and combines the current object image with the first preset template to generate a template object image as response effect data.
In the embodiment of the present disclosure, the first preset template provides a preset image template for displaying the current object image. In some embodiments, the first preset template may be a preset photo frame, and the electronic device combines the current object image with the preset photo frame to obtain the template object image. Illustratively, as shown in fig. 11, the image acquisition control 70 is displayed on the target function page. When the image acquisition control 70 receives an interactive operation, for example, the user clicks the image acquisition control 70, the electronic device may, in response, acquire the virtual effect image currently presented on the target function page to obtain a current object image 71. The electronic device then acquires the first preset template 72 and combines it with the current object image 71 to generate a template object image, as shown by portion 73 within the dashed frame in fig. 11, and takes the template object image 73 as the response effect data.
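For a web client, the combination of the captured image with a frame-style first preset template can be sketched with a canvas composite; the asset URLs and the margin values are placeholder assumptions.

```typescript
function loadImage(src: string): Promise<HTMLImageElement> {
  return new Promise((resolve, reject) => {
    const img = new Image();
    img.onload = () => resolve(img);
    img.onerror = reject;
    img.src = src;
  });
}

// Composite the captured virtual effect image into a frame-style template.
// The margins (40 px sides, 120 px bottom strip) are arbitrary placeholder values.
async function composeTemplateImage(screenshotUrl: string, frameUrl: string): Promise<string> {
  const [shot, frame] = await Promise.all([loadImage(screenshotUrl), loadImage(frameUrl)]);
  const canvas = document.createElement("canvas");
  canvas.width = frame.width;
  canvas.height = frame.height;
  const ctx = canvas.getContext("2d")!;
  ctx.drawImage(shot, 40, 40, frame.width - 80, frame.height - 160); // photo inside the frame
  ctx.drawImage(frame, 0, 0);                                        // frame drawn on top
  return canvas.toDataURL("image/png");                              // the template object image
}
```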
In some embodiments of the present disclosure, based on the above fig. 9 to fig. 11, the target function page may display at least one interaction control at the same time. For example, as shown in fig. 12, a rotation control 51, an information display control 61, and an image acquisition control 70 are displayed simultaneously on the target function page, so that the interaction manners corresponding to the three controls are all available on the target function page. In some embodiments, when at least two interaction controls are displayed on the target function page at the same time, the electronic device responds to only one interaction control at a time. Illustratively, when the user clicks the rotation control 51, the rotation control 51 enters the open state and the rotation guide icon 52 is displayed; if an interactive operation for the information display control 61 is then received, the rotation control is switched to the closed state, the rotation guide icon is closed, the virtual object exits the rotation state, and an interface of the preset introduction data 62 pops up; if an interactive operation for the image acquisition control 70 is received at this time, the interface of the preset introduction data 62 is closed, the current virtual effect image corresponding to the virtual object is acquired to obtain a current object image, and a template object image 73 is generated by combining the first preset template with the current object image.
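The one-active-control-at-a-time behaviour can be sketched as a small manager that closes the previously active control before activating the new one; the identifiers and callback are illustrative.

```typescript
type ControlId = "rotation" | "info" | "capture";

class ControlManager {
  private active: ControlId | null = null;

  // Activating one control first closes whichever control is currently open,
  // e.g. hiding the rotation guide icon or dismissing the introduction window.
  activate(id: ControlId, close: (previous: ControlId) => void) {
    if (this.active !== null && this.active !== id) close(this.active);
    this.active = id;
  }
}
```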
It can be understood that, in the embodiment of the present disclosure, through at least one of the interaction modes of image acquisition, rotation, and information introduction, the display effect of the display application can be enriched and the interactivity between the display application and the user can be improved.
In some embodiments, the first preset template may further include a preset identification code; the preset identification code is used for providing a link to a preset function.
In the embodiment of the disclosure, the electronic device may generate and display the preset identification code on the first preset template, so that the preset identification code is attached to the template object image generated from the first preset template and the target device can enter the link of the preset function by recognizing the preset identification code. In some embodiments, the preset identification code is an applet identification code, which is used to provide an entry to the display applet. When the target device successfully scans and recognizes the applet identification code, the target function is started on the target device.
In some embodiments, based on the above S1036 and fig. 11, when the template object image is generated, the electronic device may jump from the current target function page to a sharing page and display the template object image on the sharing page, where the sharing page includes a first sharing control. When receiving a sharing operation for the first sharing control, the electronic device may generate a first applet link corresponding to the template object image and share the first applet link with the target device. In this way, by clicking the first applet link, the target device can open the template object image or enter a page of the display applet, thereby realizing sharing to the target device.
In some embodiments, the sharing page may further include a first save control, and the template object image is saved to the local storage space when a save operation for the first save control is received.
In some embodiments, the sharing page may also include a return control, such as a "retake" function control. The return control is used for returning to the target function page to capture the virtual effect image again when a corresponding trigger operation is received.
In some embodiments, the sharing page may also prompt the user, in the form of text information, to share the image through a preset trigger sharing operation, for example by long-pressing the template object image. The specific manner may be chosen according to the actual situation, which is not limited in the embodiments of the present disclosure.
In some embodiments, the template object image generated by the electronic device may be a page in web-view form. Since a web-view page is not convenient to share directly, the electronic device may respond to a sharing operation on the first sharing control, or to a preset trigger sharing operation such as a long press on the template object image, by jumping from the sharing page to a sharing applet and entering the page of the sharing applet, so that the saving operation or the sharing operation on the template object image is performed through the page of the sharing applet.
In some embodiments, the electronic device may use the page of the sharing applet as a secondary sharing page, and display a second save control and a second sharing control on the secondary sharing page. When the second save control is triggered, the template object image is saved locally; when the second sharing control is triggered, a first applet link corresponding to the template object image is generated and shared with the target device.
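A platform-neutral sketch of the save-or-share choice on the secondary sharing page is given below; the injected dependencies stand in for platform APIs and a hypothetical backend link service, none of which are specified in the disclosure.

```typescript
interface ShareDeps {
  saveLocally: (imageUrl: string) => Promise<void>;       // wraps the platform save API
  buildAppletLink: (imageUrl: string) => Promise<string>;  // hypothetical backend call returning a link
  shareLink: (link: string) => Promise<void>;              // delivery to the target device
}

async function onSecondarySharePage(
  action: "save" | "share",
  templateImageUrl: string,
  deps: ShareDeps,
) {
  if (action === "save") {
    await deps.saveLocally(templateImageUrl);                  // second save control
  } else {
    const link = await deps.buildAppletLink(templateImageUrl); // first applet link for the image
    await deps.shareLink(link);                                // second sharing control
  }
}
```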
It can be understood that, in the embodiment of the present disclosure, sharing the virtual effect image with the target device further enriches the interaction of the display applet and facilitates the promotion of the virtual service product corresponding to the display applet.
In some embodiments, the real object may be an artistic portrait in a calendar, and the target function may be an "artistic portrait up" function in a virtual service product attached to the calendar. Based on the descriptions of fig. 2, fig. 3, and fig. 6 above, the electronic device may enter the target function loading interface shown in fig. 13 in response to a trigger operation on the "artistic portrait up" target function entry, and synchronously load the target function background data during the display of the target function loading interface. Here, the target function background data may include preset dynamic effect rendering data and preset multimedia audio data. During the loading of the target function background data, the electronic device may also pop up a prompt window on the target function loading interface reminding the user to turn on the sound for a better experience. When the loading of the target function background data is completed, the electronic device enters the image acquisition interface shown in fig. 14 from the target function loading interface, and captures the artistic portrait on the calendar through the image acquisition interface. When the target image obtained by capturing the artistic portrait is successfully recognized through the image acquisition interface, the electronic device determines the target virtual model corresponding to the artistic portrait as the virtual object, obtains the target preset rendering data corresponding to the virtual object, and renders the virtual object on the target function page to obtain the virtual effect image. In some embodiments, the virtual effect image may be a dynamic artistic portrait, as shown in fig. 15. When displaying the virtual effect image, i.e. the dynamic artistic portrait, the electronic device may also play the multimedia audio data corresponding to the virtual effect image.
In some embodiments, before entering the target function page shown in fig. 15 from the image acquisition interface, the electronic device may detect whether its sound is turned on, and if not, pop up a prompt window on the image acquisition interface to remind the user to turn on the sound.
In some embodiments, the target function page shown in fig. 15 may include an image acquisition control 120 and an information display control 121. When the information display control 121 receives an interactive operation, the electronic device pops up a window interface 130 corresponding to the preset introduction data on the current target function page to introduce the currently recognized artistic portrait, as shown in fig. 16. When the image acquisition control 120 receives an interactive operation, the electronic device captures the current virtual effect image and obtains the first preset template to generate a template object image. When the template object image is generated, the electronic device jumps to the sharing interface and, as shown in fig. 17, displays the template object image 131 and the first sharing control 132 on the sharing interface.
In some embodiments, based on fig. 17, when a sharing operation for the first sharing control 132 is received, or a preset trigger sharing operation is responded to, the electronic device may jump to the sharing applet and enter the secondary sharing interface corresponding to the sharing applet. Through the second sharing control on the secondary sharing page, a first applet link corresponding to the template object image is generated and shared with the target device, completing the sharing of the template object image; alternatively, the template object image is saved locally through the second save control on the secondary sharing page.
It can be understood that, in the embodiment of the present disclosure, the display effect of the display function can be further enriched by combining the dynamic virtual effect image with the multimedia audio data, and the interactivity with the user is improved.
In some embodiments, the real object includes a real person, and the at least one function entry displayed on the display function menu page includes a synthetic photographing entry. Through the synthetic photographing function, the electronic device can realize special-effect photographing of the real person under a preset theme. The display method in the embodiment of the present disclosure is described below by taking the real object being a real person as an example.
Based on fig. 2, as shown in fig. 18, the process of S101 may be realized through S301 to S302, and the process of S102 may be realized through S303, which are described below with reference to the respective steps.
S301, in response to a trigger operation on the synthetic photographing entry, jumping from the display function menu page to a synthetic photographing interface, and displaying a photographing control on the synthetic photographing interface.
In the embodiment of the disclosure, in the case that the target function entry is a synthetic photographing entry, the electronic device jumps from the display function menu page to the synthetic photographing interface in response to a trigger operation on the synthetic photographing entry, so as to perform image acquisition on a real person through the synthetic photographing interface.
In some embodiments, the electronic device may also jump to a target function loading interface as shown in fig. 19 in response to the trigger operation on the synthetic photographing entry, and load the target function background data on the target function loading interface. The target function background data may include a preset special effect sticker. When the target function background data is loaded, the electronic device jumps to the synthetic photographing interface.
In the embodiment of the disclosure, the electronic device displays the photographing control on the synthetic photographing interface, as shown by control 140 in fig. 19.
In some embodiments, after jumping to the synthetic photographing interface, the electronic device may load the preset special effect sticker from the target function background data and display it on the synthetic photographing interface, as shown by sticker 141 in fig. 19.
S302, when an interactive operation for the photographing control is received, performing image acquisition on the real person in the real scene to obtain a real person image as the target image.
In the embodiment of the disclosure, when receiving the interactive operation for the photographing control, the electronic device performs image acquisition on the real person in the real scene and obtains a real person image as the target image.
S303, in a case that the target image is the real person image, recognizing the real person image, determining a preset special effect sticker corresponding to the real person image, superimposing the real person image and the preset special effect sticker, and generating and displaying the superimposed current composite image as the virtual effect image.
In the embodiment of the disclosure, when the target image is a real person image, the electronic device may recognize the real person image to obtain the human body part information contained in it, and then determine the preset special effect sticker corresponding to that human body part information, for example a sticker corresponding to the head or a sticker corresponding to the eyes. The electronic device then superimposes the preset special effect sticker on the corresponding part of the real person image, and generates and displays the superimposed current composite image as the virtual effect image.
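The superimposing step can be illustrated with a canvas sketch that places a sticker over a detected body part; keypoint detection itself is out of scope here, and the keypoint structure and scale factor are assumptions.

```typescript
interface Keypoint { part: "head" | "eye"; x: number; y: number } // pixel coordinates

// Draw the real person image, then the preset special effect sticker centered on the body part.
function drawSticker(
  ctx: CanvasRenderingContext2D,
  photo: HTMLImageElement,
  sticker: HTMLImageElement,
  kp: Keypoint,
  scale = 1.2,
) {
  ctx.drawImage(photo, 0, 0);                               // real person image first
  const w = sticker.width * scale;
  const h = sticker.height * scale;
  ctx.drawImage(sticker, kp.x - w / 2, kp.y - h / 2, w, h); // sticker centered on the body part
}
```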
In some embodiments, after generating the current composite image, the electronic device can jump to a photographing completion interface as shown in fig. 19 to present the current composite image, shown as image 151 in fig. 19. The electronic device may use the photographing completion interface as the target function page, and display a third sharing control of the at least one interaction control on the photographing completion interface, as shown by control 150 in fig. 19. When the electronic device receives a sharing operation for the third sharing control 150, it may acquire a second preset template and generate the template composite image 160 shown in fig. 19 based on the second preset template and the current composite image; the template composite image 160 is used as the response effect data to update the current composite image, i.e. the virtual effect image.
In some embodiments, fig. 19 shows a third save control 161 and a fourth sharing control 162. When the third save control 161 receives a click operation, the template composite image is saved locally. When the fourth sharing control 162 receives a click operation, a second applet link corresponding to the template composite image is generated and shared with the target device. The process of sharing the template composite image is consistent with the description of sharing the template object image, and is not repeated here.
In some embodiments, the target device may enter a loading page by scanning the preset identification code on the template composite image, and after loading is completed, a sharing effect page including the template object image is presented on the target device. The user of the target device can experience the synthetic photographing function through a photographing experience control on the sharing effect page.
It can be understood that the synthetic photographing function can provide the user with special-effect photographing under the preset theme in the virtual service product, further enriching the interaction experience between the display function of the virtual service product and the user.
An embodiment of the present disclosure provides a display device. As shown in fig. 20, the display device 1 includes an acquisition unit 10, a rendering unit 20, and an operation unit 30, wherein:
The acquisition unit 10 is configured to acquire a target image of a real object in a real scene in response to a trigger operation on a target function entry;
the rendering unit 20 is configured to determine a virtual object corresponding to the target image by identifying the target image, and display a virtual effect image obtained by rendering the virtual object in a target function page; the target function page comprises at least one interaction control; the virtual effect image is used for displaying an augmented reality effect corresponding to the target image;
the operation unit 30 is configured to determine response effect data corresponding to the virtual object in response to an interaction operation for a target interaction control in the at least one interaction control, and update the displayed virtual effect image according to the response effect data.
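A structural sketch of the three cooperating units is given below; the method names, signatures, and image type are illustrative assumptions, not the claimed interfaces.

```typescript
type TargetImage = { width: number; height: number; data: Uint8ClampedArray };

interface AcquisitionUnit { acquire(functionEntry: string): Promise<TargetImage> }
interface RenderingUnit   { render(target: TargetImage): Promise<void> }       // shows the virtual effect image
interface OperationUnit   { onInteraction(controlId: string): Promise<void> }  // produces response effect data

interface DisplayDevice {
  acquisition: AcquisitionUnit;
  rendering: RenderingUnit;
  operation: OperationUnit;
}
```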
In the above device, the display device 1 further includes: the display unit is used for responding to the triggering operation aiming at the display applet entrance and displaying a display function menu page, and the display function menu page is used for displaying at least one function entrance;
the acquiring unit 10 is further configured to enter an image acquisition page corresponding to a target function in response to a trigger operation for a target function entry in the at least one function entry, and acquire a target image of a real object in a real scene.
In the above device, the display device 1 further includes: the loading unit is used for jumping from the display function menu page to a target function loading interface and synchronously loading target function background data in the display process of the target function loading interface; the target function background data comprises at least one preset rendering data; and entering an image acquisition interface under the condition that the background data of the target function is loaded.
In the above apparatus, the real object includes: a real character; the target function entry includes: a synthetic photographing entry; the acquisition unit 10 is further configured to jump from the display function menu page to a synthetic photographing interface and display a photographing control on the synthetic photographing interface; and, when receiving an interactive operation for the photographing control, perform image acquisition on the real person in the real scene to obtain a real person image as the target image.
In the above device, the loading unit is further configured to, after the jump from the display function menu page to the synthetic photographing interface, load a preset special-effect sticker from the target function background data and display the preset special-effect sticker on the synthetic photographing interface.
In the above apparatus, the at least one interaction control includes: a spin control; the operation unit 30 is further configured to display a spin guide icon when the target interaction control is the spin control; receive a rotation operation for the spin guide icon, and determine the rotation direction and the rotation angle of the virtual object according to the rotation operation; and rotate the virtual object according to the rotation direction and the rotation angle, and acquire preset rendering data corresponding to the rotated virtual object as the response effect data.
In the above apparatus, the at least one interaction control includes: an information display control; the operation unit 30 is further configured to, when the target interaction control is the information display control, obtain preset introduction data corresponding to the virtual object, and use the preset introduction data as the response effect data; the preset introduction data is used for displaying introduction information of the target real object on a preset display position.
In the above apparatus, the at least one interaction control includes: an image acquisition control; the operation unit 30 is further configured to, when the target interaction control is the image acquisition control, acquire a current virtual effect image corresponding to the virtual object to obtain a current object image; acquiring a first preset template, generating a template object image by combining the first preset template and the current object image, and taking the template object image as the response effect data; the first preset template is used for providing a preset image template for displaying the current object image.
In the above apparatus, the first preset template further includes: a preset identification code; the preset identification code is used for providing a link to a preset function.
In the above apparatus, the preset identification code is an applet identification code, and the applet identification code is used to provide a link to the display applet entry.
In the above apparatus, the operation unit 30 is further configured to jump to a sharing page and display the template object image on the sharing page under the condition that the template object image is generated, where the sharing page includes a first sharing control; and under the condition that the sharing operation aiming at the first sharing control is received, generating a first applet link corresponding to the template object image, and sharing the first applet link to the target equipment.
In the device, the sharing page further includes a first saving control; the operation unit 30 is further configured to save the template object image to a local storage space when a save operation for the first save control is received.
In the above device, the acquiring unit 10 is further configured to display acquisition range guidance information on an image acquisition interface when entering the image acquisition interface; the acquisition range guiding information comprises a preset acquisition range boundary; and carrying out image acquisition on the real object through the image acquisition interface, and taking the image in the boundary of the preset acquisition range as the target image.
In the above apparatus, the real object includes: a target real article, the virtual object being a target virtual model corresponding to the target real article; the rendering unit 20 further includes an identification subunit, configured to perform target identification on at least one preset real object on the target image to obtain a detection result of the target real object; and acquiring a target virtual model corresponding to the target real article as the virtual object based on the detection result.
In the above apparatus, the acquiring unit 10 is further configured to display, on the image acquisition interface, guidance information for prompting to adjust the position of the preset acquisition range boundary when no preset real article is identified in the target image.
In the above device, the display device 1 further includes a playing unit; the playing unit is configured to play the multimedia audio data corresponding to the virtual effect image when the virtual effect image is displayed.
In the above apparatus, the rendering unit 20 is further configured to obtain corresponding position information of the target image in a preset screen coordinate system, where the corresponding position information is used as screen position information of the target real object displayed on the target function page; performing posture evaluation based on the target image to obtain real position information of the target real object in the real scene; and obtaining preset target rendering data corresponding to the virtual object from target function background data, and rendering the virtual object by using the preset target rendering data based on the mapping relation between the real position information and the screen position information to obtain the virtual effect image.
In the above apparatus, the target image includes: an image of a real person, and the virtual object includes: a preset special effect sticker; the rendering unit 20 is further configured to superimpose the real person image and the preset special effect sticker, and generate and display the superimposed current composite image as the virtual effect image.
In some embodiments, the display device 1 is implemented by an applet or a web client.
It should be noted that the above description of the embodiment of the apparatus, similar to the above description of the embodiment of the method, has similar beneficial effects as the embodiment of the method. For technical details not disclosed in the embodiments of the apparatus of the present disclosure, reference is made to the description of the embodiments of the method of the present disclosure.
The embodiment of the present disclosure further provides an electronic device, and fig. 21 is a schematic structural diagram of the electronic device 2 provided in the embodiment of the present disclosure, as shown in fig. 21, including: the display screen 201, the memory 202 and the processor 203, wherein the display screen 201, the memory 202 and the processor 203 are connected through a communication bus 204; a memory 202 for storing an executable computer program; the processor 203 is configured to implement the method provided by the embodiment of the present disclosure, for example, the display method provided by the embodiment of the present disclosure, in conjunction with the display screen 201 when executing the executable computer program stored in the memory 202.
The embodiment of the present disclosure provides a computer-readable storage medium, on which a computer program is stored, for causing the processor 203 to execute the method provided by the embodiment of the present disclosure, for example, the display method provided by the embodiment of the present disclosure.
In some embodiments of the present disclosure, the storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments of the disclosure, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts, or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a hypertext Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The disclosure relates to the field of augmented reality, and aims to detect or identify relevant features, states and attributes of a target object by means of various visual correlation algorithms by acquiring image information of the target object in a real environment, so as to obtain an AR effect combining virtual and reality matched with specific applications. For example, the target object may relate to a face, a limb, a gesture, an action, etc. associated with a human body, or a marker, a marker associated with an object, or a sand table, a display area, a display item, etc. associated with a venue or a place. The vision-related algorithms may involve visual localization, SLAM, three-dimensional reconstruction, image registration, background segmentation, key point extraction and tracking of objects, pose or depth detection of objects, and the like. The specific application can not only relate to interactive scenes such as navigation, explanation, reconstruction, virtual effect superposition display and the like related to real scenes or articles, but also relate to special effect treatment related to people, such as interactive scenes such as makeup beautification, limb beautification, special effect display, virtual model display and the like. The detection or identification processing of the relevant characteristics, states and attributes of the target object can be realized through the convolutional neural network. The convolutional neural network is a network model obtained by performing model training based on a deep learning framework.
In summary, the target image of the real object in the real scene is acquired, the virtual object is determined, and the virtual object is rendered to obtain the virtual effect image, so that the augmented reality effect of the virtual effect image is utilized to improve and enrich the display effect of the display application. Moreover, the electronic equipment can receive the interactive operation of the user through at least one interactive control, and perform corresponding operation and update display on the virtual object, so that the richness of the display content is improved. Furthermore, the interaction of the display small program is further enriched by sharing the virtual effect image to the target equipment, and the popularization of the virtual service product corresponding to the display small program is facilitated. Furthermore, through the synthetic photographing function, special effect photographing of a preset theme in the virtual service product can be provided for the user, and the interaction experience of the display function of the virtual service product and the user is further enriched.
The above description is only for the preferred embodiment of the present disclosure, and is not intended to limit the scope of the present disclosure.

Claims (22)

1. A display method, comprising:
acquiring a target image of a real object in a real scene in response to a trigger operation on a target function entrance;
identifying the target image, determining a virtual object corresponding to the target image, and displaying a virtual effect image obtained by rendering the virtual object in a target function page; the target function page comprises at least one interaction control; the virtual effect image is used for displaying an augmented reality effect corresponding to the target image;
and responding to the interactive operation aiming at the target interactive control in the at least one interactive control, determining response effect data corresponding to the virtual object, and updating the displayed virtual effect image through the response effect data.
2. The method of claim 1, further comprising:
responding to a trigger operation aiming at a display applet entrance, and displaying a display function menu page, wherein the display function menu page is used for displaying at least one function entrance;
the acquiring a target image of a real object in a real scene in response to a trigger operation on a target function entrance comprises:
and responding to the triggering operation aiming at the target function entrance in the at least one function entrance, entering an image acquisition page corresponding to the target function, and acquiring a target image of a real object in a real scene.
3. The method according to claim 2, wherein the entering of the image capture page corresponding to the target function comprises:
skipping from the display function menu page to a target function loading interface, and synchronously loading target function background data in the display process of the target function loading interface; the target function background data comprises at least one preset rendering data;
and entering an image acquisition interface under the condition that the background data of the target function is loaded.
4. The method according to claim 1 or 2, wherein the real object comprises: a real character; the target function entrance comprises: a synthetic photographing entry; the acquiring a target image of a real object in a real scene in response to a trigger operation on a target function entrance comprises:
skipping from the display function menu page to a synthetic photographing interface, and displaying a photographing control on the synthetic photographing interface;
and under the condition of receiving the interactive operation aiming at the photographing control, carrying out image acquisition on the real person in the real scene to obtain a real person image as the target image.
5. The method of claim 4, wherein after said jumping from said display function menu page to said synthetic photography interface, said method further comprises:
and loading a preset special effect paster from the target function background data, and displaying the preset special effect paster on the synthetic photographing interface.
6. The method of any of claims 1-3, wherein the at least one interactive control comprises: a spin control; the determining response effect data corresponding to the virtual object in response to the interactive operation aiming at the target interactive control in the at least one interactive control comprises:
displaying a spin guide icon under the condition that the target interaction control is the spin control;
receiving a rotation operation aiming at the rotation guide icon, and determining the rotation direction and the rotation angle of the virtual object according to the rotation operation;
and rotating the virtual object according to the rotating direction and the rotating angle, and acquiring preset rendering data corresponding to the rotated virtual object as the response effect data.
7. The method of any of claims 1-3, wherein the at least one interactive control comprises: an information display control; the determining response effect data corresponding to the virtual object in response to the interactive operation aiming at the target interactive control in the at least one interactive control comprises:
acquiring preset introduction data corresponding to the virtual object as the response effect data under the condition that the target interaction control is the information display control; the preset introduction data is used for displaying introduction information of the target real object on a preset display position.
8. The method of any of claims 1-3, wherein the at least one interactive control comprises: an image acquisition control; the determining response effect data corresponding to the virtual object in response to the interactive operation aiming at the target interactive control in the at least one interactive control comprises:
under the condition that the target interaction control is the image acquisition control, acquiring a current virtual effect image corresponding to the virtual object to obtain a current object image;
acquiring a first preset template, generating a template object image by combining the first preset template and the current object image, and taking the template object image as the response effect data; the first preset template is used for providing a preset image template for displaying the current object image.
9. The method of claim 8, wherein the first preset template further comprises: a preset identification code; the preset identification code is used for providing a link of a preset function.
10. The method of claim 9, wherein the preset identification code is an applet identification code, and wherein the applet identification code is used to provide a link to the display applet entry.
11. The method according to any one of claims 8-10, further comprising:
under the condition that the template object image is generated, jumping to a sharing page, and displaying the template object image on the sharing page, wherein the sharing page comprises a first sharing control;
and under the condition that the sharing operation aiming at the first sharing control is received, generating a first applet link corresponding to the template object image, and sharing the first applet link to the target equipment.
12. The method of claim 11, wherein the shared page further comprises a first save control, the method further comprising:
and under the condition that the saving operation aiming at the first saving control is received, saving the template object image to a local storage space.
13. The method according to any one of claims 1-12, wherein said acquiring a target image of a real object in a real scene comprises:
displaying acquisition range guide information on an image acquisition interface under the condition of entering the image acquisition interface; the acquisition range guiding information comprises a preset acquisition range boundary;
and carrying out image acquisition on the real object through the image acquisition interface, and taking the image in the boundary of the preset acquisition range as the target image.
14. The method according to any one of claims 1-13, wherein the real object comprises: a target real article, the virtual object being a target virtual model corresponding to the target real article;
the determining the virtual object corresponding to the target image by identifying the target image includes:
performing target identification on at least one preset real article on the target image to obtain a detection result of the target real article;
and acquiring a target virtual model corresponding to the target real article as the virtual object based on the detection result.
15. The method of claim 14, further comprising:
and under the condition that a preset real article is not identified in the target image, displaying guide information for prompting the adjustment of the position of the boundary of the preset acquisition range on the image acquisition interface.
16. The method according to any one of claims 1-15, further comprising:
and under the condition of displaying the virtual effect image, playing multimedia audio data corresponding to the virtual effect image.
17. The method according to any one of claims 1 to 16, wherein the presenting a virtual effect image rendered from the virtual object in the target function page comprises:
acquiring corresponding position information of the target image in a preset screen coordinate system, and taking the position information as screen position information displayed by the target real article on the target function page;
performing posture evaluation based on the target image to obtain real position information of the target real object in the real scene;
and obtaining preset target rendering data corresponding to the virtual object from target function background data, and rendering the virtual object by using the preset target rendering data based on the mapping relation between the real position information and the screen position information to obtain the virtual effect image.
18. The method of any one of claims 1-16, wherein the target image comprises: an image of a real person, the virtual object comprising: a preset special effect sticker;
the displaying of the virtual effect image obtained by rendering the virtual object in the target function page includes:
and superposing the real character image and the preset special effect paster, and generating and displaying a superposed current synthetic image as the virtual effect image.
19. The method according to any one of claims 1-17, further comprising:
implementing the display method through an applet or a web client.
20. A display device, comprising:
the acquisition unit is used for responding to the trigger operation of the target function entrance and acquiring a target image of a real object in a real scene;
the rendering unit is used for identifying the target image, determining a virtual object corresponding to the target image, and displaying a virtual effect image obtained by rendering the virtual object in a target function page; the target function page comprises at least one interaction control; the virtual effect image is used for displaying an augmented reality effect corresponding to the target image;
and the operation unit is used for responding to the interactive operation aiming at the target interactive control in the at least one interactive control, determining response effect data corresponding to the virtual object, and updating the displayed virtual effect image through the response effect data.
21. An electronic device, comprising:
a display screen; a memory for storing an executable computer program;
a processor for implementing the method of any one of claims 1 to 19 in conjunction with the display screen when executing an executable computer program stored in the memory.
22. A computer-readable storage medium, having stored thereon a computer program for causing a processor, when executed, to carry out the method of any one of claims 1 to 19.
CN202110961689.4A 2021-08-20 2021-08-20 Display method, display device, electronic equipment and computer readable storage medium Withdrawn CN113721804A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110961689.4A CN113721804A (en) 2021-08-20 2021-08-20 Display method, display device, electronic equipment and computer readable storage medium
PCT/CN2022/113746 WO2023020622A1 (en) 2021-08-20 2022-08-19 Display method and apparatus, electronic device, computer-readable storage medium, computer program, and computer program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110961689.4A CN113721804A (en) 2021-08-20 2021-08-20 Display method, display device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN113721804A true CN113721804A (en) 2021-11-30

Family

ID=78677163

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110961689.4A Withdrawn CN113721804A (en) 2021-08-20 2021-08-20 Display method, display device, electronic equipment and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN113721804A (en)
WO (1) WO2023020622A1 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114371904A (en) * 2022-01-12 2022-04-19 北京字跳网络技术有限公司 Data display method and device, mobile terminal and storage medium
CN114390214A (en) * 2022-01-20 2022-04-22 脸萌有限公司 Video generation method, device, equipment and storage medium
CN114398133A (en) * 2022-01-14 2022-04-26 北京字跳网络技术有限公司 Display method, display device, electronic equipment and storage medium
CN115016688A (en) * 2022-06-28 2022-09-06 维沃移动通信有限公司 Virtual information display method and device and electronic equipment
CN115033148A (en) * 2022-06-13 2022-09-09 北京字跳网络技术有限公司 Document display method and device, electronic equipment and storage medium
WO2023020622A1 (en) * 2021-08-20 2023-02-23 上海商汤智能科技有限公司 Display method and apparatus, electronic device, computer-readable storage medium, computer program, and computer program product
CN115712344A (en) * 2022-09-27 2023-02-24 汉华智能科技(佛山)有限公司 Scene interaction method, system and storage medium for intelligent electronic sacrifice
WO2024021792A1 (en) * 2022-07-25 2024-02-01 腾讯科技(深圳)有限公司 Virtual scene information processing method and apparatus, device, storage medium, and program product

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117412019A (en) * 2023-12-14 2024-01-16 深圳市欧冠微电子科技有限公司 Aerial imaging and virtual reality dynamic backlight determination method and device and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110677587A (en) * 2019-10-12 2020-01-10 北京市商汤科技开发有限公司 Photo printing method and device, electronic equipment and storage medium
CN111643899A (en) * 2020-05-22 2020-09-11 腾讯数码(天津)有限公司 Virtual article display method and device, electronic equipment and storage medium
CN111880659A (en) * 2020-07-31 2020-11-03 北京市商汤科技开发有限公司 Virtual character control method and device, equipment and computer readable storage medium
CN111897431A (en) * 2020-07-31 2020-11-06 北京市商汤科技开发有限公司 Display method and device, display equipment and computer readable storage medium
CN112135059A (en) * 2020-09-30 2020-12-25 北京字跳网络技术有限公司 Shooting method, shooting device, electronic equipment and storage medium
CN112148189A (en) * 2020-09-23 2020-12-29 北京市商汤科技开发有限公司 Interaction method and device in AR scene, electronic equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DK180470B1 (en) * 2017-08-31 2021-05-06 Apple Inc Systems, procedures, and graphical user interfaces for interacting with augmented and virtual reality environments
CN112771472B (en) * 2018-10-15 2022-06-10 美的集团股份有限公司 System and method for providing real-time product interactive assistance
CN112051961A (en) * 2020-09-04 2020-12-08 脸萌有限公司 Virtual interaction method and device, electronic equipment and computer readable storage medium
CN113721804A (en) * 2021-08-20 2021-11-30 北京市商汤科技开发有限公司 Display method, display device, electronic equipment and computer readable storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110677587A (en) * 2019-10-12 2020-01-10 北京市商汤科技开发有限公司 Photo printing method and device, electronic equipment and storage medium
CN111643899A (en) * 2020-05-22 2020-09-11 腾讯数码(天津)有限公司 Virtual article display method and device, electronic equipment and storage medium
CN111880659A (en) * 2020-07-31 2020-11-03 北京市商汤科技开发有限公司 Virtual character control method and device, equipment and computer readable storage medium
CN111897431A (en) * 2020-07-31 2020-11-06 北京市商汤科技开发有限公司 Display method and device, display equipment and computer readable storage medium
CN112148189A (en) * 2020-09-23 2020-12-29 北京市商汤科技开发有限公司 Interaction method and device in AR scene, electronic equipment and storage medium
CN112135059A (en) * 2020-09-30 2020-12-25 北京字跳网络技术有限公司 Shooting method, shooting device, electronic equipment and storage medium

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023020622A1 (en) * 2021-08-20 2023-02-23 上海商汤智能科技有限公司 Display method and apparatus, electronic device, computer-readable storage medium, computer program, and computer program product
CN114371904A (en) * 2022-01-12 2022-04-19 北京字跳网络技术有限公司 Data display method and device, mobile terminal and storage medium
CN114371904B (en) * 2022-01-12 2023-09-15 北京字跳网络技术有限公司 Data display method and device, mobile terminal and storage medium
CN114398133A (en) * 2022-01-14 2022-04-26 北京字跳网络技术有限公司 Display method, display device, electronic equipment and storage medium
CN114390214A (en) * 2022-01-20 2022-04-22 脸萌有限公司 Video generation method, device, equipment and storage medium
CN114390214B (en) * 2022-01-20 2023-10-31 脸萌有限公司 Video generation method, device, equipment and storage medium
CN115033148A (en) * 2022-06-13 2022-09-09 北京字跳网络技术有限公司 Document display method and device, electronic equipment and storage medium
CN115033148B (en) * 2022-06-13 2024-04-19 北京字跳网络技术有限公司 Document display method, device, electronic equipment and storage medium
CN115016688A (en) * 2022-06-28 2022-09-06 维沃移动通信有限公司 Virtual information display method and device and electronic equipment
WO2024021792A1 (en) * 2022-07-25 2024-02-01 腾讯科技(深圳)有限公司 Virtual scene information processing method and apparatus, device, storage medium, and program product
CN115712344A (en) * 2022-09-27 2023-02-24 汉华智能科技(佛山)有限公司 Scene interaction method, system and storage medium for intelligent electronic sacrifice

Also Published As

Publication number Publication date
WO2023020622A1 (en) 2023-02-23

Similar Documents

Publication Publication Date Title
CN113721804A (en) Display method, display device, electronic equipment and computer readable storage medium
CN110300909B (en) Systems, methods, and media for displaying an interactive augmented reality presentation
MacIntyre et al. The Argon AR Web Browser and standards-based AR application environment
US11615592B2 (en) Side-by-side character animation from realtime 3D body motion capture
US10026229B1 (en) Auxiliary device as augmented reality platform
US9652046B2 (en) Augmented reality system
US20190333478A1 (en) Adaptive fiducials for image match recognition and tracking
US20150185825A1 (en) Assigning a virtual user interface to a physical object
US11734894B2 (en) Real-time motion transfer for prosthetic limbs
KR20230107844A (en) Personalized avatar real-time motion capture
CN114930399A (en) Image generation using surface-based neurosynthesis
US20140002443A1 (en) Augmented reality interface
KR20230107655A (en) Body animation sharing and remixing
WO2014136103A1 (en) Simultaneous local and cloud searching system and method
WO2018142756A1 (en) Information processing device and information processing method
CN112752162B (en) Virtual article presenting method, device, terminal and computer readable storage medium
US20210312887A1 (en) Systems, methods, and media for displaying interactive augmented reality presentations
CN113867531A (en) Interaction method, device, equipment and computer readable storage medium
US20220392172A1 (en) Augmented Reality App and App Development Toolkit
JP2022500795A (en) Avatar animation
WO2023045964A1 (en) Display method and apparatus, device, computer readable storage medium, computer program product, and computer program
KR20140078083A (en) Method of manufacturing cartoon contents for augemented reality and apparatus performing the same
US20230410440A1 (en) Integrating augmented reality experiences with other components
US20240119690A1 (en) Stylizing representations in immersive reality applications
Niemelä Mobile augmented reality client for citizen participation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40056854; Country of ref document: HK)
WW01 Invention patent application withdrawn after publication (Application publication date: 20211130)