CN112348969A - Display method and device in augmented reality scene, electronic equipment and storage medium


Info

Publication number
CN112348969A
Authority
CN
China
Prior art keywords
target object, position information, special effect, image, effect data
Prior art date
Legal status
Granted
Application number
CN202011233879.6A
Other languages
Chinese (zh)
Other versions
CN112348969B (en)
Inventor
刘旭
栾青
李斌
Current Assignee
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd
Priority to CN202011233879.6A
Publication of CN112348969A
Priority to PCT/CN2021/102191 (WO2022095467A1)
Priority to TW110127756A (TW202220438A)
Application granted
Publication of CN112348969B
Status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/006 - Mixed reality
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10016 - Video; image sequence
    • G06T 2207/10021 - Stereoscopic video; stereoscopic image sequence

Abstract

The disclosure provides a display method and apparatus in an augmented reality scene, an electronic device, and a storage medium. The method first acquires a current scene image captured by an augmented reality (AR) device; it then determines special effect data matched with a target object, together with display position information for the special effect data, based on the result of recognizing the target object in the current scene image; finally, it controls the AR device to play the special effect data based on the display position information. The special effect data comprises a virtual image and/or audio, and the display position of the virtual image has a preset positional relationship with the target object.

Description

Display method and device in augmented reality scene, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of augmented reality technologies, and in particular, to a display method and apparatus in an augmented reality scene, an electronic device, and a storage medium.
Background
Augmented Reality (AR) technology superimposes simulated entity information (visual content, sound, touch, and the like) onto the real world, so that the real environment and virtual objects are presented in the same picture or space in real time.
The current positioning approach identifies the current position of the AR device, maps it to a position in a three-dimensional map model, and then displays preset virtual special effect data within the range of that position. This approach not only requires collecting a large number of images to reconstruct the three-dimensional map model of the real environment, but also displays the preset virtual data in a single, fixed manner that is neither rich nor vivid.
Disclosure of Invention
Embodiments of the present disclosure provide at least a display method and device in an augmented reality scene.
In a first aspect, an embodiment of the present disclosure provides a display method in an augmented reality scene, including:
acquiring a current scene image captured by an AR device;
determining special effect data matched with a target object, and display position information of the special effect data, based on a result of recognizing the target object in the current scene image;
controlling the AR device to play the special effect data based on the display position information; the special effect data comprises a virtual image and/or audio, and the display position of the virtual image has a preset positional relationship with the target object.
With this method, no three-dimensional map model needs to be reconstructed: the recognition result of the target object directly triggers the display of the matched special effect data, and since the display position of the virtual image has a preset positional relationship with the target object, the display effect is closely associated with the target object and the special effect data is displayed in a more targeted manner.
In a possible embodiment, determining the display position information of the special effect data based on the result of recognizing the target object in the current scene image includes:
in a case where the target object is recognized in the current scene image, determining the display position information of the special effect data based on image position information of the target object in the current scene image; and
in a case where the target object is not recognized in the current scene image, acquiring relative position information between the target object and the AR device in a world coordinate system, and determining the display position information of the special effect data based on the relative position information.
In this embodiment, the positioning mode used to determine the display position information of the special effect data is switched according to whether the target object is recognized in the current scene image. This effectively prevents the display of the special effect data from being interrupted when one positioning mode fails, and improves the stability of the display.
In one possible embodiment, the special effect data includes both the virtual image and audio;
and controlling the AR device to play the special effect data based on the display position information includes:
in a case where it is determined that at least part of the target object is within the image display range of the AR device, controlling the AR device to play at least part of the virtual image and/or the audio based on the display position information; and
in a case where it is determined that the target object is not within the image display range of the AR device, controlling the AR device to continue playing the audio from its current playback progress based on the display position information.
In this embodiment, at least part of the virtual image and/or the audio is displayed when at least part of the target object falls within the image display range of the AR device; when the target object is not within that range, the virtual image is not displayed and only the audio is played. This makes the presentation of the special effect data more reasonable and more coherent.
In one possible embodiment, the virtual image comprises a hologram;
the display method further comprises the following steps:
acquiring a to-be-processed video matched with the target object, where the to-be-processed video contains a target associated object associated with the target object;
setting a transparency (alpha) channel for each pixel in the to-be-processed video to obtain a first video;
removing background pixels from the first video based on the alpha channel to obtain a second video;
generating a hologram containing the target associated object based on the second video.
In this embodiment, the virtual image further includes a hologram corresponding to the target associated object associated with the target object; superimposing this hologram on the current scene image makes the displayed AR content richer.
In a possible implementation, removing background pixels from the first video based on the alpha channel to obtain the second video includes:
setting the alpha channel of the background pixels in the first video to white to obtain a third video, where the first video contains target pixels belonging to the target associated object and background pixels other than the target pixels;
setting the alpha channel of pixels of a first type in the first video to black, the alpha channel of pixels of a second type to white, and the alpha channel of pixels of a third type to a preset gray value, to obtain a fourth video; the third type comprises target pixels adjacent to background pixels and background pixels adjacent to target pixels, the first type comprises the background pixels other than those of the third type, and the second type comprises the target pixels other than those of the third type;
generating the second video based on the third video and the fourth video.
In this embodiment, by processing the different types of pixels in the first video, the original video is adjusted to achieve the display effect of a hologram.
In one possible embodiment, the virtual image includes images of a plurality of virtual objects, together with a presentation order and/or interaction data among the plurality of virtual objects;
and controlling the AR device to play the special effect data based on the display position information includes:
displaying the images of the virtual objects at the display positions corresponding to the display position information, according to the presentation order and/or interaction data among the virtual objects.
In this embodiment, displaying the images of the virtual objects in their presentation order, together with the interaction data among them, further enriches the displayed AR content and improves its display effect.
In a possible implementation, determining the display position information of the special effect data based on the image position information of the target object in the current scene image includes:
determining position information of the target object in a world coordinate system based on the image position information of the target object in the current scene image; and
determining the display position information of the special effect data based on the position information of the target object in the world coordinate system and position information of the AR device in the world coordinate system.
In this embodiment, the image position information of the target object in the current scene image can be determined accurately, and the display position information of the special effect data can then be obtained accurately from it, which supports accurate display of the special effect data.
In one possible embodiment, acquiring the relative position information between the target object and the AR device in the world coordinate system includes:
determining the relative position information between the AR device and the target object at the time the current scene image is captured, based on the current scene image, a historical scene image, and the relative position information between the AR device and the target object in the world coordinate system at the time the historical scene image was captured.
In this embodiment, the current scene image, the historical scene image, and the stored relative position information together allow the relative position between the AR device and the target object at the time the current scene image is captured to be determined accurately, which supports accurate display of the special effect data.
In one possible embodiment, whether the target object is contained in the current scene image is identified as follows:
extracting feature points from the current scene image to obtain feature information corresponding to each of a plurality of feature points contained in the current scene image, the plurality of feature points being located in a target detection area of the current scene image; and
determining whether the current scene image contains the target object by comparing the feature information corresponding to the plurality of feature points with pre-stored feature information corresponding to the feature points contained in the target object.
In this embodiment, extracting and comparing feature points makes it possible to determine more accurately whether the target object is present in the current scene image.
In a second aspect, an embodiment of the present disclosure provides a display device in an augmented reality scene, including:
an image acquisition module, configured to acquire a current scene image captured by an AR device;
a position determination module, configured to determine special effect data matched with a target object, and display position information of the special effect data, based on a result of recognizing the target object in the current scene image; and
a special effect playing module, configured to control the AR device to play the special effect data based on the display position information; the special effect data comprises a virtual image and/or audio, and the display position of the virtual image has a preset positional relationship with the target object.
In a possible embodiment, the position determination module, when determining the display position information of the special effect data based on the result of recognizing the target object in the current scene image, is configured to:
in a case where the target object is recognized in the current scene image, determine the display position information of the special effect data based on image position information of the target object in the current scene image; and
in a case where the target object is not recognized in the current scene image, acquire relative position information between the target object and the AR device in a world coordinate system, and determine the display position information of the special effect data based on the relative position information.
In one possible embodiment, the special effect data includes both the virtual image and audio;
and the special effect playing module, when controlling the AR device to play the special effect data based on the display position information, is configured to:
in a case where it is determined that at least part of the target object is within the image display range of the AR device, control the AR device to play at least part of the virtual image and/or the audio based on the display position information; and
in a case where it is determined that the target object is not within the image display range of the AR device, control the AR device to continue playing the audio from its current playback progress based on the display position information.
In one possible embodiment, the virtual image comprises a hologram;
and the display device further comprises a hologram generation module, configured to:
acquire a to-be-processed video matched with the target object, where the to-be-processed video contains a target associated object associated with the target object;
set a transparency (alpha) channel for each pixel in the to-be-processed video to obtain a first video;
remove background pixels from the first video based on the alpha channel to obtain a second video; and
generate a hologram containing the target associated object based on the second video.
In a possible implementation, the hologram generation module, when removing background pixels from the first video based on the alpha channel to obtain the second video, is configured to:
set the alpha channel of the background pixels in the first video to white to obtain a third video, where the first video contains target pixels belonging to the target associated object and background pixels other than the target pixels;
set the alpha channel of pixels of a first type in the first video to black, the alpha channel of pixels of a second type to white, and the alpha channel of pixels of a third type to a preset gray value, to obtain a fourth video, where the third type comprises target pixels adjacent to background pixels and background pixels adjacent to target pixels, the first type comprises the background pixels other than those of the third type, and the second type comprises the target pixels other than those of the third type; and
generate the second video based on the third video and the fourth video.
In one possible embodiment, the virtual image includes images of a plurality of virtual objects, together with a presentation order and/or interaction data among the plurality of virtual objects;
and the special effect playing module, when controlling the AR device to play the special effect data based on the display position information, is configured to:
display the images of the virtual objects at the display positions corresponding to the display position information, according to the presentation order and/or interaction data among the virtual objects.
In a possible embodiment, the position determination module, when determining the display position information of the special effect data based on the image position information of the target object in the current scene image, is configured to:
determine position information of the target object in a world coordinate system based on the image position information of the target object in the current scene image; and
determine the display position information of the special effect data based on the position information of the target object in the world coordinate system and position information of the AR device in the world coordinate system.
In one possible embodiment, the position determination module, when acquiring the relative position information between the target object and the AR device in the world coordinate system, is configured to:
determine the relative position information between the AR device and the target object at the time the current scene image is captured, based on the current scene image, a historical scene image, and the relative position information between the AR device and the target object in the world coordinate system at the time the historical scene image was captured.
In a possible implementation, the position determination module is further configured to identify whether the target object is contained in the current scene image as follows:
extract feature points from the current scene image to obtain feature information corresponding to each of a plurality of feature points contained in the current scene image, the plurality of feature points being located in a target detection area of the current scene image; and
determine whether the current scene image contains the target object by comparing the feature information corresponding to the plurality of feature points with pre-stored feature information corresponding to the feature points contained in the target object.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the first aspect described above, or any possible implementation of the first aspect.
In a fourth aspect, embodiments of the present disclosure further provide a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, performs the steps of the first aspect or of any possible implementation of the first aspect.
For a description of the effects of the display device, the electronic device, and the computer-readable storage medium in the augmented reality scene, reference is made to the description of the display method in the augmented reality scene above; details are not repeated here.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required by the embodiments are briefly described below. The drawings are incorporated in and form a part of the specification; they illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive additional related drawings from them without inventive effort.
Fig. 1 shows a flowchart of a display method in an augmented reality scene according to an embodiment of the present disclosure;
FIG. 2A illustrates a flow chart for generating a hologram provided by an embodiment of the present disclosure;
fig. 2B shows a flowchart for removing a background pixel point in a first video to obtain a second video according to the embodiment of the present disclosure;
FIG. 3A shows a first schematic diagram of the special effect data presented in the present disclosure;
FIG. 3B illustrates an image in a video to be processed in accordance with the present disclosure;
FIG. 3C shows an image in a fourth video in the present disclosure;
FIG. 4A shows a second schematic diagram of the special effect data presented in the present disclosure;
FIG. 4B shows a third schematic diagram of the special effect data presented in the present disclosure;
FIG. 4C shows a fourth schematic diagram of the special effect data presented in the present disclosure;
FIG. 5 is a flow chart illustrating a method for identifying whether a target object is included in a current scene image according to an embodiment of the disclosure;
fig. 6 is a schematic diagram illustrating a display device in an augmented reality scene according to an embodiment of the present disclosure;
fig. 7 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The term "and/or" herein merely describes an association relationship, indicating that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, the term "at least one" herein means any one of a plurality, or any combination of at least two of a plurality; for example, "including at least one of A, B and C" may mean including any one or more elements selected from the set consisting of A, B and C.
With the development of AR technology, it is gradually being applied in various fields; for example, AR content can be superimposed on an entity object so that the entity object is vividly introduced to the user through the AR content. At present, however, displaying AR content on an AR device requires identifying the current position of the AR device and mapping it to a position in a three-dimensional map model, so that preset virtual special effect data within the range of that position is displayed. This approach not only requires collecting a large number of images to reconstruct the three-dimensional map model of the real environment, but also displays the preset virtual data in a single, fixed manner that is neither rich nor vivid.
Embodiments of the present disclosure provide a display method and apparatus, an electronic device, and a computer-readable storage medium for an augmented reality scene. The embodiments enable the combined display of virtual images and audio based on the recognition result of a target object; that is, virtual images such as AR pictures, videos, and holograms matched with the target object can be displayed. Moreover, no three-dimensional map model needs to be reconstructed: the recognition result of the target object directly triggers the display of the matched special effect data, and since the display position of the virtual image has a preset positional relationship with the target object, the display effect is closely associated with the target object and the special effect data is displayed in a more targeted manner.
The following describes a display method, a display device, an electronic device, and a storage medium in an augmented reality scene according to specific embodiments of the present disclosure.
As shown in fig. 1, an embodiment of the present disclosure discloses a display method in an augmented reality scene. The method may be executed by a device with computing capability, such as a server or an AR device. Specifically, the display method in the augmented reality scene may include the following steps:
and S110, acquiring a current scene image shot by the AR equipment.
For example, the AR device may include, but is not limited to, devices with display and data-processing capabilities such as AR glasses, tablet computers, smartphones, and smart wearable devices. An application for presenting AR scene content may be installed on the AR device, and the user can experience the AR scene content within that application.
The AR device may further include an image acquisition component, such as an RGB camera, for capturing images. After the current scene image captured by the AR device is acquired, the image can be recognized to determine whether it contains a target object that triggers special effect data to be displayed.
S120, determining special effect data matched with the target object, and display position information of the special effect data, based on the result of recognizing the target object in the current scene image.
Depending on the application scenario, the target object may be an entity object with a specific form, such as a book, a painting, or a building; the special effect data can introduce the entity object and deepen the user's understanding of it.
For example, in a scene displaying calendar special effect data, the target object may be a calendar with a preset form, and the special effect data may be virtual display content designed in advance based on the content of the calendar, introducing the calendar to the user and attracting the user to consult it.
The captured current scene image may or may not contain the target object; therefore, before this step is performed, the current scene image is recognized to determine whether it contains the target object.
After the target object is recognized, the special effect data matched with it can be acquired based on an identifier of the target object or the like. The special effect data may include a virtual image, audio, and the like; the virtual image may include a video, a hologram, an AR picture, or the like matched with the target object.
When the current scene image contains the target object, the display position information of the special effect data matched with the target object can be determined directly from the current scene image using marker-based positioning. Specifically, the image of the target object serves as the marker: the image position information of the target object in the current scene image is determined first, and the display position information of the special effect data is then determined from that image position information.
When the current scene image does not contain the target object, another positioning mode, for example simultaneous localization and mapping (SLAM), is used to determine the position information of the target object or its position relative to the AR device, and the display position information of the special effect data is then determined from that position information or relative position information. The steps of determining the relative position information with SLAM are described in the embodiments below.
The display position information may include coordinate information of the special effect data in a world coordinate system. The world coordinate system is a three-dimensional absolute coordinate system constructed in real space; it does not change with the positions of the AR device, the target object, or the special effect data.
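As a hedged sketch of this two-mode logic (not an implementation from the disclosure): the detect, marker_pose and slam_pose callables, the dictionary-based cache, and the use of 4x4 homogeneous matrices are all assumptions made for illustration.

```python
import numpy as np

def display_position(frame, state, detect, marker_pose, slam_pose, offset_4x4):
    """detect / marker_pose / slam_pose are assumed callables; all poses and
    offset_4x4 are 4x4 homogeneous matrices (an assumption of this sketch)."""
    detection = detect(frame)                        # feature-point recognition
    if detection is not None:                        # first positioning mode
        pose = np.asarray(marker_pose(detection))    # target pose via the marker
        state["last_pose"] = pose                    # cache for the fallback
    else:                                            # second positioning mode (SLAM)
        pose = np.asarray(slam_pose(frame, state["last_pose"]))
    return pose @ offset_4x4                         # apply the preset offset of the
                                                     # effect relative to the target
```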
S130, controlling the AR device to play the special effect data based on the display position information; the special effect data comprises a virtual image and/or audio, and the display position of the virtual image has a preset positional relationship with the target object.
If the current scene image is the first frame in which the target object is recognized, the special effect data is played from the beginning based on the display position information; if the target object was already recognized in a historical scene image captured earlier by the AR device, playback continues from the current progress of the special effect data. After playback finishes, the special effect data can be replayed by tapping a button displayed on the AR device.
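The start/resume rule can be pictured with a toy player object; the class and its fields are illustrative assumptions, not an API from the disclosure.

```python
class EffectPlayer:
    """Toy player used only to show the start/resume rule of S130."""
    def __init__(self, duration_s):
        self.duration_s = duration_s
        self.progress_s = 0.0
        self.playing = False

    def on_frame(self, target_recognized_now, first_recognition):
        if target_recognized_now and first_recognition:
            self.progress_s = 0.0      # first frame with the target: play from start
        if target_recognized_now:
            self.playing = True        # otherwise resume from current progress

    def replay(self):                  # button shown after playback finished
        self.progress_s = 0.0
        self.playing = True
```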
In this embodiment, the matched special effect data is triggered directly by the recognition result of the target object, without reconstructing a three-dimensional map model. Compared with triggering the display based only on the positioning result of the AR device, and because the display position of the virtual image has a preset positional relationship with the target object, the display effect is closely associated with the target object and the special effect data is displayed in a more targeted manner.
Therefore, when the AR device is controlled to play the special effect data based on the display position information, it is first necessary to determine whether the special effect data lies within the image display range of the AR device.
The special effect data is matched with the target object, and the display position of its virtual image has a preset positional relationship with the target object; for example, when the target object is a calendar, the virtual image may be displayed perpendicular to the cover of the calendar.
Specifically, if the current scene image contains at least part of the target object, it is determined that at least part of the target object is within the image display range of the AR device; the special effect data matched with the target object is then at least partially within that range, and controlling the AR device to play the special effect data based on the display position information may specifically mean controlling it to play at least part of the virtual image and/or the audio.
If the current scene image does not contain the target object, it is determined that the target object is not within the image display range of the AR device; the virtual image in the matched special effect data is then also outside that range, and controlling the AR device to play the special effect data based on the display position information may specifically mean controlling it to continue playing the audio from its current playback progress.
For example, determining whether the current scene image contains the target object, or at least part of it, may be implemented as follows:
extracting feature points from the current scene image to obtain feature information corresponding to each of a plurality of feature points contained in the image, the feature points being located in a target detection area of the current scene image; and determining whether the current scene image contains the target object, or part of it, by comparing the feature information corresponding to the plurality of feature points with pre-stored feature information corresponding to the feature points contained in the target object.
If the feature points extracted from the current scene image all match the pre-stored feature points, the current scene image is determined to contain the complete target object; if the proportion of extracted feature points that match the pre-stored feature points is higher than a preset ratio, the current scene image is determined to contain part of the target object; if the proportion is lower than or equal to the preset ratio, the current scene image is determined not to contain the target object. The specific matching process is described in steps S510-S520 below.
When the current scene image contains the complete target object, the image display range of the AR device contains the complete target object, and the AR device displays the complete virtual image and/or audio; when the current scene image contains part of the target object, the image display range contains part of the target object, and the AR device displays part of the virtual image and/or audio; when the current scene image does not contain the target object, the image display range does not contain it either, and the AR device displays no virtual image and plays only the audio. This improves the reasonableness of the display of the special effect data and ensures its continuity.
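A minimal sketch of this visibility rule, assuming the matched share of pre-stored feature points has already been computed; the threshold values are illustrative assumptions.

```python
def playback_state(match_ratio, preset_ratio=0.5, full=1.0):
    """Returns what to present, given the share of pre-stored feature points
    matched in the current scene image; thresholds are illustrative."""
    if match_ratio >= full:                        # all feature points matched
        return {"imagery": "full", "audio": True}
    if match_ratio > preset_ratio:                 # part of the target in view
        return {"imagery": "partial", "audio": True}
    return {"imagery": None, "audio": True}        # out of view: audio continues
```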
To further enrich the displayed AR content and improve its display effect, a hologram matched with the target object may also be displayed, where the hologram contains a target associated object associated with the target object. As shown in fig. 2A, in some embodiments the hologram may be generated by the following steps:
S210, acquiring a to-be-processed video matched with the target object, where the to-be-processed video contains a target associated object associated with the target object.
The target associated object is associated with the target object; for example, as shown in fig. 3A, when the target object is a certain place 301, the target associated object may be a tour guide 302 who introduces the place.
Fig. 3B shows an image from the to-be-processed video; its background has not yet been removed.
S220, setting a transparency (alpha) channel for each pixel in the to-be-processed video to obtain a first video.
An alpha channel is set for each pixel of each image in the to-be-processed video; the alpha channel controls the transparency of the corresponding pixel. A transparent pixel contributes nothing to the image, i.e. it is not displayed; an opaque pixel contributes to the image, i.e. it is displayed.
S230, removing background pixels from the first video based on the alpha channel to obtain a second video.
If the alpha value of a pixel is set to 0, the pixel is transparent (the channel is rendered black) and does not contribute to the image; if the alpha value is set to 1, the pixel is opaque (the channel is rendered white) and contributes to the image. Background pixels can therefore be removed from the first video by setting the values of the alpha channel.
In a specific implementation, as shown in fig. 2B, the following steps may be used to remove the background pixels from the first video and obtain the second video:
S2301, setting the alpha channel of the background pixels in the first video to white to obtain a third video, where the first video contains target pixels belonging to the target associated object and background pixels other than the target pixels.
For example, the alpha channel of the background pixels is set to 1.
S2302, setting the alpha channel of the first type of pixels in the first video to black, the alpha channel of the second type of pixels to white, and the alpha channel of the third type of pixels to a preset gray value, to obtain a fourth video; the third type comprises target pixels adjacent to background pixels and background pixels adjacent to target pixels, the first type comprises the background pixels other than those of the third type, and the second type comprises the target pixels other than those of the third type.
For example, the alpha channel of the first type of pixels is set to 0, that of the second type to 1, and that of the third type to a value between 0 and 1, i.e. the preset gray value. Setting the third type of pixels to a preset gray value brings the color at the edge of the target associated object close to the transparent tone of the background, preventing the displayed edge from appearing too sharp. One image from the fourth video is shown in fig. 3C.
S2303, generating the second video based on the third video and the fourth video.
Integrating the third video and the fourth video yields the second video, in which the background is removed and only the target associated object is retained. As shown in fig. 3A, the background around the tour guide 302 is transparent.
S240, generating a hologram containing the target associated object based on the second video.
The resulting hologram is shown in fig. 3A.
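For illustration, a NumPy/OpenCV sketch of the three-class matte in steps S2301-S2303; how the target mask is obtained, the 3x3 structuring element, and the multiplicative compositing are assumptions, since the disclosure only fixes the black/white/gray assignment per pixel class.

```python
import cv2
import numpy as np

def build_alpha_matte(target_mask, edge_gray=0.5):
    """target_mask: uint8 image, 255 for target pixels, 0 for background.
    edge_gray: the preset gray value (an assumed default)."""
    kernel = np.ones((3, 3), np.uint8)
    dilated = cv2.dilate(target_mask, kernel)
    eroded = cv2.erode(target_mask, kernel)
    edge = cv2.absdiff(dilated, eroded) > 0          # third type: boundary band
    alpha = np.where(target_mask > 0, 1.0, 0.0)      # second type -> white (1),
                                                     # first type -> black (0)
    alpha[edge] = edge_gray                          # third type -> preset gray
    return alpha.astype(np.float32)

def composite(frame_bgr, alpha):
    # keeps only the target associated object, with a softened silhouette edge
    return (frame_bgr.astype(np.float32) * alpha[..., None]).astype(np.uint8)
```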
To further enrich the display of the AR content, the virtual image may also include a video, an AR picture, characters, and the like. As shown in fig. 4A, when a target object cake is recognized, the characters "Happy Birthday" are displayed; as shown in fig. 4B, when a target object calendar is recognized, an AR picture of the calendar object is displayed, containing virtual objects such as a dragon and a squirrel.
In addition, to further enrich the displayed AR content and improve its display effect, images of a plurality of virtual objects may be set in the virtual image, and a presentation order and/or interaction data among the virtual objects may be set in advance. Controlling the AR device to play the special effect data based on the display position information may then specifically include:
displaying the images of the virtual objects at the display positions corresponding to the display position information, according to the presentation order and/or interaction data among the virtual objects.
As shown in fig. 4C, images of a first virtual fighter 401 and a second virtual fighter 402 are set in the virtual image. When the game character is scanned, the virtual image of fighter one 401 appears first in the picture displayed by the AR device, and the virtual image of fighter two 402 appears afterwards. The fighting states of fighter one 401 and fighter two 402 are then displayed according to the preset interaction data between the two fighters.
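One way (an assumption, not a data format from the disclosure) to represent a virtual image with several virtual objects, their order of appearance, and interaction data is a small script structure played back as a timeline; the renderer is a duck-typed placeholder.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualObject:
    name: str
    model_uri: str           # e.g. a glTF asset; the identifier is hypothetical

@dataclass
class EffectScript:
    objects: list            # [VirtualObject, ...]
    order: list              # appearance order, e.g. ["fighter1", "fighter2"]
    interactions: list = field(default_factory=list)
                             # e.g. [{"t": 2.0, "actor": "fighter1",
                             #        "action": "attack", "target": "fighter2"}]

def play(script: EffectScript, renderer, anchor_pose):
    for name in script.order:                 # spawn in presentation order
        renderer.spawn(name, at=anchor_pose)
    for event in sorted(script.interactions, key=lambda e: e["t"]):
        renderer.schedule(event)              # fighting states per interaction data
```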
While the special effect data is being displayed, the target object or the AR device may move in some cases. If the position of the target object changes during such movement, how to keep determining the display position information of the special effect data, so that the data continues to be displayed and the effect remains realistic, is a problem worth studying.
To address this, embodiments of the present disclosure determine the display position information of the special effect data with two positioning modes. When the target object is recognized in the current scene image, a first positioning mode determines the display position information based on the image position information of the target object in the current scene image; when the target object is not recognized in the current scene image, a second positioning mode acquires the relative position information between the target object and the AR device in the world coordinate system and determines the display position information from it.
In this way, even when the target object is not recognized in the current scene image, the display position information can still be determined with the second positioning mode, so that the AR device can be controlled to continue displaying the not-yet-displayed special effect data. This ensures continuity during the display and makes the presentation more vivid.
For example, suppose the target object is a calendar and the special effect data includes a video with a total dynamic display duration of 30 s. If the target object is not recognized in the current scene image captured by the AR device when the video has played to the 10th second, the AR device can still be controlled to continue playing from the 10th second, using the display position information determined with the second positioning mode. If the calendar then leaves the image display range of the AR device entirely during continued playback, for example because the shooting angle of the AR device moves completely away from the calendar, the video falls outside the image display range; the video continues to play, but the user cannot see it through the AR device. If the scene image shows that the calendar has shifted but not completely left the image display range, for example the AR device can still capture part of the calendar, the user can watch part of the video through the AR device.
Likewise, if the complete target object is recognized in the current scene image when the video has played to the 10th second, the AR device can be controlled to continue playing from the 10th second, using the display position information determined from the calendar's image position information in the current scene image.
For example, whether the calendar is present in the current scene image may be identified by the following steps:
extracting feature points from the current scene image to obtain feature information corresponding to each of a plurality of feature points contained in the image, the feature points being located in a target detection area; and determining whether the current scene image contains the calendar by comparing this feature information with pre-stored feature information corresponding to the feature points of the calendar.
If the feature points extracted from the current scene image all match the pre-stored calendar feature points, the current scene image is determined to contain the complete calendar; if the proportion of matched feature points is higher than a preset ratio, the image is determined to contain part of the calendar; if the proportion is lower than or equal to the preset ratio, the calendar is determined not to be contained in the current scene image.
With the first positioning mode, based on image recognition, the image position information of the target object in the current scene image can be determined accurately, so the display position information of the special effect data can be obtained accurately from it, which supports accurate display of the special effect data.
Because the first positioning mode determines the display position information from the image position information of the target object in the current scene image, the relative position information between the AR device and the target object at the time each scene image is captured can be determined and stored at the same time. When the target object cannot be recognized in the current scene image, the stored relative position information can be combined with simultaneous localization and mapping (SLAM) to determine the relative position information between the AR device and the target object at the time the current scene image is captured; the display position information of the special effect data can then be determined from this relative position information and the preset positional relationship between the special effect data and the target object. This process is described in detail below.
In some embodiments, whether the current scene image contains the target object may be identified as follows, as shown in fig. 5:
S510, extracting feature points from the current scene image to obtain feature information corresponding to each of a plurality of feature points contained in the image, the feature points being located in a target detection area of the current scene image.
During recognition, a target detection area containing an entity object can first be located in the current scene image by an image detection algorithm; feature points are then extracted within this area, for example points on the contour of the entity object, points in an identification-pattern area, and points in a text area.
The feature information of the extracted feature points may include texture feature values, RGB feature values, gray values, and the like, which characterize the feature points.
S520, comparing the feature information corresponding to the plurality of feature points with pre-stored feature information corresponding to the feature points contained in the target object, and determining whether the current scene image contains the target object.
For example, the target object may be photographed in advance in the same manner, and the feature information corresponding to each of a plurality of feature points contained in the target object may be extracted and stored.
When performing the comparison, a first feature vector corresponding to the target detection area may be determined from the feature information of the feature points extracted from the current scene image, and a second feature vector corresponding to the target object may be determined from the pre-stored feature information of the target object's feature points; the similarity between the target detection area and the target object is then computed from the two vectors, for example with the cosine formula.
If the similarity between the first and second feature vectors is greater than or equal to a preset similarity threshold, the current scene image is determined to contain the target object; if it is less than the preset similarity threshold, the current scene image is determined not to contain the target object.
Extracting and comparing feature points in this way makes it possible to determine more accurately whether the target object is present in the current scene image.
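As a hedged reading of S510-S520 in code: ORB descriptors from OpenCV are pooled into one feature vector per region and compared by cosine similarity. Pooling is only one plausible way to form the first and second feature vectors; the feature type and the threshold value are assumptions.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=500)

def region_vector(gray, region=None):
    """gray: uint8 grayscale image; region: optional (x, y, w, h) detection area."""
    if region is not None:
        x, y, w, h = region                       # crop to the target detection area
        gray = gray[y:y + h, x:x + w]
    _, desc = orb.detectAndCompute(gray, None)
    if desc is None:
        return None
    return desc.astype(np.float32).mean(axis=0)   # pooled feature vector

def contains_target(scene_vec, stored_target_vec, threshold=0.9):
    """Cosine similarity against the pre-stored vector of the target object."""
    if scene_vec is None:
        return False
    cos = float(np.dot(scene_vec, stored_target_vec) /
                (np.linalg.norm(scene_vec) * np.linalg.norm(stored_target_vec)))
    return cos >= threshold                       # preset similarity threshold
```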
In some embodiments, in the case that the current scene image is identified to include the target object, the display position information of the special effect data may be determined by using the following steps:
determining the position information of the target object in a world coordinate system based on the image position information of the target object in the current scene image; and determining display position information of the special effect data based on the position information of the target object in the world coordinate system and the position information of the AR equipment in the world coordinate system.
Before the above steps are performed, image position information of the target object in the current scene image needs to be acquired, for example, an image coordinate system may be established with the current scene image, and image coordinate values of a plurality of feature points included in the target object in the image coordinate system may be acquired, so as to obtain the image position information of the target object in the current scene image.
The determining of the position information of the target object in the world coordinate system based on the image position information of the target object in the current scene image may specifically be determining the position information of the target object in the world coordinate system based on the image position information, a conversion relationship between the image coordinate system and a camera coordinate system corresponding to the AR device, and a conversion relationship between the camera coordinate system corresponding to the AR device and the world coordinate system.
For example, a camera coordinate system corresponding to the AR device may be a three-dimensional rectangular coordinate system established with a focus center of an image capturing component included in the AR device as an origin and an optical axis as a Z axis, and after the AR device captures the current scene image, position information of the target object in the camera coordinate system may be determined based on a conversion relationship between the image coordinate system and the camera coordinate system.
For example, the world coordinate system may be established with the center point of the target object as the origin, such as the aforementioned case where the target object is a calendar, with the center of the calendar as the origin, with the long side passing through the center of the calendar as the X-axis, with the short side passing through the center of the calendar as the Y-axis, and with the straight line passing through the center of the calendar and perpendicular to the calendar cover as the Z-axis.
The conversion relationship between the camera coordinate system and the world coordinate system can be determined from the position coordinates of a plurality of position points of the target object in the world coordinate system and their corresponding position coordinates in the camera coordinate system; the details are not repeated in the present disclosure. After the position information of the target object in the camera coordinate system is obtained, its position information in the world coordinate system can be determined based on the conversion relationship between the camera coordinate system corresponding to the AR device and the world coordinate system.
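For illustration, the two chained conversions (image coordinate system to camera coordinate system, then camera coordinate system to world coordinate system) can be sketched as follows under a pinhole-camera assumption; the intrinsic matrix K, the depth value, and the extrinsics R and t are assumed known here, e.g. recovered from the corresponding position points mentioned above:

```python
import numpy as np

def image_to_world(u: float, v: float, depth: float, K: np.ndarray,
                   R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Convert a target-object feature point from image to world coordinates.

    (u, v): coordinates in the image coordinate system.
    depth:  the point's Z value in the camera coordinate system.
    K:      3x3 camera intrinsics (image <-> camera conversion relationship).
    R, t:   camera-to-world rotation and translation (camera <-> world
            conversion relationship).
    """
    # Image coordinate system -> camera coordinate system (pinhole model).
    p_cam = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    # Camera coordinate system -> world coordinate system.
    return R @ p_cam + t
```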
When determining the display position information of the special effect data based on the position information of the target object and of the AR device in the world coordinate system, the position information of the AR device in the world coordinate system may itself be determined from the current scene image captured by the AR device. For example, a feature point is selected in the current scene image; by determining its position coordinates both in the world coordinate system established with the target object and in the camera coordinate system corresponding to the AR device, the position information of the AR device in the world coordinate system at the time the current scene image was captured can be determined.
Considering that the special effect data and the target object have a preset positional relationship in the same coordinate system, the display position information of the special effect data can be determined based on the position information of the target object and the AR device in the same world coordinate system.
When the display position information of the special effect data is determined based on the position information of the target object in the world coordinate system and the position information of the AR device in the world coordinate system, this may specifically be:
determining the position information of the special effect data in the world coordinate system based on the position information of the target object in the world coordinate system; and determining the display position information of the special effect data based on the position information of the special effect data in the world coordinate system and the position information of the AR equipment in the world coordinate system.
For example, the position information of the special effect data in the world coordinate system may be determined according to the position information of the target object in the world coordinate system and the preset positional relationship between the special effect data and the target object in the same coordinate system.
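A minimal sketch of this step follows; it treats the preset positional relationship as a fixed offset vector and ignores device orientation, both simplifying assumptions made for the example:

```python
import numpy as np

def effect_display_position(target_world: np.ndarray,
                            preset_offset: np.ndarray,
                            device_world: np.ndarray) -> np.ndarray:
    """Derive where to render the special effect data.

    target_world:  position of the target object in the world coordinate system.
    preset_offset: preset positional relationship between the special effect
                   data and the target object, e.g. "20 cm above the calendar".
    device_world:  position of the AR device in the world coordinate system.
    """
    # Position of the special effect data in the world coordinate system.
    effect_world = target_world + preset_offset
    # Display position expressed relative to the AR device; a full renderer
    # would additionally apply the device's orientation.
    return effect_world - device_world
```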
In some embodiments, in the case that the target object is not recognized in the current scene image, the display position information of the special effect data may be determined by using the following steps:
determining relative position information between the AR device and the target object when shooting the current scene image based on the current scene image, the historical scene image and the relative position information between the AR device and the target object when shooting the historical scene image under a world coordinate system; and determining the display position information of the special effect data based on the determined relative position information.
By way of example, the following briefly describes, in combination with SLAM technology, how to determine the relative position information between the AR device and the target object when the current scene image is captured, taking as an example the case where the current scene image is the third frame of scene image captured by the AR device.
Starting from the first frame of scene image containing the target object captured by the AR device: based on the world coordinate system established with the center point of the target object as the origin, and on the position coordinates of selected feature points in that first frame in the world coordinate system and in the camera coordinate system corresponding to the AR device, the position information of the AR device in the world coordinate system when capturing the first frame of scene image can be determined. The position information of the target object in the world coordinate system at that moment is likewise available. From these two pieces of position information, the relative position information between the AR device and the target object in the world coordinate system when the first frame of scene image was captured can be determined.
Further, when the AR device captures the second frame of scene image, a target feature point contained in the first frame of scene image may be found again in the second frame. Based on the position information of this target feature point in the camera coordinate system at each of the two shots, the position offset of the AR device between capturing the first and second frames is determined. Combining this position offset with the relative position information between the AR device and the target object in the world coordinate system at the first frame yields the relative position information between the AR device and the target object in the world coordinate system at the second frame.
Further, in the same manner, the position offset of the AR device between capturing the second frame of scene image and capturing the current scene image may be determined. Combining this offset with the relative position information between the AR device and the target object in the world coordinate system at the second frame yields the relative position information between the AR device and the target object in the world coordinate system when the current scene image is captured.
By using the current scene image, the historical scene images, and the relative position information between the AR device and the target object in the world coordinate system when the historical scene images were captured, the relative position information between the AR device and the target object when the current scene image is captured can be accurately determined.
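The frame-to-frame propagation just described might be sketched as below; the translation-only treatment (rotation is ignored) and all names are assumptions of the example, whereas a practical SLAM pipeline would estimate a full 6-DoF pose:

```python
import numpy as np

def propagate_relative_position(prev_relative: np.ndarray,
                                feat_cam_prev: np.ndarray,
                                feat_cam_curr: np.ndarray) -> np.ndarray:
    """Update the device-target relative position for the next frame.

    prev_relative: position of the AR device relative to the target object,
                   in the world coordinate system, at the previous frame.
    feat_cam_prev: a shared target feature point's position in the camera
                   coordinate system at the previous frame.
    feat_cam_curr: the same feature point's camera-frame position at the
                   current frame.
    """
    # When the device translates, a static point's camera-frame position
    # shifts by the opposite amount, so the device's position offset is:
    device_offset = -(feat_cam_curr - feat_cam_prev)
    # Accumulate the offset onto the previous relative position.
    return prev_relative + device_offset
```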
In addition to the display position information of the special effect data in the world coordinate system, the display posture information of the special effect data in the world coordinate system can be determined in essentially the same way, and the description is not repeated. When the special effect data is displayed, the determined display position information and display posture information can be combined for the display.
Corresponding to the display method in the augmented reality scene, the present disclosure also discloses a display apparatus in the augmented reality scene. Each module in the apparatus can implement each step of the display method of the foregoing embodiments as executed on the server or the AR device, and can obtain the same beneficial effects; the description of identical parts is therefore omitted here. Specifically, as shown in fig. 6, the display apparatus in the augmented reality scene includes:
an image obtaining module 610, configured to obtain a current scene image captured by the augmented reality AR device.
A position determining module 620, configured to determine, based on a recognition result of the current scene image on a target object, special effect data matched with the target object and display position information of the special effect data.
A special effect playing module 630, configured to control the AR device to play the special effect data based on the display position information; the special effect data comprises a virtual image and/or an audio, and a display position between the virtual image and the target object has a preset position relation.
Corresponding to the display method in the augmented reality scene, an embodiment of the present disclosure further provides an electronic device 700. As shown in fig. 7, which is a schematic structural diagram of the electronic device 700 provided in an embodiment of the present disclosure, the electronic device includes:
a processor 71, a memory 72, and a bus 73. The memory 72 is used for storing execution instructions and includes an internal memory 721 and an external memory 722. The internal memory 721 temporarily stores operation data of the processor 71 and data exchanged with the external memory 722, such as a hard disk; the processor 71 exchanges data with the external memory 722 through the internal memory 721. When the electronic device 700 runs, the processor 71 and the memory 72 communicate through the bus 73, causing the processor 71 to execute the following instructions:
acquiring a current scene image shot by the AR equipment; determining special effect data matched with the target object and display position information of the special effect data based on the recognition result of the current scene image on the target object; controlling the AR equipment to play the special effect data based on the display position information; the special effect data comprises a virtual image and/or an audio, and a display position between the virtual image and the target object has a preset position relation.
The embodiment of the present disclosure further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the display method in the augmented reality scenario in the above method embodiment. The storage medium may be a volatile or non-volatile computer-readable storage medium.
An embodiment of the present disclosure further provides a computer program product, which includes a computer-readable storage medium storing program code. The instructions included in the program code may be used to execute the steps of the display method in the augmented reality scene described in the above method embodiment; for details, reference may be made to the above method embodiment, and they are not repeated here.
The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, it is embodied as a software product, such as a Software Development Kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described apparatus embodiments are merely illustrative: the division into units is only one logical division, and other divisions are possible in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection of devices or units through communication interfaces, and may be electrical, mechanical, or of another form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above-mentioned embodiments are merely specific embodiments of the present disclosure, used to illustrate rather than limit its technical solutions, and the scope of the present disclosure is not limited thereto. Although the present disclosure is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art may still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or substitute equivalents for some of the technical features, within the technical scope disclosed herein; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure and shall be covered by its protection scope. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

1. A display method in an augmented reality scene is characterized by comprising the following steps:
acquiring a current scene image shot by the AR equipment;
determining special effect data matched with the target object and display position information of the special effect data based on the recognition result of the current scene image on the target object;
controlling the AR equipment to play the special effect data based on the display position information; the special effect data comprises a virtual image and/or an audio, and a display position between the virtual image and the target object has a preset position relation.
2. The display method according to claim 1, wherein the determining display position information of the special effect data based on the recognition result of the current scene image on the target object comprises:
determining display position information of the special effect data based on image position information of the target object in the current scene image under the condition that the target object is identified in the current scene image;
and under the condition that the target object is not identified in the current scene image, acquiring relative position information between the target object and the AR equipment in a world coordinate system, and determining display position information of the special effect data based on the relative position information.
3. A display method according to claim 1 or 2, wherein the special effect data comprises the virtual image and audio;
the controlling the AR device to play the special effect data based on the display position information includes:
under the condition that at least part of the target object is determined to be within the image display range of the AR device, controlling the AR device to play at least part of the virtual image and/or the audio based on the display position information;
and under the condition that the target object is determined not to be in the image display range of the AR equipment, controlling the AR equipment to continue playing the audio according to the played progress of the audio based on the display position information.
4. A display method according to any one of claims 1 to 3, wherein the virtual image comprises a holographic image;
the method further comprises the following steps:
acquiring a video to be processed matched with the target object, wherein the video to be processed comprises a target associated object associated with the target object;
setting a transparent channel for each pixel point in the video to be processed to obtain a first video;
based on the transparent channel, removing background pixel points from the first video to obtain a second video;
generating a hologram including the target-associated object based on the second video.
5. The method according to claim 4, wherein the removing background pixels from the first video based on the transparent channel to obtain a second video comprises:
setting a transparent channel corresponding to a background pixel point in the first video to be white, and obtaining a third video; the first video comprises target pixel points of the target associated object and background pixel points except the target pixel points;
setting a transparent channel corresponding to a first type of pixel point in the first video as black, setting a transparent channel corresponding to a second type of pixel point in the first video as white, and setting a transparent channel corresponding to a third type of pixel point in the first video as a preset gray value to obtain a fourth video; the third type of pixel points comprise target pixel points adjacent to the background pixel points and background pixel points adjacent to the target pixel points; the first type of pixel points comprise background pixel points except for third type of pixel points, and the second type of pixel points comprise target pixel points except for the third type of pixel points;
generating the second video based on the third video and the fourth video.
6. The display method according to any one of claims 1 to 5, wherein the virtual image comprises images of a plurality of virtual objects, and display order and/or interaction data between the plurality of virtual objects;
the controlling the AR device to play the special effect data based on the display position information includes:
and displaying the images of the virtual objects at the display positions corresponding to the display position information, based on the display order and/or interaction data among the virtual objects.
7. The method according to claim 2, wherein the determining the display position information of the special effect data based on the image position information of the target object in the current scene image comprises:
determining the position information of the target object in a world coordinate system based on the image position information of the target object in the current scene image;
and determining display position information of the special effect data based on the position information of the target object in the world coordinate system and the position information of the AR equipment in the world coordinate system.
8. The method according to claim 2, wherein the obtaining of the relative position information between the target object and the AR device in the world coordinate system comprises:
and determining the relative position information between the AR device and the target object when shooting the current scene image based on the current scene image, the historical scene image and the relative position information between the AR device and the target object when shooting the historical scene image under the world coordinate system.
9. The method according to any one of claims 1 to 8, wherein whether the target object is included in the current scene image is identified as follows:
extracting feature points of the current scene image to obtain feature information corresponding to a plurality of feature points contained in the current scene image; the plurality of feature points are located in a target detection area in the current scene image;
and determining whether the current scene image contains the target object or not based on comparison between the feature information respectively corresponding to the feature points and the pre-stored feature information respectively corresponding to the feature points contained in the target object.
10. A display device under an augmented reality scene, comprising:
the image acquisition module is used for acquiring a current scene image shot by the AR equipment;
the position determining module is used for determining special effect data matched with the target object and display position information of the special effect data based on the recognition result of the current scene image on the target object;
a special effect playing module, configured to control the AR device to play the special effect data based on the display position information; the special effect data comprises a virtual image and/or an audio, and a display position between the virtual image and the target object has a preset position relation.
11. An electronic device, comprising: processor, memory and bus, the memory storing machine readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine readable instructions when executed by the processor performing the steps of the method for displaying in an augmented reality scene according to any one of claims 1 to 9.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, performs the steps of the method for displaying in an augmented reality scene according to any one of claims 1 to 9.
CN202011233879.6A 2020-11-06 2020-11-06 Display method and device in augmented reality scene, electronic equipment and storage medium Active CN112348969B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202011233879.6A CN112348969B (en) 2020-11-06 2020-11-06 Display method and device in augmented reality scene, electronic equipment and storage medium
PCT/CN2021/102191 WO2022095467A1 (en) 2020-11-06 2021-06-24 Display method and apparatus in augmented reality scene, device, medium and program
TW110127756A TW202220438A (en) 2020-11-06 2021-07-28 Display method, electronic device and computer readable storage medium in augmented reality scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011233879.6A CN112348969B (en) 2020-11-06 2020-11-06 Display method and device in augmented reality scene, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112348969A true CN112348969A (en) 2021-02-09
CN112348969B CN112348969B (en) 2023-04-25

Family

ID=74428557

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011233879.6A Active CN112348969B (en) 2020-11-06 2020-11-06 Display method and device in augmented reality scene, electronic equipment and storage medium

Country Status (3)

Country Link
CN (1) CN112348969B (en)
TW (1) TW202220438A (en)
WO (1) WO2022095467A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115242980B (en) * 2022-07-22 2024-02-20 中国平安人寿保险股份有限公司 Video generation method and device, video playing method and device and storage medium
CN115695685A (en) * 2022-10-28 2023-02-03 北京字跳网络技术有限公司 Special effect processing method and device, electronic equipment and storage medium
CN116095293A (en) * 2023-01-13 2023-05-09 北京达佳互联信息技术有限公司 Virtual prop display method, device, equipment and storage medium
CN116860114B (en) * 2023-09-04 2024-04-05 腾讯科技(深圳)有限公司 Augmented reality interaction method and related device based on artificial intelligence

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170092001A1 (en) * 2015-09-25 2017-03-30 Intel Corporation Augmented reality with off-screen motion sensing
CN110083238A (en) * 2019-04-18 2019-08-02 深圳市博乐信息技术有限公司 Man-machine interaction method and system based on augmented reality
CN111510701A (en) * 2020-04-22 2020-08-07 Oppo广东移动通信有限公司 Virtual content display method and device, electronic equipment and computer readable medium
CN111696215A (en) * 2020-06-12 2020-09-22 上海商汤智能科技有限公司 Image processing method, device and equipment
CN112348969B (en) * 2020-11-06 2023-04-25 北京市商汤科技开发有限公司 Display method and device in augmented reality scene, electronic equipment and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160180590A1 (en) * 2014-12-23 2016-06-23 Lntel Corporation Systems and methods for contextually augmented video creation and sharing
CN110180167A (en) * 2019-06-13 2019-08-30 张洋 The method of intelligent toy tracking mobile terminal in augmented reality
CN110213640A (en) * 2019-06-28 2019-09-06 香港乐蜜有限公司 Generation method, device and the equipment of virtual objects
CN111640169A (en) * 2020-06-08 2020-09-08 上海商汤智能科技有限公司 Historical event presenting method and device, electronic equipment and storage medium
CN111667588A (en) * 2020-06-12 2020-09-15 上海商汤智能科技有限公司 Person image processing method, person image processing device, AR device and storage medium

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022095467A1 (en) * 2020-11-06 2022-05-12 北京市商汤科技开发有限公司 Display method and apparatus in augmented reality scene, device, medium and program
CN112905014A (en) * 2021-02-26 2021-06-04 北京市商汤科技开发有限公司 Interaction method and device in AR scene, electronic equipment and storage medium
WO2022188305A1 (en) * 2021-03-11 2022-09-15 深圳市慧鲤科技有限公司 Information presentation method and apparatus, and electronic device, storage medium and computer program
CN112991555A (en) * 2021-03-30 2021-06-18 北京市商汤科技开发有限公司 Data display method, device, equipment and storage medium
CN113269782A (en) * 2021-04-21 2021-08-17 青岛小鸟看看科技有限公司 Data generation method and device and electronic equipment
CN113269782B (en) * 2021-04-21 2023-01-03 青岛小鸟看看科技有限公司 Data generation method and device and electronic equipment
CN113220123A (en) * 2021-05-10 2021-08-06 深圳市慧鲤科技有限公司 Sound effect control method and device, electronic equipment and storage medium
WO2022237129A1 (en) * 2021-05-14 2022-11-17 北京市商汤科技开发有限公司 Video recording method and apparatus, device, medium and program
CN113115099A (en) * 2021-05-14 2021-07-13 北京市商汤科技开发有限公司 Video recording method and device, electronic equipment and storage medium
CN113240819A (en) * 2021-05-24 2021-08-10 中国农业银行股份有限公司 Wearing effect determination method and device and electronic equipment
CN113329218A (en) * 2021-05-28 2021-08-31 青岛鳍源创新科技有限公司 Augmented reality combining method, device and equipment for underwater shooting and storage medium
WO2022252688A1 (en) * 2021-06-03 2022-12-08 上海商汤智能科技有限公司 Augmented reality data presentation method and apparatus, electronic device, and storage medium
CN113359986A (en) * 2021-06-03 2021-09-07 北京市商汤科技开发有限公司 Augmented reality data display method and device, electronic equipment and storage medium
CN113359986B (en) * 2021-06-03 2023-06-20 北京市商汤科技开发有限公司 Augmented reality data display method and device, electronic equipment and storage medium
CN113542891A (en) * 2021-06-22 2021-10-22 海信视像科技股份有限公司 Video special effect display method and device
CN113345108A (en) * 2021-06-25 2021-09-03 北京市商汤科技开发有限公司 Augmented reality data display method and device, electronic equipment and storage medium
CN113345108B (en) * 2021-06-25 2023-10-20 北京市商汤科技开发有限公司 Augmented reality data display method and device, electronic equipment and storage medium
CN113470186A (en) * 2021-06-30 2021-10-01 北京市商汤科技开发有限公司 AR interaction method and device, electronic equipment and storage medium
CN113542620A (en) * 2021-07-06 2021-10-22 北京百度网讯科技有限公司 Special effect processing method and device and electronic equipment
CN114153548A (en) * 2021-12-15 2022-03-08 北京绵白糖智能科技有限公司 Display method and device, computer equipment and storage medium
WO2023124698A1 (en) * 2021-12-31 2023-07-06 上海商汤智能科技有限公司 Display of augmented reality scene
CN114661398A (en) * 2022-03-22 2022-06-24 上海商汤智能科技有限公司 Information display method and device, computer equipment and storage medium
CN114661398B (en) * 2022-03-22 2024-05-17 上海商汤智能科技有限公司 Information display method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN112348969B (en) 2023-04-25
TW202220438A (en) 2022-05-16
WO2022095467A1 (en) 2022-05-12

Similar Documents

Publication Publication Date Title
CN112348969B (en) Display method and device in augmented reality scene, electronic equipment and storage medium
CN111694430A (en) AR scene picture presentation method and device, electronic equipment and storage medium
CN106355153A (en) Virtual object display method, device and system based on augmented reality
CN111627117B (en) Image display special effect adjusting method and device, electronic equipment and storage medium
CN112348968B (en) Display method and device in augmented reality scene, electronic equipment and storage medium
CN111638793A (en) Aircraft display method and device, electronic equipment and storage medium
CN111643900A (en) Display picture control method and device, electronic equipment and storage medium
CN111638784B (en) Facial expression interaction method, interaction device and computer storage medium
CN111640197A (en) Augmented reality AR special effect control method, device and equipment
CN108668050B (en) Video shooting method and device based on virtual reality
CN112653848B (en) Display method and device in augmented reality scene, electronic equipment and storage medium
CN109035415B (en) Virtual model processing method, device, equipment and computer readable storage medium
CN111640202A (en) AR scene special effect generation method and device
CN111667588A (en) Person image processing method, person image processing device, AR device and storage medium
CN112637665B (en) Display method and device in augmented reality scene, electronic equipment and storage medium
CN111970557A (en) Image display method, image display device, electronic device, and storage medium
CN111639613B (en) Augmented reality AR special effect generation method and device and electronic equipment
CN111651057A (en) Data display method and device, electronic equipment and storage medium
CN112905014A (en) Interaction method and device in AR scene, electronic equipment and storage medium
CN113487709A (en) Special effect display method and device, computer equipment and storage medium
CN112085835A (en) Three-dimensional cartoon face generation method and device, electronic equipment and storage medium
CN111694431A (en) Method and device for generating character image
CN111652983A (en) Augmented reality AR special effect generation method, device and equipment
CN111651058A (en) Historical scene control display method and device, electronic equipment and storage medium
CN111638798A (en) AR group photo method, AR group photo device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40039399; Country of ref document: HK)
GR01 Patent grant