CN112333498A - Display control method and device, computer equipment and storage medium

Info

Publication number: CN112333498A
Application number: CN202011193531.9A
Authority: CN (China)
Prior art keywords: target, equipment, pose information, target virtual, virtual playing
Other languages: Chinese (zh)
Inventors: 张建博, 李宇飞
Current and original assignee: Shenzhen TetrasAI Technology Co Ltd
Application filed by Shenzhen TetrasAI Technology Co Ltd
Priority to: CN202011193531.9A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/414 Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N21/41415 Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance involving a public display, viewable by several users in a public space outside their home, e.g. movie theatre, information kiosk
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/443 OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/458 Scheduling content for creating a personalised stream, e.g. by combining a locally stored advertisement with an incoming stream; Updating operations, e.g. for OS modules; time-related management operations
    • H04N21/4586 Content update operation triggered locally, e.g. by comparing the version of software modules in a DVB carousel to the version stored locally
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/816 Monomedia components thereof involving special video data, e.g. 3D video

Abstract

The present disclosure provides a display control method, apparatus, computer device, and storage medium. The method includes: acquiring a live-action image captured by an AR device in real time; determining, based on the live-action image, whether the AR device is located in a preset target real scene; and, when the AR device is determined to be located in the preset target real scene, displaying a target virtual playing device and controlling the target virtual playing device to play a specified target video resource.

Description

Display control method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of augmented reality (AR) technologies, and in particular, to a display control method and apparatus, a computer device, and a storage medium.
Background
In scenic spots, exhibition halls, stations, and other places with heavy foot traffic, there is often a need to play videos for publicity and similar purposes.
In the related art, video is generally played in a loop on a fixed display device (e.g., an electronic screen). On the one hand, this approach occupies a physical playing device and physical space; on the other hand, playback on a physical device can only be controlled or configured uniformly by a server so that the device loops automatically, which cannot meet the different viewing requirements of different users.
Disclosure of Invention
The embodiment of the disclosure at least provides a display control method, a display control device, computer equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a display control method, including:
acquiring a live-action image captured by an AR device in real time;
determining, based on the live-action image, whether the AR device is located in a preset target real scene;
and, when it is determined that the AR device is located in the preset target real scene, displaying a target virtual playing device and controlling the target virtual playing device to play a specified target video resource.
Based on this method, after it is detected that the AR device is located in the preset target real scene, the target virtual playing device can be displayed and controlled to play the target video resource. Because the virtual playing device occupies no physical space, it is not limited by physical display space, and both space resources and physical device resources are saved. In addition, after different AR devices enter the preset target real scene, each can automatically control its own target virtual playing device, which meets the viewing requirements of different AR devices for the target video resource and further improves the display effect of the target video resource.
In one possible implementation, whether the AR device is located in the preset target real scene is determined as follows:
inputting the live-action image captured by the AR device into a pre-trained neural network model to determine whether the AR device is located in the preset target real scene, wherein the neural network model is trained on sample live-action images carrying target real scene labels.
In one possible implementation, whether the AR device is located in the preset target real scene is determined as follows:
determining pose information of the AR device based on the live-action image;
and determining, based on the pose information of the AR device, whether the AR device is located in the preset target real scene.
In a possible implementation manner, the displaying of the target virtual playing device includes:
displaying the target virtual playing device in the AR device.
In a possible implementation manner, the presenting, in the AR device, a target virtual playing device includes:
determining pose information of the AR device based on the live-action image;
determining initial display pose information of the target virtual playing device in a three-dimensional scene model corresponding to the target real scene based on the pose information of the AR device and preset relative pose information between the target virtual playing device and the AR device;
and displaying the target virtual playing device in the AR device based on the initial display pose information of the target virtual playing device in the three-dimensional scene model corresponding to the target real scene.
Here, the target virtual playing device is placed in the three-dimensional scene model based on the pose information of the AR device only after it is determined that the AR device is located in the preset target real scene. This ensures that the AR device is at the optimal viewing position when the target video resource is played, which improves the display effect of the target video resource.
In a possible embodiment, the method further comprises:
in response to a user's movement operation on the displayed target virtual playing device, updating the initial display pose information of the target virtual playing device based on the movement operation;
and displaying the target virtual playing equipment based on the updated initial display pose information.
This implementation adds interaction between the user and the AR device and improves the display effect of the target video resource.
In a possible implementation manner, the controlling the target virtual playing device to play the specified target video resource includes:
loading a target video resource corresponding to a target real scene where the AR equipment is located;
and controlling the target virtual playing device to play the loaded target video resource.
By pre-loading the target video resource corresponding to the target real scene, the continuity of the target video resource during display can be ensured, and the playing process is protected from the influence of the network environment.
In one possible embodiment, after controlling the target virtual playing device to play the specified target video resource, the method further includes:
under the condition that the change of the pose information of the AR equipment is detected, determining the relative pose information of the AR equipment relative to the target virtual playing equipment based on the changed pose information of the AR equipment and the initial display pose information of the target virtual playing equipment;
and under the condition that the relative pose information does not meet the preset condition, controlling the target virtual playing equipment to stop playing the target video resource.
In the above embodiment, it can be ensured that the target video resource is played only while the AR device is at a suitable viewing position: when the relative pose information does not satisfy the preset condition, playback of the target video resource is stopped directly, which prevents the viewing position of the AR device from degrading the display effect of the target video resource.
In a possible implementation manner, the controlling, in a case where the relative pose information does not satisfy a preset condition, the target virtual playing apparatus to stop playing the target video resource includes:
and controlling the target virtual playing equipment to stop playing the target video resource under the condition that the relative distance in the relative pose information is greater than a preset distance, or the included angle between the shooting direction of the AR equipment and the direction facing the video playing area in the relative pose information is not within a set angle range.
When the relative distance between the AR device and the target virtual playing device is greater than the set distance, the target virtual playing device is controlled to stop playing the target video resource, which avoids playback appearing unclear because the AR device is too far from the target virtual playing device; and when the relative orientation included angle is not within the set angle range, the target virtual playing device is controlled to stop playing the target video resource, so that the AR device stays within the optimal viewing angle range while the target video resource is played, improving the display effect of the target video resource.
In one possible embodiment, after controlling the target virtual playing device to play the specified target video resource, the method further includes:
under the condition that the change of the pose information of the AR equipment is detected, determining an AR scene picture displayed in the AR equipment based on the changed pose information of the AR equipment and the initial display pose information of the target virtual playing equipment in the three-dimensional scene model corresponding to the target real scene;
and under the condition that the area occupied by the video elements of the target video resources contained in the AR scene picture is smaller than the set proportion in the AR scene picture, controlling the target virtual playing equipment to stop playing the target video resources.
When the area occupied by the video elements of the target video resource in the AR scene picture is small, the user is probably not watching the target video resource displayed in the AR scene picture; by suspending playback of the target video resource, the waste of resources caused by playing a video that nobody is watching can be avoided.
In a second aspect, an embodiment of the present disclosure further provides a display control apparatus, including:
the acquisition module is used for acquiring a live-action image acquired by the AR equipment in real time;
a determining module, configured to determine whether the AR device is located in a preset target reality scene based on the live-action image;
and the control module is used for displaying the target virtual playing device and controlling the target virtual playing device to play a specified target video resource when it is determined that the AR device is located in the preset target real scene.
In a possible implementation, the determining module is configured to determine whether the AR device is located in the preset target reality scene according to the following method:
inputting the live-action image captured by the AR device into a pre-trained neural network model to determine whether the AR device is located in the preset target real scene, wherein the neural network model is trained on sample live-action images carrying target real scene labels.
In a possible implementation, the determining module is configured to determine whether the AR device is located in the preset target reality scene according to the following method:
determining pose information of the AR device based on the live-action image;
and determining whether the AR equipment is positioned in a preset target real scene or not based on the pose information of the AR equipment.
In a possible implementation manner, the control module, when presenting the target virtual playing device, is configured to:
and displaying the target virtual playing device in the AR device.
In a possible implementation manner, when the target virtual playback device is shown in the AR device, the control module is configured to:
determining pose information of the AR device based on the live-action image;
determining initial display pose information of the target virtual playing device in a three-dimensional scene model corresponding to the target real scene based on the pose information of the AR device and preset relative pose information between the target virtual playing device and the AR device;
and displaying the target virtual playing device in the AR device based on the initial display pose information of the target virtual playing device in the three-dimensional scene model corresponding to the target real scene.
In a possible implementation, the control module is further configured to:
responding to the movement operation of a user for the displayed target virtual playing device, and updating the initial display pose information of the target virtual playing device based on the movement operation;
and displaying the target virtual playing equipment based on the updated initial display pose information.
In a possible implementation manner, the control module, when controlling the target virtual playing device to play the specified target video resource, is configured to:
loading a target video resource corresponding to a target real scene where the AR equipment is located;
and controlling the target virtual playing device to play the loaded target video resource.
In a possible implementation manner, after controlling the target virtual playing device to play the specified target video resource, the control module is further configured to:
under the condition that the change of the pose information of the AR equipment is detected, determining the relative pose information of the AR equipment relative to the target virtual playing equipment based on the changed pose information of the AR equipment and the initial display pose information of the target virtual playing equipment;
and under the condition that the relative pose information does not meet the preset condition, controlling the target virtual playing equipment to stop playing the target video resource.
In a possible implementation manner, the control module, when controlling the target virtual playing device to stop playing the target video resource in a case that the relative pose information does not satisfy a preset condition, is configured to:
and controlling the target virtual playing equipment to stop playing the target video resource under the condition that the relative distance in the relative pose information is greater than a preset distance, or the included angle between the shooting direction of the AR equipment and the direction facing the video playing area in the relative pose information is not within a set angle range.
In a possible implementation manner, after controlling the target virtual playing device to play the specified target video resource, the control module is further configured to:
under the condition that the change of the pose information of the AR equipment is detected, determining an AR scene picture displayed in the AR equipment based on the changed pose information of the AR equipment and the initial display pose information of the target virtual playing equipment in the three-dimensional scene model corresponding to the target real scene;
and under the condition that the area occupied by the video elements of the target video resources contained in the AR scene picture is smaller than the set proportion in the AR scene picture, controlling the target virtual playing equipment to stop playing the target video resources.
In a third aspect, an embodiment of the present disclosure further provides a computer device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the computer device is running, the machine-readable instructions when executed by the processor performing the steps of the first aspect described above, or any possible implementation of the first aspect.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs the steps in the first aspect or any one of the possible implementation manners of the first aspect.
For the description of the effects of the display control apparatus, the computer device, and the computer-readable storage medium, reference is made to the description of the display control method, which is not repeated herein.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required by the embodiments are briefly described below. The drawings here are incorporated in and form a part of the specification; they illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art may derive further related drawings from them without inventive effort.
Fig. 1 shows a flowchart of a presentation control method provided by an embodiment of the present disclosure;
FIG. 2 illustrates a schematic view of the relative included angles provided by embodiments of the present disclosure;
FIG. 3 is a schematic diagram illustrating an architecture of a presentation control apparatus according to an embodiment of the present disclosure;
fig. 4 shows a schematic structural diagram of a computer device 400 provided by the embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions are described below clearly and completely with reference to the drawings in the embodiments of the present disclosure. Obviously, the described embodiments are only a part of the embodiments of the present disclosure, not all of them. The components of the embodiments of the present disclosure, as generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure presented in the figures is not intended to limit the scope of the claimed disclosure, but merely represents selected embodiments of the disclosure. All other embodiments obtained by a person skilled in the art from the embodiments of the disclosure without creative effort shall fall within the protection scope of the disclosure.
In the related art, video is generally played in a loop on a fixed display device. This approach occupies a physical playing device and physical space on the one hand; on the other hand, playback on a physical device can only be controlled or configured uniformly by a server so that the device loops automatically, and because the viewing schedules of different users may differ, the different viewing requirements of different users cannot be met.
Based on the research, the present disclosure provides a display control method, apparatus, computer device, and storage medium, which may display a target virtual playing device and control the target virtual playing device to play a target video resource after detecting that an AR device is located in a preset target real scene; because the virtual playing device does not need to occupy the entity space, the virtual playing device is not limited by the entity display space, and the space resource and the entity device resource can be saved. In addition, after different AR devices enter a preset target real scene, the target virtual playing devices can be automatically controlled, the watching requirements of the different AR devices on the target video resources are met, and the display effect of the target video resources is further improved.
The above drawbacks were identified by the inventors after careful practical study. Therefore, the discovery of the above problems and the solutions the present disclosure proposes for them should be regarded as the inventors' contribution to the present disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the embodiments, a display control method disclosed in the embodiments of the present disclosure is first described in detail. The execution subject of the display control method provided in the embodiments of the present disclosure is generally a computer device with certain computing capability, specifically a terminal device or another processing device. The AR device may include, for example, devices with a display function and a data processing function, such as AR glasses, a tablet computer, a smartphone, or a smart wearable device, and may be connected to a cloud server through an application program.
Referring to fig. 1, a flowchart of a display control method provided in the embodiment of the present disclosure is shown, where the method includes steps 101 to 103, where:
Step 101: acquiring a live-action image captured by an AR device in real time.
Step 102: determining, based on the live-action image, whether the AR device is located in a preset target real scene.
Step 103: when it is determined that the AR device is located in the preset target real scene, displaying a target virtual playing device, and controlling the target virtual playing device to play a specified target video resource.
With respect to step 101:
The live-action image is an image of the real scene captured by the AR device in real time. It may be captured after the user triggers the capture button of the AR device, or automatically after the AR device is started.
With respect to step 102:
In one possible implementation, when determining whether the AR device is located in the preset target real scene, the pose information of the AR device may first be determined based on the live-action image, and whether the AR device is located in the preset target real scene may then be determined based on that pose information.
The pose information may be determined on the AR device itself based on the live-action image; alternatively, the AR device may send the live-action image to a server, which determines the pose information based on the image, and the AR device then acquires the determined pose information from the server.
Specifically, when determining the pose information, the AR device may determine its pose in a scene coordinate system established for the scene corresponding to the live-action image, based on the live-action image captured in real time. The scene coordinate system may be a three-dimensional coordinate system.
Specifically, any one of the following methods may be used to determine the pose information of the AR device in this scene coordinate system from the live-action image captured in real time:
the first method,
The positions of a plurality of target detection points in the scene corresponding to the live-action image may be detected, and the target pixel point corresponding to each target detection point in the live-action image may be determined. Then, the depth information corresponding to each target pixel point may be determined (for example, by performing depth estimation on the live-action image), and the pose information of the AR device may be determined based on the depth information of the target pixel points.
A target detection point is a preset position point in the scene where the AR device is located, for example a cup, a fan, or a water dispenser. The depth information of a target pixel point represents the distance between the corresponding target detection point and the image acquisition unit of the AR device. The position coordinates of the target detection points in the scene coordinate system are preset and fixed.
Specifically, when determining the pose information of the AR device, the orientation of the target pixel point corresponding to a target detection point can be determined from its coordinate information in the live-action image; the position of the AR device can then be determined from the depth values of the target pixel points corresponding to the target detection points, so that the pose information of the AR device is obtained.
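To make method one concrete, the following is a minimal sketch, not the patent's stated implementation: each target pixel point is back-projected into the camera frame using its depth value (assuming pinhole intrinsics K), and the device pose is recovered by rigidly aligning the back-projected points with the known scene coordinates of the detection points via a Kabsch/SVD fit. All function and variable names are illustrative assumptions.

    import numpy as np

    def backproject(pixels, depths, K):
        # Convert pixel coordinates plus depth into 3D points in the camera frame.
        fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
        pts = [((u - cx) * z / fx, (v - cy) * z / fy, z)
               for (u, v), z in zip(pixels, depths)]
        return np.asarray(pts)

    def estimate_pose(scene_pts, cam_pts):
        # Rigid transform (R, t) mapping scene coordinates to camera coordinates,
        # found by Kabsch/SVD alignment of the two centered point sets.
        cs, cc = scene_pts.mean(0), cam_pts.mean(0)
        H = (scene_pts - cs).T @ (cam_pts - cc)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cc - R @ cs
        return R, t                   # invert (R, t) to get the device pose in the scene frame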
Method two:
The pose information may also be determined based on a three-dimensional scene model of the target scene where the AR device is located.
Specifically, the live-action image captured by the AR device in real time may be matched against a pre-constructed three-dimensional scene model of the scene where the AR device is located, and the pose information of the AR device may then be determined from the matching result.
Based on the three-dimensional scene model, the scene image that the AR device would observe under each candidate pose can be obtained; the pose information of the AR device can therefore be obtained by matching the live-action image captured by the AR device in real time against the three-dimensional scene model.
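As an illustration only, one common way to realize this kind of model matching is feature matching followed by PnP; the patent does not prescribe this, so the sketch below, including the assumption that the scene model stores an ORB descriptor and a 3D scene coordinate per model feature, is hypothetical.

    import cv2
    import numpy as np

    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def localize(image, model_descriptors, model_points_3d, K):
        # Match live-image features against model features, then solve a PnP
        # problem with the matched 2D pixels and their 3D model coordinates.
        kps, desc = orb.detectAndCompute(image, None)
        matches = matcher.match(desc, model_descriptors)
        img_pts = np.float32([kps[m.queryIdx].pt for m in matches])
        obj_pts = np.float32([model_points_3d[m.trainIdx] for m in matches])
        ok, rvec, tvec, _ = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
        return (rvec, tvec) if ok else None   # pose of the scene frame in the camera frame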
Here, the preset target real scene may be a part of the scene where the AR device is located; for example, if the scene where the AR device is located is an amusement park, the preset target real scene may be the entrance of the amusement park.
In a possible implementation manner, the target real scene may have corresponding area information. When determining whether the AR device is located in the preset target real scene based on the pose information of the AR device, it may be detected whether the AR device is located in the area described by the area information of the target real scene.
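A minimal sketch of this area check, under the assumption that the area information is stored as an axis-aligned box in the scene coordinate system (the patent does not fix the representation):

    def in_target_scene(position, region):
        # region: ((xmin, ymin, zmin), (xmax, ymax, zmax)) in the scene coordinate
        # system; position: (x, y, z) from the AR device's pose information.
        (xmin, ymin, zmin), (xmax, ymax, zmax) = region
        x, y, z = position
        return xmin <= x <= xmax and ymin <= y <= ymax and zmin <= z <= zmax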
In another possible implementation, when determining whether the AR device is located in the preset target real scene, the live-action image captured by the AR device may be input into a pre-trained neural network model that decides whether the AR device is located in the preset target real scene, where the neural network model is trained on sample live-action images carrying target real scene labels.
In another possible implementation, the AR device may send the captured live-action image to a server, and the server determines whether the AR device is located in the preset target real scene based on the pre-trained neural network model.
Specifically, the neural network model may be trained only on sample live-action images corresponding to the preset target real scene, so that it recognizes only live-action images of the target real scene. After the live-action image captured by the AR device is input into the neural network model, the model outputs whether that image is a live-action image of the target real scene; if so, it can be determined that the AR device is located in the preset target real scene.
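A minimal inference sketch, under the assumption that the model is a single-logit binary classifier of "target real scene or not"; scene_model and preprocess are placeholders, not components named by the patent:

    import torch

    def is_in_target_scene(frame, scene_model, preprocess, threshold=0.5):
        # scene_model: binary classifier trained on sample live-action images
        # carrying target real scene labels; preprocess: resize/normalize step.
        with torch.no_grad():
            logit = scene_model(preprocess(frame).unsqueeze(0))  # batch of one
        return torch.sigmoid(logit).item() >= threshold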
In another possible implementation, when determining whether the AR device is located in the preset target real scene based on the live-action image, it may also be detected whether the live-action image contains a target object corresponding to the target real scene. For example, if the target real scene is the entrance of an amusement park, the target object may be the gate of the amusement park; if the target real scene is a public toilet, the target object may be the sign of the public toilet.
When detecting whether the live-action image contains the target object corresponding to the target real scene, the live-action image may be semantically segmented to obtain semantic segmentation images, which are then input into an object recognition model to detect whether each segmentation image contains the target object.
The object recognition model is trained on sample images carrying target object labels.
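A minimal sketch of the object check, assuming a per-pixel semantic segmentation model and a class id bound to the target object (e.g. the park gate); all names are illustrative:

    import torch

    def contains_target_object(frame, seg_model, preprocess, target_class_id):
        # seg_model outputs per-class scores of shape (1, C, H, W); the target
        # object is present if any pixel is assigned the bound class id.
        with torch.no_grad():
            mask = seg_model(preprocess(frame).unsqueeze(0)).argmax(dim=1)
        return bool((mask == target_class_id).any())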
With respect to step 103:
In a possible implementation manner, the three-dimensional scene model corresponding to the scene where the AR device is located does not contain the target virtual playing device when it is constructed. After the AR device is turned on, it can obtain this three-dimensional scene model from the server; when it is determined that the AR device is located in the preset target real scene, the target virtual playing device is added to the obtained three-dimensional scene model, and the AR scene picture displayed in the AR device is then determined based on the three-dimensional scene model with the target virtual playing device added.
In one possible implementation, when the target virtual playing device is displayed, it may be displayed on a display device deployed in the target real scene.
In another possible implementation manner, the target virtual playing device may also be displayed in the AR device. Specifically, the pose information of the AR device may be determined first; the initial display pose information of the target virtual playing device in the three-dimensional scene model corresponding to the target real scene may then be determined based on the pose information of the AR device and the preset relative pose information between the target virtual playing device and the AR device; and the target virtual playing device may finally be displayed in the AR device based on that initial display pose information.
Here, the three-dimensional scene model corresponding to the target real scene and the three-dimensional scene model of the scene where the AR device is located may be the same model.
The display pose of the target virtual playing device in the AR device can be determined based on the initial display pose information of the target virtual playing device in the three-dimensional scene model corresponding to the target real scene and the pose information of the AR device. The target virtual playing device is then fused with the live-action image captured by the AR device according to this display pose, and the fused live-action image is displayed in the AR device.
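The pose bookkeeping above can be summarized as two transform compositions; the sketch below is an assumption about the underlying math, using 4x4 homogeneous matrices, not code from the patent:

    import numpy as np

    def initial_display_pose(T_scene_device, T_device_screen):
        # Anchor the virtual playing device at the preset pose relative to the AR
        # device, expressed once in the scene frame so it stays fixed afterwards.
        return T_scene_device @ T_device_screen

    def display_pose_in_view(T_scene_screen, T_scene_device):
        # Pose of the virtual playing device in the current camera/view frame,
        # recomputed each frame to fuse it with the live-action image.
        return np.linalg.inv(T_scene_device) @ T_scene_screen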
In a possible implementation manner, after the target virtual playing device is displayed in the AR device, the AR device may respond to a user's movement operation on the displayed target virtual playing device by updating the initial display pose information of the target virtual playing device based on the movement operation, and then display the target virtual playing device according to the updated initial display pose information.
Specifically, the target position point and the moving direction corresponding to the movement operation may be determined, and the initial display pose information updated based on them. The target virtual playing device in the three-dimensional scene model is adjusted according to the updated initial display pose information; the display pose of the target virtual playing device is then re-determined from the adjusted three-dimensional scene model and the pose information of the AR device, the target virtual playing device is fused with the live-action image captured by the AR device according to that display pose, and the live-action image currently displayed by the AR device is updated with the newly fused image.
In a possible implementation manner, when the target virtual playing device is controlled to play the specified target video resource, the target video resource corresponding to the target real scene where the AR device is located may be loaded first, and the target virtual playing device may then be controlled to play the loaded target video resource.
Here, loading the target video resource corresponding to the target real scene may mean acquiring the video resource bound to the target real scene from a server. The target video resource may be pre-loaded on the AR device before playback, so that the AR device itself can directly decide whether to play it.
When loading the video resource corresponding to the target real scene, the video resource bound to the identification information of the target real scene can be loaded according to the identification information corresponding to the target real scene.
In a specific implementation, the identification information of target real scenes may be set in advance; it distinguishes different target real scenes, and different target real scenes may correspond to different target video resources. Specifically, the AR device may store a mapping between target real scene identifiers and video resource identifiers. After the target real scene where the AR device is located is determined, the target video resource identifier bound to that scene's identifier can be looked up in the mapping, and the target video resource corresponding to the found identifier can then be loaded. Here, one target real scene corresponds to at least one target video resource.
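A minimal sketch of the stored mapping and lookup; scene identifiers, video identifiers, and fetch_video are all illustrative placeholders:

    # Mapping stored on the AR device: target real scene identifier -> bound
    # target video resource identifiers (one scene maps to at least one video).
    SCENE_TO_VIDEOS = {
        "amusement_park_entrance": ["video_A"],
        "exhibition_hall_east": ["video_B", "video_C"],
    }

    def load_target_videos(scene_id, fetch_video):
        # fetch_video stands in for acquiring the resource from the cloud server.
        return [fetch_video(vid) for vid in SCENE_TO_VIDEOS.get(scene_id, [])]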
When a plurality of target video resources are bound to one target real scene, because the storage capacity of the AR device is limited, the target video resources may be loaded according to a preset loading condition.
Here, the loading condition may be any one of conditions such as a user instruction, a relative pose relationship, and a current time.
In a possible implementation manner, when the loading condition includes a user instruction, a playlist containing the target video resource identifiers corresponding to the target real scene may first be displayed on the AR device, and the target video resource corresponding to a selection instruction made by the user for the playlist may then be loaded.
When the playlist is displayed on the AR device, it may be superimposed at a preset position of the live-action image, and the user may generate a selection instruction for any target video resource identifier by triggering the AR device.
The user may generate the selection instruction either by touching the screen of the AR device, or by making a target gesture, in which case the selection instruction is generated for the target video resource identifier the gesture points to.
In one possible implementation, when the loading condition includes the relative pose information, and the target video resource corresponding to the target real scene is being loaded, the target video resource matching the relative distance in the relative pose information may be loaded from among the plurality of target video resources corresponding to the area identifier of the target real scene.
Specifically, the set distance may be divided into different distance ranges, each corresponding to a different target video resource; the target distance range to which the relative distance in the relative pose relationship belongs is then determined, and the target video resource corresponding to that range is loaded.
For example, if the set distance is 5 meters, a distance range of 0-2 meters and a distance range of 2-5 meters can be divided, a target video resource corresponding to the distance range of 0-2 meters is a video resource a, a target video resource corresponding to the distance range of 2-5 meters is a video resource B, and if the relative distance in the relative pose relationship is 2 meters, the video resource a can be loaded.
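A minimal sketch of this distance-based loading condition, using the example ranges above (the boundary handling is an assumption; the first matching range wins, so a relative distance of exactly 2 meters loads video resource A, as in the example):

    DISTANCE_RANGES = [(0.0, 2.0, "video_A"), (2.0, 5.0, "video_B")]

    def video_for_distance(relative_distance):
        for low, high, vid in DISTANCE_RANGES:
            if low <= relative_distance <= high:
                return vid
        return None  # beyond the set distance: nothing is loaded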
In a possible implementation manner, when the loading condition includes the current time, the playing time period corresponding to each of the plurality of target video resources bound to the target real scene may first be determined, and the target video resource corresponding to the current time may then be selected from them for loading according to those playing time periods.
The target video resource corresponding to the current time may be a video resource corresponding to a playing time period to which the current time belongs, for example, if the target video resource corresponding to the target real scene includes a video resource a, a video resource B and a video resource C, the corresponding playing time periods are 10:00 to 12:00, 14:00 to 16:00 and 17:00 to 19:00, respectively, and if the current time is 11:00, the video resource a is loaded.
And if the current time does not belong to the playing time period corresponding to any target video resource, determining the playing time period closest to the current time, and taking the target video resource corresponding to the playing time period as the target video resource corresponding to the current time.
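A minimal sketch of the time-based loading condition, using the example schedule above plus the nearest-period fallback; measuring the gap in seconds is an assumption:

    from datetime import datetime, time

    SCHEDULE = [(time(10, 0), time(12, 0), "video_A"),
                (time(14, 0), time(16, 0), "video_B"),
                (time(17, 0), time(19, 0), "video_C")]

    def video_for_now(now=None):
        now = (now or datetime.now()).time()
        for start, end, vid in SCHEDULE:
            if start <= now <= end:          # current time inside a playing period
                return vid
        # Otherwise pick the period whose boundary is closest to the current time.
        secs = lambda t: t.hour * 3600 + t.minute * 60 + t.second
        gap = lambda p: min(abs(secs(now) - secs(p[0])), abs(secs(now) - secs(p[1])))
        return min(SCHEDULE, key=gap)[2]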
In one possible implementation manner, after the target virtual playing device is controlled to play the target video resource, when a change in the pose information of the AR device is detected, the relative pose information of the AR device with respect to the target virtual playing device is determined based on the changed pose information of the AR device and the initial display pose information of the target virtual playing device; when the relative pose information does not satisfy the preset condition, the target virtual playing device is controlled to stop playing the target video resource.
The relative pose information may include a relative distance and a relative orientation included angle. The relative distance may be the distance between the AR device and the video playing area, determined from the position information in the pose information of the AR device and the position information in the initial display pose information of the target virtual playing device. The relative orientation included angle may be the angle between the shooting direction of the AR device and the facing direction of the target virtual playing device; for example, as shown in FIG. 2, it is the angle formed between the extension line of the AR device's shooting direction in the horizontal plane and the extension line of the target virtual playing device's facing direction in the horizontal plane.
When the relative pose information does not satisfy the preset condition, controlling the target virtual playing device to stop playing the target video resource may be: and controlling the target virtual playing equipment to stop playing the target video resource under the condition that the relative distance in the relative pose information is greater than a preset distance, or the included angle between the shooting direction of the AR equipment and the direction facing the video playing area in the relative pose information is not within a set angle range.
When the relative distance between the AR device and the target virtual playing device is greater than the set distance, the target virtual playing device is controlled to stop playing the target video resource, which avoids playback appearing unclear because the AR device is too far from the target virtual playing device; and when the relative orientation included angle is not within the set angle range, the target virtual playing device is controlled to stop playing the target video resource, so that the AR device stays within the optimal viewing angle range while the target video resource is played, improving the display effect of the target video resource.
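A minimal sketch of this stop condition; the 5-meter and 60-degree thresholds, and the convention that a head-on view gives a 0-degree included angle, are assumptions:

    import numpy as np

    MAX_DISTANCE_M = 5.0    # preset distance
    MAX_ANGLE_DEG = 60.0    # half-width of the set angle range

    def should_stop(device_pos, shoot_dir, screen_pos, screen_facing):
        distance = np.linalg.norm(np.asarray(screen_pos) - np.asarray(device_pos))
        # Angle between the AR device's shooting direction and the direction the
        # video playing area faces (0 degrees when viewing head-on).
        cos_a = np.dot(shoot_dir, -np.asarray(screen_facing)) / (
            np.linalg.norm(shoot_dir) * np.linalg.norm(screen_facing))
        angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
        return distance > MAX_DISTANCE_M or angle > MAX_ANGLE_DEG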
In another possible implementation manner, when it is detected that the pose information of the AR device changes, an AR scene picture displayed in the AR device may be determined based on the changed pose information of the AR device and initial display pose information of the target virtual playing device in a three-dimensional scene model corresponding to the target real scene; and then controlling the target virtual playing device to stop playing the target video resource under the condition that the occupation ratio of the area occupied by the video element of the target video resource contained in the AR scene picture is smaller than a set ratio.
For example, the preset proportion may be set to 50%, that is, when the area occupied by the video element of the target video resource contained in the AR scene picture is less than 50%, the target virtual playing device is controlled to stop playing the target video resource.
In this way, the video resource is played only when the area occupied by its video elements in the AR scene picture is large enough, which avoids wasting resources, improves the playing effect of the video resource, and improves the user's viewing experience.
In another possible implementation, when it is detected that the pose information of the AR device changes, after determining the AR scene picture displayed in the AR device, it may be further determined whether the AR scene picture includes a video element of the target video resource, and if the AR scene picture does not include the video element of the target video resource, the target virtual playing device is controlled to stop playing the target video resource.
In another possible implementation manner, when it is detected that the pose information of the AR device changes, after the AR scene picture displayed in the AR device is determined, the number of pixels occupied by video elements corresponding to the target video resource in the AR scene picture may also be detected, and when the number of pixels is smaller than a preset number, the target virtual playing device is controlled to stop playing the target video resource.
In another possible implementation manner, when a change in the pose information of the AR device is detected, if any one of the following conditions is detected, the target virtual playing device is controlled to stop playing the target video resource:
the relative pose information between the AR equipment and the target virtual playing equipment does not meet set conditions; the occupation ratio of the area occupied by the video elements of the target video resources contained in the AR scene picture is smaller than a set proportion; the AR scene picture does not contain video elements of the target video resource; the number of pixels occupied by video elements corresponding to the target video resources in the AR scene picture is less than the preset number.
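A minimal sketch combining the four stop conditions above in one decision; the ratio and pixel thresholds are illustrative, and video_mask is assumed to be a boolean per-pixel mask of the video element in the rendered AR scene picture:

    MIN_AREA_RATIO = 0.5   # set proportion of the AR scene picture
    MIN_PIXELS = 10_000    # preset pixel count for the video element

    def stop_playback(pose_ok, video_mask, frame_area):
        pixels = int(video_mask.sum())     # pixels covered by the video element
        ratio = pixels / frame_area
        return (not pose_ok                # relative pose condition violated
                or ratio < MIN_AREA_RATIO  # occupied area below the set proportion
                or pixels == 0             # no video element in the picture
                or pixels < MIN_PIXELS)    # fewer pixels than the preset number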
It will be understood by those skilled in the art that, in the above method, the order in which the steps are written does not imply a strict execution order or any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible inherent logic.
By the above method, after it is detected that the AR device is located in the preset target real scene, the target virtual playing device can be displayed in the AR device and controlled to play the target video resource. Because the virtual playing device occupies no physical space, it is not limited by physical display space, and both space resources and physical device resources are saved. In addition, after different AR devices enter the preset target real scene, each can automatically control its own target virtual playing device, which meets the viewing requirements of different AR devices for the target video resource and further improves the display effect of the target video resource.
Based on the same inventive concept, an embodiment of the present disclosure further provides a display control apparatus corresponding to the display control method. Because the principle by which the apparatus solves the problem is similar to that of the display control method of the embodiments of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 3, a schematic diagram of the architecture of a display control apparatus according to an embodiment of the present disclosure is shown. The apparatus includes: an acquisition module 301, a determination module 302, and a control module 303; wherein:
an obtaining module 301, configured to obtain a live-action image acquired by an AR device in real time;
a determining module 302, configured to determine whether the AR device is located in a preset target real scene based on the live-action image;
the control module 303 is configured to, when it is determined that the AR device is located in a preset target reality scene, display a target virtual playing device, and control the target virtual playing device to play a specified target video resource.
In a possible implementation, the determining module 302 is configured to determine whether the AR device is located in the preset target reality scene according to the following method:
inputting the live-action image captured by the AR device into a pre-trained neural network model to determine whether the AR device is located in the preset target real scene, wherein the neural network model is trained on sample live-action images carrying target real scene labels.
In a possible implementation, the determining module 302 is configured to determine whether the AR device is located in the preset target reality scene according to the following method:
determining pose information of the AR device based on the live-action image;
and determining whether the AR equipment is positioned in a preset target real scene or not based on the pose information of the AR equipment.
In a possible implementation manner, the control module 303, when presenting the target virtual playing device, is configured to:
and displaying the target virtual playing device in the AR device.
In a possible implementation manner, when the target virtual playing device is shown in the AR device, the control module 303 is configured to:
determining pose information of the AR device based on the live-action image;
determining initial display pose information of the target virtual playing device in a three-dimensional scene model corresponding to the target real scene based on the pose information of the AR device and preset relative pose information between the target virtual playing device and the AR device;
and displaying the target virtual playing device in the AR device based on the initial display pose information of the target virtual playing device in the three-dimensional scene model corresponding to the target real scene.
In a possible implementation, the control module 303 is further configured to:
responding to the movement operation of a user for the displayed target virtual playing device, and updating the initial display pose information of the target virtual playing device based on the movement operation;
and displaying the target virtual playing equipment based on the updated initial display pose information.
In a possible implementation manner, the control module 303, when controlling the target virtual playing device to play the specified target video resource, is configured to:
loading a target video resource corresponding to a target real scene where the AR equipment is located;
and controlling the target virtual playing device to play the loaded target video resource.
In a possible implementation manner, the control module 303, after controlling the target virtual playing device to play the specified target video resource, is further configured to:
under the condition that the change of the pose information of the AR equipment is detected, determining the relative pose information of the AR equipment relative to the target virtual playing equipment based on the changed pose information of the AR equipment and the initial display pose information of the target virtual playing equipment;
and under the condition that the relative pose information does not meet the preset condition, controlling the target virtual playing equipment to stop playing the target video resource.
In a possible implementation manner, the control module 303, when controlling the target virtual playing apparatus to stop playing the target video resource in a case that the relative pose information does not satisfy a preset condition, is configured to:
and controlling the target virtual playing equipment to stop playing the target video resource under the condition that the relative distance in the relative pose information is greater than a preset distance, or the included angle between the shooting direction of the AR equipment and the direction facing the video playing area in the relative pose information is not within a set angle range.
In a possible implementation, after controlling the target virtual playing device to play the specified target video resource, the control module 303 is further configured to:
determine, in a case where a change in the pose information of the AR device is detected, the AR scene picture displayed in the AR device, based on the changed pose information of the AR device and the initial display pose information of the target virtual playing device in the three-dimensional scene model corresponding to the target real scene; and
control the target virtual playing device to stop playing the target video resource in a case where the proportion of the AR scene picture occupied by video elements of the target video resource is smaller than a set proportion.
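As a minimal sketch of this visibility test, assume the virtual screen has already been projected into the current AR scene picture and clipped to the viewport, yielding its on-screen pixel area; the 5% threshold is an assumed set proportion:

```python
MIN_VISIBLE_PROPORTION = 0.05  # assumed set proportion of the AR scene picture

def should_stop_for_visibility(video_area_px: float, frame_area_px: float) -> bool:
    """Stop playback when the video elements of the target video resource
    occupy less than the set proportion of the AR scene picture."""
    return (video_area_px / frame_area_px) < MIN_VISIBLE_PROPORTION
```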
For the processing flow of each module in the apparatus and the interaction flow between the modules, reference may be made to the related description in the above method embodiments; details are not repeated here.
With the above apparatus, after it is detected that the AR device is located in the preset target real scene, the target virtual playing device can be displayed in the AR device and controlled to play the target video resource. Because a virtual playing device occupies no physical space, it is not limited by physical display space, which saves both space resources and physical device resources. In addition, after different AR devices enter the preset target real scene, the target virtual playing device can be controlled automatically for each of them, satisfying the viewing needs of different AR devices for the target video resource and further improving the display effect of the target video resource.
Based on the same technical concept, an embodiment of the present disclosure further provides a computer device. Referring to fig. 4, a schematic structural diagram of a computer device 400 provided in an embodiment of the present disclosure includes a processor 401, a memory 402, and a bus 403. The memory 402 is used for storing execution instructions and includes an internal memory 4021 and an external memory 4022. The internal memory 4021 temporarily stores operation data of the processor 401 and data exchanged with the external memory 4022, such as a hard disk; the processor 401 exchanges data with the external memory 4022 through the internal memory 4021. When the computer device 400 runs, the processor 401 communicates with the memory 402 through the bus 403, so that the processor 401 executes the following instructions:
acquiring a live-action image acquired by an AR device in real time;
determining, based on the live-action image, whether the AR device is located in a preset target real scene; and
displaying a target virtual playing device and controlling the target virtual playing device to play a specified target video resource in a case where it is determined that the AR device is located in the preset target real scene.
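Purely to tie the individual sketches above together (none of the callables below are prescribed by the disclosure; pose estimation and rendering are passed in as hypothetical callbacks, and the helper functions are the ones defined in the earlier sketches), the top-level instruction sequence could be organised as:

```python
from typing import Callable, Iterable

def display_control_loop(
    frames: Iterable[object],                 # live-action images, acquired in real time
    estimate_pose: Callable[[object], Pose],  # e.g. visual localisation against the scene model
    bounds: SceneBounds,
    scene_id: str,
    start_playback: Callable[[str], None],    # renders on the displayed virtual playing device
) -> None:
    """Hypothetical top-level loop combining the sketches above."""
    playing = False
    for image in frames:
        pose = estimate_pose(image)
        if not playing and is_in_target_scene(pose, bounds):
            initial_display_pose(pose)  # place the target virtual playing device
            start_playback(load_target_video(scene_id))
            playing = True
```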
An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs the steps of the display control method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
A computer program product of the display control method provided in the embodiments of the present disclosure includes a computer-readable storage medium storing program code; instructions included in the program code may be used to execute the steps of the display control method described in the above method embodiments. For details, reference may be made to the above method embodiments, which are not repeated here.
An embodiment of the present disclosure further provides a computer program which, when executed by a processor, implements any one of the methods of the foregoing embodiments. The corresponding computer program product may be implemented in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, it is embodied as a software product, such as a Software Development Kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, for the specific working processes of the system and apparatus described above, reference may be made to the corresponding processes in the foregoing method embodiments; these are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division into units is only one kind of logical division, and other divisions are possible in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection of apparatuses or units through some communication interfaces, and may be electrical, mechanical, or in another form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or some of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are merely specific implementations of the present disclosure, used to illustrate rather than limit its technical solutions, and the scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the art may, within the technical scope of the present disclosure, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions for some of the technical features; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure and shall be covered by them. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (13)

1. A display control method, comprising:
acquiring a live-action image acquired by an AR device in real time;
determining, based on the live-action image, whether the AR device is located in a preset target real scene; and
displaying a target virtual playing device and controlling the target virtual playing device to play a specified target video resource in a case where it is determined that the AR device is located in the preset target real scene.
2. The method of claim 1, wherein whether the AR device is located in the preset target real scene is determined by:
inputting the live-action image acquired by the AR device into a pre-trained neural network model to determine whether the AR device is located in the preset target real scene, wherein the neural network model is trained based on sample live-action images carrying target real scene labels.
3. The method of claim 1, wherein whether the AR device is located in the preset target real scene is determined by:
determining pose information of the AR device based on the live-action image; and
determining, based on the pose information of the AR device, whether the AR device is located in the preset target real scene.
4. The method of claim 1, wherein the displaying the target virtual playing device comprises:
displaying the target virtual playing device in the AR device.
5. The method of claim 4, wherein the displaying the target virtual playing device in the AR device comprises:
determining pose information of the AR device based on the live-action image;
determining initial display pose information of the target virtual playing device in a three-dimensional scene model corresponding to the target real scene, based on the pose information of the AR device and preset relative pose information between the target virtual playing device and the AR device; and
displaying the target virtual playing device in the AR device based on the initial display pose information of the target virtual playing device in the three-dimensional scene model corresponding to the target real scene.
6. The method according to any one of claims 1 to 5, further comprising:
updating, in response to a movement operation performed by a user on the displayed target virtual playing device, the initial display pose information of the target virtual playing device based on the movement operation; and
displaying the target virtual playing device based on the updated initial display pose information.
7. The method according to claim 1, wherein the controlling the target virtual playing device to play the specified target video resource comprises:
loading a target video resource corresponding to the target real scene in which the AR device is located; and
controlling the target virtual playing device to play the loaded target video resource.
8. The method of claim 5, wherein after controlling the target virtual playing device to play the specified target video resource, the method further comprises:
determining, in a case where a change in the pose information of the AR device is detected, relative pose information of the AR device with respect to the target virtual playing device, based on the changed pose information of the AR device and the initial display pose information of the target virtual playing device; and
controlling the target virtual playing device to stop playing the target video resource in a case where the relative pose information does not satisfy a preset condition.
9. The method according to claim 8, wherein the controlling the target virtual playing device to stop playing the target video resource in a case where the relative pose information does not satisfy the preset condition comprises:
controlling the target virtual playing device to stop playing the target video resource in a case where a relative distance in the relative pose information is greater than a preset distance, or an included angle, in the relative pose information, between a shooting direction of the AR device and a direction facing a video playing area is not within a set angle range.
10. The method of claim 5, wherein after controlling the target virtual playing device to play the specified target video resource, the method further comprises:
determining, in a case where a change in the pose information of the AR device is detected, an AR scene picture displayed in the AR device, based on the changed pose information of the AR device and the initial display pose information of the target virtual playing device in the three-dimensional scene model corresponding to the target real scene; and
controlling the target virtual playing device to stop playing the target video resource in a case where the proportion of the AR scene picture occupied by video elements of the target video resource is smaller than a set proportion.
11. A display control apparatus, comprising:
an acquisition module configured to acquire a live-action image acquired by an AR device in real time;
a determining module configured to determine, based on the live-action image, whether the AR device is located in a preset target real scene; and
a control module configured to display a target virtual playing device and control the target virtual playing device to play a specified target video resource in a case where it is determined that the AR device is located in the preset target real scene.
12. A computer device, comprising a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the computer device runs, the processor communicates with the memory through the bus; and the machine-readable instructions, when executed by the processor, perform the steps of the display control method according to any one of claims 1 to 10.
13. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, performs the steps of the display control method according to any one of claims 1 to 10.