CN111862341A - Virtual object driving method and device, display equipment and computer storage medium - Google Patents



Publication number
CN111862341A
CN111862341A
Authority
CN
China
Prior art keywords
virtual object
image
target
adjusting
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010659421.0A
Other languages
Chinese (zh)
Inventor
侯欣如
栾青
许亲亲
李园园
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202010659421.0A
Publication of CN111862341A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Abstract

The embodiments of the application provide a driving method and apparatus for a virtual object, a display device, and a computer-readable storage medium. The method includes: while the display device moves along a preset sliding track, acquiring a real scene image during the movement through an image acquisition module of the display device, wherein a virtual object is displayed on a screen of the display device; determining, when it is determined based on the real scene image that a trigger condition for adjusting the display form of the virtual object is satisfied, target driving data for adjusting the display form of the virtual object; and adjusting the display form of the virtual object based on the target driving data, thereby improving the display effect of the virtual object.

Description

Virtual object driving method and device, display equipment and computer storage medium
Technical Field
The embodiments of the application relate to the technical field of augmented reality, and relate to, but are not limited to, a driving method and apparatus for a virtual object, a display device, and a computer storage medium.
Background
With people's growing pursuit of cultural experiences, more and more people visit exhibition halls to tour and learn. Exhibition halls rely mainly on human guides, who explain the exhibited content, which involves a heavy workload. In the related art, exhibited content can be introduced through planar media (such as posters) to reduce the manual workload, but because planar media are limited in space, the amount of information they convey is small and the presentation effect is monotonous.
Disclosure of Invention
The embodiment of the application provides a driving method and device of a virtual object, display equipment and a computer storage medium.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a driving method of a virtual object, which comprises the following steps:
in the process that the display equipment moves along a preset sliding track, acquiring a real scene image in the moving process through an image acquisition module of the display equipment; wherein a virtual object is displayed on a screen of the display device;
determining target driving data for adjusting the display form of the virtual object in the case that it is determined based on the real scene image that a trigger condition for adjusting the display form of the virtual object is satisfied;
and adjusting the display form of the virtual object based on the target driving data.
In some embodiments, the method further comprises:
determining a target entity in a real scene based on the real scene image;
determining attribute information of the target entity;
and under the condition that the target driving data corresponding to the attribute information is determined to exist, determining that a trigger condition for adjusting the display form of the virtual object is met.
In some embodiments, the method further comprises:
identifying a target image region of a target entity from the real scene image;
matching the target image area with a reference image in a pre-stored image library to obtain the similarity between the reference image in the pre-stored image library and the target image area;
and determining that a trigger condition for adjusting the display form of the virtual object is met when a target image exists in the pre-stored image library whose similarity with the target image area is greater than or equal to a similarity threshold.
In some embodiments, the method further comprises:
identifying a target image region of a target entity from the real scene image;
determining size information of the target image area;
and determining that a trigger condition for adjusting the virtual object display form is met if the size information is greater than or equal to a size threshold.
In some embodiments, the method further comprises:
identifying a target entity in a real scene from the real scene image;
acquiring distance information between the target entity and the display equipment;
determining that a trigger condition for adjusting the display form of the virtual object is satisfied if the distance information is less than or equal to a distance threshold.
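The image-based trigger conditions above each reduce to a threshold comparison on a quantity derived from the real scene image. A minimal sketch follows; the function names and default threshold values are illustrative assumptions, not values fixed by this application:

```python
# Illustrative threshold checks for the trigger conditions described above.
# All names and default thresholds are assumptions of this sketch.

def similarity_trigger(similarity: float, threshold: float = 0.8) -> bool:
    """Met when a pre-stored reference image is similar enough to the
    target image region (greater than or equal to the threshold)."""
    return similarity >= threshold

def size_trigger(region_pixel_area: int, size_threshold: int = 10_000) -> bool:
    """Met when the target image region occupies enough pixels in the frame."""
    return region_pixel_area >= size_threshold

def distance_trigger(distance_m: float, distance_threshold: float = 1.5) -> bool:
    """Met when the display device is close enough to the target entity."""
    return distance_m <= distance_threshold
```

Each check alone suffices to satisfy the trigger condition in the corresponding embodiment; which one is used depends on the deployment.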
In some embodiments, the method further comprises:
acquiring operation information aiming at a display screen of the display equipment;
determining driving data corresponding to the operation information;
and adjusting the display form of the virtual object based on the driving data corresponding to the operation information.
In some embodiments, the target driving data is used at least to characterize a display form of the virtual object;
correspondingly, the adjusting the virtual object display form based on the target driving data includes:
and adjusting the current display form of the virtual object to the display form corresponding to the target driving data, wherein the display form at least comprises one of walking, turning, standing, squatting and mouth shape.
An embodiment of the present application provides a driving apparatus for a virtual object, where the driving apparatus for the virtual object includes:
the display device comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring a real scene image in the moving process through an image acquisition module of the display device in the moving process of the display device along a preset sliding track; wherein a virtual object is displayed on a screen of the display device;
a first determination module, configured to determine target driving data for adjusting a display form of a virtual object if it is determined, based on the real scene image, that a trigger condition for adjusting the display form of the virtual object is satisfied;
And the first adjusting module is used for adjusting the display form of the virtual object based on the target driving data.
In some embodiments, the driving means of the virtual object further comprises:
a second determination module to determine a target entity in a real scene based on the real scene image;
a third determining module, configured to determine attribute information of the target entity;
and the fourth determining module is used for determining that the triggering condition for adjusting the display form of the virtual object is met under the condition that the driving data corresponding to the attribute information exists.
In some embodiments, the driving means of the virtual object further comprises:
a fifth determining module, configured to determine a target image region of a target entity from the real scene image;
the matching module is used for matching the target image area with a reference image in a pre-stored image library to obtain the similarity between the reference image in the pre-stored image library and the target image area;
a sixth determining module, configured to determine that a trigger condition for adjusting the display form of the virtual object is satisfied when a target image exists in the pre-stored image library whose similarity with the target image area is greater than or equal to a similarity threshold.
In some embodiments, the driving means of the virtual object further comprises:
a seventh determining module, configured to determine a target image region of a target entity from the real scene image;
an eighth determining module, configured to determine size information of the target image area;
a ninth determining module, configured to determine that a trigger condition for adjusting the virtual object display form is satisfied when the size information is greater than or equal to a size threshold.
In some embodiments, the driving means of the virtual object further comprises:
a tenth determination module to identify a target entity in a real scene from the real scene image;
the first acquisition module is used for acquiring distance information between the target entity and the display equipment;
an eleventh determining module, configured to determine that a trigger condition for adjusting the display form of the virtual object is satisfied when the distance information is less than or equal to a distance threshold.
In some embodiments, the driving means of the virtual object further comprises:
the second acquisition module is used for acquiring operation information aiming at a display screen of the display equipment;
a twelfth determining module, configured to determine driving data corresponding to the operation information;
And the second adjusting module is used for adjusting the display form of the virtual object based on the driving data corresponding to the operation information.
In some embodiments, the target driving data is used at least to characterize a display form of the virtual object;
correspondingly, the first adjusting module is configured to adjust the current display form of the virtual object to the display form corresponding to the target driving data, where the display form at least includes one of walking, turning, standing, squatting, and mouth shape.
An embodiment of the present application provides a display device, the display device at least includes:
the image acquisition module is used for acquiring a real scene image;
a screen for displaying at least the real scene image and a virtual object;
the processor is connected with the image acquisition module and the screen;
a memory for storing a computer program operable on the processor; wherein the computer program, when executed by a processor, implements the steps of the method of driving the virtual object.
The embodiment of the application provides a computer storage medium, wherein computer-executable instructions are stored in the computer storage medium and configured to execute the steps of the driving method of the virtual object.
The embodiments of the application provide a driving method and apparatus for a virtual object, a display device, and a computer storage medium. In the driving method, while the display device moves along a preset track and acquires a real scene image, when it is determined based on the real scene image that the trigger condition for adjusting the display form of the virtual object (such as a virtual interpreter) is satisfied, the display form of the virtual interpreter is adjusted based on the target driving data. Explanation by the virtual interpreter is thereby realized, and adjusting its display form can improve its display effect.
Drawings
In the drawings, like numerals may describe similar components in different views. The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments discussed herein.
Fig. 1 is a schematic view of an application scenario of a driving method for a virtual object according to an embodiment of the present application;
fig. 2 is a schematic flowchart of an implementation process of a driving method for a virtual object according to an embodiment of the present application;
fig. 3 is a schematic flowchart of another implementation of a driving method for a virtual object according to an embodiment of the present application;
Fig. 4 is a schematic flowchart of another implementation of the driving method for a virtual object according to the embodiment of the present application;
fig. 5 is a schematic flowchart of another implementation of the method for driving a virtual object according to the embodiment of the present application;
fig. 6 is a schematic flowchart of another implementation of the method for driving a virtual object according to the embodiment of the present application;
fig. 7 is a schematic flowchart of another implementation of the method for driving a virtual object according to the embodiment of the present application;
fig. 8 is a schematic structural diagram of a driving apparatus for a virtual object according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a display device according to an embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be considered as limiting the present application; all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Where descriptions such as "first/second/third" appear in this application, the following note applies: the terms "first/second/third" merely distinguish similar objects and do not imply a specific ordering of the objects. It should be understood that "first/second/third" may be interchanged in a specific order or sequence where permitted, so that the embodiments described herein can be implemented in an order other than that shown or described.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Based on the problems in the related art, embodiments of the present application provide a driving method for a virtual object, where the driving method for a virtual object is applied to a display device, and the display device is an electronic device with a display function, for example, the display device may be a tablet computer, a mobile terminal, and the like, and the mobile terminal may be a mobile phone. The functions implemented by the driving method of the virtual object may be implemented by a processor in the display apparatus calling a program code, wherein the program code may be stored in a computer storage medium.
Fig. 1 is a schematic view of an application scenario of a driving method for a virtual object according to an embodiment of the present application, as shown in fig. 1, a display device 101 is disposed on a preset sliding rail 102, the display device 101 can move along the sliding rail 102, the display device 101 is disposed in front of a display stand, at least one target entity 103 is disposed on the display stand, a virtual object 104 is displayed in the display device 101, and the virtual object 104 is used for explaining the target entity 103. The following describes a driving method of a virtual object in detail with reference to a scenario in which the driving method of the virtual object is applied.
Fig. 2 is a schematic flow chart of an implementation of a method for driving a virtual object according to an embodiment of the present application, and as shown in fig. 2, the method includes:
step S201, in the process that the display device moves along a preset sliding track, a real scene image in the moving process is collected through an image collecting module of the display device.
In the embodiment of the application, the display device can be arranged on the preset sliding rail, and the display device can move along the preset sliding rail through external force. In the process that the display equipment moves along the preset sliding track, the image acquisition module of the display equipment acquires images of a real scene, and the image acquisition module is arranged on one side of a screen of the display equipment. The image acquisition module may be a camera and the real scene may be a scene including a display stand displaying an item.
In the embodiment of the application, a virtual object is displayed on a screen of the display device. The virtual object may be a virtual interpreter, whose avatar may be any of the following: a human, a robot, a cartoon character, an animal, and the like, which is not limited in this application. In some embodiments, the virtual object may be a virtual effect displayed on the screen of the display device. After the display device acquires the image of the real scene through the image acquisition module, the virtual object is superimposed on the image of the real scene to form an Augmented Reality (AR) image, and the AR image is displayed on the display device. When viewing the acquired image on the display device, the user thus sees the virtual object superimposed on the real scene image.
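The AR composition step can be sketched as per-pixel alpha blending of the virtual object's rendered sprite over the captured frame. This is a minimal illustration, not the application's actual rendering pipeline; the function name and array layout are assumptions:

```python
import numpy as np

def render_ar_frame(scene: np.ndarray, sprite: np.ndarray,
                    alpha: np.ndarray, top: int, left: int) -> np.ndarray:
    """Overlay the virtual object's sprite (H x W x 3, with per-pixel
    alpha in [0, 1]) onto the captured real-scene frame at (top, left),
    returning a new AR frame and leaving the input frame untouched."""
    frame = scene.astype(np.float32).copy()
    h, w = sprite.shape[:2]
    region = frame[top:top + h, left:left + w]
    frame[top:top + h, left:left + w] = (
        alpha[..., None] * sprite + (1 - alpha[..., None]) * region
    )
    return frame.astype(np.uint8)
```

A real implementation would render the virtual object with a 3D engine; the blending principle is the same.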
Step S202, under the condition that the triggering condition for adjusting the virtual object display form is determined to be met based on the real scene image, determining target driving data for adjusting the virtual object display form.
In some embodiments, a target entity in a real scene may be determined through a real scene image, and attribute information of the target entity is determined, where a trigger condition for adjusting a display form of a virtual object is satisfied in the presence of driving data corresponding to the attribute information.
In some embodiments, a target image region of a target entity may be further determined from the real scene image, the target image region is matched with a reference image in a pre-stored image library, and when matching is successful, it is determined that a trigger condition for adjusting the display form of the virtual object is satisfied.
In some embodiments, a target image region of a target entity may also be determined from the real scene image; determining size information of the target image area; when the size information is larger than or equal to a size threshold value, determining that a trigger condition for adjusting the virtual object display form is met.
In some embodiments, a target entity in a real scene may also be determined based on the real scene image; acquiring distance information between the target entity and the self equipment; when the distance information is smaller than or equal to a distance threshold value, determining that a trigger condition for adjusting the display form of the virtual object is met.
When it is determined that the trigger condition for adjusting the display form of the virtual object is satisfied, the display device may determine target driving data for adjusting the display form of the virtual object based on a correspondence relationship with driving data established in advance.
In this embodiment of the application, the pre-established correspondence may be a correspondence between reference images in a stored image library and driving data. For example, when a target area image of a target entity is determined and its similarity with a target image in the image library is greater than a similarity threshold, the driving data corresponding to that target image is determined to be the target driving data. If driving data A corresponds to reference image A, then when the similarity between the target area image and reference image A is greater than the similarity threshold, driving data A is determined to be the target driving data.
In some embodiments, the pre-established correspondence may also be a correspondence between the trigger condition and the drive data; for example, the trigger condition corresponds to the driving data a, and when the trigger condition is satisfied, the driving data a is determined as the target driving data. That is, when a trigger condition for adjusting the presentation form of the virtual object is satisfied, the presentation form of the virtual object is adjusted.
In some embodiments, the pre-established correspondence may also be a correspondence between attribute information of the target entity and the driving data, where the attribute information may be the category, integrity, and the like of the target entity. Taking the category as an example, different target entities may be classified, with different categories corresponding to different driving data; when the category of the target entity is determined, the driving data corresponding to that category is determined to be the target driving data. In the embodiment of the present application, the target driving data is at least used to characterize a display form of the virtual object.
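The pre-established correspondences described above can be modeled as a lookup table from a recognition result (a matched reference image, a trigger condition, or an attribute value) to driving data. The keys and driving-data payloads below are illustrative assumptions:

```python
# Illustrative correspondence table between recognition results and
# driving data; absence of a key means the trigger condition is not met.

DRIVE_TABLE = {
    "reference_image_a": {"form": "turning", "narration": "exhibit A intro"},
    "category_porcelain": {"form": "squatting", "narration": "porcelain intro"},
}

def target_driving_data(key: str):
    """Return the driving data corresponding to a recognition result,
    or None when no correspondence exists."""
    return DRIVE_TABLE.get(key)
```

Returning None here is what the method treats as "the trigger condition for adjusting the display form is not satisfied."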
Step S203, adjusting the display form of the virtual object based on the target driving data.
In the embodiment of the application, after acquiring the target driving data, the display device adjusts the display form of the virtual object based on the target driving data, that is, it adjusts the current display form of the virtual object to the display form corresponding to the target driving data. For example, while the display device moves along the preset sliding track, the display form of the virtual object is the walking form; if the target driving data characterizes the turning form, the virtual object is adjusted from the walking form to the turning form. Adjusting the presentation form of the virtual object may be regarded as changing the action of the virtual object.
In the embodiment of the present application, when the trigger condition for adjusting the display form of the virtual object is satisfied, the obtained target driving data is different from the driving data corresponding to the current display form. During adjustment of the display form, the image frames of the current display form are changed to the image frames of the adjusted display form, thereby presenting an animation effect. The display form includes at least one of walking, turning, standing, squatting, and mouth shape; adjusting the display form may be, for example, adjusting a standing form to a squatting form or a walking form to a turning form. The adjusted display form may be set according to the scene.
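The form adjustment described above can be sketched as a small state change on the virtual object, with the rendering layer swapping animation frames when the state changes. The class name, form names, and driving-data shape are assumptions of this sketch:

```python
# Illustrative sketch of display-form adjustment driven by target driving
# data; the rendering layer would switch animation frame sequences when
# self.form changes.

class VirtualObject:
    FORMS = {"walking", "turning", "standing", "squatting", "mouth_shape"}

    def __init__(self, form: str = "walking"):
        self.form = form

    def adjust(self, target_driving_data: dict) -> str:
        """Adjust the current display form to the form the driving data
        characterizes; reject forms the object does not support."""
        new_form = target_driving_data["form"]
        if new_form not in self.FORMS:
            raise ValueError(f"unknown display form: {new_form}")
        self.form = new_form
        return self.form
```

For example, an object in the walking form that receives driving data characterizing the turning form transitions to turning.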
In some embodiments, the driving data further includes explanation content, and the display device drives the virtual object to turn around and then begin explaining the content corresponding to the target entity in the real scene image.
According to the driving method of the virtual object provided by the embodiment of the application, while the display device moves along the preset track and acquires the real scene image, when it is determined based on the real scene image that the trigger condition for adjusting the display form of the virtual object is satisfied, the display form of the virtual object is adjusted based on the target driving data. Explanation based on the virtual object is thereby realized, and the display effect of the virtual object can be improved by adjusting its display form.
In some embodiments, fig. 3 is a schematic flow chart of another implementation of the method for driving a virtual object provided in the embodiment of the present application, and as shown in fig. 3, the method further includes:
step S1, determining a target entity in the real scene based on the real scene image.
In some embodiments, the real scene image may be input into a first recognition model trained in advance, and a target entity in the real scene may be determined. In the embodiment of the present application, an initial first recognition model may be trained based on a sample image set, where images in the sample image set are labeled with target entities. The initial first recognition model may be a neural network model, the initial first recognition model is trained based on a sample image set to obtain a trained first recognition model, and then a target entity in a real scene image is determined based on the trained first recognition model, that is, a target image region where the target entity is located is determined.
In some embodiments, similarity calculation may be performed between the real scene image and reference images in a preset image library, where target entities are labeled in advance in the reference images. Whether the similarity between the real scene image and a reference image is greater than a similarity threshold is then determined; when the similarity with a target reference image in the preset image library is greater than the similarity threshold, it may be determined that the target entity in the real scene is the entity corresponding to that target reference image.
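The library-matching alternative can be sketched with any image-similarity measure; here cosine similarity over feature vectors stands in for whatever comparison the deployment uses. The feature representation, labels, and threshold are assumptions of this sketch:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def identify_entity(scene_feat, library, threshold=0.8):
    """Return the entity label of the best-matching reference image whose
    similarity is greater than or equal to the threshold, else None."""
    best_label, best_sim = None, threshold
    for label, ref_feat in library.items():
        sim = cosine_similarity(scene_feat, ref_feat)
        if sim >= best_sim:
            best_label, best_sim = label, sim
    return best_label
```

A None result means no target entity was recognized and the trigger condition is not met.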
Step S2, determining attribute information of the target entity.
In the embodiment of the present application, the attribute information may include, but is not limited to, at least one of the attributes of the category, the integrity, and the like of the target entity.
In some embodiments, after the target entity is determined, the image corresponding to the target entity may be input into a second recognition model trained in advance to determine the attribute information of the target entity. The initial second recognition model may be a neural network model and may be trained based on a sample image set in which the images are labeled with attribute information of target entities; during training, the predicted attribute information of each sample image is obtained through the initial second recognition model, and the parameters of the model are adjusted based on the predicted attribute information and the labeled attribute information of the sample images, so as to obtain the trained second recognition model.
In some embodiments, the similarity between the image of the target entity in the real scene image and reference images in a preset image library can also be calculated, where the reference images are labeled with attribute information in advance. When the similarity between the image of the target entity and a reference image in the preset image library is greater than or equal to the similarity threshold, the image of the target entity matches that target reference image. Because the reference images in the image library carry preset attribute information, the attribute information of the target entity can be determined based on the matched target reference image.
In some embodiments, when the attribute information includes the integrity, a pixel area occupied by the target entity in the real scene image may be determined, and the integrity of the target entity may be determined by comparing the pixel area occupied by the target entity in the real scene image with a preset pixel area of the target entity.
In some embodiments, when the attribute information includes the integrity, a target image region where the target entity is located may be further segmented by an image segmentation technique, a ratio of a pixel area of the target image region to a pixel area of the image of the real scene is determined, and the integrity of the target entity is determined based on the ratio.
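Both integrity computations above are pixel-area ratios; a minimal sketch (the function name and clamping behavior are assumptions):

```python
def integrity(visible_pixel_area: int, full_pixel_area: int) -> float:
    """Ratio of the target entity's visible pixel area in the frame to its
    preset full pixel area, clamped to [0, 1]; 1.0 means fully visible."""
    if full_pixel_area <= 0:
        raise ValueError("full_pixel_area must be positive")
    return min(visible_pixel_area / full_pixel_area, 1.0)
```

The same function serves the segmentation-based variant if `full_pixel_area` is taken as the reference area of the segmented region.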
Step S3, when it is determined that the drive data corresponding to the attribute information exists, determining that a trigger condition for adjusting the display form of the virtual object is satisfied.
In the embodiment of the application, the display device stores the driving data corresponding to the attribute information in advance. For example, the attribute information is a category, and the category includes a first category and a second category; a correspondence between the first category and driving data A is established in advance, while the second category has no corresponding driving data. When the category of the target entity is obtained, whether corresponding driving data exists can be determined according to the category. When the target entity is determined to be in the first category, corresponding driving data A exists, so the trigger condition for adjusting the display form of the virtual object is met, and the display form of the virtual interpreter is then adjusted based on driving data A. When the target entity is determined to be in the second category, no corresponding driving data exists, that is, the trigger condition is not met, and the display device maintains the current state of the virtual interpreter.
In some embodiments, the attribute information is the integrity: an integrity greater than the integrity threshold is set to have corresponding driving data, and an integrity smaller than the integrity threshold has no corresponding driving data. When the integrity of the target entity is obtained, whether corresponding driving data exists can be determined according to the integrity, so as to determine whether the trigger condition for adjusting the display form of the virtual object is satisfied. When the integrity is determined to be greater than the integrity threshold, the trigger condition for adjusting the display form of the virtual object is determined to be met; when the integrity is determined to be less than the integrity threshold, the trigger condition is not met, and the display device maintains the current state of the virtual instructor.
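The attribute-to-driving-data lookup described in these two examples can be sketched as below. The dictionary contents, the 0.9 threshold, and all names are illustrative assumptions; a `None` result corresponds to the trigger condition not being met, in which case the display device keeps the current state.

```python
INTEGRITY_THRESHOLD = 0.9  # assumed value for illustration

# Only the first category has a pre-established correspondence;
# the second category has no corresponding driving data.
CATEGORY_DRIVE_DATA = {"first_category": "drive_data_A"}

def drive_data_for_category(category):
    """Return the driving data for the category, or None if none exists
    (i.e. the trigger condition is not met)."""
    return CATEGORY_DRIVE_DATA.get(category)

def integrity_trigger(integrity):
    """The trigger condition is met only when the integrity exceeds the threshold."""
    return integrity > INTEGRITY_THRESHOLD
```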
In the embodiment of the application, different attribute information can be set to correspond to different driving data, so that the display effect of the virtual object is improved.
In the embodiment of the application, the target entity in the real scene is determined through the real scene image, the attribute information of the target entity is determined, whether the triggering condition for adjusting the display form of the virtual object is met or not is determined based on the attribute information, when the triggering condition for adjusting the display form of the virtual object is met, the display form of the virtual object is adjusted according to the driving data corresponding to the attribute information, and the display effect of the virtual object can be improved.
In some embodiments, fig. 4 is a schematic flowchart of a further implementation of the method for driving a virtual object provided in the embodiment of the present application, and as shown in fig. 4, the method further includes:
in step S11, a target image region of a target entity is identified from the real scene image.
In the embodiment of the application, the real scene image may be input into a first recognition model trained in advance to identify the target image region of the target entity.
In some embodiments, step S11 may also be implemented by calculating the similarity between the real scene image and a plurality of reference images in a preset image library, so as to identify the target entity from the real scene image. When the similarity between the real scene image and a target reference image is greater than a preset similarity threshold, the target entity in the real scene is considered to be the entity corresponding to that reference image, so that the target image area of the target entity in the real scene is determined.
In some embodiments, the image of the real scene may be segmented to obtain a plurality of image regions, and then the image regions obtained by the segmentation are matched with a plurality of reference images in a preset image library, where different reference images include image regions of different target entities, and when a reference image satisfying a matching condition with a certain image region obtained by the segmentation exists in the preset image library, the image region is determined as a target image region including the target entity, and an entity corresponding to the reference image is determined as the target entity in the real scene.
Step S12, matching the target image area with a reference image in a pre-stored image library to obtain a similarity between the reference image in the pre-stored image library and the target image area.
In the embodiment of the application, after the target image area is obtained, it may be matched against the images stored on the display device itself, so that the similarity between each reference image in the pre-stored image library and the target image area can be calculated. Illustratively, the reference images in the pre-stored image library are the image areas corresponding to the target entities to be explained. Illustratively, a cosine similarity between the target image area and a reference image in the pre-stored image library is calculated.
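The cosine similarity mentioned above can be computed over flattened image (or feature) vectors, for example as follows. This is a generic sketch, not the specific matching pipeline of the embodiment.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two image regions, treated as
    flattened feature vectors of equal length."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0
```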
Step S13, when the similarity between a reference image in the pre-stored image library and the target image area is greater than or equal to the similarity threshold, determining that a trigger condition for adjusting the display form of the virtual object is satisfied.
In this embodiment of the application, the similarity threshold may be preset, and whether the trigger condition for adjusting the display form of the virtual object is satisfied may be determined by comparing the similarity with the preset similarity threshold. And when the similarity is greater than or equal to the similarity threshold, a trigger condition for adjusting the display form of the virtual object is met, and at the moment, the display equipment determines target driving data for adjusting the display form of the virtual object. And when the similarity is smaller than the similarity threshold, the triggering condition for adjusting the display form of the virtual object is not met, and the display equipment maintains the current display state of the virtual object.
In the embodiment of the application, when determining the driving data for adjusting the display form of the virtual object, the target driving data is determined according to a pre-stored correspondence between target entity images and driving data. For example, the target image corresponds to the driving data A; when it is determined that the similarity between the target image area and the target image is greater than the similarity threshold, the driving data A is determined to be the target driving data. In the embodiment of the application, the target image area of the target entity is identified from the real scene image and matched against the reference images in the pre-stored image library; when the matching is successful (namely, the similarity is greater than the similarity threshold), the trigger condition for adjusting the display form of the virtual object is met, and the driving data corresponding to the target image is acquired to adjust the display form of the virtual object, which can improve the display effect of the virtual object.
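Steps S12–S13 together with the driving-data lookup can be sketched as a best-match search over the pre-stored library. The threshold value, the helper cosine function, and the `(reference_vector, drive_data)` library layout are assumptions for illustration; `None` means the trigger condition is not met and the current display state is kept.

```python
import math

def _cosine(a, b):
    """Plain cosine similarity over two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    denom = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / denom if denom else 0.0

SIMILARITY_THRESHOLD = 0.8  # assumed value

def target_drive_data(target_region, library):
    """library: list of (reference_vector, drive_data) pairs.
    Return the driving data of the best match at or above the
    threshold, else None."""
    best_sim, best_data = 0.0, None
    for ref, data in library:
        sim = _cosine(target_region, ref)
        if sim >= SIMILARITY_THRESHOLD and sim > best_sim:
            best_sim, best_data = sim, data
    return best_data
```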
In some embodiments, fig. 5 is a schematic flowchart of still another implementation of the method for driving a virtual object provided in the embodiment of the present application, and as shown in fig. 5, the method further includes:
in step S21, a target image region of a target entity is identified from the real scene image.
For the process of step S21, reference may be made to the description of step S11 above, which is not repeated here.
In step S22, size information of the target image area is determined.
In the embodiment of the present application, the size information of the target image area may be determined by calculating the pixel area of the target image area. For example, the number of pixels occupied by the target image area is counted, and the size information is determined from the number of pixels and the area of each pixel.
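A minimal sketch of this pixel-counting step, assuming the target image area is given as a binary segmentation mask (the mask representation and per-pixel area parameter are assumptions):

```python
import numpy as np

def region_size(mask, per_pixel_area=1.0):
    """Size information of a target image area: the number of foreground
    pixels in a binary segmentation mask times the area of one pixel."""
    return float(np.count_nonzero(mask)) * per_pixel_area
```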
Step S23, determining that a trigger condition for adjusting the virtual object display form is satisfied when the size information is greater than or equal to a size threshold.
In the embodiment of the present application, the size threshold may be preset, and whether the triggering condition for adjusting the display form of the virtual object is satisfied may be determined by comparing the size information with the preset size threshold. When the size information is larger than or equal to the size threshold, determining that a trigger condition for adjusting the display form of the virtual object is met, and determining target driving data for adjusting the display form of the virtual object by the display device. And when the size information is smaller than the size threshold, the triggering condition for adjusting the display form of the virtual object is not met, and the display equipment maintains the current display state of the virtual object.
According to the method provided by the embodiment of the application, the target image area of the target entity is identified from the real scene image and its size information (such as the pixel area) is determined. When the pixel area is larger than the size threshold, the trigger condition for adjusting the display form of the virtual object is met, and the driving data corresponding to the trigger condition is obtained and determined as the target driving data, so that the display form of the virtual object is adjusted, which can improve the display effect of the virtual object.
In some embodiments, fig. 6 is a schematic flowchart of a further implementation of the method for driving a virtual object provided in the embodiment of the present application, and as shown in fig. 6, the method further includes:
step S31, identifying a target entity in the real scene from the real scene image.
For the process of step S31, reference may be made to the description of step S11 above, which is not repeated here.
Step S32, obtaining distance information between the target entity and the display device.
For example, the display device may determine distance information between the target entity and the own device in the real scene through a distance sensor provided on the own device.
Step S33, determining that a trigger condition for adjusting the display form of the virtual object is satisfied when the distance information is less than or equal to a distance threshold.
In the embodiment of the present application, after the distance information is determined, it may be compared with a preset distance threshold to determine whether a trigger condition for adjusting the display form of the virtual object is satisfied. When the distance information is greater than the distance threshold, the trigger condition for adjusting the display form of the virtual object is not met, and the display device maintains the current display state of the virtual object. When the distance information is smaller than or equal to the distance threshold, the trigger condition for adjusting the display form of the virtual object is met, and the display device determines target driving data for adjusting the display form of the virtual object. In the embodiment of the application, setting the distance threshold ensures that, when the display device meets the trigger condition while moving, the target entity is displayed at a suitable size on the screen of the display device.
According to the driving method of the virtual object provided in the embodiment of the application, whether the trigger condition for adjusting the display form of the virtual object is met is judged by obtaining the distance information between the target entity and the display device. When the trigger condition is determined to be met, the driving data corresponding to the image of the target entity can be obtained and the display form of the virtual object adjusted, which enriches the display forms of the virtual object.
Based on the foregoing embodiments, after step S203, the method further includes:
and step S204, acquiring operation information aiming at the display screen of the display equipment.
In the embodiment of the application, when a user operates the display screen of the display device, the display device acquires the operation information. In the embodiment of the present application, the operation information may be information related to a screen click, for example, the number of times the screen is clicked, a clicked position, a sliding path on the screen, and the like.
Step S205, determining the driving data corresponding to the operation information.
In the embodiment of the present application, the correspondence between the operation information and the driving data may be set in advance. For example, 3 clicks are preset to correspond to the driving data A, and the slide path a is preset to correspond to the driving data B. When 3 clicks are received, the corresponding driving data A is determined.
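The preset correspondence above can be sketched as a simple lookup table. The key encoding (operation kind plus value) and all names are hypothetical; a `None` result means the operation has no preset driving data.

```python
# Hypothetical correspondence mirroring the example:
# 3 clicks -> driving data A, slide path a -> driving data B.
OPERATION_DRIVE_DATA = {
    ("click", 3): "drive_data_A",
    ("slide", "path_a"): "drive_data_B",
}

def drive_data_for_operation(kind, value):
    """Return the driving data preset for this operation, or None."""
    return OPERATION_DRIVE_DATA.get((kind, value))
```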
Step S206, adjusting the display form of the virtual object based on the driving data corresponding to the operation information.
Following the above example, the display form of the virtual object is adjusted based on the driving data A.
According to the method provided by the embodiment of the application, the operation information of the display screen of the display device is obtained, the corresponding driving data is determined through the operation information, the virtual object display form is adjusted based on the driving data, so that a user can adjust the display form of the virtual object through an instruction, the display form of a virtual interpreter is richer, and the use experience of the user is improved.
Based on the foregoing embodiments, in the embodiments of the present application, a virtual object is taken as an example to explain, and fig. 7 is a schematic flow chart of an implementation of the method for driving a virtual object provided in the embodiments of the present application, as shown in fig. 7, the method includes:
step S301, in the process of moving the display device, acquiring a real scene image and displaying the real scene image on a screen of the display device, wherein a virtual interpreter is also displayed on the screen.
In the embodiment of the present application, the virtual instructor may take one of the following forms: a human, a robot, a cartoon character, an animal, and the like. In the embodiment of the application, the real scene image may be collected and displayed through the camera of the display device.
Step S302, determining whether a trigger condition for adjusting the presentation form of the virtual instructor is satisfied.
Whether the trigger condition for adjusting the presentation form of the virtual instructor is satisfied may be determined by, but is not limited to, the following methods:
In the first mode, a target image region is identified from the collected real scene image and compared with the stored pictures; if the target image region can be matched with a stored picture, the trigger condition for adjusting the display form of the virtual interpreter is met.
In the second mode, the pixel area of the target entity is calculated based on the collected real scene image; when the pixel area of the target entity reaches a preset pixel area threshold, the trigger condition for adjusting the display form of the virtual interpreter is met.
In the third mode, it is determined whether operation information for the screen is received, such as a click on the screen or a gesture on the screen; when the operation information is received, the trigger condition for adjusting the display form of the virtual interpreter is met.
In the fourth mode, the distance between the screen and the explained object is calculated; when the distance reaches a preset distance threshold, the trigger condition for adjusting the display form of the virtual interpreter is met.
The determination of whether the trigger condition for adjusting the presentation form of the virtual instructor is satisfied is not limited to the above implementations.
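The four trigger checks above can be combined into a single dispatcher, sketched below. All parameter names and threshold values are illustrative assumptions; any one passing check suffices to trigger the adjustment.

```python
def trigger_satisfied(match_similarity=None, pixel_area=None,
                      operation_received=False, distance=None,
                      sim_threshold=0.8, area_threshold=5000.0,
                      dist_threshold=1.5):
    """Return True if any of the four trigger checks passes."""
    if match_similarity is not None and match_similarity >= sim_threshold:
        return True  # mode 1: target region matched a stored picture
    if pixel_area is not None and pixel_area >= area_threshold:
        return True  # mode 2: pixel area reached the threshold
    if operation_received:
        return True  # mode 3: screen operation information received
    if distance is not None and distance <= dist_threshold:
        return True  # mode 4: device close enough to the explained object
    return False
```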
In the embodiments of the present application, the display forms include, but are not limited to: walking, turning, standing, squatting, mouth shape and explanation.
In the embodiment of the present application, when the trigger condition is satisfied, step S303 is executed; when the trigger condition is not satisfied, step S304 is executed.
Step S303, the display form of the virtual interpreter is driven to change.
For example, when the trigger condition is satisfied, the virtual lecturer is driven to adjust from the walking form to the turning form, or from the standing form to the squatting form.
Step S304, maintaining the current display form of the virtual interpreter.
The following are examples of application scenarios:
When the display device is close to the exhibit and the distance is lower than the distance threshold, the virtual interpreter is adjusted from walking to explaining.
When operation information for the screen of the display device is received, for example a click on the lower part of the exhibit displayed on the screen, the virtual interpreter is adjusted from standing to squatting and explains the content corresponding to the lower part of the exhibit.
When the display device is moving and the target entity is entirely displayed on the screen of the display device, the virtual interpreter is adjusted from walking to turning and explains the content corresponding to the target entity.
According to the driving method of the virtual object, when the triggering condition for adjusting the display form of the virtual object is met, the display form of the virtual object is adjusted, so that explanation based on the virtual object is realized, and the display form of the virtual object is richer.
Based on the foregoing embodiments, the present application provides a driving apparatus for a virtual object, where each module included in the apparatus and each unit included in each module may be implemented by a processor in a computer device; of course, the implementation can also be realized through a specific logic circuit; in the implementation process, the processor may be a Central Processing Unit (CPU), a Microprocessor Unit (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
Fig. 8 is a schematic structural diagram of a driving apparatus for a virtual object according to an embodiment of the present application, and as shown in fig. 8, the driving apparatus 800 for a virtual object includes:
the acquisition module 801 is used for acquiring a real scene image in a moving process through an image acquisition module of the display device in the process that the display device moves along a preset sliding track; wherein a virtual object is displayed on a screen of the display device;
a first determining module 802, configured to determine target driving data for adjusting a display form of a virtual object if it is determined, based on the real scene image, that a trigger condition for adjusting the display form of the virtual object is satisfied;
A first adjusting module 803, configured to adjust a display form of the virtual object based on the target driving data.
In some embodiments, the driving apparatus 800 of the virtual object further includes:
a second determination module to determine a target entity in a real scene based on the real scene image;
a third determining module, configured to determine attribute information of the target entity;
and the fourth determining module is used for determining that the triggering condition for adjusting the display form of the virtual object is met under the condition that the driving data corresponding to the attribute information exists.
In some embodiments, the driving apparatus 800 of the virtual object further includes:
a fifth determining module, configured to determine a target image region of a target entity from the real scene image;
the matching module is used for matching the target image area with a pre-stored reference image to obtain the similarity between the pre-stored reference image and the target image area;
a sixth determining module, configured to determine that a trigger condition for adjusting the display form of the virtual object is satisfied when the similarity between a pre-stored reference image and the target image area is greater than or equal to a similarity threshold.
In some embodiments, the driving apparatus 800 of the virtual object further includes:
a seventh determining module, configured to determine a target image region of a target entity from the real scene image;
an eighth determining module, configured to determine size information of the target image area;
a ninth determining module, configured to determine that a trigger condition for adjusting the virtual object display form is satisfied when the size information is greater than or equal to a size threshold.
In some embodiments, the driving apparatus 800 of the virtual object further includes:
a tenth determination module to identify a target entity in a real scene from the real scene image;
the first acquisition module is used for acquiring distance information between the target entity and the display equipment;
an eleventh determining module, configured to determine that a trigger condition for adjusting the display form of the virtual object is satisfied when the distance information is less than or equal to a distance threshold.
In some embodiments, the driving apparatus 800 of the virtual object further includes:
the second acquisition module is used for acquiring operation information aiming at a display screen of the display equipment;
a twelfth determining module, configured to determine driving data corresponding to the operation information;
And the second adjusting module is used for adjusting the display form of the virtual object based on the driving data corresponding to the operation information.
In some embodiments, the target driving data is used at least to characterize a display form of the virtual object;
correspondingly, the first adjusting module is configured to adjust the current display form of the virtual object to the display form corresponding to the target driving data, where the display form at least includes one of walking, turning, standing, squatting, and mouth shape.
In the embodiment of the present application, if the driving method of the virtual object is implemented in the form of a software functional module and is sold or used as a standalone product, the driving method may also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or portions thereof contributing to the prior art may be embodied in the form of a software product stored in a storage medium, and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
Accordingly, an embodiment of the present application provides a computer storage medium, on which a computer program is stored, wherein the computer program is executed by a processor to implement the steps in the driving method of the virtual object provided in the above embodiment.
An embodiment of the present application provides a display device, fig. 9 is a schematic diagram of a composition structure of the display device provided in the embodiment of the present application, and as shown in fig. 9, the display device 900 includes: a processor 901, at least one communication bus 902, a user interface 903, at least one external communication interface 904, a memory 905, an image acquisition module 906. Wherein the communication bus 902 is configured to enable connective communication between these components. The user interface 903 may include a screen, and the external communication interface 904 may include a standard wired interface and a wireless interface, among others. The image capturing module 906 is configured to capture an image of a real scene, and the processor 901 is configured to execute a program of a driving method of a virtual object stored in a memory, so as to implement the steps in the driving method of a virtual object provided in the foregoing embodiments.
The above description of the display device and storage medium embodiments is similar to the description of the method embodiments above, with similar beneficial effects as the method embodiments. For technical details not disclosed in the embodiments of the display device and storage medium of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application. The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program codes, such as a removable Memory device, a Read Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, the integrated units described above in the present application may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a controller to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
The above description is only for the embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method of driving a virtual object, the method comprising:
in the process that the display equipment moves along a preset sliding track, acquiring a real scene image in the moving process through an image acquisition module of the display equipment; wherein a virtual object is displayed on a screen of the display device;
determining target driving data for adjusting the display form of the virtual object in the case that it is determined based on the real scene image that a trigger condition for adjusting the display form of the virtual object is satisfied;
and adjusting the display form of the virtual object based on the target driving data.
2. The method of claim 1, further comprising:
determining a target entity in a real scene based on the real scene image;
Determining attribute information of the target entity;
and determining that a trigger condition for adjusting the display form of the virtual object is satisfied under the condition that the driving data corresponding to the attribute information is determined to exist.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
identifying a target image region of a target entity from the real scene image;
matching the target image area with a reference image in a pre-stored image library to obtain the similarity between the reference image in the pre-stored image library and the target image area;
and determining that a trigger condition for adjusting the display form of the virtual object is met under the condition that the similarity between a reference image in the pre-stored image library and the target image area is greater than or equal to a similarity threshold value.
4. The method of any of claims 1 to 3, further comprising:
identifying a target image region of a target entity from the real scene image;
determining size information of the target image area;
and determining that a trigger condition for adjusting the virtual object display form is met if the size information is greater than or equal to a size threshold.
5. The method of any of claims 1 to 4, further comprising:
identifying a target entity in a real scene from the real scene image;
acquiring distance information between the target entity and the display device;
and determining that a trigger condition for adjusting the display form of the virtual object is satisfied in a case that the distance information is less than or equal to a distance threshold.
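The claims do not fix how the distance in claim 5 is obtained; with a single camera, one common estimate is the pinhole model, where distance ≈ focal length (in pixels) × real-world width ÷ width in pixels. The sketch below assumes such a monocular estimate; the numbers and function names are illustrative, not taken from the application.

```python
def estimate_distance_m(focal_length_px, real_width_m, pixel_width):
    """Pinhole-camera estimate of the entity's distance from the camera:
    distance = f_px * W_real / w_px."""
    return focal_length_px * real_width_m / pixel_width

def trigger_by_distance(distance_m, distance_threshold_m):
    """Claim 5's condition: the entity is at or within the distance threshold."""
    return distance_m <= distance_threshold_m
```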
6. The method of any of claims 1 to 5, further comprising:
acquiring operation information for a display screen of the display device;
determining driving data corresponding to the operation information;
and adjusting the display form of the virtual object based on the driving data corresponding to the operation information.
7. The method according to any one of claims 1 to 6, wherein the target driving data is at least used for characterizing a display form of the virtual object;
the adjusting the display form of the virtual object based on the target driving data comprises:
adjusting a current display form of the virtual object to the display form corresponding to the target driving data, wherein the display form comprises at least one of walking, turning, standing, squatting and a mouth shape.
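Claim 7's adjustment can be read as a state change on the virtual object: the target driving data names one of a closed set of display forms and the object switches to it. A hypothetical sketch; the class and field names are mine, not the application's.

```python
# The display forms enumerated in claim 7.
SUPPORTED_FORMS = {"walking", "turning", "standing", "squatting", "mouth shape"}

class VirtualObject:
    def __init__(self):
        self.display_form = "standing"  # assumed initial form

    def apply_driving_data(self, driving_data):
        """Adjust the current display form to the form the target driving data
        characterizes, rejecting anything outside the claimed set."""
        form = driving_data.get("form")
        if form not in SUPPORTED_FORMS:
            raise ValueError("unsupported display form: %r" % (form,))
        self.display_form = form
        return self.display_form
```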
8. An apparatus for driving a virtual object, comprising:
an acquisition module, configured to acquire, while the display device moves along a preset sliding track, a real scene image during the movement through an image acquisition module of the display device; wherein a virtual object is displayed on a screen of the display device;
a first determination module, configured to determine target driving data for adjusting a display form of a virtual object if it is determined, based on the real scene image, that a trigger condition for adjusting the display form of the virtual object is satisfied;
and the first adjusting module is used for adjusting the display form of the virtual object based on the target driving data.
9. A display device, characterized in that the display device comprises:
an image acquisition module, configured to acquire a real scene image;
a screen, configured to display at least the real scene image and a virtual object;
a processor connected to the image acquisition module and the screen;
a memory for storing a computer program operable on the processor;
wherein the computer program, when executed by the processor, implements the steps of the method of driving a virtual object according to any one of claims 1 to 7.
10. A computer storage medium having stored therein computer-executable instructions configured to perform the steps of the method of driving a virtual object according to any one of claims 1 to 7.
CN202010659421.0A 2020-07-09 2020-07-09 Virtual object driving method and device, display equipment and computer storage medium Pending CN111862341A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010659421.0A CN111862341A (en) 2020-07-09 2020-07-09 Virtual object driving method and device, display equipment and computer storage medium


Publications (1)

Publication Number Publication Date
CN111862341A true CN111862341A (en) 2020-10-30

Family

ID=73153519

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010659421.0A Pending CN111862341A (en) 2020-07-09 2020-07-09 Virtual object driving method and device, display equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN111862341A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108519816A (en) * 2018-03-26 2018-09-11 广东欧珀移动通信有限公司 Information processing method, device, storage medium and electronic equipment
CN110390730A (en) * 2019-07-05 2019-10-29 北京悉见科技有限公司 The method and electronic equipment of augmented reality object arrangement
CN110716645A (en) * 2019-10-15 2020-01-21 北京市商汤科技开发有限公司 Augmented reality data presentation method and device, electronic equipment and storage medium
US20200074736A1 (en) * 2018-09-05 2020-03-05 International Business Machines Corporation Transmutation of virtual entity sketch using extracted features and relationships of real and virtual objects in mixed reality scene


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112270767A (en) * 2020-11-09 2021-01-26 重庆智慧之源科技有限公司 Building virtual display control method and device, wearable device and storage medium
CN112684894A (en) * 2020-12-31 2021-04-20 北京市商汤科技开发有限公司 Interaction method and device for augmented reality scene, electronic equipment and storage medium
CN113422977A (en) * 2021-07-07 2021-09-21 上海商汤智能科技有限公司 Live broadcast method and device, computer equipment and storage medium
CN113703582A (en) * 2021-09-06 2021-11-26 联想(北京)有限公司 Image display method and device
CN114371904A (en) * 2022-01-12 2022-04-19 北京字跳网络技术有限公司 Data display method and device, mobile terminal and storage medium
WO2023134490A1 (en) * 2022-01-12 2023-07-20 北京字跳网络技术有限公司 Data display method and device, mobile terminal, and storage medium
CN114371904B (en) * 2022-01-12 2023-09-15 北京字跳网络技术有限公司 Data display method and device, mobile terminal and storage medium

Similar Documents

Publication Publication Date Title
CN111862341A (en) Virtual object driving method and device, display equipment and computer storage medium
CN109635621B (en) System and method for recognizing gestures based on deep learning in first-person perspective
CN108875633B (en) Expression detection and expression driving method, device and system and storage medium
CN111556278B (en) Video processing method, video display device and storage medium
CN109688451B (en) Method and system for providing camera effect
CN110716645A (en) Augmented reality data presentation method and device, electronic equipment and storage medium
CN111638797A (en) Display control method and device
CN111612822B (en) Object tracking method, device, computer equipment and storage medium
KR20150039252A (en) Apparatus and method for providing application service by using action recognition
RU2667720C1 (en) Method of imitation modeling and controlling virtual sphere in mobile device
CN111985385A (en) Behavior detection method, device and equipment
EP2591458A2 (en) Systems and methods for improving visual attention models
CN111491187A (en) Video recommendation method, device, equipment and storage medium
US20120038602A1 (en) Advertisement display system and method
CN111638784A (en) Facial expression interaction method, interaction device and computer storage medium
CN111652983A (en) Augmented reality AR special effect generation method, device and equipment
CN113986093A (en) Interaction method and related device
CN114092670A (en) Virtual reality display method, equipment and storage medium
CN105468249B (en) Intelligent interaction system and its control method
CN110719415B (en) Video image processing method and device, electronic equipment and computer readable medium
CN112333498A (en) Display control method and device, computer equipment and storage medium
WO2022166173A1 (en) Video resource processing method and apparatus, and computer device, storage medium and program
KR20190069250A (en) Server and operation method of the server for providing naturual learning contents
KR20150109987A (en) VIDEO PROCESSOR, method for controlling the same and a computer-readable storage medium
US20220283698A1 (en) Method for operating an electronic device in order to browse through photos

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201030