CN111583421A - Method and device for determining display animation, electronic equipment and storage medium


Info

Publication number: CN111583421A
Application number: CN202010496496.1A
Authority: CN (China)
Prior art keywords: sculpture, virtual, determining, display, scene
Legal status: Pending (an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 揭志伟, 李炳泽, 武明飞, 张一�
Current assignee: Zhejiang Shangtang Technology Development Co., Ltd. (Zhejiang SenseTime Technology Development Co., Ltd.)
Original assignee: Zhejiang Shangtang Technology Development Co., Ltd.
Application filed by Zhejiang Shangtang Technology Development Co., Ltd.
Priority to CN202010496496.1A
Publication of CN111583421A (en)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation

Abstract

The disclosure provides a method, an apparatus, an electronic device, and a storage medium for determining a display animation. The method includes the following steps: when it is detected that a real scene image captured by an augmented reality (AR) device includes a physical sculpture, extracting scene feature information corresponding to the physical sculpture from the real scene image; determining a matching virtual sculpture based on the physical sculpture, and generating display animation data for the virtual sculpture based on the extracted scene feature information; and presenting, through the AR device and based on the display animation data of the virtual sculpture, the animation effect of the virtual sculpture blended into the real scene.

Description

Method and device for determining display animation, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of augmented reality technologies, and in particular, to a method and an apparatus for determining a display animation, an electronic device, and a storage medium.
Background
A sculpture is a carved ornamental object carrying a certain meaning or symbolism, and sculptures are commonly displayed in public places to make those places more engaging. For example, a lucky-baby mascot sculpture can be displayed in a park, so that visitors can gain a clear understanding of the mascot by viewing the sculpture.
However, displaying a sculpture statically at a fixed position is inflexible: the display mode is limited to a single form, and the resulting display effect is poor.
Disclosure of Invention
In view of the above, the present disclosure at least provides a method, an apparatus, an electronic device and a storage medium for determining a display animation.
In a first aspect, the present disclosure provides a method for determining a presentation animation, including:
when it is detected that a real scene image captured by an augmented reality (AR) device includes a physical sculpture, extracting scene feature information corresponding to the physical sculpture from the real scene image;
determining a matching virtual sculpture based on the physical sculpture, and generating display animation data for the virtual sculpture based on the extracted scene feature information;
and presenting, through the AR device and based on the display animation data of the virtual sculpture, the animation effect of the virtual sculpture blended into the real scene.
With this method, display animation data is generated for the virtual sculpture corresponding to the physical sculpture based on scene feature information, so that the generated display animation data matches the real scene image. For example, if the real scene includes a building, the generated display animation data may include an animation effect of the virtual sculpture walking along the edge of the building. The same virtual object can thus correspond to different display animation data under different scene feature information, which makes the display of the virtual object flexible and improves the display effect of the virtual sculpture.
In one possible embodiment, the generating of display animation data of the virtual sculpture based on the extracted scene feature information includes:
and generating display animation data for displaying the virtual sculpture in cooperation with the display scenery based on the display scenery in the real scene indicated by the scene characteristic information.
In a possible implementation manner, generating presentation animation data for presenting the virtual sculpture in cooperation with the presentation scenery based on the presentation scenery in the real scene indicated by the scene characteristic information includes:
determining pose change information of the virtual sculpture based on a displayed scene in a real scene indicated by the scene characteristic information;
and generating display animation data for displaying the virtual sculpture in cooperation with the display scenery based on the pose change information.
In one possible embodiment, the determining the pose change information of the virtual sculpture based on the exposed scenery in the real scene indicated by the scene characteristic information comprises:
under the condition that the display scenery comprises an entity building and the virtual sculpture is a virtual animal sculpture, determining a walking path of the virtual animal sculpture walking along the ridge of the entity building based on the structural characteristics of the entity building;
based on the pose change information, generating display animation data for displaying the virtual sculpture in cooperation with the display scenery, comprising:
and generating display animation data of the virtual animal sculpture walking along the ridge of the solid building based on the determined walking path.
In the above embodiment, a walking path is determined for the virtual animal sculpture based on the structural features of the physical building, and different physical buildings can correspond to different walking paths, so that the generated display animation data matches the physical building. For example, based on the determined walking path, display animation data of the virtual animal sculpture walking along the ridge of the physical building is generated, which increases how closely the display animation data matches the real scene and improves the display effect.
In one possible embodiment, the generating of the display animation data of the virtual animal sculpture walking along the ridge of the physical building based on the determined walking path includes:
and generating display animation data of the virtual animal sculpture walking along the ridge of the solid building based on the determined walking path and the walking posture data corresponding to the virtual animal sculpture.
In one possible embodiment, determining a matching virtual sculpture based on the physical sculpture comprises:
determining a sculpture type of the solid sculpture based on the real scene image;
determining the virtual sculpture matching the physical sculpture based on the sculpture type of the physical sculpture and at least one virtual sculpture corresponding to each of the sculpture types stored in advance.
In one possible embodiment, when a plurality of virtual sculptures correspond to each sculpture type, determining the virtual sculpture matching the physical sculpture based on the sculpture type of the physical sculpture and the plurality of virtual sculptures stored in advance for each sculpture type includes:
extracting target characteristic information of the solid sculpture from the real scene image;
determining the virtual sculpture matched with the solid sculpture based on the target characteristic information of the solid sculpture and the pre-stored characteristic information of each virtual sculpture corresponding to the type of the sculpture.
In the above embodiment, by determining a corresponding virtual sculpture for each physical sculpture, different physical sculptures can correspond to different virtual sculptures, which increases the diversity and flexibility of the virtual sculptures and improves the display effect of the virtual sculpture.
The following descriptions of the effects of the apparatus, the electronic device, and the like refer to the description of the above method, and are not repeated here.
In a second aspect, the present disclosure provides an apparatus for determining a presentation animation, comprising:
an extraction module configured to, when it is detected that a real scene image captured by an augmented reality (AR) device includes a physical sculpture, extract scene feature information corresponding to the physical sculpture from the real scene image;
the determining module is used for determining a matched virtual sculpture based on the solid sculpture and generating display animation data of the virtual sculpture based on the extracted scene characteristic information;
and the display module is used for displaying the display animation effect of the virtual sculpture blended into the real scene through the AR equipment based on the display animation data of the virtual sculpture.
In one possible embodiment, the determining module, when generating the display animation data of the virtual sculpture based on the extracted scene feature information, is configured to:
and generating display animation data for displaying the virtual sculpture in cooperation with the display scenery based on the display scenery in the real scene indicated by the scene characteristic information.
In a possible implementation manner, the determining module, when generating rendering animation data for rendering the virtual sculpture in cooperation with the rendering scenery based on the rendering scenery in the real scene indicated by the scene characteristic information, is configured to:
determining pose change information of the virtual sculpture based on a displayed scene in a real scene indicated by the scene characteristic information;
and generating display animation data for displaying the virtual sculpture in cooperation with the display scenery based on the pose change information.
In one possible embodiment, the determining module, when determining the pose change information of the virtual sculpture based on the exposed scene in the real scene indicated by the scene feature information, is configured to:
under the condition that the display scenery comprises an entity building and the virtual sculpture is a virtual animal sculpture, determining a walking path of the virtual animal sculpture walking along the ridge of the entity building based on the structural characteristics of the entity building;
the determining module is used for generating display animation data for displaying the virtual sculpture in cooperation with the display scenery based on the pose change information, and is used for:
and generating display animation data of the virtual animal sculpture walking along the ridge of the solid building based on the determined walking path.
In one possible embodiment, the determining module, when generating the display animation data of the virtual animal sculpture walking along the ridge of the physical building based on the determined walking path, is configured to:
and generating display animation data of the virtual animal sculpture walking along the ridge of the solid building based on the determined walking path and the walking posture data corresponding to the virtual animal sculpture.
In one possible embodiment, the determining module, in determining a matching virtual sculpture based on the physical sculpture, is configured to:
determining a sculpture type of the solid sculpture based on the real scene image;
determining the virtual sculpture matching the physical sculpture based on the sculpture type of the physical sculpture and at least one virtual sculpture corresponding to each of the sculpture types stored in advance.
In one possible embodiment, when a plurality of virtual sculptures correspond to each sculpture type, the determining module, when determining the virtual sculpture matching the physical sculpture based on the sculpture type of the physical sculpture and the plurality of virtual sculptures stored in advance for each sculpture type, is configured to:
extracting target characteristic information of the solid sculpture from the real scene image;
determining the virtual sculpture matched with the solid sculpture based on the target characteristic information of the solid sculpture and the pre-stored characteristic information of each virtual sculpture corresponding to the type of the sculpture.
In a third aspect, the present disclosure provides an electronic device comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the method of determining a presentation animation as described in the first aspect or any one of the embodiments above.
In a fourth aspect, the present disclosure provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method for determining a presentation animation as described in the first aspect or any one of the embodiments above.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. The drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive additional related drawings from them without inventive effort.
FIG. 1 is a flow chart illustrating a method for determining a presentation animation according to an embodiment of the disclosure;
FIG. 2A is a schematic diagram illustrating that in a method for determining a presentation animation provided by an embodiment of the present disclosure, an AR device displays presentation animation data;
FIG. 2B is a schematic diagram illustrating that in a method for determining a presentation animation provided by an embodiment of the present disclosure, an AR device displays presentation animation data;
FIG. 3 is a schematic diagram illustrating an architecture of an apparatus for determining a presentation animation according to an embodiment of the present disclosure;
fig. 4 shows a schematic structural diagram of an electronic device 400 provided in an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
Generally, various sculptures can be displayed in an exhibition hall, for example, character sculptures, scene sculptures, animal sculptures, and the like. A sculpture can be placed at any position in the exhibition hall and presented in a static display mode, and a user views the sculpture by walking to the position where it is placed. This display mode is limited to a single form and offers little flexibility.
To solve the above problems, the embodiments of the present disclosure provide a sculpture display method that extracts scene feature information corresponding to a physical sculpture from a real scene image; determines a matching virtual sculpture based on the physical sculpture and generates display animation data for the virtual sculpture based on the extracted scene feature information; and presents, through an AR device and based on the display animation data, the animation effect of the virtual sculpture blended into the real scene. Because the display animation data is generated for the virtual sculpture corresponding to the physical sculpture based on scene feature information, the generated animation matches the real scene image. For example, if the real scene includes a building, the generated display animation data may include an animation effect of the virtual sculpture walking along the edge of the building. Different scene feature information can correspond to different display animation data, so the display of the virtual object is more flexible, and the display effect of the virtual sculpture is improved.
For the convenience of understanding the embodiments of the present disclosure, a method for determining a presentation animation disclosed in the embodiments of the present disclosure will be described in detail first.
Referring to fig. 1, which is a schematic flow diagram of a method for determining a display animation provided in the embodiment of the present disclosure, an execution subject of the method may be a server, which may be a local server or a cloud server, where the method includes S101-S103, specifically:
S101: when it is detected that a real scene image captured by an augmented reality (AR) device includes a physical sculpture, extract scene feature information corresponding to the physical sculpture from the real scene image;
S102: determine a matching virtual sculpture based on the physical sculpture, and generate display animation data for the virtual sculpture based on the extracted scene feature information;
S103: present, through the AR device and based on the display animation data of the virtual sculpture, the animation effect of the virtual sculpture blended into the real scene.
By extracting the scene feature information corresponding to the physical sculpture from the real scene image, determining a matching virtual sculpture, generating display animation data based on the extracted scene feature information, and presenting the resulting animation effect through the AR device, the generated display animation data matches the real scene image. For example, if the real scene includes a building, the display animation data may include an animation effect of the virtual sculpture walking along the edge of the building. The same virtual object can correspond to different display animation data under different scene feature information, so the display of the virtual object is more flexible, and the display effect of the virtual sculpture is improved.
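The three steps S101-S103 can be sketched as a single pipeline. This is a minimal sketch with hypothetical names (`AnimationData`, `detect_sculpture`, `extract_scene_features`, `match`, `generate` are all assumptions; the patent does not prescribe any API):

```python
from dataclasses import dataclass, field

@dataclass
class AnimationData:
    """Display animation data for a virtual sculpture (hypothetical structure)."""
    sculpture_id: str
    keyframes: list = field(default_factory=list)

def determine_display_animation(frame, detector, matcher, generator):
    """S101-S103 as one pipeline: detect -> extract -> match -> generate."""
    physical = detector.detect_sculpture(frame)              # S101: physical sculpture in view?
    if physical is None:
        return None                                          # nothing to animate in this frame
    scene_features = detector.extract_scene_features(frame)  # contours of buildings, roads, ...
    virtual = matcher.match(physical)                        # S102: pick the matching virtual sculpture
    return generator.generate(virtual, scene_features)       # S102/S103: animation data for the AR device
```

The `detector`, `matcher`, and `generator` objects stand in for the trained networks and generation logic described in the sections that follow.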
For S101:
An augmented reality (AR) device is a smart device capable of supporting AR functions. Examples include, but are not limited to, mobile phones, tablet computers, AR glasses, and other electronic devices capable of presenting an augmented reality effect.
In a specific implementation, a trained image detection neural network can detect whether the real scene image includes a physical sculpture. The real scene image is a real-time image captured by a camera on the AR device.
Here, the AR device may capture each frame of the real scene in real time, detect each frame, and determine whether it includes a physical sculpture. When a physical sculpture is detected in the real scene image, feature extraction can be performed on that image to obtain the scene feature information corresponding to the physical sculpture. For example, the scene feature information may be contour information of buildings included in the real scene image (e.g., the contour of a bridge or the contour of a roof edge of a house), the contour of a road, and the like.
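In practice the contour extraction above would come from a trained network; as a stand-in, the following sketch recovers a roof contour from a binary building mask (the mask format and function name are assumptions, not anything specified by the patent):

```python
def extract_roofline(mask):
    """Return the roof contour of a binary building mask.

    `mask` is a list of rows, where 1 marks a building pixel. For each
    column, the top-most building pixel approximates the roof edge --
    a stand-in for the contour features described above.
    """
    if not mask:
        return []
    rows, cols = len(mask), len(mask[0])
    contour = []
    for c in range(cols):
        for r in range(rows):
            if mask[r][c]:
                contour.append((c, r))  # (column, row) of the roof edge
                break
    return contour
```

A real system would run this kind of extraction per frame on the segmentation output of the detection network.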
For S102:
here, a virtual sculpture matching the physical sculpture may be determined based on the physical sculpture. And generating display animation data of the virtual sculpture based on the extracted scene characteristic information.
In an alternative embodiment, generating display animation data of the virtual sculpture based on the extracted scene feature information may include: and generating display animation data for displaying the virtual sculpture in cooperation with the display scenery based on the display scenery in the real scene indicated by the scene characteristic information.
Here, the display scenery may be a house, a bridge, a tree, a road, etc. For example, when the display scenery indicated by the scene feature information includes a house, display animation data in which the virtual sculpture is displayed in cooperation with that scenery may be generated, for example, animation data of the virtual sculpture walking on the eaves of the house; display animation data may also be generated for the virtual sculpture walking from one end of a bridge to the other.
In an optional implementation, generating, based on a displayed scene in a real scene indicated by the scene characteristic information, display animation data for displaying a virtual sculpture in cooperation with the displayed scene may include:
firstly, determining pose change information of the virtual sculpture based on a displayed scene in a real scene indicated by scene characteristic information.
And generating display animation data for displaying the virtual sculpture in cooperation with the display scenery based on the pose change information.
Here, the pose change information of the virtual sculpture may be determined based on the exposed scene in the real scene indicated by the scene characteristic information. For example, if the displayed scenery comprises stairs, the pose change information of going upstairs or downstairs of the virtual sculpture can be determined.
As an optional implementation, the determining the pose change information of the virtual sculpture based on the shown scenery in the real scene indicated by the scene characteristic information includes: and under the condition that the display scenery comprises an entity building and the virtual sculpture is the virtual animal sculpture, determining the walking path of the virtual animal sculpture walking along the ridge of the entity building based on the structural characteristics of the entity building. And generating display animation data for displaying the virtual sculpture in cooperation with the display scenery based on the pose change information, wherein the display animation data can comprise: and generating display animation data of the virtual animal sculpture walking along the ridge of the solid building based on the determined walking path.
Here, when the display scenery includes a physical building and the virtual sculpture is a virtual animal sculpture, a walking path along which the virtual animal sculpture walks along the ridge of the physical building is determined based on the structural features of the physical building. For example, if the physical building is a house, the roof structure of the house can be determined, and from it the walking path along the ridge. If the physical building is a bridge and the virtual sculpture is a virtual animal sculpture and/or a virtual character sculpture, the bridge deck can be determined, along with a walking path across it; alternatively, when the physical building is a bridge and the virtual sculpture is a virtual animal sculpture, a pier of the bridge and a jumping path for the virtual animal sculpture can be determined. Or, when the display scenery is a tree and the virtual sculpture is an animal that can climb trees, a climbing path up the tree can be determined for the virtual sculpture, and so on.
Further, display animation data of the virtual animal sculpture walking along the ridge of the physical building may be generated based on the determined walking path; alternatively, display animation data of the virtual animal sculpture climbing along a physical tree may be generated based on a determined climbing path.
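One way to turn a detected ridge contour into a walking path is to resample it at a fixed stride length so the animal walks at a constant pace. This is a minimal sketch under that assumption; the patent does not specify how the path is parameterized:

```python
import math

def ridge_walking_path(ridge, step=1.0):
    """Resample ordered ridge points (x, y) into waypoints spaced `step` apart.

    The input is the ridge contour extracted from the building's structural
    features; the output is the walking path the animation follows.
    """
    if len(ridge) < 2:
        return list(ridge)
    path = [ridge[0]]
    dist_to_next = step  # distance remaining until the next waypoint
    for (x0, y0), (x1, y1) in zip(ridge, ridge[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        pos = 0.0
        while seg - pos >= dist_to_next:
            pos += dist_to_next
            t = pos / seg
            path.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            dist_to_next = step
        dist_to_next -= seg - pos  # carry the shortfall into the next segment
    return path
```

Different buildings yield different ridge contours and therefore different walking paths, matching the behavior described above.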
In an alternative embodiment, generating display animation data of the virtual animal sculpture walking along the ridge of the physical building based on the determined walking path may include: and generating display animation data of the virtual animal sculpture walking along the ridge of the solid building based on the determined walking path and the walking posture data corresponding to the virtual animal sculpture.
Here, the walking posture data corresponding to the virtual animal sculpture may be, for example, posture data for the left leg and the right leg of the virtual animal sculpture taking steps. Similarly, jumping posture data corresponding to the virtual animal sculpture may include posture data for the take-off, the time in the air, the landing, and so on. Display animation data of the virtual animal sculpture walking along the ridge of the physical building is then generated based on the determined walking path and the walking posture data corresponding to the virtual animal sculpture.
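Combining the walking path with cyclic posture data can be sketched as keyframe generation. The keyframe layout and names below are assumptions, since the patent does not fix a concrete data format:

```python
def walking_keyframes(path, gait_poses, stride_time=0.5):
    """Attach a cyclic gait pose and a timestamp to each waypoint.

    `gait_poses` cycles through the walking posture data described above,
    e.g. ["left_step", "right_step"]. Each keyframe pairs a position on
    the walking path with the pose the sculpture should take there.
    """
    frames = []
    for i, (x, y) in enumerate(path):
        frames.append({
            "t": round(i * stride_time, 3),          # seconds since the animation starts
            "position": (x, y),                      # waypoint on the ridge path
            "pose": gait_poses[i % len(gait_poses)], # alternating left/right step, etc.
        })
    return frames
```

The same function works for jumping animations by swapping in take-off/airborne/landing poses and a different timing.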
In the above embodiment, a walking path is determined for the virtual animal sculpture based on the structural features of the physical building, and different physical buildings can correspond to different walking paths, so that the generated display animation data matches the physical building. For example, based on the determined walking path, display animation data of the virtual animal sculpture walking along the ridge of the physical building is generated, which increases how closely the display animation data matches the real scene and improves the display effect.
Referring to fig. 2A and 2B, there is shown a schematic view of a display animation of a virtual animal sculpture walking along a ridge of a physical building, fig. 2A is the display animation of the virtual animal sculpture 21 at a position a on the ridge 23 of the physical building 22, and fig. 2B is the display animation of the virtual animal sculpture 21 walking from the position a to a position B on the ridge 23 of the physical building 22.
In an alternative embodiment, determining a matching virtual sculpture, based on a physical sculpture, comprises:
firstly, determining the sculpture type of the solid sculpture based on the real scene image.
And secondly, determining a virtual sculpture matched with the solid sculpture based on the sculpture type of the solid sculpture and at least one virtual sculpture corresponding to each pre-stored sculpture type.
Here, the sculpture type of the physical sculpture may be determined based on the real scene image and a trained image detection neural network. Sculpture types may include character sculptures, animal sculptures, plant sculptures, and the like, and can be set as needed.
Each sculpture type may correspond to one virtual sculpture or to a plurality of virtual sculptures. When a sculpture type includes a single virtual sculpture, that virtual sculpture is determined to be the one matching the physical sculpture. When a sculpture type includes a plurality of virtual sculptures, for example, when the type is animal sculpture, virtual sculptures of several different animals can be defined for that type; for instance, the animal sculpture type may correspond to a virtual phoenix, a virtual dinosaur, a virtual rabbit, a virtual dolphin, a virtual cat, and so on.
When a plurality of virtual sculptures correspond to each sculpture type, determining the virtual sculpture matching the physical sculpture based on the sculpture type of the physical sculpture and the plurality of virtual sculptures stored in advance for each sculpture type may include:
firstly, extracting target characteristic information of the solid sculpture from a real scene image.
And secondly, determining the virtual sculpture matched with the solid sculpture based on the target characteristic information of the solid sculpture and the prestored characteristic information of each virtual sculpture corresponding to the sculpture type.
In a specific implementation, the characteristic information of each virtual sculpture may be extracted by a trained feature extraction network. For example, multiple frames of images corresponding to each virtual sculpture may be acquired and input into the feature extraction network to obtain that virtual sculpture's characteristic information, which is then stored in association with the corresponding virtual sculpture.
Then, the real scene image including the physical sculpture is input into the trained feature extraction network, and the target characteristic information of the physical sculpture is extracted. The virtual sculpture matching the physical sculpture is then determined based on the target characteristic information and the pre-stored characteristic information of each virtual sculpture corresponding to the sculpture type. For example, a degree of matching between the target characteristic information of the physical sculpture and the characteristic information of each virtual sculpture may be calculated, and the virtual sculpture with the highest degree of matching determined as the virtual sculpture matching the physical sculpture. Alternatively, the target characteristic information of the physical sculpture and the characteristic information of each virtual sculpture may be input into a matching-degree detection network, which determines the virtual sculpture matching the physical sculpture.
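One common way to realize the "highest degree of matching" computation is cosine similarity between feature vectors; the sketch below is a minimal illustration under that assumption (the disclosure does not specify the similarity measure, and the sculpture names are invented):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def best_match(target_features, stored_features):
    """Return the stored virtual sculpture whose features best match the target.

    stored_features: mapping of virtual-sculpture name -> feature vector.
    """
    return max(stored_features,
               key=lambda name: cosine_similarity(target_features,
                                                  stored_features[name]))
```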
In the above embodiment, by determining a corresponding virtual sculpture for each physical sculpture, different physical sculptures can correspond to different virtual sculptures, which increases the diversity and flexibility of the virtual sculptures corresponding to physical sculptures and improves the display effect of the virtual sculptures.
For S103:
here, a presentation animation effect in which the virtual sculpture is integrated into a real scene may be presented through the AR device based on presentation animation data of the virtual sculpture. For example, when the AR device is an AR glasses, the display animation data may be sent to the AR device, so that the AR device may display the display animation data corresponding to the virtual sculpture. When the AR equipment is a smart phone, the display animation data and the established three-dimensional scene model can be fused (the three-dimensional scene model can be a scene model established through a real scene image), and the fused display animation data is sent to the AR equipment, so that the AR equipment can display the display animation effect of the virtual sculpture fused into the real scene.
It will be understood by those skilled in the art that, in the method of the present invention, the order in which the steps are written does not imply a strict order of execution or any limitation on the implementation; the specific order of execution of the steps should be determined by their functions and possible inherent logic.
Based on the same concept, an embodiment of the present disclosure further provides an apparatus for determining a display animation. As shown in fig. 3, an architecture diagram of the apparatus provided by the embodiment of the present disclosure, the apparatus includes an extraction module 301, a determining module 302, and a display module 303. Specifically:
an extraction module 301, configured to extract, when it is detected that a real scene image captured by an augmented reality (AR) device includes a physical sculpture, scene characteristic information corresponding to the physical sculpture in the real scene image;
a determining module 302, configured to determine a matching virtual sculpture based on the physical sculpture, and to generate display animation data of the virtual sculpture based on the extracted scene characteristic information;
a display module 303, configured to display, through the AR device and based on the display animation data of the virtual sculpture, a display animation effect of the virtual sculpture blended into the real scene.
In one possible implementation, the determining module 302, when generating the display animation data of the virtual sculpture based on the extracted scene characteristic information, is configured to:
generate display animation data for displaying the virtual sculpture in cooperation with display scenery, based on the display scenery in the real scene indicated by the scene characteristic information.
In one possible implementation, the determining module 302, when generating the display animation data for displaying the virtual sculpture in cooperation with the display scenery based on the display scenery in the real scene indicated by the scene characteristic information, is configured to:
determine pose change information of the virtual sculpture based on the display scenery in the real scene indicated by the scene characteristic information; and
generate, based on the pose change information, display animation data for displaying the virtual sculpture in cooperation with the display scenery.
In one possible implementation, the determining module 302, when determining the pose change information of the virtual sculpture based on the display scenery in the real scene indicated by the scene characteristic information, is configured to:
in a case where the display scenery includes a physical building and the virtual sculpture is a virtual animal sculpture, determine, based on structural characteristics of the physical building, a walking path along which the virtual animal sculpture walks along a ridge of the physical building;
the determining module 302, when generating the display animation data for displaying the virtual sculpture in cooperation with the display scenery based on the pose change information, is configured to:
generate, based on the determined walking path, display animation data of the virtual animal sculpture walking along the ridge of the physical building.
In one possible implementation, the determining module 302, when generating the display animation data of the virtual animal sculpture walking along the ridge of the physical building based on the determined walking path, is configured to:
generate the display animation data of the virtual animal sculpture walking along the ridge of the physical building based on the determined walking path and walking posture data corresponding to the virtual animal sculpture.
In one possible implementation, the determining module 302, when determining a matching virtual sculpture based on the physical sculpture, is configured to:
determine a sculpture type of the physical sculpture based on the real scene image; and
determine the virtual sculpture matching the physical sculpture based on the sculpture type of the physical sculpture and at least one pre-stored virtual sculpture corresponding to each sculpture type.
In one possible implementation, when there are a plurality of virtual sculptures corresponding to each sculpture type, the determining module 302, when determining the virtual sculpture matching the physical sculpture based on the sculpture type of the physical sculpture and the plurality of pre-stored virtual sculptures corresponding to each sculpture type, is configured to:
extract target characteristic information of the physical sculpture from the real scene image; and
determine the virtual sculpture matching the physical sculpture based on the target characteristic information of the physical sculpture and pre-stored characteristic information of each virtual sculpture corresponding to the sculpture type.
In some embodiments, the functions of the apparatus provided in the embodiments of the present disclosure, or the modules it includes, may be used to execute the methods described in the above method embodiments; for specific implementation, reference may be made to the description of the above method embodiments, which is not repeated here for brevity.
Based on the same technical concept, an embodiment of the present disclosure also provides an electronic device. Referring to fig. 4, a schematic structural diagram of the electronic device provided in the embodiment of the present disclosure, the device includes a processor 401, a memory 402, and a bus 403. The memory 402 is used for storing execution instructions and includes an internal memory 4021 and an external memory 4022. The internal memory 4021 temporarily stores operation data in the processor 401 and data exchanged with the external memory 4022 such as a hard disk, and the processor 401 exchanges data with the external memory 4022 through the internal memory 4021. When the electronic device 400 operates, the processor 401 communicates with the memory 402 through the bus 403, so that the processor 401 executes the following instructions:
when it is detected that a real scene image captured by an augmented reality (AR) device includes a physical sculpture, extracting scene characteristic information corresponding to the physical sculpture in the real scene image;
determining a matching virtual sculpture based on the physical sculpture, and generating display animation data of the virtual sculpture based on the extracted scene characteristic information; and
displaying, through the AR device and based on the display animation data of the virtual sculpture, a display animation effect of the virtual sculpture blended into the real scene.
In addition, an embodiment of the present disclosure also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method for determining a display animation described in the above method embodiments.
An embodiment of the present disclosure also provides a computer program product for the method of determining a display animation, including a computer-readable storage medium storing program code; the instructions included in the program code may be used to execute the steps of the method for determining a display animation described in the above method embodiments, to which reference may be made for details, and which are not repeated here.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above are only specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present disclosure, and shall be covered by the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A method for determining a presentation animation, comprising:
when it is detected that a real scene image captured by an augmented reality (AR) device includes a physical sculpture, extracting scene characteristic information corresponding to the physical sculpture in the real scene image;
determining a matching virtual sculpture based on the physical sculpture, and generating display animation data of the virtual sculpture based on the extracted scene characteristic information; and
displaying, through the AR device and based on the display animation data of the virtual sculpture, a display animation effect of the virtual sculpture blended into the real scene.
2. The method according to claim 1, wherein generating the display animation data of the virtual sculpture based on the extracted scene characteristic information comprises:
generating display animation data for displaying the virtual sculpture in cooperation with display scenery, based on the display scenery in the real scene indicated by the scene characteristic information.
3. The method according to claim 2, wherein generating the display animation data for displaying the virtual sculpture in cooperation with the display scenery based on the display scenery in the real scene indicated by the scene characteristic information comprises:
determining pose change information of the virtual sculpture based on the display scenery in the real scene indicated by the scene characteristic information; and
generating, based on the pose change information, display animation data for displaying the virtual sculpture in cooperation with the display scenery.
4. The method according to claim 3, wherein determining the pose change information of the virtual sculpture based on the display scenery in the real scene indicated by the scene characteristic information comprises:
in a case where the display scenery includes a physical building and the virtual sculpture is a virtual animal sculpture, determining, based on structural characteristics of the physical building, a walking path along which the virtual animal sculpture walks along a ridge of the physical building; and
wherein generating, based on the pose change information, display animation data for displaying the virtual sculpture in cooperation with the display scenery comprises:
generating, based on the determined walking path, display animation data of the virtual animal sculpture walking along the ridge of the physical building.
5. The method according to claim 4, wherein generating the display animation data of the virtual animal sculpture walking along the ridge of the physical building based on the determined walking path comprises:
generating the display animation data of the virtual animal sculpture walking along the ridge of the physical building based on the determined walking path and walking posture data corresponding to the virtual animal sculpture.
6. The method according to any one of claims 1-5, wherein determining a matching virtual sculpture based on the physical sculpture comprises:
determining a sculpture type of the physical sculpture based on the real scene image; and
determining the virtual sculpture matching the physical sculpture based on the sculpture type of the physical sculpture and at least one pre-stored virtual sculpture corresponding to each sculpture type.
7. The method according to claim 6, wherein, when there are a plurality of virtual sculptures corresponding to each sculpture type, determining the virtual sculpture matching the physical sculpture based on the sculpture type of the physical sculpture and the plurality of pre-stored virtual sculptures corresponding to each sculpture type comprises:
extracting target characteristic information of the physical sculpture from the real scene image; and
determining the virtual sculpture matching the physical sculpture based on the target characteristic information of the physical sculpture and pre-stored characteristic information of each virtual sculpture corresponding to the sculpture type.
8. An apparatus for determining a presentation animation, comprising:
an extraction module, configured to extract, when it is detected that a real scene image captured by an augmented reality (AR) device includes a physical sculpture, scene characteristic information corresponding to the physical sculpture in the real scene image;
a determining module, configured to determine a matching virtual sculpture based on the physical sculpture and generate display animation data of the virtual sculpture based on the extracted scene characteristic information; and
a display module, configured to display, through the AR device and based on the display animation data of the virtual sculpture, a display animation effect of the virtual sculpture blended into the real scene.
9. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is operating, the machine-readable instructions when executed by the processor performing the steps of the method of determining a presentation animation of any one of claims 1 to 7.
10. A computer-readable storage medium, having stored thereon a computer program for performing, when being executed by a processor, the steps of the method for determining a presentation animation as claimed in any one of claims 1 to 7.
CN202010496496.1A 2020-06-03 2020-06-03 Method and device for determining display animation, electronic equipment and storage medium Pending CN111583421A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010496496.1A CN111583421A (en) 2020-06-03 2020-06-03 Method and device for determining display animation, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111583421A true CN111583421A (en) 2020-08-25

Family

ID=72127351


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190065027A1 (en) * 2017-08-31 2019-02-28 Apple Inc. Systems, Methods, and Graphical User Interfaces for Interacting with Augmented and Virtual Reality Environments
CN110716645A (en) * 2019-10-15 2020-01-21 北京市商汤科技开发有限公司 Augmented reality data presentation method and device, electronic equipment and storage medium
CN110738737A (en) * 2019-10-15 2020-01-31 北京市商汤科技开发有限公司 AR scene image processing method and device, electronic equipment and storage medium
CN110827376A (en) * 2018-08-09 2020-02-21 北京微播视界科技有限公司 Augmented reality multi-plane model animation interaction method, device, equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination