WO2023051340A1 - Animation display method, apparatus and device - Google Patents


Info

Publication number
WO2023051340A1
WO2023051340A1 (PCT/CN2022/120159, CN2022120159W)
Authority
WO
WIPO (PCT)
Prior art keywords
trajectory
target
terminal device
segment
parameters
Prior art date
Application number
PCT/CN2022/120159
Other languages
English (en)
French (fr)
Inventor
石盛传
王奥宇
骆博文
包泽华
Original Assignee
Beijing ByteDance Network Technology Co., Ltd. (北京字节跳动网络技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co., Ltd. (北京字节跳动网络技术有限公司)
Priority to US18/571,129 (published as US20240282031A1)
Publication of WO2023051340A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality

Definitions

  • the present application relates to the field of computers, in particular to an animation display method, device and equipment.
  • Augmented Reality (AR) technology is a technology that ingeniously integrates virtual information with the real world.
  • the virtual object can be simulated to obtain the animation corresponding to the virtual object, and the animation obtained by simulation can be superimposed with the picture in the real world for display.
  • the images appearing in the user's field of vision include not only real-world pictures, but also animations corresponding to virtual objects, so that the user can see both the virtual object and the real world.
  • Because AR technology offers strong interactivity and a good interactive experience, it has been widely used in many fields.
  • the interaction between virtual objects and real objects in the real world can be simulated.
  • a display effect of a virtual object colliding with a real object during motion can be realized.
  • the combination and interaction between the virtual world and the real world can be achieved through AR technology, which can bring better display effects.
  • embodiments of the present application provide an animation display method and device.
  • the embodiment of the present application provides an animation display method, the method comprising:
  • the target trajectory segment is a trajectory segment that the terminal device passes through during movement
  • the trajectory parameter set includes multiple sets of trajectory parameters
  • the trajectory parameters are collected while the terminal device moves along the target trajectory segment
  • the animation corresponding to the target model is displayed at the display position corresponding to the target trajectory segment on the terminal device.
  • the target model is determined according to the motion parameters.
  • the motion parameters are determined according to the trajectory parameter set.
  • the motion parameters reflect the motion state characteristics of the terminal device in the target trajectory segment.
  • the embodiment of the present application provides an animation display device, the device includes:
  • the acquiring unit is configured to acquire a trajectory parameter set corresponding to a target trajectory segment, where the target trajectory segment is a trajectory segment passed by the terminal device during movement, the trajectory parameter set includes multiple sets of trajectory parameters, and the trajectory parameters are collected while the terminal device moves along the target trajectory segment;
  • a display unit configured to display an animation corresponding to a target model at a display position corresponding to the target trajectory segment on the terminal device, where the target model is determined according to motion parameters, the motion parameters are determined according to the trajectory parameter set, and the motion parameters reflect the motion state characteristics of the terminal device in the target trajectory segment.
  • the embodiment of the present application provides an electronic device, the electronic device including: one or more processors; and a memory for storing one or more programs, which, when executed by the one or more processors, cause the one or more processors to implement the animation display method described in the aforementioned first aspect.
  • the embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and when the program is executed by a processor, the animation display method as described in the aforementioned first aspect is implemented.
  • the user may first move the terminal device along the target track segment.
  • a trajectory parameter set collected by the terminal device in the target trajectory segment may be acquired.
  • a set of trajectory parameters may include multiple sets of trajectory parameters.
  • the target model can be displayed at the display position corresponding to the target track segment on the terminal device. In this way, since the display position of the target model is determined according to the segment of the target track moved by the terminal device, and the terminal device will be constrained by real objects during the movement process, the animation of the target model will also be constrained by real objects. In this way, without modeling the real object, a corresponding virtual model can be created based on the real object, realizing the interaction between the virtual object and the real object.
  • the target model is determined according to the trajectory parameters of the target trajectory segment, the user only needs to adjust the movement of the terminal device in the target trajectory segment to adjust the animation display effect corresponding to the target trajectory segment. In this way, the free choice of the virtual animation by the user is realized, and the user experience is improved.
  • FIG. 1 is a schematic diagram of an application scenario of an AR technology provided by the present application
  • Fig. 2 is a schematic flow chart of an animation display method provided by the present application.
  • FIG. 3 is a schematic diagram of a display interface of a terminal device provided in an embodiment of the present application.
  • Fig. 4 is a schematic structural diagram of an animation display device provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the term “comprise” and its variations are open-ended, ie “including but not limited to”.
  • the term “based on” is “based at least in part on”.
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment”; the term “some embodiments” means “at least some embodiments.” Relevant definitions of other terms will be given in the description below.
  • AR technology can combine the virtual world and the real world, and has been widely used in many fields. For example, through AR technology, the display effect of a virtual villain walking on a desktop can be realized, and the display effect of a virtual villain hitting a wall and unable to move forward can also be realized. In this way, to a certain extent, the "dimension wall" between the virtual world and the real world is broken, and it has a good display effect, and has been widely used in many fields.
  • the real objects need to be modeled.
  • the interaction between the virtual object and the real object can be realized. That is to say, in order to create a virtual object associated with a real object, a virtual model corresponding to the real object needs to be established first.
  • the real object "wall” can be modeled first, the virtual object corresponding to the wall can be created in the virtual environment, and the virtual object corresponding to the wall can be set.
  • the virtual object corresponding to the wall may not be displayed.
  • the picture seen by the user includes the virtual villain and the real wall.
  • the virtual villain moves to the position corresponding to the real wall, it will collide with the virtual object corresponding to the wall and cannot move forward. In this way, the display effect of "the virtual villain cannot move forward after hitting a wall” is realized.
  • FIG. 1 the figure is a schematic diagram of an application scenario of an AR technology provided by an embodiment of the present application.
  • the real world includes a plane 11 and an object 12 placed on the plane 11 .
  • Relevant software of AR technology is deployed on the terminal device 13, which can realize the interactive display effect between the real world and the virtual object.
  • the display effect that the terminal device needs to achieve is "track 21 - track 22 - track 23", which is a track that is in contact with the object 12 and the plane 11 .
  • the virtual model corresponding to the plane 11 and the virtual model corresponding to the object 12 can be respectively established.
  • models corresponding to track 21, track 22, and track 23 can be established respectively, and the corresponding animations can be displayed at the display positions corresponding to the models.
  • the model corresponding to track 21 can be deployed on the upper surface of the virtual model corresponding to the object 12, the model corresponding to track 22 can be deployed on the right side of the virtual model corresponding to the object 12, and the model corresponding to track 23 can be deployed at the corresponding position of the virtual model corresponding to the plane 11.
  • the user can first move the terminal device along the motion track 31, and the terminal device collects track parameters during the movement.
  • the target models corresponding to the respective track segments of the motion track 31 can be determined according to the track parameters, and the animation of the target model is displayed at the corresponding display position on the terminal device, thereby displaying the track 21, the track 22 and the track 23 on the terminal device.
  • Fig. 2 is a schematic flow chart of an animation display method provided by the embodiment of the present application.
  • the animation display method provided by the embodiment of the present application can be applied to an application scenario where the user creates an AR model through a terminal device and displays the corresponding animation.
  • This method can be implemented by an AR unit installed in the terminal device, which can be realized by means of software, with its code integrated into the memory of the terminal device and executed by the processing unit of the terminal device.
  • the method may also be executed by a server or other devices with data processing functions. The method is described below by taking the processing unit of the terminal device as an example. As shown in Figure 2, the method specifically includes the following steps:
  • the terminal device may first be moved along the target track segment. During the process of the terminal device moving at the position corresponding to the target trajectory segment, the terminal device may collect multiple sets of trajectory parameters to obtain a trajectory parameter set.
  • each set of trajectory parameters may include the collection time and collection location of the set of trajectory parameters.
  • the trajectory parameters may also include the device orientation corresponding to the collection time.
  • each trajectory parameter and its collection method are firstly introduced.
  • the terminal device can perform multiple data collections, and each collection can collect multiple parameters as a set of trajectory parameters.
  • the terminal device can collect data according to time intervals, that is, a set of trajectory parameters can be collected every preset time interval.
  • the terminal device can collect according to the distance, and each time the terminal device moves a preset distance, it can collect a set of trajectory parameters.
  • the terminal device may record the time of collecting the group of trajectory parameters as the collection time.
  • the terminal device may read the time of collecting trajectory parameters from the system as the collection time in the trajectory parameters.
  • the terminal device may use the time stamp when the trajectory parameter is collected as the collection time in the trajectory parameter.
  • the terminal device can record its current location.
  • the terminal device may establish a coordinate system that is fixedly connected to the earth, and record its current location as the origin of the coordinate system.
  • the terminal device can determine its movement through its own sensors such as a gyroscope and an accelerometer: the acceleration measured by the accelerometer can be used to determine the distance the terminal device has moved, and the angular velocity measured by the gyroscope can be used to determine the angle through which the terminal device has rotated. It can be seen that, by combining the measurements of the gyroscope and the accelerometer, the position and orientation of the terminal device at any moment during the movement can be determined.
  • the collection position of the terminal device and the device orientation corresponding to the collection time can be determined according to the measurement results of the gyroscope and the accelerometer.
  • the terminal device may also determine the collection location and device orientation corresponding to the collection time in other ways.
  • the collection position may be represented by three-dimensional coordinates
  • the orientation of the device may be represented by a three-dimensional vector or a spatial quaternion.
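To make the data layout above concrete, the following sketch represents one set of trajectory parameters as described: a collection time, a three-dimensional collection position, and a device orientation stored as a spatial quaternion. The class and field names are illustrative assumptions, not identifiers from the patent.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class TrajectorySample:
    """One set of trajectory parameters collected by the terminal device.

    Hypothetical layout: collection time, 3-D collection position, and
    device orientation as a (w, x, y, z) quaternion.
    """
    t: float                                        # collection time, seconds
    position: Tuple[float, float, float]            # collection position, 3-D coordinates
    orientation: Tuple[float, float, float, float]  # device orientation quaternion

# A trajectory parameter set is then an ordered list of such samples,
# e.g. one sample per fixed time interval or per fixed moved distance.
sample = TrajectorySample(t=0.0, position=(0.0, 0.0, 0.0),
                          orientation=(1.0, 0.0, 0.0, 0.0))
print(sample.t, sample.position)
```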
  • the target trajectory segment can be freely selected by the user according to the model he wants to create. For example, assuming that the user wants to create a "ladder" type model that moves from the ground to the desktop, then the user can hold the terminal device to move from the ground to the desktop. Assuming that the user wants to create a "grass” type model that moves from one corner of the desktop to the other, the user can hold the terminal device and move from one corner of the desktop to the other.
  • the terminal device is moving along the target track segment, the user can adjust parameters such as the terminal's moving speed and device orientation according to animation generation rules.
  • animation generation rules please refer to the following text, so I won’t go into details here.
  • the trajectory of the terminal device's movement may be referred to as a target trajectory, and the target trajectory may include one or more target trajectory segments.
  • within each target trajectory segment, the motion state characteristics of the terminal device basically remain unchanged.
  • the user can manually divide the target trajectory into multiple target trajectory segments.
  • the user can trigger the segmentation instruction by triggering the segmentation control displayed on the terminal device.
  • the terminal device may divide the trajectory before receiving the segmentation instruction into a target trajectory segment. In this way, the user can split the movement track of the terminal device into multiple target track segments by triggering the segment instruction through the segment control. Since the target models corresponding to different target trajectory segments are independent of each other, the final target trajectory can be composed of multiple target models, realizing the combination of various animations.
  • the user may trigger a first instruction, thereby controlling the terminal device to start collecting trajectory parameters.
  • the user may trigger a second instruction, thereby controlling the terminal device to stop collecting trajectory parameters.
  • the first instruction and the second instruction may be the aforementioned segmentation instructions, or may be instructions to start collecting or stop collecting.
  • the trajectory parameters collected by the terminal device between the time when the first instruction is acquired and the time when the second instruction is acquired are the trajectory parameter sets of the target trajectory segment.
  • the terminal device may store its position at the time the first instruction is obtained, and determine each collection position from it together with motion information. For example, assuming the trajectory parameter set includes a first trajectory parameter, and the first trajectory parameter includes a first position, the terminal device may first determine its motion information through the gyroscope and the accelerometer. The motion information indicates the distance and direction the terminal device moves while travelling from the second position to the first position. The terminal device may then determine the first position based on the second position, combined with the obtained motion information.
  • the trajectory parameter set of the target trajectory may also be collected first, and the corresponding motion parameters are determined according to the trajectory parameter set to divide the target trajectory into multiple target trajectory segments. Specifically, since each target trajectory segment corresponds to a target model, the motion state characteristics of the terminal device in each target trajectory segment should remain unchanged. Therefore, according to the motion parameters, the change law of the motion state characteristics of the terminal device on the target trajectory can be determined, so that the target trajectory can be divided into multiple target trajectory segments.
  • motion parameters please refer to the following text, so I won’t go into details here.
  • the accuracy of sensors such as the accelerometer and gyroscope of the terminal device may be limited, so that directly collected trajectory parameters may not be accurate enough. Therefore, in order to improve the accuracy of the target trajectory segment, data cleaning can be performed on the trajectory parameter set.
  • five-point smoothing can be used to clean the trajectory parameters. For example, suppose the i-th trajectory parameter in the trajectory parameter set is denoted by p_i, and the i-th trajectory parameter after data cleaning is denoted by p'_i. Then the process of data cleaning can be as follows:
  • n is the total number of trajectory parameters included in the trajectory parameter set.
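The cleaning step can be sketched as follows. The patent does not fix the exact window weights, so an equal-weight five-point moving average is assumed here, with the window shrunk near the boundaries so that every p'_i stays defined.

```python
def five_point_smooth(values):
    """Smooth a 1-D sequence with an equal-weight five-point window.

    Each interior value p_i is replaced by the mean of p_{i-2}..p_{i+2};
    near the boundaries the window is shrunk.  (A sketch: equal weights
    are an assumption, not specified by the patent.)
    """
    n = len(values)
    smoothed = []
    for i in range(n):
        lo = max(0, i - 2)
        hi = min(n, i + 3)
        window = values[lo:hi]
        smoothed.append(sum(window) / len(window))
    return smoothed

# A single noisy spike in the middle of the sequence is damped:
print(five_point_smooth([1.0, 1.0, 10.0, 1.0, 1.0]))
```

In practice each coordinate of the collection positions would be smoothed independently.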
  • the animation generation method provided by the embodiment of the present application may be executed by a terminal device or by a server. If the animation generation method provided by the embodiment of the present application is executed by the terminal device, the terminal device can first store the trajectory parameter set collected during the movement along the target trajectory segment, and extract the trajectory parameter set from memory when generating the animation corresponding to the target trajectory segment. If the animation generation method provided in the embodiment of the present application is executed by a server, the terminal device may first store the trajectory parameter set collected during the movement along the target trajectory segment; when the animation corresponding to the target trajectory segment needs to be generated, the server may obtain the trajectory parameter set from the terminal device. For example, the server may obtain the trajectory parameter set by sending a request to the terminal device, or the terminal device may actively push the trajectory parameter set to the server.
  • the target model corresponding to the target trajectory segment can be determined according to the trajectory parameter set. Specifically, the motion parameters may be determined first according to the trajectory parameter set, and then the target model corresponding to the target trajectory segment may be determined according to the motion parameters. Wherein, the motion parameter is used to reflect the motion state characteristics of the terminal device in the target trajectory segment.
  • the target model is a virtual model to be displayed on the target track segment, for example, it may include virtual models such as a track model, a grass model, a bridge model, and a ladder model.
  • the animation display method provided by the embodiment of the present application can be applied to a terminal device or a server.
  • the above two processes are respectively introduced by taking the terminal device executing the animation display method provided by the embodiment of the present application as an example.
  • the terminal device may first determine the movement parameters of the terminal device in the target trajectory segment according to the trajectory parameter set. Specifically, the terminal device can analyze the movement process of the terminal device in the target trajectory segment according to the trajectory parameter set, so as to determine the movement parameters of the target trajectory segment.
  • the motion parameter may include any one or more of average motion speed, average trajectory direction, average device orientation, cumulative speed change parameter, and direction cumulative change parameter. Methods for determining these motion parameters by the terminal device are respectively introduced below.
  • the motion parameter may include an average motion speed.
  • the average movement speed reflects the average speed of the terminal device in the target track segment.
  • the target trajectory segment can be divided into multiple sub-trajectory segments according to the trajectory parameters.
  • the starting point of each sub-trajectory segment corresponds to a set of trajectory parameters, and the end point of each sub-trajectory segment also corresponds to a set of trajectory parameters.
  • the interior of a sub-trajectory segment does not include any collection points of trajectory parameters.
  • the moving speed of the terminal device in each of the multiple sub-trajectory segments may be calculated respectively according to the collection position and collection time of the starting point of the sub-trajectory segment and the collection position and collection time of the end point of the sub-trajectory segment. After obtaining the moving speed of the terminal device in each sub-track segment, the moving speeds of multiple sub-track segments can be averaged to obtain the average moving speed of the terminal device.
  • a_1 represents the average movement speed of the terminal device in the target trajectory segment
  • n is the number of trajectory parameters in the trajectory parameter set
  • p_i represents the collection position included in the i-th trajectory parameter in the trajectory parameter set
  • t_i represents the collection time included in the i-th trajectory parameter in the trajectory parameter set.
  • the collection position p_i can be the three-dimensional coordinates of the terminal device
  • two adjacent collection positions can be subtracted and the modulus taken to obtain the straight-line distance between the two collection positions.
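The average-speed computation described above can be sketched as follows: each pair of adjacent samples bounds one sub-trajectory segment, the speed of each sub-segment is the straight-line distance between its two collection positions divided by the elapsed time, and a_1 is the mean of those speeds. The patent describes this only in words, so the exact formula is an assumption.

```python
import math

def average_speed(positions, times):
    """Average motion speed a_1 of the terminal device over a target
    trajectory segment.

    positions: list of (x, y, z) collection positions p_i
    times:     list of matching collection times t_i (seconds)
    """
    speeds = []
    for i in range(len(positions) - 1):
        # |p_{i+1} - p_i|: straight-line distance between adjacent samples
        diff = [b - a for a, b in zip(positions[i], positions[i + 1])]
        dist = math.sqrt(sum(d * d for d in diff))
        speeds.append(dist / (times[i + 1] - times[i]))
    return sum(speeds) / len(speeds)

# A device moving 1 unit per second along x:
print(average_speed([(0, 0, 0), (1, 0, 0), (2, 0, 0)], [0.0, 1.0, 2.0]))  # 1.0
```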
  • the motion parameter may include an average trajectory direction.
  • the average trajectory direction is the direction from the starting point of the target trajectory segment to the end point of the target trajectory segment, that is, the overall orientation of the target trajectory segment.
  • the terminal device can first extract, from the trajectory parameter set, the trajectory parameters corresponding to the starting point of the target trajectory segment and the trajectory parameters corresponding to the end point of the target trajectory segment, and compute the average trajectory direction from the collection positions corresponding to the starting point and the end point of the target trajectory segment.
  • the collection position in the trajectory parameters is expressed in the form of three-dimensional coordinates
  • the collection position corresponding to the starting point of the target trajectory segment and the collection position corresponding to the end point of the target trajectory segment can be converted into three-dimensional vectors respectively.
  • the two three-dimensional vectors can be subtracted and the result normalized; the obtained result is a unit vector from the starting point of the target trajectory segment to the end point of the target trajectory segment, that is, the average trajectory direction.
  • a_2 represents the average trajectory direction of the target trajectory segment
  • p_begin represents the collection position corresponding to the starting point of the target trajectory segment
  • p_end represents the collection position corresponding to the end point of the target trajectory segment
  • the meanings of the remaining symbols are the same as above, I won't go into details here.
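The subtraction-and-normalization step above amounts to the following sketch: a_2 is the unit vector from the collection position of the segment's starting point to that of its end point.

```python
import math

def average_trajectory_direction(p_begin, p_end):
    """Average trajectory direction a_2: the normalized difference
    p_end - p_begin, i.e. a unit vector from the starting point of the
    target trajectory segment to its end point."""
    diff = [e - b for b, e in zip(p_begin, p_end)]   # p_end - p_begin
    norm = math.sqrt(sum(d * d for d in diff))
    return tuple(d / norm for d in diff)

# Start at the origin, end at (3, 0, 4): a unit vector of length 1.
print(average_trajectory_direction((0, 0, 0), (3, 0, 4)))  # (0.6, 0.0, 0.8)
```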
  • the motion parameter further includes an average device orientation
  • the average device orientation reflects an average direction of the terminal device in the target track segment.
  • according to the trajectory parameter set, the terminal device can split the target trajectory segment into multiple sub-trajectory segments, calculate the average orientation of the terminal device in each sub-trajectory segment according to the device orientation, and then average the average orientations of the multiple sub-trajectory segments to obtain the average device orientation.
  • a_3 represents the average device orientation of the terminal device in the target trajectory segment
  • n is the number of trajectory parameters in the trajectory parameter set
  • q_i represents the device orientation included in the i-th trajectory parameter in the trajectory parameter set
  • toEuler represents the calculation process of converting quaternions to Euler angles; the meanings of the remaining symbols are the same as above and will not be repeated here.
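The averaging of device orientations can be sketched as follows. The toEuler conversion here assumes the common aerospace (Z-Y-X) convention for a (w, x, y, z) unit quaternion, which the patent does not specify; note also that averaging Euler angles component-wise is only well behaved for small rotations.

```python
import math

def to_euler(q):
    """Convert a unit quaternion (w, x, y, z) to Euler angles
    (roll, pitch, yaw) in radians -- the toEuler step, assuming the
    standard Z-Y-X convention."""
    w, x, y, z = q
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    pitch = math.asin(max(-1.0, min(1.0, 2 * (w * y - z * x))))
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return (roll, pitch, yaw)

def average_device_orientation(quaternions):
    """Average device orientation a_3: convert each sampled orientation
    q_i to Euler angles, then average the angles component-wise."""
    eulers = [to_euler(q) for q in quaternions]
    n = len(eulers)
    return tuple(sum(e[k] for e in eulers) / n for k in range(3))

# Two identity-orientation samples average to zero rotation:
print(average_device_orientation([(1, 0, 0, 0), (1, 0, 0, 0)]))  # (0.0, 0.0, 0.0)
```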
  • the motion parameter may also include a cumulative speed change parameter.
  • the cumulative speed change parameter reflects the velocity fluctuation of the terminal device in the target trajectory segment.
  • according to the trajectory parameter set, the terminal device can split the target trajectory segment into multiple sub-trajectory segments, calculate the velocity of the terminal device at each collection position, then calculate the average acceleration corresponding to the sub-trajectory segment between two adjacent collection positions according to their velocities, and finally average the multiple average accelerations to obtain the cumulative speed change parameter.
  • v_i represents the velocity of the terminal device at the i-th trajectory parameter
  • a_4 represents the cumulative speed change parameter of the terminal device in the target trajectory segment
  • the meanings of the remaining symbols are the same as those described above and will not be repeated here.
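The cumulative speed change parameter described above can be sketched like this: a per-sub-segment speed is computed from adjacent collection positions, the average acceleration between adjacent speeds is taken, and a_4 is the mean of those accelerations. The exact differencing scheme is an assumption.

```python
import math

def speed_change_parameter(positions, times):
    """Cumulative speed change parameter a_4: mean of the average
    accelerations between adjacent sub-trajectory segments, reflecting
    the velocity fluctuation of the terminal device."""
    speeds = []
    for i in range(len(positions) - 1):
        dist = math.dist(positions[i], positions[i + 1])
        speeds.append(dist / (times[i + 1] - times[i]))   # v_i per sub-segment
    accels = [abs(speeds[i + 1] - speeds[i]) / (times[i + 1] - times[i])
              for i in range(len(speeds) - 1)]
    return sum(accels) / len(accels)

# Constant-speed motion has zero speed fluctuation:
print(speed_change_parameter([(0, 0, 0), (1, 0, 0), (2, 0, 0)], [0.0, 1.0, 2.0]))  # 0.0
```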
  • the motion parameter includes a direction cumulative change parameter.
  • the direction cumulative change parameter reflects the change of the orientation of the terminal device in the target track segment.
  • according to the trajectory parameter set, the terminal device can split the target trajectory segment into multiple sub-trajectory segments, calculate the rotation angle and rotation speed of the terminal device in each sub-trajectory segment according to the device orientation, and then average the rotation speeds of the multiple sub-trajectory segments to obtain the direction cumulative change parameter.
  • a_5 represents the direction cumulative change parameter of the terminal device in the target trajectory segment, and the meanings of the remaining symbols are the same as those described above and will not be repeated here.
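The direction cumulative change parameter can be sketched as follows: the rotation angle between the two sampled device orientations of each sub-trajectory segment is obtained from the quaternion inner product, dividing by the elapsed time gives a rotation speed, and a_5 averages those speeds. The quaternion-angle formula is a standard identity, not taken from the patent.

```python
import math

def direction_change_parameter(quaternions, times):
    """Direction cumulative change parameter a_5: average rotation speed
    over the sub-trajectory segments, where the relative rotation angle
    between unit quaternions q_i and q_{i+1} is 2*acos(|<q_i, q_{i+1}>|)."""
    rates = []
    for i in range(len(quaternions) - 1):
        dot = abs(sum(a * b for a, b in zip(quaternions[i], quaternions[i + 1])))
        angle = 2.0 * math.acos(min(1.0, dot))   # relative rotation, radians
        rates.append(angle / (times[i + 1] - times[i]))
    return sum(rates) / len(rates)

# A device that never rotates has zero cumulative direction change:
print(direction_change_parameter([(1, 0, 0, 0), (1, 0, 0, 0)], [0.0, 1.0]))  # 0.0
```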
  • the five motion parameters given above are only examples, and it does not mean that the animation display method provided in the embodiment of the present application only includes these five motion parameters.
  • technicians can freely select any one or more motion parameters according to the actual situation, or add other motion parameters that can reflect the motion state characteristics of the terminal device in the target trajectory segment.
  • the terminal device may determine the target model corresponding to the target trajectory segment according to the motion parameters.
  • the terminal device may select a target model corresponding to the motion parameters from a model library according to the motion parameters.
  • the model library may include multiple models, and each model corresponds to an animation display effect.
  • the animation generation rules used to generate the target model can be displayed to the user.
  • Animation generation rules are used to indicate the correspondence between motion state features and target models.
  • the user can determine the motion state characteristics corresponding to the first model according to the animation generation rules, and move the terminal device according to the motion state characteristics.
  • the motion parameters can be obtained according to the trajectory parameter set, the motion state characteristics of the terminal device in the target trajectory segment can be deduced from them, and, combined with the animation generation rules, the first model that the user intends to generate at the target trajectory segment can be determined.
  • the animation generation rules can be defined by technicians according to the actual situation.
  • the motion parameters may include any one or more of average motion speed, average trajectory direction, average device orientation, speed cumulative change parameter, and direction cumulative change parameter.
  • the methods for the terminal device to determine the target model according to these motion parameters are respectively introduced below.
  • the motion parameter may include an average motion speed.
  • the terminal device can compare the average motion speed with an average speed threshold. If the average motion speed is greater than the average speed threshold, it means the terminal device moves quickly in the target trajectory segment, and an accelerated-motion model can be selected from the model library as the target model.
  • the acceleration motion model may include models such as an acceleration bar and a conveyor belt.
  • the motion parameter may include an average trajectory direction.
  • the terminal device may determine whether the vertical component of the average trajectory direction is greater than the rising threshold. If the vertical component of the average trajectory direction is less than or equal to the rising threshold, it means that the rising range of the target trajectory segment is small, then the horizontal movement model can be selected from the model library as the target model corresponding to the target trajectory segment. If the component of the average trajectory direction in the vertical direction is greater than the rising threshold, it indicates that the target trajectory segment has a large rise, then the vertical movement model can be selected from the model library as the target model corresponding to the target trajectory segment.
  • the rising threshold may, for example, be set to a value indicating that the rising angle of the target trajectory segment is greater than 45°.
  • Horizontal movement models can include models such as horizontal roads, gentle slope roads, and horizontal acceleration belts.
  • Vertical movement models can include models such as ladders, lifts, and climbing ropes.
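The vertical-component test above can be sketched as follows, assuming a z-up coordinate system. The 45° rising angle follows the example in the text; the model names are illustrative assumptions.

```python
import math

# Vertical component of a unit direction whose rising angle is exactly 45°.
RISING_THRESHOLD = math.sin(math.radians(45))

def pick_direction_model(start, end):
    """Choose a model from the average trajectory direction (start -> end).

    If the vertical (z) component of the unit direction exceeds the rising
    threshold, the segment rises steeply and a vertical-movement model is
    chosen; otherwise a horizontal-movement model is chosen.
    """
    dx, dy, dz = (e - s for e, s in zip(end, start))
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    if length == 0:
        return "horizontal_road"
    if dz / length > RISING_THRESHOLD:
        return "ladder"             # vertical-movement model (illustrative)
    return "horizontal_road"        # horizontal-movement model (illustrative)
```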
  • the motion parameters may include the average trajectory direction and the average device orientation.
  • the animation generation rules may include "If you want to generate an animation corresponding to a preset model, keep the terminal device in a tilted state, and move the corresponding distance".
  • the terminal device may determine whether the included angle between the average trajectory direction and the average device orientation is greater than an angle threshold. If the angle between the average trajectory direction and the average device orientation is greater than the angle threshold, it means that the user tilted the terminal device when moving the terminal device, indicating that the user wants to display the animation corresponding to the preset model on the target trajectory segment. Then the terminal device may determine that the target model corresponding to the target track segment is the preset model.
  • the preset model may include models such as a bridge model and a speed bump model; the angle threshold can be set as required.
  • the motion parameter may include a speed accumulation change parameter.
  • the terminal device can compare the speed cumulative change parameter with a speed fluctuation threshold. If the speed cumulative change parameter is greater than the speed fluctuation threshold, it means that the speed of the terminal device fluctuates greatly in the target trajectory segment, and a speed fluctuation model can be selected from the model library as the target model.
  • the velocity fluctuation model may include models such as a road model with barricades.
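The speed cumulative change parameter is not defined precisely in the text. One plausible reading, sketched below, sums the absolute changes in instantaneous speed between consecutive sample pairs; the threshold value and model names are illustrative assumptions.

```python
import math

def cumulative_speed_change(samples):
    """Sum of absolute changes in instantaneous speed between consecutive
    (time, position) samples — a simple speed-fluctuation measure."""
    speeds = []
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt > 0:
            speeds.append(math.dist(p0, p1) / dt)
    return sum(abs(b - a) for a, b in zip(speeds, speeds[1:]))

SPEED_FLUCTUATION_THRESHOLD = 1.0  # illustrative value

def pick_fluctuation_model(samples):
    # Large accumulated speed changes map to a speed-fluctuation model.
    if cumulative_speed_change(samples) > SPEED_FLUCTUATION_THRESHOLD:
        return "road_with_barricades"  # speed-fluctuation model (illustrative)
    return "plain_road"
```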
  • the motion parameter may include a direction cumulative change parameter.
  • the animation generation rule may include "if you want to generate an animation corresponding to a preset model, rotate the terminal device while moving".
  • the terminal device may determine whether the cumulative direction change parameter is greater than the direction fluctuation threshold. If the direction cumulative change parameter is greater than the direction fluctuation threshold, it means that the user is moving the terminal device while rotating it, indicating that the user wants to display the animation corresponding to the preset model on the target trajectory segment. Then the terminal device may determine that the target model corresponding to the target track segment is the preset model.
  • the preset models may include models such as bridge models and speed bump models.
  • the corresponding relationship between the motion parameters and the target model can be obtained according to animation generation rules.
  • technicians can set animation generation rules or adjust animation generation rules according to actual needs.
  • the target model corresponding to the motion parameter may be determined through semantic mapping. Specifically, correspondences between motion parameters and semantic features, and correspondences between semantic features and target models can be respectively established. Then, when determining the target model, the semantic features corresponding to the motion parameters can be determined first according to the correspondence between the motion parameters and the semantic features, and then the target model corresponding to the semantic features can be determined according to the correspondence between the semantic features and the target model. Wherein, each motion parameter may correspond to one or more semantic features. In this way, when the motion parameters include multiple parameters, they can be mapped to multiple semantic features according to the corresponding relationship, and the target model can be determined more accurately.
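The two-step semantic mapping described above might be sketched with two lookup tables, one per correspondence. All parameter, feature and model names here are illustrative assumptions; when several motion parameters are active, this sketch lets the model supported by the most semantic features win.

```python
from collections import Counter

# Correspondence between motion parameters and semantic features,
# and between semantic features and candidate models (all illustrative).
PARAM_TO_FEATURES = {
    "high_average_speed": ["fast"],
    "steep_rise": ["ascending"],
    "large_speed_fluctuation": ["bumpy"],
}
FEATURE_TO_MODELS = {
    "fast": ["acceleration_bar", "conveyor_belt"],
    "ascending": ["ladder", "lift"],
    "bumpy": ["road_with_barricades"],
}

def map_to_model(active_params):
    """Map detected motion parameters to a target model via semantic
    features; the model receiving the most feature votes is returned."""
    votes = Counter()
    for param in active_params:
        for feature in PARAM_TO_FEATURES.get(param, []):
            for model in FEATURE_TO_MODELS.get(feature, []):
                votes[model] += 1
    return votes.most_common(1)[0][0] if votes else None
```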
  • in the above, the target model is determined by the terminal device according to the trajectory parameter set.
  • the above method may also be executed by the server. After the server determines the target model, the server may send an identifier of the target model to the terminal device, so that the terminal device can display an animation effect corresponding to the target model in subsequent steps.
  • the trajectory of the terminal device may include one or more target trajectory segments, and the movement state characteristics of the terminal device in each target trajectory segment remain unchanged. Then, before determining the target model, it is possible to judge whether the target trajectory segment satisfies the trajectory generation condition according to the motion parameters.
  • the trajectory generation conditions include that the motion state characteristics of the terminal device remain unchanged within the target trajectory segment. If the motion parameters meet the trajectory generation conditions, the terminal device can proceed to determine the target model. If the motion parameters do not meet the trajectory generation conditions, the target trajectory segment can be divided into multiple target sub-trajectory segments according to the motion parameters, and the target model corresponding to each target sub-trajectory segment can be determined separately. The motion state characteristics of the terminal device remain unchanged within each target sub-trajectory segment.
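One plausible way to split a target trajectory segment into sub-trajectory segments with unchanged motion state is sketched below, using instantaneous speed as the motion state characteristic. The tolerance value is an illustrative assumption, and adjacent sub-segments share their boundary sample.

```python
import math

def split_into_subsegments(samples, speed_jump=0.5):
    """Split a list of (time, position) samples into sub-trajectory
    segments inside which the instantaneous speed stays roughly constant.

    `speed_jump` is an illustrative tolerance: a change in instantaneous
    speed larger than this starts a new sub-segment.
    """
    if len(samples) < 2:
        return [samples]
    segments, current = [], [samples[0]]
    prev_speed = None
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        speed = math.dist(p0, p1) / (t1 - t0)
        if prev_speed is not None and abs(speed - prev_speed) > speed_jump:
            segments.append(current)
            current = [(t0, p0)]      # boundary sample starts the next segment
        current.append((t1, p1))
        prev_speed = speed
    segments.append(current)
    return segments
```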
  • S203 Display an animation corresponding to the target model at a display position corresponding to the target track segment on the terminal device.
  • an animation corresponding to the target model may be displayed at a display position corresponding to the target track segment on the terminal device.
  • the terminal device may display an animation corresponding to the target model on its own display device or a display device connected to itself.
  • the terminal device may use traditional AR technology to determine the display position corresponding to the target trajectory segment, and display the animation effect corresponding to the target model at the corresponding display position.
  • the server can send the identifier of the target model, or the identifier of the animation effect corresponding to the target model, to the terminal device, so that the terminal device can display the animation corresponding to the target model on its own display device or a display device connected to it. The displayed animation may be a simple graphical representation of the model.
  • the display effect of the terminal device may be as shown in FIG. 3 .
  • the path 311 , the bridge 312 , the path 313 , the ladder 314 and the bridge 315 are virtual display effects.
  • Object 321, desktop 322, object 323, and object 324 actually exist. In this way, the display effect of the virtual object being attached to the real object is realized.
  • the user may first move the terminal device along the target track segment.
  • a trajectory parameter set collected by the terminal device in the target trajectory segment may be acquired.
  • a set of trajectory parameters may include multiple sets of trajectory parameters.
  • the target model can be displayed at the display position corresponding to the target track segment on the terminal device. In this way, since the display position of the target model is determined according to the segment of the target track moved by the terminal device, and the terminal device will be constrained by real objects during the movement process, the animation of the target model will also be constrained by real objects. In this way, without modeling the real object, a corresponding virtual model can be created based on the real object, realizing the interaction between the virtual object and the real object.
  • in addition, since the target model is determined according to the trajectory parameters of the target trajectory segment, the user only needs to adjust how the terminal device moves in the target trajectory segment to adjust the animation display effect corresponding to the target trajectory segment. In this way, the user can freely choose the virtual animation, which improves the user experience.
  • FIG. 4 is a schematic structural diagram of an animation display device provided by an embodiment of the present application. This embodiment can be applied to a scene where an AR effect is displayed from a terminal device.
  • the animation display device 400 specifically includes an acquiring unit 410 and a display unit 420.
  • the acquiring unit 410 is configured to acquire a trajectory parameter set corresponding to a target trajectory segment, the target trajectory segment is a trajectory segment passed by the terminal device during movement, the trajectory parameter set includes multiple sets of trajectory parameters, and the trajectory The parameters are collected by the terminal device during the movement of the target trajectory segment.
  • the display unit 420 is configured to display the animation corresponding to the target model at the display position corresponding to the target trajectory segment on the terminal device, the target model is determined according to the motion parameters, and the motion parameters are determined according to the trajectory parameter set, so The motion parameters reflect the motion state characteristics of the terminal device in the target trajectory segment.
  • the animation display device provided in the embodiment of the present application can execute the animation display method provided in any embodiment of the application, and has corresponding functional units and beneficial effects for executing the animation display method.
  • FIG. 5 shows a schematic structural diagram of an electronic device (such as a terminal device or a server running a software program) 500 suitable for implementing an embodiment of the present disclosure.
  • the terminal device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players) and vehicle-mounted terminals (such as car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 5 is only an example, and should not limit the functions and scope of use of the embodiments of the present disclosure.
  • as shown in FIG. 5, an electronic device 500 may include a processing device (such as a central processing unit or a graphics processing unit) 501, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage device 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data necessary for the operation of the electronic device 500.
  • the processing device 501 , ROM 502 and RAM 503 are connected to each other via a bus 504 .
  • An input/output (I/O) interface 505 is also connected to the bus 504.
  • the following devices can be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer and gyroscope; output devices 507 including, for example, a liquid crystal display (LCD), speaker and vibrator; storage devices 508 including, for example, a magnetic tape or a hard disk; and a communication device 509.
  • the communication means 509 may allow the electronic device 500 to perform wireless or wired communication with other devices to exchange data. While FIG. 5 shows electronic device 500 having various means, it is to be understood that implementing or having all of the means shown is not a requirement. More or fewer means may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product including a computer program carried on a non-transitory computer readable medium, the computer program including program code for executing the method shown in FIG. 2 .
  • the computer program may be downloaded and installed from a network via communication means 509 , or from storage means 508 , or from ROM 502 .
  • the processing device 501 executes the above-mentioned functions defined in the methods of the embodiments of the present disclosure.
  • the electronic device provided by the embodiments of the present disclosure and the animation display method provided by the above embodiments belong to the same inventive concept, and have the same beneficial effects as the above embodiments.
  • An embodiment of the present disclosure provides a computer storage medium, on which a computer program is stored, and when the program is executed by a processor, the animation display method provided in the foregoing embodiments is implemented.
  • the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the above two.
  • a computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted by any appropriate medium, including but not limited to wires, optical cables, RF (radio frequency), etc., or any suitable combination of the above.
  • the client and the server can communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communications in any form or medium (e.g., a communication network).
  • Examples of communication networks include local area networks ("LANs"), wide area networks ("WANs"), internetworks (e.g., the Internet) and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any network currently known or developed in the future.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or may exist independently without being incorporated into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: acquire a trajectory parameter set corresponding to a target trajectory segment, where the target trajectory segment is a trajectory segment that the terminal device passes through during movement, the trajectory parameter set includes multiple groups of trajectory parameters, and the trajectory parameters are collected by the terminal device during its movement along the target trajectory segment; and display an animation corresponding to a target model at a display position corresponding to the target trajectory segment on the terminal device, where the target model is determined according to motion parameters, the motion parameters are determined according to the trajectory parameter set, and the motion parameters reflect the motion state characteristics of the terminal device in the target trajectory segment.
  • computer program code for carrying out the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as "C" or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present disclosure may be implemented by software or by hardware, and the name of a unit does not, in some cases, constitute a limitation on the unit itself. The functions described above may also be performed, at least in part, by hardware logic components, for example Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs) and Complex Programmable Logic Devices (CPLDs).
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatus, or devices, or any suitable combination of the foregoing.
  • machine-readable storage media would include one or more wire-based electrical connections, portable computer discs, hard drives, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, compact disk read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
  • Example 1 provides an animation display method, which includes: acquiring a trajectory parameter set corresponding to a target trajectory segment, where the target trajectory segment is a trajectory segment that the terminal device passes through during movement, the trajectory parameter set includes multiple groups of trajectory parameters, and the trajectory parameters are collected by the terminal device during its movement along the target trajectory segment; and displaying an animation corresponding to a target model at a display position corresponding to the target trajectory segment on the terminal device, where the target model is determined according to motion parameters, the motion parameters are determined according to the trajectory parameter set, and the motion parameters reflect the motion state characteristics of the terminal device in the target trajectory segment.
  • Example 2 provides an animation display method. Optionally, the method is applied to a terminal device and further includes: starting to collect the trajectory parameters when the user triggers a first instruction, where the first instruction is used to indicate the starting point of the target trajectory segment; and stopping collecting the trajectory parameters when the user triggers a second instruction, where the second instruction is used to indicate the end point of the target trajectory segment.
  • Example 3 provides an animation display method. Optionally, the terminal device includes a gyroscope and an accelerometer, the trajectory parameter set includes a first trajectory parameter, and the first trajectory parameter includes a first position. Collecting the trajectory parameters includes: collecting motion information of the terminal device through the gyroscope and the accelerometer, and determining the first position based on a second position and the motion information, where the second position is the position of the terminal device when the user triggers the first instruction.
  • Example 4 provides an animation display method. Optionally, after acquiring the trajectory parameter set corresponding to the target trajectory segment, the method further includes:
  • Example 5 provides an animation display method. Optionally, before displaying the animation corresponding to the target model at the display position corresponding to the target trajectory segment on the terminal device, the method further includes:
  • Example 6 provides an animation display method. Optionally, before displaying the animation corresponding to the target model at the display position corresponding to the target trajectory segment on the terminal device, the method further includes: selecting the target model corresponding to the motion parameters from a model library, where the model library includes multiple types of models.
  • Example 7 provides an animation display method. Optionally, the trajectory parameters include the collection time and collection position at which the terminal device collects the trajectory parameters; the motion parameters include an average motion speed, which reflects the average speed of the terminal device in the target trajectory segment;
  • the determining motion parameters according to the trajectory parameter set includes:
  • the selecting the target model corresponding to the motion parameter from the model library includes:
  • an accelerated movement model is selected from the model library as the target model corresponding to the target trajectory segment.
  • Example 8 provides an animation display method, the method further includes: optionally, the trajectory parameters include the collection position where the terminal device collects the trajectory parameters,
  • the motion parameters include an average trajectory direction, which is the direction from the starting point of the target trajectory segment to the end point of the target trajectory segment;
  • the determining motion parameters according to the trajectory parameter set includes:
  • the selecting the target model corresponding to the motion parameter from the model library includes:
  • a vertical movement model is selected from the model library as the target model corresponding to the target trajectory segment.
  • Example 9 provides an animation display method. Optionally, the trajectory parameters also include the device orientation of the terminal device when the terminal device collects the trajectory parameters; the motion parameters also include an average device orientation, which reflects the average orientation of the terminal device in the target trajectory segment;
  • the determining motion parameters according to the trajectory parameter set includes:
  • the selecting the target model corresponding to the motion parameter from the model library includes:
  • a preset model is selected from the model library according to animation generation rules as the target model corresponding to the target trajectory segment, where the animation generation rule indicates the correspondence between tilting the terminal device while moving it and the animation corresponding to the preset model.
  • Example 10 provides an animation display method. Optionally, the trajectory parameters include the collection time and collection position at which the terminal device collects the trajectory parameters; the motion parameters include a speed cumulative change parameter, which reflects the speed fluctuation of the terminal device in the target trajectory segment;
  • the determining motion parameters according to the trajectory parameter set includes:
  • the selecting the target model corresponding to the motion parameter from the model library includes:
  • a speed fluctuation model is selected from the model library as the target model corresponding to the target trajectory segment.
  • Example 11 provides an animation display method. Optionally, the method further includes: judging, according to the motion parameters, whether the target trajectory segment satisfies a trajectory generation condition, where the trajectory generation condition includes that the motion state characteristic of the terminal device remains unchanged within the target trajectory segment.
  • Example 12 provides an animation display device, including:
  • an acquiring unit configured to acquire a trajectory parameter set corresponding to a target trajectory segment, where the target trajectory segment is a trajectory segment that the terminal device passes through during movement, the trajectory parameter set includes multiple groups of trajectory parameters, and the trajectory parameters are collected by the terminal device during its movement along the target trajectory segment;
  • a display unit configured to display an animation corresponding to a target model at a display position corresponding to the target trajectory segment on the terminal device, where the target model is determined according to motion parameters, the motion parameters are determined according to the trajectory parameter set, and the motion parameters reflect the motion state characteristics of the terminal device in the target trajectory segment.
  • Example 13 provides an electronic device, the electronic device includes: one or more processors; a memory for storing one or more programs; when the One or more programs are executed by the one or more processors, so that the one or more processors implement the animation display method described in any embodiment of the present application.
  • Example 14 provides a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the animation display method described in any embodiment of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present application discloses an animation display method, apparatus and device. In the animation display method provided by the present application, the user may first move a terminal device along a target trajectory segment. Then, a trajectory parameter set collected by the terminal device in the target trajectory segment may be acquired; the trajectory parameter set may include multiple groups of trajectory parameters. Then, a target model may be displayed at the display position corresponding to the target trajectory segment on the terminal device. Since the display position of the target model is determined according to the target trajectory segment along which the terminal device has moved, and the terminal device is constrained by real objects during its movement, the animation of the target model is also constrained by the real objects. In this way, a corresponding virtual model can be created on the basis of a real object without modeling the real object, realizing the interaction between the virtual object and the real object.

Description

Animation display method, apparatus and device

The present disclosure claims priority to Chinese patent application No. 202111163674.X, filed with the Chinese Patent Office on September 30, 2021 and entitled "Animation display method, apparatus and device", the entire contents of which are incorporated herein by reference.

Technical Field

The present application relates to the field of computers, and in particular to an animation display method, apparatus and device.

Background

Augmented Reality (AR) is a technology that skillfully blends virtual information with the real world. When applying AR technology, a virtual object can be simulated to obtain an animation corresponding to the virtual object, and the simulated animation can be superimposed on a picture of the real world for display. The image appearing in the user's field of view then includes both the real-world picture and the animation corresponding to the virtual object, so that the user can see the virtual object and the real world at the same time. Because AR technology is highly interactive, it has been widely used in many fields.

In particular, AR technology can simulate the interaction between a virtual object and a real object in the real world. For example, AR technology can realize the display effect of a virtual object colliding with a real object while moving. Combining and interacting the virtual world with the real world through AR technology in this way can bring a better display effect.

However, in order to realize the interaction between a virtual object and a real object, the real object needs to be modeled. The modeling process is often complicated, consumes a lot of manpower and material resources, and increases the application cost of AR technology.
Summary

To solve the problems of the prior art, embodiments of the present application provide an animation display method and apparatus.

In a first aspect, an embodiment of the present application provides an animation display method, the method including:

acquiring a trajectory parameter set corresponding to a target trajectory segment, where the target trajectory segment is a trajectory segment that a terminal device passes through during movement, the trajectory parameter set includes multiple groups of trajectory parameters, and the trajectory parameters are collected by the terminal device during its movement along the target trajectory segment;

displaying an animation corresponding to a target model at a display position corresponding to the target trajectory segment on the terminal device, where the target model is determined according to motion parameters, the motion parameters are determined according to the trajectory parameter set, and the motion parameters reflect the motion state characteristics of the terminal device in the target trajectory segment.

In a second aspect, an embodiment of the present application provides an animation display apparatus, the apparatus including:

an acquiring unit configured to acquire a trajectory parameter set corresponding to a target trajectory segment, where the target trajectory segment is a trajectory segment that a terminal device passes through during movement, the trajectory parameter set includes multiple groups of trajectory parameters, and the trajectory parameters are collected by the terminal device during its movement along the target trajectory segment;

a display unit configured to display an animation corresponding to a target model at a display position corresponding to the target trajectory segment on the terminal device, where the target model is determined according to motion parameters, the motion parameters are determined according to the trajectory parameter set, and the motion parameters reflect the motion state characteristics of the terminal device in the target trajectory segment.

In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; and a memory for storing one or more programs, where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the animation display method described in the first aspect.

In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the animation display method described in the first aspect.

In the animation display method provided by the embodiments of the present application, the user may first move the terminal device along a target trajectory segment. Then, a trajectory parameter set collected by the terminal device in the target trajectory segment may be acquired; the trajectory parameter set may include multiple groups of trajectory parameters. Then, the target model may be displayed at the display position corresponding to the target trajectory segment on the terminal device. Since the display position of the target model is determined according to the target trajectory segment along which the terminal device has moved, and the terminal device is constrained by real objects during its movement, the animation of the target model is also constrained by the real objects. In this way, a corresponding virtual model can be created on the basis of a real object without modeling the real object, realizing the interaction between the virtual object and the real object. In addition, since the target model is determined according to the trajectory parameters of the target trajectory segment, the user only needs to adjust how the terminal device moves in the target trajectory segment to adjust the animation display effect corresponding to the target trajectory segment. This gives the user free choice of the virtual animation and improves the user experience.
Brief Description of the Drawings

To explain the technical solutions in the embodiments of the present application or the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments recorded in the present application, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.

FIG. 1 is a schematic diagram of an application scenario of AR technology provided by the present application;

FIG. 2 is a schematic flowchart of an animation display method provided by the present application;

FIG. 3 is a schematic diagram of a display interface of a terminal device provided by an embodiment of the present application;

FIG. 4 is a schematic structural diagram of an animation display apparatus provided by an embodiment of the present application;

FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
Detailed Description

Embodiments of the present application will be described in more detail below with reference to the drawings. Although some embodiments of the present application are shown in the drawings, it should be understood that the present application can be implemented in various forms and should not be construed as limited to the embodiments set forth here; rather, these embodiments are provided for a more thorough and complete understanding of the present application. It should be understood that the drawings and embodiments of the present application are for exemplary purposes only and are not intended to limit the protection scope of the present application.

It should be understood that the steps recorded in the method embodiments of the present application may be executed in different orders and/or in parallel. In addition, the method embodiments may include additional steps and/or omit some of the steps shown. The scope of the present application is not limited in this respect.

The term "include" and its variants as used herein are open-ended, i.e. "including but not limited to". The term "based on" means "at least partly based on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one further embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.

It should be noted that concepts such as "first" and "second" mentioned in the present application are only used to distinguish different apparatuses, modules or units, and are not used to limit the order or interdependence of the functions performed by these apparatuses, modules or units.

It should be noted that the modifiers "one" and "multiple" mentioned in the present application are illustrative rather than restrictive; those skilled in the art should understand them as "one or more" unless the context clearly indicates otherwise.
AR技术可以将虚拟世界和真实世界相结合。例如,通过AR技术,可以实现虚拟小人在桌面上行走的显示效果,也可以实现虚拟小人碰到墙壁无法继续前进的显示效果。如此,一定程度上打破了虚拟世界和真实世界之间的“次元壁”,具有较好的显示效果,在许多领域得到了广泛的应用。
但是,对于传统的AR技术,为了实现虚拟对象和真实物体之间的交互,需要对真实物体进行建模。通过模拟真实物体的位置,才能够实现虚拟对象和真实物体之间的交互。也就是说,为了创建与真实物体相关联的虚拟对象,需要先建立真实物体对应的虚拟模型。
例如,为了实现“虚拟小人碰到墙壁无法继续前进”的显示效果,可以先对“墙壁”这一真实物体进行建模,在虚拟环境中创建墙壁对应的虚拟对象,并设置墙壁对应的虚拟对象和虚拟小人之间的碰撞关系。在进行显示时,可以不显示墙壁对应的虚拟对象。这样,用户看到的画面中包括虚拟小人和真实的墙壁。在虚拟小人运动到真实的墙壁对应的位置时,会和墙壁对应的虚拟对象发生碰撞,无法继续前进。如此,实现了“虚拟小人碰到墙壁无法继续前进”的显示效果。
显然,对真实物体进行建模的过程较为复杂,需要消耗大量的人力物力,而且当环境发生变化时,原先的虚拟模型无法继续使用。如此,既增加了AR技术的使用成本,也限制了AR技术的应用环境。
参见图1,该图为本申请实施例提供的AR技术的一种应用场景的场景示意图。真实世界内包括平面11和放置于平面11上的物体12。终端设备13上部署有AR技术的相关软件,可以实现真实世界和虚拟对象之间存在互动的显示效果。
可选地,假设终端设备需要实现的显示效果为“轨道21-轨道22-轨道23”这条与物体12和平面11接触的轨道。那么在传统的AR技术中,可以分别建立平面11对应的虚拟模型和物体12对应的虚拟模型。接着,可以以平面11对应的虚拟模型和物体12对应的虚拟模型为基准,分别建立轨道21、轨道22和轨道23对应的模型,并在模型对应的显示位置显示对应的动画。例如,可以在物体12对应的虚拟模型的上表面部署轨道21对应的模型,在物体12对应的虚拟模型的右侧面部署轨道22对应的模型,在平面11对应的虚拟模型的对应位置部署轨道23对应的模型。
可见,在传统的AR技术中,可以对真实物体进行建模,从而模拟虚拟对象与真实物体之间的交互。显然,真实物体越复杂,需要建立的模型也就越复杂,实现AR显示的成本也就越高。另外,在更换环境后,由于原有的模型并不适用于新环境,传统的AR技术也不能快速实现虚拟对象与真实物体之间的交互。
为了解决现有技术的问题,本申请实施例提供了一种动画显示方法,下面结合说明书附图进行详细介绍。
为了便于理解本申请实施例提供的技术方案,首先结合图1所示的场景示例进行说明。
为了实现轨道21、轨道22和轨道23的显示效果,在本申请实施例提供的技术方案中,用户可以先将终端设备沿运动轨迹31移动,由终端设备采集运动过程中的轨迹参数。接着,可以根据轨迹参数确定运动轨迹31的各个轨迹段分别对应的目标模型,并在终端设备上对应的显示位置显示目标模型的动画,从而在终端设备上显示轨道21、轨道22和轨道23。关于确定目标模型的详细过程请参见图2对应实施例的介绍,这里不再赘述。
图2为本申请实施例提供的一种动画显示方法的流程示意图,本申请实施例提供的动画显示方法可以适用于用户通过终端设备创建AR模型并显示对应的动画的应用场景,该方法可以由终端设备中安装的AR单元来执行,该单元可以由软件的方式来实现,其代码集成于终端设备的存储器中,并通过终端设备的处理单元执行。可选地,该方法还可以由服务器或其他具有数据处理功能的设备执行。下面以该方法由终端设备的处理单元执行为例进行介绍。如图2所示,该方法具体包括以下步骤:
S201:获取目标轨迹段对应的轨迹参数集合。
为了创建与目标轨迹段对应的目标模型,可以先将终端设备沿目标轨迹段进行移动。在终端设备在目标轨迹段对应的位置移动的过程中,终端设备可以采集多组轨迹参数,得到轨迹参数集合。可选地,每组轨迹参数可以包括这组轨迹参数的采集时间和采集位置。在一些可能的实现方式中,轨迹参数还可以包括采集时间对应的设备朝向。
下面首先对各个轨迹参数及其采集方法进行介绍。
在终端设备沿目标轨迹段移动的过程中,终端设备可以进行多次数据采集,每次采集可以采集多个参数作为一组轨迹参数。可选地,终端设备可以根据时间间隔进行数据采集,即每间隔预设时间间隔就可以采集一组轨迹参数。或者,终端设备可以根据距离进行采集,终端设备每移动预设距离,就可以采集一组轨迹参数。
在采集轨迹参数时,终端设备可以记录采集这组轨迹参数的时间作为采集时间。可选地,终端设备可以从系统中读取采集轨迹参数的时间作为轨迹参数中的采集时间。或者,终端设备可以将采集轨迹参数时的时间戳作为轨迹参数中的采集时间。
当终端设备处于目标轨迹段的起点时,终端设备可以记录自身当前所在的位置。例如,终端设备可以建立与地球固连的坐标系,并将自身当前所在的位置记录为坐标系的原点。在沿目标轨迹段移动的过程中,终端设备可以通过自身的陀螺仪和加速度计等传感器确定运动信息:通过加速度计测量的加速度可以确定终端设备移动的距离,通过陀螺仪测量的角速度可以确定终端设备旋转过的角度。可见,结合陀螺仪和加速度计测量的结果,可以确定终端设备在移动过程中任意时刻的位置和朝向。如此,在采集轨迹参数时,可以根据陀螺仪和加速度计测量的结果确定终端设备的采集位置和采集时间对应的设备朝向。可选地,在一些可能的实现方式中,终端设备还可以通过其他的方式确定采集时间对应的采集位置和设备朝向。
在一些可能的实现方式中,采集位置可以通过三维坐标表示,设备朝向可以通过三维向量或空间四元数的方式表示。
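作为示意,一组轨迹参数(采集时间、三维采集位置、四元数设备朝向)可以按如下方式组织。以下为Python示意代码,其中TrajectoryParam、collect_sample等名称均为说明用的假设,并非本申请方案的限定:

```python
import time
from dataclasses import dataclass


@dataclass
class TrajectoryParam:
    """一组轨迹参数:采集时间、采集位置(三维坐标)、设备朝向(四元数 w,x,y,z)。"""
    timestamp: float
    position: tuple
    orientation: tuple


def collect_sample(position, orientation):
    """采集一组轨迹参数,采集时间取系统时间戳。"""
    return TrajectoryParam(time.time(), tuple(position), tuple(orientation))
```

实际实现中,position与orientation应来自终端设备的传感器读数(例如由加速度计与陀螺仪积分得到)。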
在一些可能的实现方式中,目标轨迹段可以由用户根据想要创建的模型自由选择。例如,假设用户想要创建从地面移动到桌面的“梯子”类型的模型,那么用户可以手持终端设备从地面移动到桌面。假设用户想要创建从桌面的一角移动到另一角的“草地”类型的模型,那么用户可以手持终端设备从桌面的一角移动到另一角。可选地,在终端设备沿目标轨迹段移动的过程中,用户可以根据动画生成规则调整终端的移动速度和设备朝向等参数。关于动画生成规则的介绍可以参见后文,这里不再赘述。
在本申请实施例中,终端设备移动的轨迹可以被称为目标轨迹,目标轨迹可以包括一个或多个目标轨迹段。在每个目标轨迹段中,终端设备的运动状态特征基本保持不变。可选地,用户可以手动将目标轨迹分为多个目标轨迹段。例如,用户可以通过触发终端设备显示的分段控件触发分段指令。在接收到用户触发的分段指令之后,终端设备可以将接收到分段指令之前的轨迹划分为一个目标轨迹段。这样,通过分段控件触发分段指令,用户可以将终端设备的运动轨迹拆分为多个目标轨迹段。由于不同目标轨迹段对应的目标模型相互独立,最终得到的目标轨迹可以由多个目标模型组合而成,实现了多种动画的组合。
举例说明。在目标轨迹段的起点,用户可以触发第一指令,从而控制终端设备开始采集轨迹参数。在目标轨迹段的终点,用户可以触发第二指令,从而控制终端设备停止采集轨迹参数。其中,第一指令和第二指令可以是前述分段指令,也可以是开始采集指令或停止采集指令。终端设备在获取到第一指令的时间和获取到第二指令的时间之间采集的轨迹参数即为目标轨迹段的轨迹参数集合。
为了确定轨迹参数集合中任意一个轨迹参数的采集位置,终端设备可以存储获取到第一指令时自身所在的位置(即第二位置),并根据运动信息确定采集位置。举例说明。假设轨迹参数集合包括第一轨迹参数,第一轨迹参数包括第一位置,那么终端设备可以先通过陀螺仪和加速度计确定终端设备的运动信息。运动信息表示从第二位置移动到第一位置的过程中,终端设备移动的距离和方向。接着,终端设备可以在第二位置的基础上,结合得到的运动信息确定第一位置。
或者,在一些其他可能的实现中,也可以先采集目标轨迹的轨迹参数集合,根据轨迹参数集合确定对应的运动参数,再根据运动参数将目标轨迹划分为多个目标轨迹段。具体地,由于每个目标轨迹段对应一个目标模型,每个目标轨迹段中终端设备的运动状态特征应当保持不变。因此,根据运动参数可以确定终端设备在目标轨迹上运动状态特征的变化规律,从而将目标轨迹划分为多个目标轨迹段。关于运动参数的介绍可以参见后文,这里不再赘述。
在一些可能的实现方式中,终端设备的加速度计和陀螺仪等传感器的精确度可能有限,导致直接采集到的轨迹参数可能不够准确。那么为了提高目标轨迹段的准确性,可以对轨迹参数集合进行数据清洗。
可选地,可以采用五点平滑法对轨迹参数进行清洗。例如,假设轨迹参数集合中第i个轨迹参数用$p_i$表示,数据清洗后的第i个轨迹参数用$p'_i$表示。那么数据清洗的过程可以如下所示:

$$p'_i=\frac{p_{i-2}+p_{i-1}+p_i+p_{i+1}+p_{i+2}}{5},\quad 3\le i\le n-2$$
其中,n为轨迹参数集合所包括的轨迹参数的总数量。
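作为示意,五点平滑可以按如下Python草图实现(假设采集位置以三维坐标表示;此处边界点直接保留原值,这只是常见处理方式之一,具体边界处理以实际实现为准):

```python
def five_point_smooth(points):
    """对采集位置序列做五点滑动平均,序列首尾各两个点保留原值。"""
    n = len(points)
    smoothed = [p for p in points]
    for i in range(2, n - 2):
        # 取当前点及其前后各两个点,逐维求平均
        smoothed[i] = tuple(
            sum(points[i + k][d] for k in (-2, -1, 0, 1, 2)) / 5.0
            for d in range(3)
        )
    return smoothed
```

对噪声较大的传感器数据,平滑后再计算运动参数可以明显减小速度、方向等统计量的抖动。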
根据前文介绍可知,本申请实施例提供的动画生成方法可以由终端设备执行,也可以由服务器执行。如果本申请实施例提供的动画生成方法由终端设备执行,终端设备可以先存储在目标轨迹段的运动过程中采集的轨迹参数集合,并在生成目标轨迹段对应的动画时从存储器中提取轨迹参数集合。如果本申请实施例提供的动画生成方法由服务器执行,终端设备可以先存储在目标轨迹段的运动过程中采集的轨迹参数集合。在需要生成目标轨迹段对应的动画时,服务器可以从终端设备处获取轨迹参数集合。例如,服务器可以通过向终端设备发送请求获取轨迹参数集合,或者由终端设备主动向服务器推送轨迹参数集合。
S202:根据轨迹参数集合确定目标轨迹段对应的目标模型。
在获得轨迹参数集合之后,可以根据轨迹参数集合确定目标轨迹段对应的目标模型。具体地,可以先根据轨迹参数集合确定运动参数,再根据运动参数确定目标轨迹段对应的目标模型。其中,运动参数用于体现终端设备在目标轨迹段中的运动状态特征。目标模型为待显示在目标轨迹段上的虚拟模型,例如可以包括轨道模型、草地模型、桥梁模型和梯子模型等虚拟模型。
根据前文介绍可知,本申请实施例提供的动画显示方法可以应用于终端设备或服务器,下面以终端设备执行本申请实施例提供的动画显示方法为例,分别对上述两个过程进行介绍。
首先介绍终端设备根据轨迹参数集合确定运动参数的过程。
在本申请实施例中,终端设备可以先根据轨迹参数集合确定终端设备在目标轨迹段的运动参数。具体地,终端设备可以根据轨迹参数集合,对终端设备在目标轨迹段中的运动过程进行分析,从而确定目标轨迹段的运动参数。
可选地,运动参数可以包括平均运动速度、平均轨迹方向、平均设备朝向、速度累积变化参数和方向累积变化参数中的任意一种或多种。下面分别介绍终端设备确定这些运动参数的方法。
在第一种可能的实现中,运动参数可以包括平均运动速度。平均运动速度体现终端设备在目标轨迹段中的平均速度。
在计算终端设备的平均运动速度的过程中,可以先根据轨迹参数将目标轨迹段划分为多个子轨迹段,每个子轨迹段的起点分别对应一组轨迹参数,每个子轨迹段的终点也分别对应一组轨迹参数。可选地,在一些可能的实现中,子轨迹段内部不包括轨迹参数的采集点。接着,可以根据子轨迹段的起点的采集位置和采集时间,以及子轨迹段的终点的采集位置和采集时间,分别计算多个子轨迹段中每个子轨迹段中终端设备的移动速度。在得到终端设备在每个子轨迹段的运动速度之后,可以对多个子轨迹段的运动速度求平均值,得到终端设备的平均运动速度。
可选地,上述过程可以通过如下公式表示:
$$a_1=\frac{1}{n-1}\sum_{i=1}^{n-1}\frac{\left\|p_{i+1}-p_i\right\|}{t_{i+1}-t_i}$$

其中,$a_1$表示终端设备在目标轨迹段的平均运动速度,n为轨迹参数集合中轨迹参数的数量,$p_i$表示轨迹参数集合中第i个轨迹参数所包括的采集位置,$t_i$表示轨迹参数集合中第i个轨迹参数所包括的采集时间。可选地,由于采集位置$p_i$可以是终端设备的三维坐标,那么在计算终端设备的平均速度时,可以将两个相邻的采集位置相减并取模,得到这两个采集位置之间的直线距离。
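作为示意,平均运动速度$a_1$的计算可以按如下Python草图实现(average_speed为说明用的假设名称;输入为三维采集位置列表与对应的采集时间列表):

```python
import math


def average_speed(positions, times):
    """按相邻采集点把目标轨迹段拆成子轨迹段,对各子轨迹段速度取平均。"""
    speeds = []
    for i in range(len(positions) - 1):
        dist = math.dist(positions[i + 1], positions[i])  # 相邻采集位置间的直线距离
        dt = times[i + 1] - times[i]
        speeds.append(dist / dt)
    return sum(speeds) / len(speeds)
```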
在第二种可能的实现方式中,运动参数可以包括平均轨迹方向。平均轨迹方向为目标轨迹段的起点到目标轨迹段的终点的方向,即目标轨迹段的总体朝向。
在计算平均轨迹方向的过程中,终端设备可以先从轨迹参数集合中提取出目标轨迹段的起点对应的轨迹参数和目标轨迹段的终点对应的轨迹参数,并根据目标轨迹段的起点对应的采集位置和目标轨迹段的终点对应的采集位置计算平均轨迹方向。
例如,如果轨迹参数中采集位置以三维坐标的形式表示,那么可以将目标轨迹段的起点对应的采集位置和目标轨迹段的终点对应的采集位置分别转化为三维向量。接着,可以将计算得到的两个三维向量相减并进行归一化,得到的结果为从目标轨迹段的起点指向目标轨迹段的终点的单位向量,即平均轨迹方向。
可选地,上述过程可以通过如下公式表示:
$$a_2=\frac{p_{end}-p_{begin}}{\left\|p_{end}-p_{begin}\right\|}$$

其中,$a_2$表示目标轨迹段的平均轨迹方向,$p_{begin}$表示目标轨迹段的起点对应的采集位置,$p_{end}$表示目标轨迹段的终点对应的采集位置,剩余各个符号所表示的含义与前文相同,这里不再赘述。
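作为示意,平均轨迹方向$a_2$(起点指向终点的单位向量)可以按如下Python草图计算(average_direction为说明用的假设名称):

```python
import math


def average_direction(p_begin, p_end):
    """返回从起点采集位置指向终点采集位置的单位向量。"""
    v = [e - b for b, e in zip(p_begin, p_end)]
    norm = math.sqrt(sum(c * c for c in v))  # 向量模长,用于归一化
    return tuple(c / norm for c in v)
```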
在第三种可能的实现方式中,运动参数还包括平均设备朝向,平均设备朝向体现终端设备在目标轨迹段中的平均方向。
与计算平均运动速度的过程类似,在计算平均设备朝向的过程中,终端设备可以根据轨迹参数集合将目标轨迹段拆分为多个子轨迹段,并根据设备朝向分别计算终端设备在各个子轨迹中的平均朝向,再对多个子轨迹段的平均朝向取平均值,得到平均设备朝向。
可选地,假设设备朝向通过四元数表示,那么上述过程可以通过如下公式表示:
$$a_3=\frac{1}{n}\sum_{i=1}^{n}toEuler(q_i)$$

其中,$a_3$表示终端设备在目标轨迹段的平均设备朝向,n为轨迹参数集合中轨迹参数的数量,$q_i$表示轨迹参数集合中第i个轨迹参数所包括的设备朝向,toEuler表示将四元数转换为欧拉角的计算过程,剩余各个符号所表示的含义与前文相同,这里不再赘述。
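作为示意,下面给出四元数转欧拉角以及平均设备朝向$a_3$的一种Python草图(采用常见的Z-Y-X约定;to_euler、average_orientation为说明用的假设名称。需要注意,直接对欧拉角逐分量取平均在角度跨越±180°时会有折返问题,此处仅为示意):

```python
import math


def to_euler(q):
    """四元数 (w, x, y, z) 转欧拉角 (roll, pitch, yaw)。"""
    w, x, y, z = q
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    # 夹紧到 [-1, 1],避免浮点误差导致 asin 越界
    pitch = math.asin(max(-1.0, min(1.0, 2 * (w * y - z * x))))
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return (roll, pitch, yaw)


def average_orientation(quaternions):
    """对各采集时刻的设备朝向(欧拉角)逐分量求平均。"""
    eulers = [to_euler(q) for q in quaternions]
    n = len(eulers)
    return tuple(sum(e[d] for e in eulers) / n for d in range(3))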
在第四种可能的实现方式中,运动参数还可以包括速度累积变化参数。速度累积变化参数体现终端设备在目标轨迹段中的速度波动情况。
与计算平均运动速度的过程类似,在计算速度累积变化参数的过程中,终端设备可以根据轨迹参数集合将目标轨迹段拆分为多个子轨迹段,并根据采集位置和采集时间分别计算终端设备在每个采集位置的速度,再根据相邻的两个采集位置的速度计算这两个采集位置之间的子轨迹段对应的平均加速度,最后对多个平均加速度取平均值,得到速度累积变化参数。
具体地,假设采集位置通过三维坐标表示,那么上述过程可以通过如下公式表示:

$$v_i=\frac{\left\|p_{i+1}-p_i\right\|}{t_{i+1}-t_i},\quad 1\le i\le n-1$$

$$a_4=\frac{1}{n-2}\sum_{i=1}^{n-2}\frac{\left|v_{i+1}-v_i\right|}{t_{i+1}-t_i}$$

其中,$v_i$表示终端设备在第i个轨迹参数处的速度,$a_4$表示终端设备在目标轨迹段的速度累积变化参数,剩余各个符号所表示的含义与前文相同,这里不再赘述。
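作为示意,速度累积变化参数$a_4$的一种Python实现草图如下(speed_fluctuation为说明用的假设名称;先算各子轨迹段速度,再对相邻速度差的变化率取平均):

```python
import math


def speed_fluctuation(positions, times):
    """刻画终端设备在目标轨迹段中的速度波动情况。"""
    # 各子轨迹段的速度
    speeds = [
        math.dist(positions[i + 1], positions[i]) / (times[i + 1] - times[i])
        for i in range(len(positions) - 1)
    ]
    # 相邻速度差除以时间间隔,近似平均加速度幅度
    rates = [
        abs(speeds[i + 1] - speeds[i]) / (times[i + 1] - times[i])
        for i in range(len(speeds) - 1)
    ]
    return sum(rates) / len(rates)
```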
在第五种可能的实现方式中,运动参数包括方向累积变化参数。方向累积变化参数体现终端设备在目标轨迹段中的朝向的变化情况。
与计算平均设备朝向的过程类似,在计算方向累积变化参数的过程中,终端设备可以根据轨迹参数集合将目标轨迹段拆分为多个子轨迹段,并根据设备朝向分别计算各个子轨迹中终端设备旋转的角度和旋转速度,再对多个子轨迹段的旋转速度取平均值,得到方向累积变化参数。
具体地,假设设备朝向通过四元数或向量表示,那么上述过程可以通过如下公式表示:

$$a_5=\frac{1}{n-1}\sum_{i=1}^{n-1}\frac{\left\|toEuler(q_{i+1})-toEuler(q_i)\right\|}{t_{i+1}-t_i}$$

其中,$a_5$表示终端设备在目标轨迹段的方向累积变化参数,剩余各个符号所表示的含义与前文相同,这里不再赘述。
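作为示意,方向累积变化参数$a_5$的一种Python实现草图如下(direction_fluctuation为说明用的假设名称;输入为各采集时刻的欧拉角朝向与采集时间):

```python
import math


def direction_fluctuation(orientations, times):
    """用相邻采集时刻朝向变化量除以时间间隔再取平均,刻画朝向的变化情况。"""
    rates = []
    for i in range(len(orientations) - 1):
        # 相邻两个欧拉角向量之间的距离,近似旋转角度
        diff = math.dist(orientations[i + 1], orientations[i])
        rates.append(diff / (times[i + 1] - times[i]))
    return sum(rates) / len(rates)
```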
需要说明的是,上述给出的五种运动参数仅作为实例,不代表本申请实施例所提供的动画显示方法中仅包括这五种运动参数。在实际应用场景中,技术人员可以根据实际情况自由选择任意一种或多种运动参数,或增加其他能够体现终端设备在目标轨迹段的运动状态特征的运动参数。
上面介绍了终端设备确定运动参数的过程,下面对终端设备根据运动参数确定目标模型的过程进行介绍。
在计算得到运动参数之后,终端设备可以根据运动参数确定目标轨迹段对应的目标模型。可选地,终端设备可以根据运动参数从模型库中选择与运动参数相对应的目标模型。其中,模型库可以包括多种模型,每种模型对应一种动画显示效果。
在用户移动终端设备之前,可以向用户展示用于生成目标模型的动画生成规则。动画生成规则用于指示运动状态特征和目标模型之间的对应关系。这样,如果用户想要在目标轨迹段上生成自己想要的第一模型,用户可以根据动画生成规则确定第一模型对应的运动状态特征,并根据运动状态特征移动终端设备。这样,在获取到终端设备在移动过程中采集的轨迹参数集合之后,可以根据轨迹参数集合得到运动参数,反推出终端设备在目标轨迹段的运动状态特征,再结合动画生成规则即可确定用户想要在目标轨迹段生成的第一模型。这样,将第一模型确定为目标模型,即可在目标轨迹段上显示第一模型对应的动画效果。在本申请实施例中,动画生成规则可以由技术人员根据实际情况自行定义。
运动参数可以包括平均运动速度、平均轨迹方向、平均设备朝向、速度累积变化参数和方向累积变化参数中的任意一种或多种。下面分别介绍终端设备根据这些运动参数确定目标模型的方法。
在第一种可能的实现方式中,运动参数可以包括平均运动速度。
在确定目标模型的过程中,终端设备可以判断平均运动速度和平均速度阈值的大小。如果平均运动速度大于平均速度阈值,说明终端设备在目标轨迹段中运动较快,那么可以从模型库中选择加速运动模型作为目标模型。可选地,加速运动模型可以包括加速条、传送带等模型。
在第二种可能的实现方式中,运动参数可以包括平均轨迹方向。
在确定目标模型的过程中,终端设备可以判断平均轨迹方向在垂直方向上的分量是否大于上升阈值。如果平均轨迹方向在垂直方向上的分量小于或等于上升阈值,说明目标轨迹段上升的幅度较小,那么可以从模型库中选择水平移动模型作为目标轨迹段对应的目标模型。如果平均轨迹方向在垂直方向上的分量大于上升阈值,说明目标轨迹段上升的幅度较大,那么可以从模型库中选择垂直移动模型作为目标轨迹段对应的目标模型。
可选地,上升阈值例如可以是$\frac{\sqrt{2}}{2}$,表示目标轨迹段上升的角度大于45°。水平移动模型可以包括水平道路、缓坡道路和水平加速带等模型。垂直移动模型可以包括梯子、升降梯和攀岩绳等模型。
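作为示意,基于平均轨迹方向垂直分量的模型选择可以按如下Python草图实现(select_by_direction、model_library的键名均为说明用的假设;假设平均轨迹方向为单位向量,且第3个分量为垂直方向):

```python
import math

# 单位方向向量的垂直分量超过该值,即上升角大于45°
RISE_THRESHOLD = math.sqrt(2) / 2


def select_by_direction(avg_direction, model_library):
    """垂直分量大于上升阈值时选垂直移动模型,否则选水平移动模型。"""
    vertical = avg_direction[2]
    if vertical > RISE_THRESHOLD:
        return model_library["vertical"]
    return model_library["horizontal"]
```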
在第三种可能的实现方式中,运动参数可以包括平均轨迹方向和平均设备朝向,动画生成规则可以包括“如果想要生成预设模型对应的动画,将终端设备保持倾斜状态,并移动相应的距离”。
在确定目标模型的过程中,终端设备可以判断平均轨迹方向与平均设备朝向之间的夹角是否大于角度阈值。如果平均轨迹方向与平均设备朝向之间的夹角大于角度阈值,说明用户在移动终端设备时倾斜了终端设备,表示用户希望在目标轨迹段显示预设模型对应的动画。那么终端设备可以确定目标轨迹段对应的目标模型为预设模型。可选地,预设模型可以包括桥模型和减速带模型等模型,角度阈值可以根据实际需要预先设定。
在第四种可能的实现方式中,运动参数可以包括速度累积变化参数。
在确定目标模型的过程中,终端设备可以判断速度累积变化参数和速度波动阈值的大小。如果速度累积变化参数大于速度波动阈值,说明终端设备在目标轨迹段中速度波动较大,那么可以从模型库中选择速度波动模型作为目标模型。可选地,速度波动模型可以包括具有路障的道路模型等模型。
在第五种可能的实现方式中,运动参数可以包括方向累积变化参数,动画生成规则可以包括“如果想要生成预设模型对应的动画,边移动边旋转终端设备”。
在确定目标模型的过程中,终端设备可以判断方向累积变化参数是否大于方向波动阈值。如果方向累积变化参数大于方向波动阈值,说明用户在边旋转终端设备边移动终端设备,表示用户希望在目标轨迹段显示预设模型对应的动画。那么终端设备可以确定目标轨迹段对应的目标模型为预设模型。可选地,预设模型可以包括桥模型和减速带模型等模型。
需要说明的是,运动参数和目标模型之间的对应关系可以根据动画生成规则得到。在实际的应用场景中,技术人员可以根据实际需要设定动画生成规则或对动画生成规则进行调整。
在一些可能的实现方式中,可以通过语义映射的方式确定运动参数对应的目标模型。具体地,可以分别建立运动参数和语义特征之间的对应关系,以及语义特征和目标模型之间的对应关系。那么在确定目标模型时,可以先根据运动参数和语义特征之间的对应关系确定运动参数对应的语义特征,再根据语义特征和目标模型之间的对应关系确定语义特征对应的目标模型。其中,每种运动参数可以对应一个或多个语义特征。这样,当运动参数包括多种参数时,根据对应关系可以映射到多个语义特征,可以更加准确地确定目标模型。
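作为示意,上述两级语义映射可以用两张映射表实现。以下Python草图中的特征名、语义标签与模型名均为说明用的假设:

```python
# 运动参数特征 -> 语义特征
PARAM_TO_SEMANTIC = {
    "fast": "需要加速通行",
    "rising": "需要向上移动",
}

# 语义特征 -> 目标模型
SEMANTIC_TO_MODEL = {
    "需要加速通行": "加速带模型",
    "需要向上移动": "梯子模型",
}


def map_to_models(features):
    """两级映射:先把运动参数特征映射为语义特征,再映射为目标模型。"""
    semantics = [PARAM_TO_SEMANTIC[f] for f in features if f in PARAM_TO_SEMANTIC]
    return [SEMANTIC_TO_MODEL[s] for s in semantics if s in SEMANTIC_TO_MODEL]
```

当运动参数包括多种参数时,多个特征会映射到多个语义特征,取交集或按优先级合并即可更准确地确定目标模型。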
在上文给出的实现方式中,由终端设备根据轨迹参数集合确定目标模型。而在一些其他可能的实现方式中,上述方法也可以由服务器执行。在服务器确定了目标模型之后,服务器可以向终端设备发送目标模型的标识,以便终端设备在后续步骤中展示目标模型对应的动画效果。
根据S201中的介绍可知,终端设备移动的轨迹中可以包括一个或多个目标轨迹段,每个目标轨迹段中终端设备的运动状态特征保持不变。那么在确定目标模型之前,可以先根据运动参数判断目标轨迹段是否满足轨迹生成条件。轨迹生成条件包括终端设备的运动状态特征在目标轨迹段内保持不变。如果运动参数满足轨迹生成条件,可以继续根据运动参数确定目标模型。如果运动参数不满足轨迹生成条件,可以根据运动参数将目标轨迹段划分为多个目标子轨迹段,并分别确定每个目标子轨迹段对应的目标模型。其中,每个目标子轨迹段中终端设备的运动状态特征保持不变。
S203:在终端设备上目标轨迹段对应的显示位置显示目标模型对应的动画。
在确定目标轨迹段对应的目标模型之后,可以在终端设备上目标轨迹段对应的显示位置显示目标模型对应的动画。具体地,假设本申请实施例提供的方法由终端设备执行,终端设备可以在自身的显示设备或与自身连接的显示设备上显示目标模型对应的动画。可选地,终端设备可以采用传统的AR技术确定目标轨迹段对应的显示位置,并在对应的显示位置显示目标模型对应的动画效果。
假设本申请实施例提供的方法由服务器执行,服务器可以向终端设备发送目标模型的标识或目标模型对应的动画效果的标识,以便终端设备在自身的显示设备或与自身连接的显示设备上显示目标模型对应的动画。
举例说明。在一种可能的实现方式中,终端设备的显示效果可以如图3所示。在图3所示的实现方式中,路径311、桥312、路径313、梯子314和桥315为虚拟显示效果。物体321、桌面322、物体323和物体324为实际存在的。如此,实现了虚拟对象依附于真实物体的显示效果。
在本申请实施例提供的动画显示方法中,可以先由用户将终端设备沿目标轨迹段移动。接着,可以获取终端设备在目标轨迹段中采集到的轨迹参数集合。轨迹参数集合可以包括多组轨迹参数。接着,可以在终端设备上目标轨迹段对应的显示位置显示目标模型。如此,由于目标模型的显示位置是根据终端设备移动过的目标轨迹段确定的,而终端设备在移动过程中会受到真实物体的约束,目标模型的动画也会受到真实物体的约束。如此,在不需要对真实物体进行建模的情况下,可以基于真实物体创建对应的虚拟模型,实现了虚拟对象和真实物体之间的交互。另外,由于目标模型是根据目标轨迹段的轨迹参数确定的,用户只需调整终端设备在目标轨迹段的移动情况,即可调整目标轨迹段对应的动画显示效果。如此,实现了用户对虚拟动画的自由选择,提升了用户体验。
图4为本申请实施例提供的一种动画显示装置的结构示意图,本实施例可以适用于通过终端设备显示AR效果的场景,该动画显示装置400具体包括获取单元410和显示单元420。
具体地,获取单元410,用于获取目标轨迹段对应的轨迹参数集合,所述目标轨迹段为终端设备在运动过程中经过的轨迹段,所述轨迹参数集合包括多组轨迹参数,所述轨迹参数是终端设备在目标轨迹段的运动中采集得到的。
显示单元420,用于在终端设备上目标轨迹段对应的显示位置显示目标模型对应的动画,所述目标模型是根据运动参数确定的,所述运动参数是根据所述轨迹参数集合确定的,所述运动参数体现所述终端设备在目标轨迹段中的运动状态特征。
本申请实施例所提供动画显示装置可执行本申请任意实施例所提供的动画显示方法,具备执行动画显示方法相应的功能单元和有益效果。
下面参考图5,其示出了适于用来实现本公开实施例的电子设备(例如运行有软件程序的终端设备或服务器)500的结构示意图。本公开实施例中的终端设备可以包括但不限于诸如移动电话、笔记本电脑、数字广播接收器、PDA(个人数字助理)、PAD(平板电脑)、PMP(便携式多媒体播放器)、车载终端(例如车载导航终端)等等的移动终端以及诸如数字TV、台式计算机等等的固定终端。图5示出的电子设备仅仅是一个示例,不应对本公开实施例的功能和使用范围带来任何限制。
如图5所示,电子设备500可以包括处理装置(例如中央处理器、图形处理器等)501,其可以根据存储在只读存储器(ROM)502中的程序或者从存储装置508加载到随机访问存储器(RAM)503中的程序而执行各种适当的动作和处理。在RAM503中,还存储有电子设备500操作所需的各种程序和数据。处理装置501、ROM502以及RAM503通过总线504彼此相连。输入/输出(I/O)接口505也连接至总线504。
通常,以下装置可以连接至I/O接口505:包括例如触摸屏、触摸板、键盘、鼠标、摄像头、麦克风、加速度计、陀螺仪等的输入装置506;包括例如液晶显示器(LCD)、扬声器、振动器等的输出装置507;包括例如磁带、硬盘等的存储装置508;以及通信装置509。通信装置509可以允许电子设备500与其他设备进行无线或有线通信以交换数据。虽然图5示出了具有各种装置的电子设备500,但是应理解的是,并不要求实施或具备所有示出的装置。可以替代地实施或具备更多或更少的装置。
特别地,根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括承载在非暂态计算机可读介质上的计算机程序,该计算机程序包含用于执行图2所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信装置509从网络上被下载和安装,或者从存储装置508被安装,或者从ROM502被安装。在该计算机程序被处理装置501执行时,执行本公开实施例的方法中限定的上述功能。
本公开实施例提供的电子设备与上述实施例提供的动画显示方法属于同一发明构思,未在本公开实施例中详尽描述的技术细节可参见上述实施例,并且本公开实施例与上述实施例具有相同的有益效果。本公开实施例提供了一种计算机存储介质,其上存储有计算机程序,该程序被处理器执行时实现上述实施例所提供的动画显示方法。
需要说明的是,本公开上述的计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。而在本公开中,计算机可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读信号介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:电线、光缆、RF(射频)等等,或者上述的任意合适的组合。
在一些实施方式中,客户端、服务器可以利用诸如HTTP(HyperText Transfer Protocol,超文本传输协议)之类的任何当前已知或未来研发的网络协议进行通信,并且可以与任意形式或介质的数字数据通信(例如,通信网络)互连。通信网络的示例包括局域网(“LAN”),广域网(“WAN”),网际网(例如,互联网)以及端对端网络(例如,ad hoc端对端网络),以及任何当前已知或未来研发的网络。
上述计算机可读介质可以是上述电子设备中所包含的;也可以是单独存在,而未装配入该电子设备中。
上述计算机可读介质承载有一个或者多个程序,当上述一个或者多个程序被该电子设备执行时,使得该电子设备:
获取目标轨迹段对应的轨迹参数集合,所述目标轨迹段为终端设备在运动过程中经过的轨迹段,所述轨迹参数集合包括多组轨迹参数,所述轨迹参数是终端设备在目标轨迹段的运动中采集得到的;在终端设备上目标轨迹段对应的显示位置显示目标模型对应的动画,所述目标模型是根据运动参数确定的,所述运动参数是根据所述轨迹参数集合确定的,所述运动参数体现所述终端设备在目标轨迹段中的运动状态特征。
计算机可读存储介质可以以一种或多种程序设计语言或其组合来编写用于执行本公开的操作的计算机程序代码,上述程序设计语言包括但不限于面向对象的程序设计语言—诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言—诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络——包括局域网(LAN)或广域网(WAN)—连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。
附图中的流程图和框图,图示了按照本公开各种实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,该模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
描述于本公开实施例中所涉及到的单元可以通过软件的方式实现,也可以通过硬件的方式来实现。其中,单元的名称在某种情况下并不构成对该单元本身的限定。
本文中以上描述的功能可以至少部分地由一个或多个硬件逻辑部件来执行。例如,非限制性地,可以使用的示范类型的硬件逻辑部件包括:现场可编程门阵列(FPGA)、专用集成电路(ASIC)、专用标准产品(ASSP)、片上系统(SOC)、复杂可编程逻辑设备(CPLD)等等。
在本公开的上下文中,机器可读介质可以是有形的介质,其可以包含或存储以供指令执行系统、装置或设备使用或与指令执行系统、装置或设备结合地使用的程序。机器可读介质可以是机器可读信号介质或机器可读储存介质。机器可读介质可以包括但不限于电子的、磁性的、光学的、电磁的、红外的、或半导体系统、装置或设备,或者上述内容的任何合适组合。机器可读存储介质的更具体示例会包括基于一个或多个线的电气连接、便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦除可编程只读存储器(EPROM或快闪存储器)、光纤、便捷式紧凑盘只读存储器(CD-ROM)、光学储存设备、磁储存设备、或上述内容的任何合适组合。
根据本公开的一个或多个实施例,【示例一】提供了一种动画显示方法,该方法包括:
获取目标轨迹段对应的轨迹参数集合,所述目标轨迹段为终端设备在运动过程中经过的轨迹段,所述轨迹参数集合包括多组轨迹参数,所述轨迹参数是终端设备在目标轨迹段的运动中采集得到的;
在终端设备上目标轨迹段对应的显示位置显示目标模型对应的动画,所述目标模型是根据运动参数确定的,所述运动参数是根据所述轨迹参数集合确定的,所述运动参数体现所述终端设备在目标轨迹段中的运动状态特征。
根据本公开的一个或多个实施例,【示例二】提供了一种动画显示方法,该方法还包括:可选地,所述方法应用于终端设备,还包括:
获取用户触发的第一指令,开始采集所述轨迹参数;
获取所述用户触发的第二指令,停止采集所述轨迹参数;
其中,所述第一指令用于指示所述目标轨迹段的起点,所述第二指令用于指示所述目标轨迹段的终点。
根据本公开的一个或多个实施例,【示例三】提供了一种动画显示方法,该方法还包括:可选地,所述终端设备包括陀螺仪和加速度计,所述轨迹参数集合包括第一轨迹参数,所述第一轨迹参数包括第一位置,所述开始采集所述轨迹参数包括:
获取第二位置,所述第二位置为所述用户触发所述第一指令时所述终端设备所在的位置;
根据所述陀螺仪和所述加速度计确定所述终端设备的运动信息;
根据所述第一位置和所述运动信息确定第一位置。
根据本公开的一个或多个实施例,【示例四】提供了一种动画显示方法,该方法还包括:可选地,在获取目标轨迹段对应的轨迹参数集合之后,所述方法还包括:
对所述轨迹参数集合中每组轨迹参数进行数据清洗。
根据本公开的一个或多个实施例,【示例五】提供了一种动画显示方法,该方法还包括:可选地,在终端设备上目标轨迹段对应的显示位置显示目标模型对应的动画之前,所述方法还包括:
根据所述轨迹参数集合确定所述目标轨迹段对应的显示位置;
确定目标模型对应的动画效果。
根据本公开的一个或多个实施例,【示例六】提供了一种动画显示方法,该方法还包括:可选地,在终端设备上目标轨迹段对应的显示位置显示目标模型对应的动画之前,所述方法还包括:
根据所述轨迹参数集合确定运动参数;
从模型库中选择所述运动参数对应的目标模型,所述模型库包括多种类型的模型。
根据本公开的一个或多个实施例,【示例七】提供了一种动画显示方法,该方法还包括:可选地,所述轨迹参数包括所述终端设备采集所述轨迹参数的采集时间和采集位置,所述运动参数包括平均运动速度,所述平均运动速度体现所述终端设备在目标轨迹段中的平均速度;
所述根据所述轨迹参数集合确定运动参数包括:
根据所述轨迹参数的采集时间将所述目标轨迹段分为多个子轨迹段;
根据所述多个子轨迹段中每个子轨迹段的起点的采集位置和采集时间,以及终点的采集位置和采集时间,分别计算每个子轨迹段中终端设备的移动速度;
根据所述多个子轨迹段中每个子轨迹段的移动速度,确定所述平均运动速度;
所述从模型库中选择所述运动参数对应的目标模型包括:
响应于所述平均运动速度大于平均速度阈值,从所述模型库中选择加速移动模型作为所述目标轨迹段对应的目标模型。
根据本公开的一个或多个实施例,【示例八】提供了一种动画显示方法,该方法还包括:可选地,所述轨迹参数包括所述终端设备采集所述轨迹参数的采集位置,所述运动参数包括平均轨迹方向,所述平均轨迹方向为所述目标轨迹段的起点到所述目标轨迹段的终点的方向;
所述根据所述轨迹参数集合确定运动参数包括:
根据所述目标轨迹段的起点的采集位置和所述目标轨迹段的终点的采集位置计算所述平均轨迹方向;
所述从模型库中选择所述运动参数对应的目标模型包括:
响应于所述平均轨迹方向在垂直方向的分量小于或等于上升阈值,从所述模型库中选择水平移动模型作为所述目标轨迹段对应的目标模型;
响应于所述平均轨迹方向在垂直方向的分量大于所述上升阈值,从所述模型库中选择垂直移动模型作为所述目标轨迹段对应的目标模型。
根据本公开的一个或多个实施例,【示例九】提供了一种动画显示方法,该方法还包括:可选地,所述轨迹参数还包括所述终端设备采集所述轨迹参数时终端设备的设备朝向,所述运动参数还包括平均设备朝向,所述平均设备朝向体现所述终端设备在目标轨迹段中的平均方向;
所述根据所述轨迹参数集合确定运动参数包括:
对所述轨迹参数集合所包括的多个设备朝向求平均值,得到所述平均设备朝向;
所述从模型库中选择所述运动参数对应的目标模型包括:
响应于所述平均设备朝向与所述平均轨迹方向之间的夹角大于角度阈值,根据动画生成规则从所述模型库中选择预设模型作为所述目标轨迹段对应的目标模型,所述动画生成规则指示倾斜移动终端设备与所述预设模型对应的动画之间的对应关系。
根据本公开的一个或多个实施例,【示例十】提供了一种动画显示方法,该方法还包括:可选地,所述轨迹参数包括所述终端设备采集所述轨迹参数的采集时间和采集位置,所述运动参数包括速度累积变化参数,所述速度累积变化参数体现所述终端设备在目标轨迹段中的速度波动情况;
所述根据所述轨迹参数集合确定运动参数包括:
根据所述轨迹参数集合计算所述终端设备在每个采集位置对应的速度;
根据所述终端设备在每个采集位置对应的速度计算速度累积变化参数;
所述从模型库中选择所述运动参数对应的目标模型包括:
响应于速度累积变化参数大于速度波动阈值,从所述模型库中选择速度波动模型作为所述目标轨迹段对应的目标模型。
根据本公开的一个或多个实施例,【示例十一】提供了一种动画显示方法,该方法还包括:可选地,在根据所述运动参数确定目标模型之前,所述方法还包括:
根据所述运动参数,确定所述目标轨迹段满足轨迹生成条件,所述轨迹生成条件包括所述终端设备的运动状态特征保持不变。
根据本公开的一个或多个实施例,【示例十二】提供了一种动画显示装置,包括:
获取单元,用于获取目标轨迹段对应的轨迹参数集合,所述目标轨迹段为终端设备在运动过程中经过的轨迹段,所述轨迹参数集合包括多组轨迹参数,所述轨迹参数是终端设备在目标轨迹段的运动中采集得到的;
显示单元,用于在终端设备上目标轨迹段对应的显示位置显示目标模型对应的动画,所述目标模型是根据运动参数确定的,所述运动参数是根据所述轨迹参数集合确定的,所述运动参数体现所述终端设备在目标轨迹段中的运动状态特征。
根据本公开的一个或多个实施例,【示例十三】提供了一种电子设备,所述电子设备包括:一个或多个处理器;存储器,用于存储一个或多个程序;当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现如本申请任一实施例所述的动画显示方法。
根据本公开的一个或多个实施例,【示例十四】提供了一种计算机可读存储介质,其上存储有计算机程序,该程序被处理器执行时实现如本申请任一实施例所述的动画显示方法。
以上描述仅为本公开的较佳实施例以及对所运用技术原理的说明。本领域技术人员应当理解,本公开中所涉及的公开范围,并不限于上述技术特征的特定组合而成的技术方案,同时也应涵盖在不脱离上述公开构思的情况下,由上述技术特征或其等同特征进行任意组合而形成的其它技术方案。例如上述特征与本公开中公开的(但不限于)具有类似功能的技术特征进行互相替换而形成的技术方案。
此外,虽然采用特定次序描绘了各操作,但是这不应当理解为要求这些操作以所示出的特定次序或以顺序次序执行来执行。在一定环境下,多任务和并行处理可能是有利的。同样地,虽然在上面论述中包含了若干具体实现细节,但是这些不应当被解释为对本公开的范围的限制。在单独的实施例的上下文中描述的某些特征还可以组合地实现在单个实施例中。相反地,在单个实施例的上下文中描述的各种特征也可以单独地或以任何合适的子组合的方式实现在多个实施例中。

Claims (14)

  1. 一种动画显示方法,其特征在于,所述方法包括:
    获取目标轨迹段对应的轨迹参数集合,所述目标轨迹段为终端设备在运动过程中经过的轨迹段,所述轨迹参数集合包括多组轨迹参数,所述轨迹参数是终端设备在目标轨迹段的运动中采集得到的;
    在终端设备上目标轨迹段对应的显示位置显示目标模型对应的动画,所述目标模型是根据运动参数确定的,所述运动参数是根据所述轨迹参数集合确定的,所述运动参数体现所述终端设备在目标轨迹段中的运动状态特征。
  2. 根据权利要求1所述的方法,其特征在于,所述方法应用于终端设备,还包括:
    获取用户触发的第一指令,开始采集所述轨迹参数;
    获取所述用户触发的第二指令,停止采集所述轨迹参数;
    其中,所述第一指令用于指示所述目标轨迹段的起点,所述第二指令用于指示所述目标轨迹段的终点。
  3. 根据权利要求2所述的方法,其特征在于,所述终端设备包括陀螺仪和加速度计,所述轨迹参数集合包括第一轨迹参数,所述第一轨迹参数包括第一位置,所述开始采集所述轨迹参数包括:
    获取第二位置,所述第二位置为所述用户触发所述第一指令时所述终端设备所在的位置;
    根据所述陀螺仪和所述加速度计确定所述终端设备的运动信息;
    根据所述第二位置和所述运动信息确定第一位置。
  4. 根据权利要求1所述的方法,其特征在于,在获取目标轨迹段对应的轨迹参数集合之后,所述方法还包括:
    对所述轨迹参数集合中每组轨迹参数进行数据清洗。
  5. 根据权利要求1所述的方法,其特征在于,在终端设备上目标轨迹段对应的显示位置显示目标模型对应的动画之前,所述方法还包括:
    根据所述轨迹参数集合确定所述目标轨迹段对应的显示位置;
    确定目标模型对应的动画效果。
  6. 根据权利要求1-5任一项所述的方法,其特征在于,在终端设备上目标轨迹段对应的显示位置显示目标模型对应的动画之前,所述方法还包括:
    根据所述轨迹参数集合确定运动参数;
    从模型库中选择所述运动参数对应的目标模型,所述模型库包括多种类型的模型。
  7. 根据权利要求6所述的方法,其特征在于,所述轨迹参数包括所述终端设备采集所述轨迹参数的采集时间和采集位置,所述运动参数包括平均运动速度,所述平均运动速度体现所述终端设备在目标轨迹段中的平均速度;
    所述根据所述轨迹参数集合确定运动参数包括:
    根据所述轨迹参数的采集时间将所述目标轨迹段分为多个子轨迹段;
    根据所述多个子轨迹段中每个子轨迹段的起点的采集位置和采集时间,以及终点的采集位置和采集时间,分别计算每个子轨迹段中终端设备的移动速度;
    根据所述多个子轨迹段中每个子轨迹段的移动速度,确定所述平均运动速度;
    所述从模型库中选择所述运动参数对应的目标模型包括:
    响应于所述平均运动速度大于平均速度阈值,从所述模型库中选择加速移动模型作为所述目标轨迹段对应的目标模型。
  8. 根据权利要求6所述的方法,其特征在于,所述轨迹参数包括所述终端设备采集所述轨迹参数的采集位置,所述运动参数包括平均轨迹方向,所述平均轨迹方向为所述目标轨迹段的起点到所述目标轨迹段的终点的方向;
    所述根据所述轨迹参数集合确定运动参数包括:
    根据所述目标轨迹段的起点的采集位置和所述目标轨迹段的终点的采集位置计算所述平均轨迹方向;
    所述从模型库中选择所述运动参数对应的目标模型包括:
    响应于所述平均轨迹方向在垂直方向的分量小于或等于上升阈值,从所述模型库中选择水平移动模型作为所述目标轨迹段对应的目标模型;
    响应于所述平均轨迹方向在垂直方向的分量大于所述上升阈值,从所述模型库中选择垂直移动模型作为所述目标轨迹段对应的目标模型。
  9. 根据权利要求8所述的方法,其特征在于,所述轨迹参数还包括所述终端设备采集所述轨迹参数时终端设备的设备朝向,所述运动参数还包括平均设备朝向,所述平均设备朝向体现所述终端设备在目标轨迹段中的平均方向;
    所述根据所述轨迹参数集合确定运动参数包括:
    对所述轨迹参数集合所包括的多个设备朝向求平均值,得到所述平均设备朝向;
    所述从模型库中选择所述运动参数对应的目标模型包括:
    响应于所述平均设备朝向与所述平均轨迹方向之间的夹角大于角度阈值,根据动画生成规则从所述模型库中选择预设模型作为所述目标轨迹段对应的目标模型,所述动画生成规则指示倾斜移动终端设备与所述预设模型对应的动画之间的对应关系。
  10. 根据权利要求6所述的方法,其特征在于,所述轨迹参数包括所述终端设备采集所述轨迹参数的采集时间和采集位置,所述运动参数包括速度累积变化参数,所述速度累积变化参数体现所述终端设备在目标轨迹段中的速度波动情况;
    所述根据所述轨迹参数集合确定运动参数包括:
    根据所述轨迹参数集合计算所述终端设备在每个采集位置对应的速度;
    根据所述终端设备在每个采集位置对应的速度计算速度累积变化参数;
    所述从模型库中选择所述运动参数对应的目标模型包括:
    响应于速度累积变化参数大于速度波动阈值,从所述模型库中选择速度波动模型作为所述目标轨迹段对应的目标模型。
  11. 根据权利要求1所述的方法,其特征在于,在根据所述运动参数确定目标模型之前,所述方法还包括:
    根据所述运动参数,确定所述目标轨迹段满足轨迹生成条件,所述轨迹生成条件包括所述终端设备的运动状态特征保持不变。
  12. 一种动画显示装置,其特征在于,所述装置包括:
    获取单元,用于获取目标轨迹段对应的轨迹参数集合,所述目标轨迹段为终端设备在运动过程中经过的轨迹段,所述轨迹参数集合包括多组轨迹参数,所述轨迹参数是终端设备在目标轨迹段的运动中采集得到的;
    显示单元,用于在终端设备上目标轨迹段对应的显示位置显示目标模型对应的动画,所述目标模型是根据运动参数确定的,所述运动参数是根据所述轨迹参数集合确定的,所述运动参数体现所述终端设备在目标轨迹段中的运动状态特征。
  13. 一种电子设备,其特征在于,所述电子设备包括:一个或多个处理器;存储器,用于存储一个或多个程序;当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现如权利要求1-11中任一所述的动画显示方法。
  14. 一种计算机可读存储介质,其上存储有计算机程序,其特征在于,该程序被处理器执行时实现如权利要求1-11中任一所述的动画显示方法。
PCT/CN2022/120159 2021-09-30 2022-09-21 一种动画显示方法、装置及设备 WO2023051340A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/571,129 US20240282031A1 (en) 2021-09-30 2022-09-21 Animation display method and apparatus, and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111163674.XA CN113888724B (zh) 2021-09-30 2021-09-30 一种动画显示方法、装置及设备
CN202111163674.X 2021-09-30

Publications (1)

Publication Number Publication Date
WO2023051340A1 true WO2023051340A1 (zh) 2023-04-06

Family

ID=79005044

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/120159 WO2023051340A1 (zh) 2021-09-30 2022-09-21 一种动画显示方法、装置及设备

Country Status (3)

Country Link
US (1) US20240282031A1 (zh)
CN (1) CN113888724B (zh)
WO (1) WO2023051340A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113888724B (zh) * 2021-09-30 2024-07-23 北京字节跳动网络技术有限公司 一种动画显示方法、装置及设备
CN115131471B (zh) * 2022-08-05 2024-08-02 北京字跳网络技术有限公司 基于图像的动画生成方法、装置、设备及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104754111A (zh) * 2013-12-31 2015-07-01 北京新媒传信科技有限公司 移动终端应用的控制方法和控制装置
US10984574B1 (en) * 2019-11-22 2021-04-20 Adobe Inc. Generating animations in an augmented reality environment
CN112734801A (zh) * 2020-12-30 2021-04-30 深圳市爱都科技有限公司 运动轨迹展示方法、终端设备以及计算机可读存储介质
CN113888724A (zh) * 2021-09-30 2022-01-04 北京字节跳动网络技术有限公司 一种动画显示方法、装置及设备

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5559941A (en) * 1994-10-26 1996-09-24 Brechner; Eric L. Method for smoothly maintaining a vertical orientation during computer animation
US8860732B2 (en) * 2010-09-27 2014-10-14 Adobe Systems Incorporated System and method for robust physically-plausible character animation
WO2014006143A1 (en) * 2012-07-04 2014-01-09 Sports Vision & Facts Ug Method and system for real-time virtual 3d reconstruction of a live scene, and computer-readable media
US9342913B2 (en) * 2013-02-19 2016-05-17 Ngrain (Canada) Corporation Method and system for emulating inverse kinematics
SG11201811228SA (en) * 2017-06-19 2019-01-30 Beijing Didi Infinity Technology & Development Co Ltd Systems and methods for displaying movement of vehicle on map
CN110827376A (zh) * 2018-08-09 2020-02-21 北京微播视界科技有限公司 增强现实多平面模型动画交互方法、装置、设备及存储介质
CN109107160B (zh) * 2018-08-27 2021-12-17 广州要玩娱乐网络技术股份有限公司 动画交互方法、装置、计算机存储介质和终端
CN111275797B (zh) * 2020-02-26 2022-05-31 腾讯科技(深圳)有限公司 动画显示方法、装置、设备及存储介质
CN111589150B (zh) * 2020-04-22 2023-03-24 腾讯科技(深圳)有限公司 虚拟道具的控制方法、装置、电子设备及存储介质
CN113744372B (zh) * 2020-05-15 2024-09-27 完美世界(北京)软件科技发展有限公司 一种动画生成方法、装置、设备
CN112035041B (zh) * 2020-08-31 2022-05-31 北京字节跳动网络技术有限公司 一种图像处理方法、装置、电子设备和存储介质

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104754111A (zh) * 2013-12-31 2015-07-01 北京新媒传信科技有限公司 移动终端应用的控制方法和控制装置
US10984574B1 (en) * 2019-11-22 2021-04-20 Adobe Inc. Generating animations in an augmented reality environment
CN112734801A (zh) * 2020-12-30 2021-04-30 深圳市爱都科技有限公司 运动轨迹展示方法、终端设备以及计算机可读存储介质
CN113888724A (zh) * 2021-09-30 2022-01-04 北京字节跳动网络技术有限公司 一种动画显示方法、装置及设备

Also Published As

Publication number Publication date
US20240282031A1 (en) 2024-08-22
CN113888724A (zh) 2022-01-04
CN113888724B (zh) 2024-07-23

Similar Documents

Publication Publication Date Title
WO2023051340A1 (zh) 一种动画显示方法、装置及设备
US9355451B2 (en) Information processing device, information processing method, and program for recognizing attitude of a plane
CN112733820B (zh) 障碍物信息生成方法、装置、电子设备和计算机可读介质
WO2020228682A1 (zh) 对象交互方法及装置、系统、计算机可读介质和电子设备
WO2023151524A1 (zh) 图像显示方法、装置、电子设备及存储介质
CN114253647B (zh) 元素展示方法、装置、电子设备及存储介质
JP7518168B2 (ja) ビデオにオブジェクトを表示する方法、装置、電子機器、及びコンピュータ読み取り可能な記憶媒体
JP2023509866A (ja) 画像処理方法及び装置
WO2023151558A1 (zh) 用于显示图像的方法、装置和电子设备
EP4332904A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN111382701B (zh) 动作捕捉方法、装置、电子设备及计算机可读存储介质
WO2023193639A1 (zh) 图像渲染方法、装置、可读介质及电子设备
WO2023116562A1 (zh) 图像展示方法、装置、电子设备及存储介质
WO2024120114A1 (zh) 特效处理方法、装置、设备、计算机可读存储介质及产品
CN114049403A (zh) 一种多角度三维人脸重建方法、装置及存储介质
WO2023185393A1 (zh) 图像处理方法、装置、设备及存储介质
WO2023174087A1 (zh) 特效视频生成方法、装置、设备及存储介质
WO2023142834A1 (zh) 帧同步数据处理方法、装置、可读介质和电子设备
WO2022033445A1 (zh) 交互式动态流体效果处理方法、装置及电子设备
WO2023279939A1 (zh) 具备触觉交互功能的用户手持设备、触觉交互方法及装置
EP4336843A1 (en) Touch animation display method and apparatus, device, and medium
CN111627106B (zh) 人脸模型重构方法、装置、介质和设备
US11721027B2 (en) Transforming sports implement motion sensor data to two-dimensional image for analysis
CN108595095A (zh) 基于手势控制模拟目标体运动轨迹的方法和装置
CN114937059A (zh) 显示对象的运动控制方法及设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22874727

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 18571129

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17.07.2024)