CN117953140A - Information generation method, device, equipment and medium - Google Patents

Information generation method, device, equipment and medium

Info

Publication number
CN117953140A
CN117953140A
Authority
CN
China
Prior art keywords
sampling
virtual
dimensional object
preset
path
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211280742.5A
Other languages
Chinese (zh)
Inventor
瞿镇一
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202211280742.5A
Publication of CN117953140A
Legal status: Pending

Abstract

Embodiments of the present disclosure relate to an information generation method, apparatus, device, and medium. The method includes: determining a virtual three-dimensional object to be sampled; determining a plurality of sampling positions of the virtual three-dimensional object; sampling the virtual three-dimensional object based on the plurality of sampling positions to obtain a plurality of sampling images corresponding to the virtual three-dimensional object; and generating a target video or a target image combination corresponding to the virtual three-dimensional object based on the plurality of sampling images. The target video or target image combination obtained by the embodiments of the present disclosure not only provides the user with a good visual experience and a full understanding of the virtual three-dimensional object, but also requires fewer storage and processing resources, making it more convenient to store and display.

Description

Information generation method, device, equipment and medium
Technical Field
The present disclosure relates to the field of information processing technologies, and in particular, to an information generating method, apparatus, device, and medium.
Background
In scenes that present a virtual three-dimensional model, such as virtual reality, a stereoscopic effect can be presented to the user visually; such scenes are widely used and popular with users. In some scenarios, the information of a virtual three-dimensional model may need to be saved. For example, a user interested in a certain model in a virtual three-dimensional scene may wish to save the model's information in order to review its appearance later or to share it with other users. However, the form of model information obtained with existing techniques is poor, which results in a poor model display based on that information.
Disclosure of Invention
In order to solve the above technical problems, or at least partially solve them, the present disclosure provides an information generating method, apparatus, device, and medium.
An embodiment of the present disclosure provides an information generation method, including: determining a virtual three-dimensional object to be sampled; determining a plurality of sampling positions of the virtual three-dimensional object; sampling the virtual three-dimensional object based on the plurality of sampling positions to obtain a plurality of sampling images corresponding to the virtual three-dimensional object; and generating a target video or a target image combination corresponding to the virtual three-dimensional object based on the plurality of sampling images.
An embodiment of the present disclosure also provides an information generating apparatus, including: an object determining module for determining a virtual three-dimensional object to be sampled; a sampling determining module for determining a plurality of sampling positions of the virtual three-dimensional object; a sampling processing module for sampling the virtual three-dimensional object based on the plurality of sampling positions to obtain a plurality of sampling images corresponding to the virtual three-dimensional object; and an information generating module for generating a target video or a target image combination corresponding to the virtual three-dimensional object based on the plurality of sampling images.
An embodiment of the present disclosure also provides an electronic device, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to read the executable instructions from the memory and execute them to implement the information generating method provided by the embodiments of the present disclosure.
The present disclosure also provides a computer-readable storage medium storing a computer program for executing the information generating method as provided by the embodiments of the present disclosure.
According to the technical solution provided by the embodiments of the present disclosure, a virtual three-dimensional object can be sampled at a plurality of sampling positions, and a target video or target image combination corresponding to the virtual three-dimensional object can be generated based on the resulting sampling images; that is, the information of the virtual three-dimensional object is presented in the form of a video or an image combination. The resulting target video or image combination provides the user with a good visual experience and a full understanding of the virtual three-dimensional object, while requiring fewer storage and processing resources, making it more convenient to store and display.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
To more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, the drawings required for describing them are briefly introduced below; those skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a schematic flow chart of an information generating method according to an embodiment of the disclosure;
Fig. 2 is a schematic view of shooting provided in an embodiment of the disclosure;
Fig. 3 is a schematic view of shooting provided in an embodiment of the disclosure;
Fig. 4 is a schematic view of an image sequence provided in an embodiment of the present disclosure;
Fig. 5 is a schematic structural diagram of an information generating apparatus according to an embodiment of the present disclosure;
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
In order that the above objects, features, and advantages of the present disclosure may be more clearly understood, aspects of the present disclosure are further described below. It should be noted that, in the absence of conflict, the embodiments of the present disclosure and the features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth to provide a thorough understanding of the present disclosure; however, the present disclosure may also be practiced in other ways. Clearly, the embodiments described in this specification are only some, not all, of the embodiments of the disclosure.
The inventor has found that, in the related art, there are two main ways to preserve the information of a model in a virtual three-dimensional scene. One way is to capture a single image of the model; the model's form as presented through a single image is limited, giving the user a poor visual impression. The other way requires downloading the model, which consumes more processing and storage resources; rendering is then required to display the model, consuming further processing resources and taking considerable time, so the user must wait a long while before viewing the model. In summary, the form of model information obtained in the related art is poor, and model display based on that information is correspondingly poor. To improve on at least one of these problems, embodiments of the present disclosure provide an information generating method, apparatus, device, and medium, described in detail below.
Fig. 1 is a flow chart of an information generating method according to an embodiment of the present disclosure. The method may be performed by an information generating apparatus, which may be implemented in software and/or hardware and may generally be integrated in an electronic device. As shown in Fig. 1, the method mainly includes the following steps S102 to S108:
Step S102, determining a virtual three-dimensional object to be sampled.
The virtual three-dimensional object may be a 3D model in a virtual three-dimensional scene, such as an article, a building, a person, or an animal in the virtual scene, or it may be the virtual three-dimensional scene itself. In some implementation examples, a virtual three-dimensional object specified by the user in the virtual three-dimensional scene is taken as the virtual three-dimensional object to be sampled; that is, the user may designate the object to be sampled as needed, in a preset manner, for example by designating the object to be photographed through a specific prop. This is not limited here.
Step S104, a plurality of sampling positions of the virtual three-dimensional object are determined.
In some embodiments, the plurality of sampling positions may be set flexibly according to requirements; for example, the sampling positions may be set by the user or by system default, without limitation here.
To achieve a better sampling effect, in some implementation examples the sampling positions must satisfy the following conditions: the included angle between any two adjacent sampling positions is not larger than a preset angle threshold, where the included angle is the angle between the lines connecting each of the two adjacent sampling positions to the virtual three-dimensional object; and/or the distance between any two adjacent sampling positions is not greater than a preset distance threshold. In practice, the sampling positions determined for the virtual three-dimensional object need only meet one or more of these conditions. The angle threshold and distance threshold can be set flexibly according to the actual situation; constraining adjacent sampling positions through such thresholds yields a better sampling effect. For example, the closer the distance and/or the smaller the included angle between adjacent sampling positions, the smoother the video generated from the sampling images.
In some implementation examples, the sampling positions must satisfy the following conditions: the included angle between any two adjacent sampling positions is the same, where the included angle is the angle between the lines connecting each of the two adjacent sampling positions to the virtual three-dimensional object; and/or the distance between any two adjacent sampling positions is the same. In this way, the virtual three-dimensional object is sampled uniformly, so that the sampling images presented to the user uniformly cover different surface information of the object, adjacent sampling images connect more strongly, and the target video generated from them appears more natural.
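As an illustrative sketch (not part of the disclosure), the angle and distance constraints on adjacent sampling positions described above can be checked with a small helper; the function names and threshold values here are assumptions:

```python
import math

def subtended_angle(p1, p2, center):
    """Angle (radians) between the lines that connect two adjacent
    sampling positions to the object's center."""
    v1 = [a - c for a, c in zip(p1, center)]
    v2 = [a - c for a, c in zip(p2, center)]
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    # Clamp to [-1, 1] to guard against floating-point drift.
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

def satisfies_constraints(positions, center, max_angle, max_dist):
    """Check every adjacent pair of sampling positions against the
    preset angle threshold and distance threshold."""
    for p1, p2 in zip(positions, positions[1:]):
        if subtended_angle(p1, p2, center) > max_angle:
            return False
        if math.dist(p1, p2) > max_dist:
            return False
    return True
```

For example, eight positions spaced 45° apart on a unit circle around the object satisfy an angle threshold of 1 rad but violate a threshold of 0.5 rad.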
And step S106, sampling the virtual three-dimensional object based on the plurality of sampling positions to obtain a plurality of sampling images corresponding to the virtual three-dimensional object.
Specifically, for each sampling position, the virtual three-dimensional object is sampled at that position, and the resulting sampling image presents surface information of the object. Sampling images obtained from different sampling positions present different local surface information of the virtual three-dimensional object.
For ease of understanding, sampling a virtual three-dimensional object may be thought of as photographing it in the virtual three-dimensional scene, for example with a virtual camera whose position (the shooting position) is the sampling position. Further, other sampling parameters such as the sampling time (e.g., the virtual camera's shooting time) and the sampling pose (e.g., the virtual camera's pose) may be set, and the virtual three-dimensional object is then sampled based on the sampling position, sampling pose, and sampling time to obtain the plurality of sampling images.
Step S108, generating a target video or a target image combination corresponding to the virtual three-dimensional object based on the plurality of sampling images. That is, information of the virtual three-dimensional object finally generated by the embodiments of the present disclosure may be represented in the form of video or image combination, or the like.
In practical application, the sampling images may be ordered according to the sampling position corresponding to each image, and the target video or target image combination generated from the ordered images; the frame rate of the target video may be set by the user or by system default, without limitation here. The embodiments of the present disclosure also do not limit the specific form of the target image combination. It may be an image in a preset format, such as a GIF (Graphics Interchange Format) image; it may be formed by arranging the sampling images in N rows and M columns, so that the user can see at a glance the images sampled from different positions around the virtual three-dimensional object; or it may take a slideshow (PPT) form played page by page. The form of the image combination can be set flexibly according to requirements. In practice, after the sampling images are obtained, the user may decide whether to generate a target video or a target image combination, or the decision may be made from the number of sampling images: for example, if the number of sampling images exceeds a preset count threshold, a target video is generated; otherwise, a target image combination is generated. The foregoing are exemplary illustrations and are not limiting.
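The count-based decision and the N-row, M-column arrangement above can be sketched as follows; the threshold of 24 images and the column limit are illustrative assumptions, not values from the disclosure:

```python
import math

def choose_output(num_images, count_threshold=24, max_cols=4):
    """Pick a target video when there are many sampling images,
    otherwise an image combination arranged in a rows-by-columns grid.
    Returns the chosen form and, for a grid, its (rows, cols) shape."""
    if num_images > count_threshold:
        return ("video", None)
    cols = min(max_cols, num_images)
    rows = math.ceil(num_images / cols)
    return ("image_combination", (rows, cols))
```

For example, 30 sampling images would yield a video, while 6 images would be laid out as a 2x4 grid (with two empty cells).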
It can be understood that presenting the information of the virtual three-dimensional object as a target video or target image combination gives the user a better visual experience and lets the user clearly see how the object appears from different angles, thereby fully understanding it. In addition, a video or image combination requires fewer storage and processing resources and is more convenient to store and display.
In some embodiments, the step S104 may be implemented with reference to the following steps a to B:
Step A, determining a sampling path.
The sampling path is a path set in the virtual three-dimensional scene along which the virtual three-dimensional object is sampled multiple times. In some specific implementation examples, a path specified by the user in the virtual three-dimensional scene is acquired and used as the sampling path. That is, the user may specify the sampling path freely, for example by controlling a prop to draw it in space. In other specific implementation examples, the sampling path is generated based on the position of the virtual three-dimensional object and a preset path-setting pattern, i.e., a preset way of generating a sampling path from the object's position. One such pattern is: take the position of the virtual three-dimensional object as the center of a preset figure, obtain a target figure from preset figure parameters, and use the outline of the target figure as the sampling path. For example, if the preset figure parameters include a circle radius, the target figure is a circle; if they include rectangle side lengths, the target figure is a rectangle. These are merely examples; the path-setting pattern can be set flexibly according to actual requirements and is not limited here.
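As a minimal sketch of the circular path-setting pattern above (an illustrative helper, assuming the path lies in a horizontal plane through the object's center):

```python
import math

def circular_path(center, radius, n_points):
    """Generate points on a circle centred on the virtual object's
    position -- one example of a preset path-setting pattern."""
    cx, cy, cz = center
    return [
        (cx + radius * math.cos(2 * math.pi * i / n_points),
         cy + radius * math.sin(2 * math.pi * i / n_points),
         cz)
        for i in range(n_points)
    ]
```

Each generated point lies at the given radius from the object's center, so a virtual camera moved along this outline keeps a constant distance from the object.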
The user may wear a virtual reality device such as a virtual reality headset; the virtual three-dimensional object is displayed to the user through the device, and interaction between the user and the virtual reality is achieved through changes in the position and pose of the user's virtual reality device and/or a virtual reality controller such as a handle. In the foregoing embodiments, when determining the sampling path, the virtual three-dimensional object may be displayed to the user through the virtual reality device, and the sampling path determined from the position and pose signals input by the user via the virtual reality device and/or controller, then displayed to the user. In some specific implementation examples, a path designated by the user in the virtual three-dimensional scene is acquired, used as the sampling path, and displayed to the user, so that by changing position and pose in virtual reality the user can get a preliminary sense of the sampling images the path would produce and adjust the path more flexibly. In other specific implementation examples, the sampling path is generated from the position of the virtual three-dimensional object and a preset path-setting pattern and displayed to the user, so that the user can select a more appropriate preset path through interaction with the virtual reality.
Sampling paths can thus be set flexibly in the virtual three-dimensional scene by the user and/or the system; different sampling paths yield different sampling effects and present different visual impressions to the user. For a user with a virtual reality device and controller, displaying the sampling path in virtual reality allows the path to be set or selected more intuitively and flexibly through interaction.
Step B, determining a plurality of sampling positions of the virtual three-dimensional object based on the sampling path, where each sampling position is located on the sampling path. In other words, the sampling positions may be determined along the sampling path, the sampling path being the connection of the multiple sampling positions. In some implementation examples, step B may be implemented with reference to the following steps B1 and B2:
Step B1, obtaining the total number of sampling positions. That is, the total number of sampling positions (i.e., the total number of samples) may be determined in advance, and each sampling position on the sampling path then determined from it. There are various ways to determine this total number; examples follow:
Mode one: obtain the total number of samples set by the user and use it as the total number of sampling positions. That is, the total number of samples may be set by the user as desired and is not limited here.
Mode two: first acquire a preset total duration and a preset interval duration, where the preset total duration is the intended duration of the target video to be generated, and the preset interval duration is the intended interval between the sampling times of two adjacent sampling positions; then determine the total number of sampling positions from the preset total duration and the preset interval duration. The preset total duration may be set by the user or be a system default, and likewise the preset interval duration. Given the total duration and the interval between adjacent samples, the number of samples, i.e., the total number of sampling positions, can be determined.
It can be understood that when the interval between the sampling times of two adjacent sampling positions is short, persistence of vision makes the sampling images, played in sequence, appear dynamic and smooth to the user; when the interval is long, the images played in sequence appear jerky, and video playback is not fluent. To sufficiently ensure the viewing experience, in mode two the total number of sampling positions may be determined from the preset total duration and preset interval duration as follows: if the preset interval duration is not greater than a preset time-interval threshold, the total number of sampling positions is determined from the ratio of the preset total duration to the preset interval duration; if the preset interval duration is greater than the time-interval threshold, the total number is determined from the ratio of the preset total duration to the time-interval threshold.
In practical application, if the preset interval duration is set by the user and is too long, the resulting video would appear to stutter; the system can then optimize automatically, for example by determining the number of sampling positions from the time-interval threshold instead. For the same preset total duration, when the preset interval duration exceeds the threshold, the number of samples determined from the threshold is larger than the number determined from the preset interval duration; increasing the number of samples along the sampling path in this way ensures the sampling effect and avoids video stutter. The time-interval threshold is the longest acceptable interval between the sampling times of two adjacent sampling positions and can be determined from persistence of vision. For example, assuming that a frame rate of at least 25 fps gives the video sufficient smoothness that the user perceives no obvious stutter, the time-interval threshold can be set according to 25 fps, i.e., to 0.04 s. In this way the smoothness of the subsequently generated video is effectively guaranteed.
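Mode two, including the clamping of overly long interval durations to the time-interval threshold, can be sketched as follows (an illustrative helper; the default threshold of 0.04 s corresponds to the 25 fps example above):

```python
def total_sample_count(total_duration, interval, interval_threshold=0.04):
    """Derive the number of sampling positions from the intended video
    duration and the per-sample interval; intervals longer than the
    threshold are clamped to it so playback stays smooth."""
    effective = interval if interval <= interval_threshold else interval_threshold
    return round(total_duration / effective)
```

For a 2-second target video, an interval of 0.04 s yields 50 samples; a user-set interval of 0.1 s exceeds the threshold and is clamped, still yielding 50 samples.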
Mode three: first acquire the path parameters of the sampling path and the association parameters between two adjacent sampling positions. The path parameters may include the total included angle of the sampling path, i.e., the angle between the lines connecting the path's start point and end point to the virtual three-dimensional object, with the association parameters including the included angle between the lines connecting two adjacent sampling positions to the object; and/or the path parameters may include the total length of the sampling path, with the association parameters including the distance between two adjacent sampling positions. The total number of sampling positions is then determined from the path parameters and the association parameters: as the ratio of the total included angle to the per-step included angle, or as the ratio of the total path length to the per-step distance. In this way the sampling path is divided evenly by angle or by distance, the determined total number of sampling positions is more reasonable, and evenly distributed sampling positions are obtained, so that the sampling images presented to the user evenly cover different information of the virtual three-dimensional object, and the target video generated from them appears more natural and better connected to the user.
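Mode three reduces to two ratios, sketched below for a closed path (function names are illustrative; angles in degrees are an assumption):

```python
def total_from_angles(total_angle, step_angle):
    """Divide the path's total subtended angle by the per-step angle
    between adjacent sampling positions."""
    return round(total_angle / step_angle)

def total_from_length(total_length, step_distance):
    """Divide the total path length by the distance between adjacent
    sampling positions."""
    return round(total_length / step_distance)
```

For example, a full 360° circular path with a 45° step gives 8 sampling positions; a 10-unit path with a 2.5-unit step gives 4.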
Step B2, determining the plurality of sampling positions of the virtual three-dimensional object based on their total number and the sampling path.
In practical application, given the total number of sampling positions, each position on the sampling path can be set flexibly according to requirements. In particular, a sampling position may be required to meet one or more of the following conditions: the included angle between any two adjacent sampling positions is not larger than a preset angle threshold; the distance between any two adjacent sampling positions is not greater than a preset distance threshold; the included angle between any two adjacent sampling positions is the same; the distance between any two adjacent sampling positions is the same. As described above, sampling positions satisfying these conditions contribute to a better sampling effect.
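One way to realize the equal-distance condition above is to place the given number of positions at equal arc-length intervals along the path; the sketch below does this for a polyline path (an illustrative helper, not part of the disclosure):

```python
import math

def evenly_spaced_positions(path, n):
    """Place n sampling positions at equal arc-length intervals along a
    polyline sampling path, so adjacent positions are equidistant."""
    segments = list(zip(path, path[1:]))
    seg_lens = [math.dist(a, b) for a, b in segments]
    total = sum(seg_lens)
    positions = []
    for i in range(n):
        target = total * i / (n - 1) if n > 1 else 0.0
        acc = 0.0
        for (a, b), length in zip(segments, seg_lens):
            # Small tolerance guards the final endpoint against rounding.
            if acc + length >= target - 1e-9:
                t = (target - acc) / length if length else 0.0
                positions.append(tuple(pa + t * (pb - pa) for pa, pb in zip(a, b)))
                break
            acc += length
    return positions
```

For a straight 10-unit path and three positions, this yields the start, midpoint, and end.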
Through steps A and B, the plurality of sampling positions of the virtual three-dimensional object can be determined reasonably and reliably, guaranteeing both the effect of each sampling image and the overall effect of the target video.
Having determined the plurality of sampling positions, in some embodiments the step of sampling the virtual three-dimensional object based on them may be performed with reference to the following steps 1 and 2:
Step 1, acquiring sampling parameters corresponding to each sampling position; the sampling parameters include sampling time and/or sampling pose.
In practical application, besides the sampling positions, sampling parameters such as the sampling time or sampling pose for each position can be set flexibly according to requirements. The sampling pose may be characterized by attitude angles such as yaw, pitch, and roll, for example taking the virtual camera's attitude angles as the sampling pose. It can be appreciated that even for a fixed sampling position, different sampling parameters yield sampling images with different effects, making the images richer. The sampling parameters may be set by default or by the user and are not limited here.
To better ensure the sampling effect, the sampling parameters may be constrained. In some specific embodiments, the interval between the sampling times of any two adjacent sampling positions is not greater than a preset time-interval threshold; and/or the intervals between the sampling times of any two adjacent sampling positions are the same. The time-interval threshold can be set flexibly. In this way the sampling times of adjacent positions are constrained: shorter intervals between adjacent sampling times make the video generated from the sampling images smoother, or the image combination better connected; equal intervals sample the virtual three-dimensional object uniformly in the time dimension, so the final video or image combination is better ordered in time and presents more smoothly and naturally.
Illustratively, where the sampling parameters include the sampling time, embodiments of the present disclosure also provide a way to acquire the sampling time for each sampling position: determine it from the order of the sampling positions on the sampling path. In practice, the sampling time for each position can be set flexibly according to requirements; in some specific implementation examples, it is determined from a preset sampling time interval (which may also be understood as a sampling delay) and the order of the sampling positions on the path. For example, the sampling time of the first position may be set first, and the times of the remaining positions then determined in sequence from the preset interval and the path order; sampling times determined this way are more reasonable.
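The ordering-based assignment above can be sketched in a few lines (illustrative default values; the 0.04 s interval echoes the earlier 25 fps example):

```python
def sampling_times(n_positions, start_time=0.0, interval=0.04):
    """Assign a sampling time to each position in path order, starting
    from the first position's time and stepping by a fixed interval."""
    return [start_time + i * interval for i in range(n_positions)]
```

For instance, three positions with a 0.5 s interval are sampled at 0.0 s, 0.5 s, and 1.0 s.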
Illustratively, in the case where the sampling parameters include a sampling pose, the embodiments of the present disclosure further provide a manner of acquiring the sampling pose corresponding to each sampling position. In some specific examples, the sampling pose corresponding to each sampling position may be a pose facing the virtual three-dimensional object, so as to ensure that local surface information of the virtual three-dimensional object can be acquired clearly and comprehensively at each sampling position. In other specific examples, the sampling pose corresponding to each sampling position may be obtained according to a preset pose, that is, the pose at each sampling position may be preset; pose angles such as the yaw angle, the pitch angle and the roll angle at different sampling positions may be the same, different, or partially the same. In still other specific examples, provided the sampling path is relatively regular, such as a circular path, the sampling poses at the plurality of sampling positions may be oriented perpendicular to the tangential direction of the sampling path. The above are all exemplary descriptions; in practical applications, the sampling pose corresponding to each sampling position can be flexibly set as required, which is not limited herein.
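For the pose facing the virtual three-dimensional object, one minimal sketch (assuming a right-handed, Y-up coordinate frame; the function name is hypothetical and roll is fixed at zero) derives yaw and pitch from the vector pointing from the sampling position to the object:

```python
import math

def look_at_pose(sample_pos, object_pos):
    """Yaw and pitch (degrees) orienting a virtual camera at
    sample_pos toward the virtual three-dimensional object at
    object_pos; roll is fixed at 0."""
    dx = object_pos[0] - sample_pos[0]
    dy = object_pos[1] - sample_pos[1]
    dz = object_pos[2] - sample_pos[2]
    yaw = math.degrees(math.atan2(dx, dz))                 # rotation about the vertical axis
    pitch = math.degrees(math.atan2(dy, math.hypot(dx, dz)))  # up/down tilt
    return yaw, pitch, 0.0
```

A preset-pose scheme would simply return stored angles per position instead of computing them.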
In practical application, required sampling parameters can be flexibly set according to requirements, so that personalized sampling for the virtual three-dimensional object is realized.
Step 2: sampling the virtual three-dimensional object based on the sampling positions and the sampling parameters corresponding to each sampling position.
In the case where the sampling parameters corresponding to each sampling position are determined, the virtual three-dimensional object can be sampled based on the corresponding sampling parameters. Since different sampling positions are located differently relative to the virtual three-dimensional object, local surface information of the virtual three-dimensional object at a plurality of orientations can be obtained, so that a user can fully understand the shape of the virtual three-dimensional object from multiple angles.
On the basis of obtaining a plurality of sampling images, a target video or a target image combination corresponding to the virtual three-dimensional object can be generated based on the plurality of sampling images. In a specific implementation, the arrangement order corresponding to each sampling image can be determined first, and the target video or target image combination corresponding to the virtual three-dimensional object generated according to that arrangement order. In some specific implementation examples of determining the arrangement order, the arrangement order corresponding to each sampling image may be determined according to the arrangement order of the sampling positions corresponding to the sampling images and/or the arrangement order of the sampling times corresponding to the sampling images. In this way, the obtained target video or target image combination is more natural and more coherent to the viewer.
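The arrangement rule above can be sketched under the assumption that each sampled image carries its sampling time and the index of its sampling position on the path (the field names are hypothetical):

```python
def order_sampled_images(samples):
    """Arrange sampled images primarily by sampling time, breaking
    ties by the index of the sampling position on the sampling path."""
    return sorted(samples, key=lambda s: (s["time"], s["path_index"]))

frames = order_sampled_images([
    {"image": "img_c", "time": 0.08, "path_index": 2},
    {"image": "img_a", "time": 0.00, "path_index": 0},
    {"image": "img_b", "time": 0.04, "path_index": 1},
])
```

Sorting on the pair makes either criterion — position order, time order, or both — usable without changing the code.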
In some implementation examples, the embodiments of the present disclosure further provide a specific implementation manner of generating the target video corresponding to the virtual three-dimensional object according to the arrangement order corresponding to each sampling image, which may be performed with reference to the following steps a and b:
Step a: obtaining a target frame rate, wherein the target frame rate is not less than a preset frame rate threshold.
In some implementations, a system default frame rate may be employed as the target frame rate. In addition, the user may manually adjust the video frame rate; in other embodiments, the video frame rate set by the user may be acquired, and the target frame rate obtained based on it. For example, the video frame rate set by the user may be compared with a preset frame rate threshold (such as 25 fps): if the video frame rate set by the user is not lower than the frame rate threshold, it is taken as the target frame rate; if it is lower than the preset frame rate threshold, the preset frame rate threshold is taken as the target frame rate. In this way, the generated target video is smoother and does not feel choppy to the user. The user can change the smoothness of video playback by manually adjusting the video frame rate; for example, setting the video frame rate to 60 fps makes playback smoother.
Step b: generating a target video corresponding to the virtual three-dimensional object based on the arrangement order corresponding to each sampling image and the target frame rate. For example, the sampling images are ordered from front to back based on their respective arrangement order, and video generation processing is performed on the plurality of sampling images at the target frame rate to obtain the target video.
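Steps a and b can be sketched as follows; the function names, the 30 fps system default, and the dictionary-based video stub are illustrative assumptions (the 25 fps threshold is the example value given above, and the disclosure does not name a particular encoder):

```python
def target_frame_rate(user_fps=None, min_fps=25.0, default_fps=30.0):
    """Step a: choose a target frame rate not below the preset
    threshold, from the user's setting or a system default."""
    fps = user_fps if user_fps is not None else default_fps
    return fps if fps >= min_fps else min_fps

def build_video(ordered_images, fps):
    """Step b (abstract): hand the ordered sampling images and the
    target frame rate to a video-generation routine."""
    return {"frames": list(ordered_images), "fps": fps}

# A user-set 20 fps falls below the 25 fps threshold and is clamped.
video = build_video(["img_0", "img_1", "img_2"], target_frame_rate(user_fps=20.0))
```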
The target video obtained in the above manner can naturally and smoothly present, to the user, object information obtained by sampling the virtual three-dimensional object from a plurality of orientations at a plurality of times. It has a certain visual impact, can achieve a slow-motion playback effect similar to bullet-time shooting, and can further present the user with a fresh visual experience of space-time interleaving, making it more engaging.
For ease of understanding, the embodiments of the present disclosure provide a specific application example of the above content. As shown in fig. 2, a user may designate an object to be photographed through a prop A in a virtual reality scene; the position of the prop A may be regarded as the photographing center (which may also be referred to as the focusing center), and the virtual three-dimensional object located at the photographing center is the object to be photographed. The system can generate a plurality of shooting points (corresponding to the sampling positions described above) on the track path drawn by the user. The plurality of shooting points can be uniformly distributed, where uniform distribution means that the distance ΔB between adjacent shooting points is the same and the angle a formed by the lines connecting adjacent shooting points to A is the same. Further, when the plurality of shooting points are determined, a maximum threshold of ΔB or a maximum threshold of the angle a can be imposed, determined according to practical experience, so that the lens picture transitions smoothly as it moves, without an abrupt cut, and the images obtained at adjacent shooting points connect well. In the above manner, the total number N of shooting points and their positions on the track path can finally be determined; shooting is then performed toward the direction of A according to a preset shooting time delay Δt (namely, the shooting time interval between two adjacent shooting points) to obtain a plurality of shot pictures (namely, the sampling images described above) corresponding to the object to be photographed, and a shot video of the object is then generated based on the plurality of shot pictures.
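The uniform distribution of shooting points — equal chord distance ΔB between neighbors and equal angle a as seen from the focus center A — can be sketched for a circular track path as follows; the function name and the circular shape are illustrative assumptions:

```python
import math

def uniform_shooting_points(center, radius, n):
    """Place n shooting points evenly on a circular track path around
    the focus center: equal angle a = 2*pi/n between adjacent points
    as seen from the center, hence equal chord distance dB between
    neighbors."""
    cx, cy, cz = center
    return [(cx + radius * math.cos(2 * math.pi * i / n),
             cy,
             cz + radius * math.sin(2 * math.pi * i / n))
            for i in range(n)]

pts = uniform_shooting_points((0.0, 0.0, 0.0), 2.0, 8)
```

For any n, the neighbor distance is the chord 2·r·sin(π/n), so capping ΔB (or a) from above translates directly into a minimum point count for a given radius.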
In addition, if the Δt set by the user is too large, the smoothness of the video will suffer; therefore, a maximum threshold of Δt (namely, the preset time delay threshold) can be imposed. When the Δt set by the user is greater than the time delay threshold, shooting can be performed directly based on the time delay threshold, ensuring that the video retains a certain smoothness and sparing the user a choppy viewing experience.
In addition, it can be understood that, for the same track path, the larger the total number N of shooting points and/or the smaller the shooting time delay Δt, the smoother the finally obtained video. Therefore, to ensure video smoothness, the total number N of shooting points and/or the shooting time delay Δt can be adjusted appropriately as needed: the user can adjust them to control the video pace or smoothness, and the system can automatically adjust parameters set by the user when they do not meet the video smoothness requirements. In practical applications, the total duration T of the video may be set by the user or by the system. For example, when the total number N of shooting points is known to be 100, the system may determine the shooting delay Δt to be 0.04 seconds based on the lowest frame rate of 25 fps, giving a total duration T = Δt × N = 4 seconds.
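Under the illustrative values above (25 fps minimum, hence a 0.04 s interval threshold), the relationship T = Δt × N together with the clamping of an over-large Δt can be sketched as follows (the function name is hypothetical):

```python
def total_shooting_points(total_duration, interval, interval_threshold=0.04):
    """Number of shooting points N from a preset total duration T and
    shooting delay dt; a dt above the preset threshold is replaced by
    the threshold so the video keeps a minimum smoothness."""
    effective = min(interval, interval_threshold)
    return round(total_duration / effective)
```

With T = 4 s and Δt = 0.04 s this reproduces the N = 100 of the example; a user-set Δt = 0.1 s would be clamped to 0.04 s and yield the same count.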
Further, for ease of understanding, taking a semicircular-arc track path as an example and referring to fig. 3, a plurality of shooting points are uniformly distributed on the semicircular arc path, all shooting is performed toward the photographed subject, the first shooting point corresponding to position B is the starting point, and the shooting time delay between two adjacent shooting points is Δt; photographing the subject (the virtual three-dimensional object designated by the prop A) at all the shooting points requires a total duration T'. In practical applications, after shooting starts, prop A and prop B can be hidden from display so as not to interfere with the shot pictures. The system shoots a picture at each shooting point along the track path at intervals of Δt; when the end point of the track path is reached, a video is automatically generated based on the acquired image sequence. The image sequence can be seen in fig. 4: images corresponding to different shooting points present different local surface information of the photographed subject. Through multi-angle shooting and the shooting time delays set for different shooting points, a dynamic visual impression with impact can be presented to the user, effectively improving the viewing experience.
In summary, the video or image combination obtained by the information generation method provided by the embodiments of the present disclosure can effectively improve the user's visual experience. It not only presents, naturally and smoothly, object information obtained by sampling the virtual three-dimensional object from a plurality of orientations at a plurality of time points, but also carries a certain visual impact, can achieve a slow-motion playback effect similar to bullet-time shooting, and can further present the user with a fresh visual experience of space-time interleaving, making it more engaging.
In addition, the information generating method provided by the embodiments of the present disclosure mainly performs multiple samplings of a virtual three-dimensional object in a virtual three-dimensional scene, rather than shooting a real object in a real environment (i.e., the real world). It is therefore free of the physical and cost constraints of the real environment, such as the cost of multiple real cameras, camera debugging, and the layout of a track path. In the virtual three-dimensional scene, the sampling path (also called the shooting path, track path, or camera-movement track), the plurality of sampling positions (namely, shooting points) on the sampling path, and the corresponding sampling parameters such as sampling time and sampling pose can all be flexibly set as required. Leveraging the convenience and freedom of data processing in a virtual world, a special-effect video or image combination for the object to be photographed can be obtained without the user performing complex calculation or camera debugging. Sampling parameters such as the sampling path, sampling positions and sampling times can be set entirely by the system or by the user as needed, and the operation is simple and convenient.
The video or image combination obtained by the information generation method provided by the embodiments of the present disclosure not only enables the user to view multi-directional information about the photographed object, but also achieves the visual impact of a bullet-time video, effectively ensuring the user's viewing experience. In addition, the user can store the video or image combination to watch it later or share it with other users; this engaging video or image-combination form can further promote information sharing and spreading in the virtual three-dimensional scene, thereby improving the user's interactive experience.
Corresponding to the information generating method provided by the embodiment of the present disclosure, the embodiment of the present disclosure further provides an information generating device, and fig. 5 is a schematic structural diagram of the information generating device provided by the embodiment of the present disclosure, where the device may be implemented by software and/or hardware, and may be generally integrated in an electronic device, as shown in fig. 5, and includes:
An object determining module 502, configured to determine a virtual three-dimensional object to be sampled;
a sample determination module 504 for determining a plurality of sample locations of the virtual three-dimensional object;
The sampling processing module 506 is configured to perform sampling processing on the virtual three-dimensional object based on the multiple sampling positions, so as to obtain multiple sampling images corresponding to the virtual three-dimensional object;
the information generating module 508 is configured to generate a target video or a target image combination corresponding to the virtual three-dimensional object based on the plurality of sampling images.
According to the technical scheme provided by the embodiments of the present disclosure, the virtual three-dimensional object can be sampled at a plurality of sampling positions, and a target video or target image combination corresponding to the virtual three-dimensional object generated based on the plurality of obtained sampling images; that is, the information of the virtual three-dimensional object is presented in the form of a video or an image combination. The target video or target image combination obtained in this way not only better ensures the user's visual experience, enabling the user to understand the virtual three-dimensional object more fully, but also requires fewer storage and processing resources, making it more convenient to store and display.
In some embodiments, an included angle between any two adjacent sampling positions is not greater than a preset angle threshold, where the included angle is an included angle between each of the two adjacent sampling positions and a connecting line of the virtual three-dimensional object; and/or the distance between any two adjacent sampling positions is not greater than a preset distance threshold.
In some embodiments, the included angle between any two adjacent sampling positions is the same, and the included angle is an included angle between each of the two adjacent sampling positions and a connecting line of the virtual three-dimensional object; and/or the distance between any two adjacent sampling locations is the same.
In some implementations, the sample determination module 504 is specifically configured to: determining a sampling path; and determining a plurality of sampling locations of the virtual three-dimensional object based on the sampling path, wherein each of the sampling locations is located on the sampling path.
In some implementations, the sample determination module 504 is specifically configured to: acquiring a path specified by a user in a virtual three-dimensional scene, and taking the path specified by the user as the sampling path; or generating the sampling path based on the position of the virtual three-dimensional object and a preset path setting mode.
In some implementations, the sample determination module 504 is specifically configured to: acquiring the total number of the plurality of sampling positions; a plurality of sampling locations of the virtual three-dimensional object is determined based on a total number of the plurality of sampling locations and the sampling path.
In some implementations, the sample determination module 504 is specifically configured to: acquiring a preset total duration and a preset interval duration; the preset total duration is the duration of a preset target video to be generated, and the preset interval duration is the interval between sampling times corresponding to two preset adjacent sampling positions; and determining the total number of the plurality of sampling positions based on the preset total duration and the preset interval duration.
In some implementations, the sample determination module 504 is specifically configured to: if the preset interval duration is not greater than a preset time interval threshold, determining the total number of the plurality of sampling positions based on the ratio between the preset total duration and the preset interval duration; if the preset interval time length is larger than a preset time interval threshold value, determining the total number of the sampling positions based on the ratio between the preset total time length and the time interval threshold value.
In some implementations, the sample determination module 504 is specifically configured to: acquiring path parameters corresponding to the sampling path and a preset association parameter between two adjacent sampling positions, wherein the path parameters comprise a total included angle corresponding to the sampling path, the total included angle being the angle between the lines connecting the path starting point and the path end point of the sampling path to the virtual three-dimensional object, and the association parameter comprises the included angle between the lines connecting two adjacent sampling positions to the virtual three-dimensional object; and/or the path parameters comprise the total path length of the sampling path, and the association parameter comprises the distance between two adjacent sampling positions; and determining the total number of the plurality of sampling positions based on the path parameters and the association parameter.
In some implementations, the sample determination module 504 is specifically configured to: obtaining a total number of samples set by a user, and obtaining a total number of a plurality of sampling positions based on the total number of samples set by the user.
In some embodiments, the sample processing module 506 is specifically configured to: acquiring sampling parameters corresponding to each sampling position, where the sampling parameters comprise a sampling time and/or a sampling pose; and performing sampling processing on the virtual three-dimensional object based on the sampling positions and the sampling parameters corresponding to each sampling position.
In some embodiments, the interval between sampling times corresponding to any two adjacent sampling positions is not greater than a preset time interval threshold; and/or the intervals between the sampling times corresponding to any two adjacent sampling positions are the same.
In some embodiments, the sampling parameters include sampling time, and the sampling processing module 506 is specifically configured to: and determining the sampling time corresponding to each sampling position according to the arrangement sequence of the sampling positions on the sampling path.
In some embodiments, the sample processing module 506 is specifically configured to: and determining the sampling time corresponding to each sampling position according to a preset sampling time interval and the arrangement sequence of the sampling positions on the sampling path.
In some embodiments, the sampling parameters include sampling poses, and each sampling position corresponds to a sampling pose that faces the virtual three-dimensional object.
In some embodiments, the information generation module 508 is specifically configured to: and determining the arrangement sequence corresponding to each sampling image, and generating a target video or a target image combination corresponding to the virtual three-dimensional object according to the arrangement sequence corresponding to each sampling image.
In some embodiments, the information generation module 508 is specifically configured to: and determining the arrangement sequence corresponding to each sampling image according to the arrangement sequence of the sampling position corresponding to each sampling image and/or the arrangement sequence of the sampling time corresponding to each sampling image.
In some embodiments, the information generation module 508 is specifically configured to: obtaining a target frame rate; wherein the target frame rate is not less than a preset frame rate threshold; and generating a target video corresponding to the virtual three-dimensional object based on the arrangement sequence corresponding to each sampling image and the target frame rate.
In some embodiments, the information generation module 508 is specifically configured to: and acquiring the video frame rate set by the user, and obtaining the target frame rate based on the video frame rate set by the user.
In some implementations, the object determination module 502 is specifically configured to: taking the virtual three-dimensional object specified by the user in the virtual three-dimensional scene as the virtual three-dimensional object to be sampled.
The information generating device provided by the embodiments of the present disclosure can execute the information generating method provided by any embodiment of the present disclosure, and has functional modules corresponding to the executed method along with its beneficial effects.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described apparatus embodiments may refer to corresponding procedures in the method embodiments, which are not described herein again.
The embodiment of the disclosure also provides an electronic device, which includes: a processor; a memory for storing processor-executable instructions; and a processor for reading the executable instructions from the memory and executing the instructions to implement the information generating method.
The embodiment of the disclosure also provides a computer readable storage medium, wherein the storage medium stores a computer program for executing the information generating method.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure. As shown in fig. 6, the electronic device 600 includes one or more processors 601 and memory 602.
The processor 601 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities and may control other components in the electronic device 600 to perform desired functions.
The memory 602 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 601 to implement the information generation method of the embodiments of the present disclosure described above and/or other desired functions. Various contents such as an input signal, a signal component, a noise component, and the like may also be stored in the computer-readable storage medium.
In one example, the electronic device 600 may further include: input device 603 and output device 604, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
The input device 603 may include, for example, a keyboard, a mouse, and the like.
The output device 604 may output various information to the outside, including the determined distance information, direction information, and the like. The output means 604 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, etc.
Of course, only some of the components of the electronic device 600 that are relevant to the present disclosure are shown in fig. 6, with components such as buses, input/output interfaces, etc. omitted for simplicity. In addition, the electronic device 600 may include any other suitable components depending on the particular application.
In addition to the methods and apparatus described above, embodiments of the present disclosure may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the information generation method provided by the embodiments of the present disclosure.
The computer program product may write program code for performing the operations of embodiments of the present disclosure in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium, on which computer program instructions are stored, which, when executed by a processor, cause the processor to perform the information generation method provided by the embodiments of the present disclosure.
The computer-readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The disclosed embodiments also provide a computer program product comprising a computer program/instruction which, when executed by a processor, implements the information generating method in the disclosed embodiments.
It should be noted that, in this document, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing is merely a specific embodiment of the disclosure to enable one skilled in the art to understand or practice the disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown and described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (23)

1. An information generation method, comprising:
determining a virtual three-dimensional object to be sampled;
determining a plurality of sampling locations of the virtual three-dimensional object;
sampling the virtual three-dimensional object based on the plurality of sampling locations to obtain a plurality of sampling images corresponding to the virtual three-dimensional object;
and generating a target video or a target image combination corresponding to the virtual three-dimensional object based on the plurality of sampling images.
2. The method of claim 1, wherein
the included angle between any two adjacent sampling positions is not greater than a preset angle threshold, the included angle being the angle between the lines connecting each of the two adjacent sampling positions to the virtual three-dimensional object;
and/or,
the distance between any two adjacent sampling positions is not greater than a preset distance threshold.
3. The method of claim 1, wherein
the included angles between any two adjacent sampling positions are the same, each included angle being the angle between the lines connecting the two adjacent sampling positions to the virtual three-dimensional object;
and/or,
the distance between any two adjacent sampling positions is the same.
4. The method of claim 1, wherein the determining the plurality of sampling locations of the virtual three-dimensional object comprises:
determining a sampling path;
determining a plurality of sampling locations of the virtual three-dimensional object based on the sampling path, wherein each of the sampling locations is located on the sampling path.
5. The method of claim 4, wherein the determining a sampling path comprises:
acquiring a path specified by a user in a virtual three-dimensional scene, and taking the path specified by the user as the sampling path;
or
generating the sampling path based on the position of the virtual three-dimensional object and a preset path setting mode.
6. The method of claim 4, wherein the determining a plurality of sampling locations of the virtual three-dimensional object based on the sampling path comprises:
acquiring a total number of the plurality of sampling locations;
determining the plurality of sampling locations of the virtual three-dimensional object based on the total number and the sampling path.
7. The method of claim 6, wherein the acquiring the total number of the plurality of sampling locations comprises:
acquiring a preset total duration and a preset interval duration, wherein the preset total duration is the duration of the target video to be generated, and the preset interval duration is the preset interval between the sampling times corresponding to two adjacent sampling positions; and
determining the total number of the plurality of sampling positions based on the preset total duration and the preset interval duration.
8. The method of claim 7, wherein the determining the total number of the plurality of sampling positions based on the preset total duration and the preset interval duration comprises:
if the preset interval duration is not greater than a preset time interval threshold, determining the total number of the plurality of sampling positions based on the ratio of the preset total duration to the preset interval duration; and
if the preset interval duration is greater than the preset time interval threshold, determining the total number of the plurality of sampling positions based on the ratio of the preset total duration to the time interval threshold.
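A minimal sketch of the counting logic described in claims 7 and 8 (hypothetical names; not the claimed implementation): the preset interval is effectively clamped to the time-interval threshold before taking the ratio:

```python
def total_sampling_positions(total_duration, interval_duration, interval_threshold):
    # Claims 7-8: if the preset interval exceeds the threshold, the
    # threshold is used instead; the count is the ratio of the preset
    # total duration to the effective interval.
    effective_interval = min(interval_duration, interval_threshold)
    return max(1, round(total_duration / effective_interval))

# Interval below the threshold: the ratio uses the interval itself.
n1 = total_sampling_positions(10.0, 0.5, 1.0)
# Interval above the threshold: the ratio uses the threshold instead.
n2 = total_sampling_positions(10.0, 2.0, 1.0)
```

Clamping the interval this way bounds the gap between consecutive frames, which keeps the generated video smooth regardless of the configured interval.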
9. The method of claim 6, wherein the acquiring the total number of the plurality of sampling locations comprises:
acquiring a path parameter corresponding to the sampling path and a preset association parameter between two adjacent sampling positions, wherein the path parameter comprises a total angle corresponding to the sampling path, the total angle being the angle between the lines connecting the start point and the end point of the sampling path to the virtual three-dimensional object, and the association parameter comprises the angle between the lines connecting two adjacent sampling positions to the virtual three-dimensional object; and/or the path parameter comprises the total length of the sampling path, and the association parameter comprises the distance between two adjacent sampling positions; and
determining the total number of the plurality of sampling positions based on the path parameter and the association parameter.
10. The method of claim 6, wherein the acquiring the total number of the plurality of sampling locations comprises:
acquiring a total number of samples set by a user, and obtaining the total number of the plurality of sampling positions based on the total number of samples set by the user.
11. The method according to any one of claims 1 to 10, wherein the sampling the virtual three-dimensional object based on the plurality of sampling positions comprises:
acquiring a sampling parameter corresponding to each sampling position, wherein the sampling parameter comprises a sampling time and/or a sampling pose; and
sampling the virtual three-dimensional object based on the sampling positions and the sampling parameter corresponding to each sampling position.
12. The method of claim 11, wherein the interval between the sampling times corresponding to any two adjacent sampling positions is not greater than a preset time interval threshold; and/or the intervals between the sampling times corresponding to any two adjacent sampling positions are the same.
13. The method of claim 11, wherein the sampling parameter comprises a sampling time, and the acquiring the sampling parameter corresponding to each sampling position comprises:
determining the sampling time corresponding to each sampling position according to the order in which the plurality of sampling positions are arranged on the sampling path.
14. The method of claim 13, wherein the determining the sampling time corresponding to each sampling position according to the order in which the plurality of sampling positions are arranged on the sampling path comprises:
determining the sampling time corresponding to each sampling position according to a preset sampling time interval and the order in which the plurality of sampling positions are arranged on the sampling path.
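Claims 13 and 14 assign times from path order and a fixed interval; a one-line sketch under that reading (names hypothetical):

```python
def sampling_times(num_positions, sampling_interval):
    # Claims 13-14: the i-th position in path order is sampled at
    # i * sampling_interval, so sampling times follow the path order.
    return [i * sampling_interval for i in range(num_positions)]

times = sampling_times(4, 0.5)
```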
15. The method of claim 13, wherein the sampling parameter comprises a sampling pose, and each sampling position corresponds to a pose oriented toward the virtual three-dimensional object.
16. The method of claim 1, wherein the generating a target video or a target image combination corresponding to the virtual three-dimensional object based on the plurality of sampled images comprises:
determining the arrangement order corresponding to each sampled image, and generating the target video or the target image combination corresponding to the virtual three-dimensional object according to the arrangement order corresponding to each sampled image.
17. The method of claim 16, wherein the determining the arrangement order corresponding to each sampled image comprises:
determining the arrangement order corresponding to each sampled image according to the order of the sampling position corresponding to each sampled image and/or the order of the sampling time corresponding to each sampled image.
18. The method of claim 16, wherein the generating the target video corresponding to the virtual three-dimensional object according to the arrangement order corresponding to each sampled image comprises:
obtaining a target frame rate, wherein the target frame rate is not less than a preset frame rate threshold; and
generating the target video corresponding to the virtual three-dimensional object based on the arrangement order corresponding to each sampled image and the target frame rate.
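Claims 17 and 18 order the sampled images and combine them with a target frame rate; an illustrative sketch (hypothetical names; an actual encoder would consume the resulting schedule):

```python
def schedule_frames(sampled_images, frame_rate):
    # Order the sampled images by sampling time (claim 17), then
    # assign each frame a timestamp derived from the target frame
    # rate (claim 18).
    ordered = sorted(sampled_images, key=lambda s: s["sample_time"])
    return [{"image": s["image"], "timestamp": i / frame_rate}
            for i, s in enumerate(ordered)]

frames = schedule_frames(
    [{"image": "b.png", "sample_time": 1.0},
     {"image": "a.png", "sample_time": 0.0}],
    frame_rate=30)
```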
19. The method of claim 18, wherein the obtaining a target frame rate comprises:
acquiring a video frame rate set by a user, and obtaining the target frame rate based on the video frame rate set by the user.
20. The method of claim 1, wherein the determining a virtual three-dimensional object to be sampled comprises:
taking a virtual three-dimensional object specified by a user in a virtual three-dimensional scene as the virtual three-dimensional object to be sampled.
21. An information generating apparatus, comprising:
an object determination module, configured to determine a virtual three-dimensional object to be sampled;
a sampling determination module, configured to determine a plurality of sampling positions of the virtual three-dimensional object;
a sampling processing module, configured to sample the virtual three-dimensional object based on the plurality of sampling positions to obtain a plurality of sampled images corresponding to the virtual three-dimensional object; and
an information generation module, configured to generate a target video or a target image combination corresponding to the virtual three-dimensional object based on the plurality of sampled images.
22. An electronic device, comprising:
a processor; and
a memory for storing instructions executable by the processor,
wherein the processor is configured to read the executable instructions from the memory and execute the instructions to implement the information generating method according to any one of claims 1-20.
23. A computer-readable storage medium, wherein the storage medium stores a computer program for executing the information generating method according to any one of claims 1-20.
CN202211280742.5A 2022-10-19 2022-10-19 Information generation method, device, equipment and medium Pending CN117953140A (en)

Priority Application: CN202211280742.5A, priority date 2022-10-19, filing date 2022-10-19

Publication: CN117953140A, published 2024-04-30

Family ID: 90793069

Country Status: CN (1) CN117953140A (en)

Similar Documents

Publication Title
JP6321150B2 (en) 3D gameplay sharing
US10325628B2 (en) Audio-visual project generator
US20170132829A1 (en) Method For Displaying and Animating Sectioned Content That Retains Fidelity Across Desktop and Mobile Devices
GB2590204A (en) Video shooting method and apparatus, terminal device, and storage medium
US20140237365A1 (en) Network-based rendering and steering of visual effects
CN107820132A (en) Living broadcast interactive method, apparatus and system
CN110636365B (en) Video character adding method and device, electronic equipment and storage medium
US11941728B2 (en) Previewing method and apparatus for effect application, and device, and storage medium
US20230057963A1 (en) Video playing method, apparatus and device, storage medium, and program product
CN114979495B (en) Method, apparatus, device and storage medium for content shooting
CN110730340B (en) Virtual audience display method, system and storage medium based on lens transformation
CN110572717A (en) Video editing method and device
CN114401443B (en) Special effect video processing method and device, electronic equipment and storage medium
US20140282000A1 (en) Animated character conversation generator
US20160350955A1 (en) Image processing method and device
JP6559375B1 (en) Content distribution system, content distribution method, and content distribution program
CN111277866B (en) Method and related device for controlling VR video playing
JP7125983B2 (en) Systems and methods for creating and displaying interactive 3D representations of real objects
KR20230152589A (en) Image processing system, image processing method, and storage medium
CN116017082A (en) Information processing method and electronic equipment
CN117953140A (en) Information generation method, device, equipment and medium
CN113559503B (en) Video generation method, device and computer readable medium
CN112929685B (en) Interaction method and device for VR live broadcast room, electronic device and storage medium
CN115049574A (en) Video processing method and device, electronic equipment and readable storage medium
CN114025237A (en) Video generation method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination