CN114615513B - Video data generation method and device, electronic equipment and storage medium - Google Patents

Video data generation method and device, electronic equipment and storage medium

Info

Publication number
CN114615513B
Authority
CN
China
Prior art keywords
target material
video data
virtual object
material segments
segments
Prior art date
Legal status
Active
Application number
CN202210227914.6A
Other languages
Chinese (zh)
Other versions
CN114615513A (en)
Inventor
翟昊
朱启
刘家诚
曾大亨
Current Assignee
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202210227914.6A priority Critical patent/CN114615513B/en
Publication of CN114615513A publication Critical patent/CN114615513A/en
Application granted granted Critical
Publication of CN114615513B publication Critical patent/CN114615513B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23412Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs for generating or manipulating the scene composition of objects, e.g. MPEG-4 objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides a video data generation method, apparatus, electronic device, and storage medium. The method includes: displaying a plurality of material segments on an electronic device, wherein the material segments include a preset virtual object and action information of the virtual object, and the virtual object is generated by rendering virtual object information in a 3D rendering environment; determining a plurality of target material segments from the plurality of material segments in response to a selection operation by a user; determining transition data between different target material segments; and generating video data based on the plurality of target material segments and the transition data. According to the embodiments of the present disclosure, the user's participation in the video generation process is improved, and so is the user's viewing experience.

Description

Video data generation method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technology, and in particular, to a video data generation method, a video data generation device, an electronic apparatus, and a computer-readable storage medium.
Background
With the development of multimedia technology, live video has become a popular form of interaction, and more and more users choose to watch videos or live streams through network platforms. To enhance the effect of a video or live stream, the ability to produce video content is very important.
However, current video and live content is usually produced by professional producers, and users cannot create video content according to their own needs, which limits user participation.
Disclosure of Invention
Embodiments of the present disclosure provide at least a video data generating method, a video data generating apparatus, an electronic device, and a computer-readable storage medium.
The embodiments of the present disclosure provide a video data generation method, which includes the following steps:
displaying a plurality of material segments on an electronic device, wherein the material segments include a preset virtual object and action information of the virtual object; the virtual object is generated by rendering virtual object information in a 3D rendering environment;
determining a plurality of target material segments from the plurality of material segments in response to a selection operation by a user;
determining transition data between different target material segments;
generating video data based on the plurality of target material segments and the transition data.
In the embodiments of the present disclosure, a plurality of material segments are displayed on the electronic device; in response to the user's selection operation, a plurality of target material segments are determined from them, transition data between different target material segments is determined, and the video data is finally generated. The target material segments can therefore be chosen according to the user's needs, which improves the user's participation in the video generation process. Moreover, the transition data can be acquired automatically from the target material segments, which strengthens the connection between different material segments, makes the final video presentation coherent, and improves the user's viewing experience.
In one possible implementation, the determining the transition data between different target material segments includes:
determining a combined connection order among the plurality of target material segments;
determining transition data between adjacent target material segments based on the combined connection order;
the generating video data based on the plurality of target material segments and the transition data includes:
and fusing the target material fragments and the transition data according to the combined connection sequence to generate the video data.
In the embodiments of the present disclosure, the transition data between adjacent target material segments is determined according to the combined connection order among the plurality of target material segments, and the target material segments and the transition data are fused in that order to generate the video data, so that the target material segments are connected more smoothly and the continuity of the video content is improved.
In one possible implementation, the determining the combined connection order among the plurality of target material segments includes:
determining the combined connection order among the plurality of target material segments based on an arrangement order specified by the user; or
determining the combined connection order among the plurality of target material segments based on the virtual object in each target material segment and the action information of the virtual object.
In the embodiments of the present disclosure, the arrangement order can be determined either by the user or from the content of the material segments themselves. Since the order can be determined in multiple ways, the generated video content is diverse and brings different visual experiences to the user.
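The two ordering strategies above can be sketched as follows. This is a minimal illustration, not the patented implementation, and the segment fields (`object`, `action`) and the function name are hypothetical:

```python
def combined_connection_order(target_segments, user_order=None):
    """Determine the combined connection order among target segments.

    If the user supplies an arrangement order (a list of indices), use it;
    otherwise fall back to a content-based order that keeps segments
    featuring the same virtual object adjacent."""
    if user_order is not None:
        return [target_segments[i] for i in user_order]
    # Content-based fallback: stable sort by virtual object, then by action
    return sorted(target_segments, key=lambda s: (s["object"], s["action"]))
```

Either branch yields a total order over the selected segments, which is what the later fusion step consumes.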
In one possible implementation manner, the displaying the plurality of material segments in the electronic device includes:
acquiring a real scene image shot by the electronic equipment;
displaying, in the electronic device, the plurality of material segments based on the real scene image;
the generating video data based on the plurality of target material segments and the transition data includes:
generating the video data based on the real scene image, the plurality of target material segments, and the transition data.
In the embodiments of the present disclosure, the real scene image captured by the electronic device is acquired, and the video data is generated based on the real scene image, the plurality of target material segments, and the transition data. The virtual object can thus be displayed against a different real scene each time and present its effects in different real spaces, so that different video data is generated and different visual experiences are brought to the user.
In one possible implementation, the generating the video data based on the real scene image, the plurality of target material segments, and the transition data includes:
displaying the content of the target material segment based on the real scene image;
in the process of displaying the content of the target material segment, adjusting the display effect of the augmented reality AR picture to obtain an adjusted AR picture; the AR picture comprises the real scene image and the content of the currently displayed target material segment;
and generating the video data based on the adjusted AR picture and the transition data.
In the embodiment of the disclosure, in the process of displaying the content of the target material segment, the display effect of the AR picture is adjusted in real time, so that the picture content can be enriched, and the picture display effect can be improved.
In a possible implementation manner, the adjusting the display effect of the augmented reality AR picture to obtain the adjusted AR picture includes:
and adjusting at least one of the illumination effect, the shadow effect and the filter effect of the AR picture to obtain the adjusted AR picture.
In the embodiments of the present disclosure, because the illumination effect, the shadow effect, and the filter effect are each adjusted in real time during display, a richer presentation is achieved and the picture display effect is improved.
In one possible implementation, the illumination effect acts on the real scene image, the shadow effect acts on the virtual object, and the filter effect acts on the virtual object.
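As a rough sketch of how one AR frame could be composed under this split (illumination on the real scene; shadow and filter on the virtual object), the gain, shadow, and tint parameters below are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

def adjust_ar_frame(real_image, virtual_layer, mask,
                    gain=1.25, shadow=0.5, tint=(1.0, 0.75, 0.5)):
    """Adjust one AR frame: the illumination effect acts on the real scene
    image, while the filter and shadow effects act on the virtual object.
    `mask` marks the pixels covered by the virtual object."""
    lit = np.clip(real_image.astype(float) * gain, 0, 255)   # illumination on real scene
    filtered = virtual_layer.astype(float) * np.array(tint)  # filter (color tint) on object
    shaded = filtered * shadow                               # shadow darkens the object
    # Compose: virtual object pixels where mask is True, real scene elsewhere
    return np.where(mask[..., None], shaded, lit).round().astype(np.uint8)
```

Any subset of the three adjustments can be disabled by passing `gain=1.0`, `shadow=1.0`, or `tint=(1.0, 1.0, 1.0)`.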
In a possible implementation manner, after the generating video data based on the plurality of target material segments and the transition data, the method further includes:
and sending the video data to a target server, so that the target server sends the video data to other users or video live broadcast is carried out based on the video data.
The embodiments of the present disclosure provide a video data generating apparatus, which includes:
a material segment display module, configured to display a plurality of material segments on an electronic device, wherein the material segments include a preset virtual object and action information of the virtual object, and the virtual object is generated by rendering virtual object information in a 3D rendering environment;
a material segment determining module, configured to determine a plurality of target material segments from the plurality of material segments in response to a selection operation of a user;
a transition data determining module, configured to determine transition data between different target material segments; and
a video data generation module, configured to generate video data based on the plurality of target material segments and the transition data.
In one possible implementation manner, the transition data determining module is specifically configured to:
determining a combined connection order among the plurality of target material segments;
determining transition data between adjacent target material segments based on the combined connection order;
the video data generation module is specifically configured to:
and fusing the target material fragments and the transition data according to the combined connection sequence to generate the video data.
In one possible implementation manner, the transition data determining module is specifically configured to:
determining the combined connection order among the plurality of target material segments based on an arrangement order specified by the user; or
determining the combined connection order among the plurality of target material segments based on the virtual object in each target material segment and the action information of the virtual object.
In one possible implementation manner, the material segment display module is specifically configured to:
acquiring a real scene image shot by the electronic equipment;
displaying, in the electronic device, the plurality of material segments based on the real scene image;
the video data generation module is specifically configured to:
generating the video data based on the real scene image, the plurality of target material segments, and the transition data.
In one possible implementation manner, the video data generating module is specifically configured to:
displaying the content of the target material segment based on the real scene image;
in the process of displaying the content of the target material segment, adjusting the display effect of the augmented reality AR picture to obtain an adjusted AR picture; the AR picture comprises the real scene image and the content of the currently displayed target material segment;
and generating the video data based on the adjusted AR picture and the transition data.
In one possible implementation manner, the video data generating module is specifically configured to:
and adjusting at least one of the illumination effect, the shadow effect and the filter effect of the AR picture to obtain the adjusted AR picture.
In one possible implementation, the illumination effect acts on the real scene image, the shadow effect acts on the virtual object, and the filter effect acts on the virtual object.
In one possible embodiment, the apparatus further comprises:
and the video data transmitting module is used for transmitting the video data to a target server so that the target server can transmit the video data to other users or perform video live broadcast based on the video data.
The embodiment of the disclosure also provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication over the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the video data generation method of any one of the possible embodiments described above.
The disclosed embodiments also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the video data generation method described in any one of the possible implementations.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for the embodiments are briefly described below. These drawings, which are incorporated in and constitute a part of the specification, show embodiments consistent with the present disclosure and, together with the description, serve to illustrate its technical solutions. It should be understood that the following drawings show only certain embodiments of the present disclosure and are therefore not to be considered limiting of its scope; a person of ordinary skill in the art may derive other related drawings from them without inventive effort.
Fig. 1 is a schematic diagram illustrating an execution subject of a video data generation method provided by an embodiment of the present disclosure;
FIG. 2 shows a flowchart of a first video data generation method provided by an embodiment of the present disclosure;
FIG. 3 illustrates an interface diagram showing a plurality of material segments provided by an embodiment of the present disclosure;
FIG. 4 shows a flowchart of a second video data generation method provided by an embodiment of the present disclosure;
Fig. 5 is a schematic diagram illustrating an arrangement manner of target material segments and transition data according to an embodiment of the disclosure;
FIG. 6 is a schematic diagram of fusing multiple target material segments according to an embodiment of the present disclosure;
fig. 7 is a flowchart illustrating a third video data generation method according to an embodiment of the present disclosure;
fig. 8 illustrates an interface schematic diagram for displaying a plurality of material segments based on a real scene image according to an embodiment of the present disclosure;
FIG. 9 illustrates a flow chart of a method for generating video data based on a realistic scene image provided by an embodiment of the disclosure;
fig. 10 is a flowchart illustrating a fourth video data generation method provided by an embodiment of the present disclosure;
fig. 11 is a schematic diagram illustrating a structure of transmitting video data according to an embodiment of the present disclosure;
fig. 12 is a schematic diagram showing a configuration of a video data generating apparatus provided in an embodiment of the present disclosure;
fig. 13 is a schematic diagram showing the structure of another video data generating apparatus provided by an embodiment of the present disclosure;
fig. 14 shows a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments, as generally described and illustrated in the figures, may be arranged and designed in a wide variety of configurations. Therefore, the following detailed description is not intended to limit the scope of the claimed disclosure but merely represents selected embodiments. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of protection of the present disclosure.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. In addition, the term "at least one" herein means any one of a plurality of items or any combination of at least two of them; for example, "at least one of A, B, and C" may mean any one or more elements selected from the set formed by A, B, and C.
With the development of multimedia technology, live video has become a popular form of interaction, and more and more users choose to watch videos or live streams through network platforms. To enhance the effect of a video or live stream, the ability to produce video content is very important.
However, current video and live content is usually produced by professional producers, and users cannot create video content according to their own needs, which limits user participation.
In view of the foregoing, an embodiment of the present disclosure provides a video data generation method, including: displaying a plurality of material segments on an electronic device, wherein the material segments include a preset virtual object and action information of the virtual object, and the virtual object is generated by rendering virtual object information in a 3D rendering environment; determining a plurality of target material segments from the plurality of material segments in response to a selection operation by a user; determining transition data between different target material segments; and generating video data based on the plurality of target material segments and the transition data.
In the embodiments of the present disclosure, a plurality of material segments are displayed on the electronic device; in response to the user's selection operation, a plurality of target material segments are determined from them, transition data between different target material segments is determined, and the video data is finally generated. The target material segments can therefore be chosen according to the user's needs, which improves the user's participation in the video generation process. Moreover, the transition data can be acquired automatically from the target material segments, which strengthens the connection between different material segments, makes the final video presentation coherent, and improves the user's viewing experience.
Referring to fig. 1, a schematic diagram of the execution subject of a video data generation method according to an embodiment of the present disclosure is shown. The execution subject of the method is an electronic device 100, which may include a terminal and a server. For example, the method may be applied to a terminal, which may be, but is not limited to, the smart phone 10, desktop computer 20, or notebook computer 30 shown in fig. 1, or a smart speaker, smart watch, tablet computer, and the like not shown in fig. 1. The method may also be applied to the server 40, or to an implementation environment consisting of the terminal and the server 40. The server 40 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud storage, big data, and artificial intelligence platforms.
In other embodiments, the electronic device 100 may also include an AR (Augmented Reality) device, a VR (Virtual Reality) device, an MR (Mixed Reality) device, and the like. For example, the AR device may be a mobile phone or a tablet computer with AR function, or may be AR glasses, which is not limited herein.
In some embodiments, the server 40 may communicate with the smart phone 10, the desktop computer 20, and the notebook computer 30 via the network 50. The network 50 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
Referring to fig. 2, a flowchart of a first video data generating method according to an embodiment of the present disclosure is shown, where the method includes steps S101 to S104, where:
s101, displaying a plurality of material fragments in electronic equipment, wherein the material fragments comprise preset virtual objects and action information of the virtual objects; the virtual object is generated by virtual object information in the 3D rendering environment after rendering.
A material segment is a piece of original material provided by the electronic device. In this embodiment, a material segment includes a preset virtual object and action information of the virtual object. In other embodiments, a material segment may further include information about the environment in which the virtual object is located.
Specifically, the virtual object is generated by rendering virtual object information in the 3D rendering environment, and is driven by control information captured by a motion capture device, which forms the action information of the virtual object.
The 3D rendering environment may be a 3D engine running in the electronic device, capable of generating image information from one or more perspectives based on the data to be rendered. The virtual object is a character model that exists in the 3D engine and is produced by rendering. In the embodiments of the present disclosure, the virtual object may be an avatar; in other embodiments, it may also be a virtual anchor or the like.
For example, one way to produce a virtual object is to capture the motion of an actor (the performer behind the avatar) as control information, use it to drive the virtual object in the 3D engine, capture the actor's voice at the same time, and fuse the voice with the virtual object picture to generate a material segment.
The motion capture device comprises at least one of a limb motion capture device worn on the actor's body (such as a motion capture suit), a hand motion capture device worn on the actor's hand (such as a glove), a facial motion capture device (such as a camera), and a sound capture device (such as a microphone or throat microphone).
Specifically, a plurality of optical marker points may be provided on the actor's suit and gloves. The marker points may be made of a reflective material, and their positions can be acquired by a camera device, from which the control information of the virtual object is obtained. Alternatively, a plurality of sensors may be arranged on the motion capture device; the actor's position, moving speed, and moving direction can be obtained through the sensors, and the control information of the virtual object derived from them.
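As an illustrative sketch (not the patented pipeline) of how per-marker position, moving speed, and moving direction might be derived from two successive optical-marker captures:

```python
import numpy as np

def marker_control_info(prev_positions, curr_positions, dt):
    """Derive simple control information (position, moving speed, moving
    direction) for a virtual object from optical marker positions captured
    in two successive frames, dt seconds apart."""
    prev = np.asarray(prev_positions, dtype=float)  # shape: (n_markers, 3)
    curr = np.asarray(curr_positions, dtype=float)
    velocity = (curr - prev) / dt
    speed = np.linalg.norm(velocity, axis=1)        # moving speed per marker
    # Unit direction of motion; zero vector for markers that did not move
    direction = np.divide(velocity, speed[:, None],
                          out=np.zeros_like(velocity),
                          where=speed[:, None] > 0)
    return {"position": curr, "speed": speed, "direction": direction}
```

In a real capture system this per-frame information would then be mapped onto the skeleton of the character model in the 3D engine.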
S102, determining a plurality of target material segments from the plurality of material segments in response to a selection operation of a user.
A target material segment is a material segment selected by the user from the plurality of material segments. Different material segments reflect different virtual objects or different action information; for example, a material segment may show virtual object A raising a hand, virtual object A turning around, or virtual object B turning around, which is not limited here.
For example, referring to fig. 3, a plurality of material segments may be displayed in the electronic device for selection by a user, where the material segment selected by the user is the target material segment. As shown in fig. 3, the plurality of material segments may include a material segment 1, a material segment 2, a material segment 3, and a material segment 4. For example, if the material segment 1 is selected by the user, the material segment 1 is determined to be the target material segment.
Specifically, if the "add" control of a material segment is triggered, that segment is determined to be selected by the user. For example, if the "add" control of material segment 1 is triggered, material segment 1 is determined to be a target material segment; if the "add" control of material segment 3 is also triggered, material segment 3 is likewise a target material segment; finally, when the "confirm" control is triggered, material segments 1 and 3 are determined to be the target material segments. Of course, in other embodiments, the trigger control may be presented in other forms (such as a specific icon), which is not limited here.
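The selection logic above can be sketched as follows; the `id` field and the function name are illustrative assumptions, not part of the disclosure:

```python
def select_target_segments(segments, triggered_ids):
    """Return the material segments whose 'add' control was triggered,
    preserving the order in which the segments are displayed."""
    chosen = set(triggered_ids)
    return [seg for seg in segments if seg["id"] in chosen]

# Four displayed segments; the user triggers "add" on segments 1 and 3
segments = [{"id": i, "name": f"material segment {i}"} for i in (1, 2, 3, 4)]
targets = select_target_segments(segments, {1, 3})
```

The "confirm" control would simply hand `targets` to the next step (S103), which determines the transition data.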
S103, determining transition data among different target material segments.
After the plurality of target material segments are determined, transition data between different target material segments may be determined. The transition data is used to connect two adjacent target material segments smoothly. In this embodiment, the transition data is obtained automatically by determining the combination order between different target material segments. In other embodiments, the transition data may also be selected by the user from a transition database according to production requirements.
In particular, the transition data may include trick transitions (transitions with special effects) and non-trick transitions. A trick transition is completed using special effects, mainly in modes such as fade-in/fade-out, dissolve, and wipe, and can effectively separate paragraphs, change scenes, and bridge preceding and following content. A non-trick transition is a direct segment-switching mode, including transitions between similar segments, transitions using contrast factors, and the like; this mode makes the connection between material segments more natural and smooth, with a faster rhythm and a stable picture.
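A fade-in/fade-out (dissolve) transition of the kind described above can be sketched as frame-level alpha blending. This is a minimal illustration assuming grayscale frames stored as nested lists; the patent does not prescribe any implementation, and a real system would blend RGB frames:

```python
# Minimal dissolve-transition sketch: blend the tail frames of the outgoing
# clip with the head frames of the incoming clip, with the incoming clip's
# weight growing linearly over the overlap.
def crossfade(tail_frames, head_frames):
    n = min(len(tail_frames), len(head_frames))
    blended = []
    for i in range(n):
        alpha = (i + 1) / (n + 1)  # weight of the incoming clip
        frame = [
            [(1 - alpha) * a + alpha * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(tail_frames[i], head_frames[i])
        ]
        blended.append(frame)
    return blended
```

With a one-frame overlap the two clips are weighted equally, which is the midpoint of the dissolve.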
And S104, generating video data based on the target material fragments and the transition data.
After the transition data between different target material segments are determined, video data may be generated from the plurality of target material segments and the transition data. Here, video data refers to a sequence of consecutive images; it can be understood that a video is typically composed of pictures belonging to video frames and/or sounds belonging to audio frames.
In the embodiment of the disclosure, the plurality of material segments are displayed in the electronic device; then, in response to a selection operation of the user, a plurality of target material segments are determined from the plurality of material segments; transition data between different target material segments are determined; and finally the video data are generated. In this way, the target material segments can be determined according to the requirements of the user, which improves the degree of user participation in the video generation process. Moreover, the transition data can be automatically acquired according to the target material segments, which strengthens the connection between different material segments, makes the final video presentation coherent, and improves the viewing experience of the user.
Referring to fig. 4, a flowchart of a second video data generating method according to an embodiment of the present disclosure includes the following steps S201 to S205:
S201, displaying a plurality of material fragments in an electronic device, wherein the material fragments comprise preset virtual objects and action information of the virtual objects; the virtual object is generated by rendering virtual object information in a 3D rendering environment.
Step S201 is similar to step S101, and will not be described herein.
S202, determining a plurality of target material segments from the plurality of material segments in response to a selection operation of a user.
Step S202 is similar to step S102, and will not be described herein.
S203, determining the combined connection sequence among the plurality of target material fragments.
After the target material segments are determined, the order of the combined connection between the plurality of target material segments needs to be further determined.
In one possible implementation, the order of the combined connection between the plurality of target material segments may be user-determined. Specifically, based on the arrangement order determined by the user, a combined connection order between the plurality of target material segments is determined.
In another possible embodiment, the combined connection order between the plurality of target material segments may be automatically determined according to the content of the target material segments. Specifically, based on the virtual object in each target material segment and the motion information of the virtual object, the combined connection order between the plurality of target material segments is determined.
Specifically, referring again to fig. 3, taking the material segment 1 and the material segment 3 as the target material segments as an example, the target material segment 1 and the target material segment 3 contain the same virtual object: the action content of the target material segment 1 is the virtual object A with both hands on the hips, and the action content of the target material segment 3 is the virtual object A with one hand on the hip and the other hand raised above the head. Therefore, the combined connection sequence from the target material segment 1 to the target material segment 3 can be determined according to the action content.
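One hypothetical way to realize this automatic ordering is to group the target segments by virtual object, so that segments showing the same object play consecutively. The grouping key and data layout below are illustrative assumptions; the patent only states that ordering is based on the virtual object and its action information:

```python
# Heuristic sketch of step S203: keep target segments of the same virtual
# object adjacent, preserving the user's selection order within each group.
def order_by_object(target_segments):
    groups, first_seen = {}, []
    for seg in target_segments:
        key = seg["object"]
        if key not in groups:
            groups[key] = []
            first_seen.append(key)
        groups[key].append(seg)
    return [seg for key in first_seen for seg in groups[key]]
```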
S204, determining transition data between adjacent different target material segments based on the combined connection sequence.
After determining the sequence of the combined connection between the plurality of target material segments, the transition data between adjacent different target material segments may be further determined according to the sequence of the combined connection.
Illustratively, referring to fig. 5, the target material segments may include a target material segment 1, a target material segment 2, a target material segment 3, and the like, and the transition data may be determined to include transition data 1, transition data 2, and the like based on the target material segments. The transition data 1 is connected between the target material segment 1 and the target material segment 2, and the transition data 2 is connected between the target material segment 2 and the target material segment 3.
And S205, fusing the target material fragments and the transition data according to the combined connection sequence to generate the video data.
Referring again to fig. 5, taking the combined connection sequence of the target material segment 1, the target material segment 2 and the target material segment 3 as an example, the arrangement of the target material segments and the transition data is: the target material segment 1, the transition data 1, the target material segment 2, the transition data 2, and the target material segment 3. After fusion is performed according to this arrangement, the video data can be generated.
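The arrangement described above amounts to interleaving the ordered target segments with the transition data between each adjacent pair. A minimal sketch (the helper name `interleave` is hypothetical):

```python
# Sketch of the fusion order in step S205: segment 1, transition 1,
# segment 2, transition 2, segment 3, ... flattened into one timeline.
def interleave(segments, transitions):
    assert len(transitions) == len(segments) - 1
    timeline = [segments[0]]
    for trans, seg in zip(transitions, segments[1:]):
        timeline.append(trans)
        timeline.append(seg)
    return timeline
```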
In this embodiment, the transition data between adjacent different target material segments are determined according to the combined connection sequence between the plurality of target material segments, and the plurality of target material segments and the transition data are fused according to the combined connection sequence to generate the video data. In this way, the plurality of target material segments can be better connected, improving the continuity of the video content.
In other embodiments, a plurality of target material segments may be fused in parallel, for example, as shown in fig. 6, the target material segment 1 is a segment related to the virtual object a, the target material segment 2 is a segment related to the virtual object B, and after the target material segment 1 and the target material segment 2 are fused, the virtual object a and the virtual object B may appear on the screen 10 in the video data at the same time.
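The parallel fusion in fig. 6 can be pictured as layer compositing: each virtual object's pixels are painted onto the same frame at its own position. The sprite representation below is a deliberate simplification for illustration, not the patent's method; a real renderer would blend textured, depth-sorted 3D renders:

```python
# Sketch of parallel fusion: composite several object layers onto one frame.
# Each layer is a list of (row, col, value) sprite pixels.
def composite(background, layers):
    frame = [row[:] for row in background]  # copy so the background is preserved
    for layer in layers:
        for r, c, v in layer:
            frame[r][c] = v
    return frame
```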
Referring to fig. 7, a flowchart of a third video data generating method according to an embodiment of the present disclosure includes the following steps S301 to S305:
s301, acquiring a real scene image shot by the electronic equipment.
The electronic device may be provided with an image acquisition component or may be externally connected with the image acquisition component, and after the electronic device enters a working state, the image acquisition component may be used to capture an image of the real scene in real time. For example, after the electronic device enters the material segment display page, a camera (such as a rear camera) is automatically called to shoot a current real scene image.
S302, in the electronic equipment, the plurality of material segments are displayed based on the real scene image.
For example, referring to fig. 8, after the real scene image is obtained, the real scene image 21 may be displayed in the electronic device as a background screen. The real scene image may reflect the environmental information of the environment where the user is currently located, such as trees on both sides of a road, pedestrians, and the like. That is, as the location of the electronic device changes, the real scene image also changes.
S303, determining a plurality of target material segments from the plurality of material segments in response to a selection operation of a user.
Step S303 is similar to step S102, and will not be described herein.
S304, determining transition data among different target material segments.
Step S304 is similar to step S103, and will not be described herein.
And S305, generating the video data based on the real scene image, the plurality of target material segments and the transition data.
In this embodiment, by acquiring the real scene image captured by the electronic device and generating the video data based on the real scene image, the plurality of target material segments and the transition data, it can be ensured that the real scene displayed with the virtual object is different each time. The virtual object can thus display special effects in different real spaces, and different video data can finally be generated, bringing different visual experiences to the user.
Referring to fig. 9, a flowchart of a method for generating video data based on a real scene image according to an embodiment of the disclosure is shown in the above step S305, including S3051-S3053:
and S3051, displaying the content of the target material segment based on the real scene image.
For example, the content of the target material segment may be presented with the real scene image as a background screen.
S3052, adjusting the display effect of the AR picture in the process of displaying the content of the target material segment to obtain an adjusted AR picture; the AR picture comprises the real scene image and the content of the currently displayed target material segment.
In the process of displaying the content of the target material segment, the display effect of the AR picture can be adjusted in real time, and the adjusted AR picture is obtained. The AR picture comprises a real scene image and contents of a target material segment which is currently displayed.
Among them, the augmented reality (Augmented Reality, AR) technology is a technology of smartly fusing virtual information with a real world, which can superimpose virtual information with a real environment on one screen in real time.
Specifically, in the process of displaying the content of the target material segment, at least one of the illumination effect, the shadow effect and the filter effect of the AR picture is adjusted, so as to obtain the adjusted AR picture.
The illumination effect of the AR picture may include brightness, saturation, and the like of the AR picture. The shadow effect of the AR screen may include the direction, transparency, size, etc. of the shadow. The filter effect of the AR picture may include blur degree, gray scale, and the like. Taking the illumination effect as a brightness effect as an example, the illumination effect can be adjusted by adjusting the brightness value. The illumination effect acts on the real scene image, the shadow effect acts on the target virtual object, and the filter effect acts on the target virtual object.
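Taking brightness as an example, the adjustment can be sketched as scaling pixel values with clamping. Grayscale nested-list frames and the helper name are assumptions made for brevity; the patent does not specify how the brightness value is applied:

```python
# Minimal sketch of one display-effect adjustment from step S3052:
# scale brightness by a factor and clamp to the valid 0-255 range.
def adjust_brightness(frame, factor):
    return [[min(255, max(0, round(p * factor))) for p in row] for row in frame]
```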
In this embodiment, because the illumination effect, the shadow effect and the filter effect are each adjusted in real time during the display process, a richer display effect can be presented, improving the visual effect of the picture.
It may be appreciated that in the embodiment of the present disclosure, the adjustment of the display effect of the AR screen may be performed in response to the operation of the user, or the electronic device may automatically perform the adjustment according to the current display effect of the AR screen, which is not limited herein.
And S3053, generating the video data based on the adjusted AR picture and the transition data.
In this embodiment, in the process of displaying the content of the target material segment, the display effect of the AR picture is adjusted in real time, so that not only the picture content can be enriched, but also the picture display effect can be improved.
Referring to fig. 10, a flowchart of a fourth video data generating method according to an embodiment of the present disclosure is provided, and the method is different from the method in fig. 2 in that after step S104, the following S105 is further included:
and S105, the video data is sent to a target server, so that the target server sends the video data to other users or video live broadcast is carried out based on the video data.
For example, referring to fig. 11, after generating the video data, the video data may be transmitted to the target server 90, so that the target server 90 transmits the video data to the user terminal 300 to realize video sharing, or the video data may be transmitted to the live platform 200 in real time to realize live video.
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
Based on the same technical concept, the embodiment of the disclosure further provides a video data generating device corresponding to the video data generating method, and since the principle of solving the problem by the device in the embodiment of the disclosure is similar to that of the video data generating method in the embodiment of the disclosure, the implementation of the device can refer to the implementation of the method, and the repetition is omitted.
Referring to fig. 12, a schematic diagram of a video data generating apparatus according to an embodiment of the disclosure is provided, where the apparatus includes:
the material segment display module 501 is configured to display a plurality of material segments in an electronic device, where the material segments include a preset virtual object and action information of the virtual object; the virtual object is generated by rendering virtual object information in a 3D rendering environment;
A material segment determining module 502, configured to determine a plurality of target material segments from the plurality of material segments in response to a selection operation of a user;
a transition data determining module 503, configured to determine transition data between different target material segments;
the video data generating module 504 is configured to generate video data based on the plurality of target material segments and the transition data.
In one possible implementation manner, the transition data determining module 503 is specifically configured to:
determining a combined connection order among the plurality of target material segments;
determining transition data between adjacent different target material segments based on the combined connection sequence;
the video data generating module 504 is specifically configured to:
and fusing the target material fragments and the transition data according to the combined connection sequence to generate the video data.
In one possible implementation manner, the transition data determining module 503 is specifically configured to:
determining a combined connection order among the plurality of target material segments based on the arrangement order determined by the user; or alternatively, the process may be performed,
based on the virtual object in each target material segment and the action information of the virtual object, the combined connection sequence among the plurality of target material segments is determined.
In one possible implementation, the material segment presentation module 501 is specifically configured to:
acquiring a real scene image shot by the electronic equipment;
displaying, in the electronic device, the plurality of material segments based on the real scene image;
the video data generating module 504 is specifically configured to:
the video data is generated based on the realistic scene image, the plurality of target material segments, and the transition data.
In one possible implementation, the video data generating module 504 is specifically configured to:
displaying the content of the target material segment based on the real scene image;
in the process of displaying the content of the target material segment, adjusting the display effect of the augmented reality AR picture to obtain an adjusted AR picture; the AR picture comprises the real scene image and the content of the currently displayed target material segment;
and generating the video data based on the adjusted AR picture and the transition data.
In one possible implementation, the video data generating module 504 is specifically configured to:
and adjusting at least one of the illumination effect, the shadow effect and the filter effect of the AR picture to obtain the adjusted AR picture.
In one possible implementation, the illumination effect acts on the real scene image, the shadow effect acts on the virtual object, and the filter effect acts on the virtual object.
Referring to fig. 13, in one possible embodiment, the apparatus further includes:
the video data sending module 505 is configured to send the video data to a target server, so that the target server sends the video data to other users or performs live video broadcast based on the video data.
The process flow of each module in the apparatus and the interaction flow between the modules may be described with reference to the related descriptions in the above method embodiments, which are not described in detail herein.
Based on the same technical conception, the embodiment of the application also provides electronic equipment. Referring to fig. 14, a schematic structural diagram of an electronic device 700 according to an embodiment of the present application includes a processor 701, a memory 702, and a bus 703. The memory 702 is configured to store execution instructions, including a memory 7021 and an external memory 7022; the memory 7021 is also referred to as an internal memory, and is used for temporarily storing operation data in the processor 701 and data exchanged with an external memory 7022 such as a hard disk, and the processor 701 exchanges data with the external memory 7022 via the memory 7021.
In the embodiment of the present application, the memory 702 is specifically configured to store application program codes for executing the scheme of the present application, and the execution is controlled by the processor 701. That is, when the electronic device 700 is operating, communication between the processor 701 and the memory 702 is through the bus 703, causing the processor 701 to execute the application code stored in the memory 702, thereby performing the methods disclosed in any of the foregoing embodiments.
The memory 702 may be, but is not limited to, a random access memory (Random Access Memory, RAM), a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable Read-Only Memory, PROM), an erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM), an electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), etc.
The processor 701 may be an integrated circuit chip having signal processing capabilities. The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or other programmable logic devices, discrete gate or transistor logic devices, or discrete hardware components, which can implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 700. In other embodiments of the application, electronic device 700 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The disclosed embodiments also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the video data generating method described in the above method embodiments. Wherein the storage medium may be a volatile or nonvolatile computer readable storage medium.
The computer program product of the video data generating method provided in the embodiments of the present disclosure includes a computer readable storage medium storing program code, where the program code includes instructions for executing the steps of the video data generating method in the above method embodiments; for details, reference may be made to the above method embodiments, which are not repeated herein.
Wherein the above-mentioned computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system and apparatus may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in essence or a part contributing to the prior art or a part of the technical solution, or in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present disclosure. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Finally, it should be noted that the foregoing examples are merely specific embodiments of the present disclosure, used to illustrate rather than limit its technical solutions; the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art, within the technical scope disclosed herein, may still modify the technical solutions described in the foregoing embodiments, easily conceive of changes, or make equivalent substitutions of some of the technical features; such modifications, changes, or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure and shall be included within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (9)

1. A video data generation method, comprising:
displaying a plurality of material fragments in electronic equipment, wherein the material fragments comprise preset virtual objects and action information of the virtual objects; the virtual object is generated by rendering virtual object information in a 3D rendering environment;
determining a plurality of target material segments from the plurality of material segments in response to a selection operation by a user;
determining a combined connection sequence among the plurality of target material fragments based on the virtual object in each target material fragment and the action information of the virtual object, and determining transition data among different adjacent target material fragments according to the combined connection sequence;
and generating video data based on the plurality of target material fragments and the transition data according to the combined connection sequence.
2. The method of claim 1, wherein the presenting the plurality of material segments in the electronic device comprises:
acquiring a real scene image shot by the electronic equipment;
displaying, in the electronic device, the plurality of material segments based on the real scene image;
the generating video data based on the plurality of target material segments and the transition data includes:
The video data is generated based on the real scene image, the plurality of target material segments, and the transition data.
3. The method of claim 2, wherein the generating the video data based on the real scene image, the plurality of target material segments, and the transition data comprises:
displaying the content of the target material segment based on the real scene image;
in the process of displaying the content of the target material segment, adjusting the display effect of the augmented reality AR picture to obtain an adjusted AR picture; the AR picture comprises the real scene image and the content of the currently displayed target material segment;
and generating the video data based on the adjusted AR picture and the transition data.
4. The method of claim 3, wherein adjusting the display effect of the augmented reality AR picture to obtain the adjusted AR picture comprises:
and adjusting at least one of the illumination effect, the shadow effect and the filter effect of the AR picture to obtain the adjusted AR picture.
5. The method of claim 4, wherein the lighting effect acts on the real scene image, the shadow effect acts on the virtual object, and the filter effect acts on the virtual object.
6. The method according to any one of claims 1-5, wherein after generating video data based on the plurality of target material segments and the transition data, the method further comprises:
and sending the video data to a target server, so that the target server sends the video data to other users or video live broadcast is carried out based on the video data.
7. A video data generating apparatus, comprising:
the system comprises a material segment display module, a display module and a display module, wherein the material segment display module is used for displaying a plurality of material segments in electronic equipment, and the material segments comprise preset virtual objects and action information of the virtual objects; the virtual object is generated by rendering virtual object information in a 3D rendering environment;
a material segment determining module, configured to determine a plurality of target material segments from the plurality of material segments in response to a selection operation of a user;
the transition data determining module is used for determining a combined connection sequence among the plurality of target material fragments based on the virtual object in each target material fragment and the action information of the virtual object, and determining transition data among different adjacent target material fragments according to the combined connection sequence;
And the video data generation module is used for generating video data based on the target material fragments and the transition data according to the combined connection sequence.
8. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication over the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the video data generation method of any of claims 1-6.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the video data generating method according to any of claims 1 to 6.
CN202210227914.6A 2022-03-08 2022-03-08 Video data generation method and device, electronic equipment and storage medium Active CN114615513B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210227914.6A CN114615513B (en) 2022-03-08 2022-03-08 Video data generation method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114615513A CN114615513A (en) 2022-06-10
CN114615513B true CN114615513B (en) 2023-10-20

Family

ID=81860283

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210227914.6A Active CN114615513B (en) 2022-03-08 2022-03-08 Video data generation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114615513B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116055798A (en) * 2022-07-08 2023-05-02 脸萌有限公司 Video processing method and device and electronic equipment
CN115242980B (en) * 2022-07-22 2024-02-20 中国平安人寿保险股份有限公司 Video generation method and device, video playing method and device and storage medium
CN115272060A (en) * 2022-08-12 2022-11-01 北京字跳网络技术有限公司 Transition special effect diagram generation method, device, equipment and storage medium
CN117409175B (en) * 2023-12-14 2024-03-19 碳丝路文化传播(成都)有限公司 Video recording method, system, electronic equipment and medium

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6335765B1 (en) * 1999-11-08 2002-01-01 Weather Central, Inc. Virtual presentation system and method
CN103678569A (en) * 2013-12-09 2014-03-26 北京航空航天大学 Construction method of virtual scene generation-oriented video image material library
CN107770457A (en) * 2017-10-27 2018-03-06 维沃移动通信有限公司 Video creation method and mobile terminal
CN111369687A (en) * 2020-03-04 2020-07-03 腾讯科技(深圳)有限公司 Method and device for synthesizing action sequence of virtual object
CN111787395A (en) * 2020-05-27 2020-10-16 北京达佳互联信息技术有限公司 Video generation method and device, electronic equipment and storage medium
CN112312161A (en) * 2020-06-29 2021-02-02 北京沃东天骏信息技术有限公司 Method and device for generating video, electronic equipment and readable storage medium
CN112822542A (en) * 2020-08-27 2021-05-18 腾讯科技(深圳)有限公司 Video synthesis method and device, computer equipment and storage medium
CN113301409A (en) * 2021-05-21 2021-08-24 北京大米科技有限公司 Video synthesis method and device, electronic equipment and readable storage medium
CN113559503A (en) * 2021-06-30 2021-10-29 上海掌门科技有限公司 Video generation method, device and computer readable medium
CN113822972A (en) * 2021-11-19 2021-12-21 阿里巴巴达摩院(杭州)科技有限公司 Video-based processing method, device and readable medium
CN113838490A (en) * 2020-06-24 2021-12-24 华为技术有限公司 Video synthesis method and device, electronic equipment and storage medium
WO2021259322A1 (en) * 2020-06-23 2021-12-30 广州筷子信息科技有限公司 System and method for generating video
CN113923475A (en) * 2021-09-30 2022-01-11 宿迁硅基智能科技有限公司 Video synthesis method and video synthesizer
CN114049468A (en) * 2021-10-29 2022-02-15 北京市商汤科技开发有限公司 Display method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN114615513A (en) 2022-06-10

Similar Documents

Publication Publication Date Title
CN114615513B (en) Video data generation method and device, electronic equipment and storage medium
CN110176077B (en) Augmented reality photographing method and device and computer storage medium
US11494993B2 (en) System and method to integrate content in real time into a dynamic real-time 3-dimensional scene
CN108648257B (en) Panoramic picture acquisition method and device, storage medium and electronic device
CN111080759B (en) Method and device for realizing split mirror effect and related product
CN106730815B (en) Easy-to-implement somatosensory interaction method and system
CN113905251A (en) Virtual object control method and device, electronic equipment and readable storage medium
CN114745598B (en) Video data display method and device, electronic equipment and storage medium
US20140368495A1 (en) Method and system for displaying multi-viewpoint images and non-transitory computer readable storage medium thereof
CN108882018B (en) Video playing and data providing method in virtual scene, client and server
CN112312111A (en) Virtual image display method and device, electronic equipment and storage medium
CN114095744B (en) Video live broadcast method and device, electronic equipment and readable storage medium
CN113852838B (en) Video data generation method, device, electronic equipment and readable storage medium
CN113784160A (en) Video data generation method and device, electronic equipment and readable storage medium
CN114697703B (en) Video data generation method and device, electronic equipment and storage medium
CN112714305A (en) Presentation method, presentation device, presentation equipment and computer-readable storage medium
CN114581566A (en) Animation special effect generation method, device, equipment and medium
CN114630173A (en) Virtual object driving method and device, electronic equipment and readable storage medium
CN114625468B (en) Display method and device of augmented reality picture, computer equipment and storage medium
CN112019906A (en) Live broadcast method, computer equipment and readable storage medium
CN113516761B (en) Method and device for manufacturing naked eye 3D content with optical illusion
CN114915798A (en) Real-time video generation method, multi-camera live broadcast method and device
CN117956214A (en) Video display method, device, video display equipment and storage medium
CN114627266A (en) Simulated video production method, system, device and storage medium
CN116389779A (en) Interaction method, device and storage medium for video live virtual object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant