CN111356000A - Video synthesis method, device, equipment and storage medium - Google Patents

Video synthesis method, device, equipment and storage medium

Info

Publication number
CN111356000A
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010182771.2A
Other languages
Chinese (zh)
Inventor
曾令男
张全全
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Reach Best Technology Co Ltd
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Reach Best Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Reach Best Technology Co Ltd
Priority to CN202010182771.2A
Publication of CN111356000A
Legal status: Pending

Classifications

    • H04N 21/4312: Generation of visual interfaces for content selection or interaction; content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/21805: Source of audio or video content, e.g. local disk arrays, enabling multiple viewpoints, e.g. using a plurality of cameras
    • H04N 21/23424: Processing of video elementary streams, involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • H04N 21/44016: Processing of video elementary streams, involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N 21/4858: End-user interface for client configuration for modifying screen layout parameters, e.g. fonts, size of the windows

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application relates to a video synthesis method, device, equipment, and storage medium. The method includes: acquiring configuration information of a preset picture layout, where the configuration information includes the display size and display position of each of L regions in a display container; loading M preset local video pictures into M of the regions according to the configuration information, and loading N shooting video pictures acquired in real time into N of the regions, where L is a positive integer greater than 1, M and N are both positive integers, and the sum of M and N equals L; and rendering the pictures of the M regions and the pictures of the N regions to generate a first play file. By rendering multiple real-time shot videos and multiple local videos into a composite video picture, a video producer can achieve multi-picture special-effect editing within a single display container without professional skills; and because multiple pictures are displayed in the same display container, a viewer can watch multiple videos at the same time, saving information-acquisition time.

Description

Video synthesis method, device, equipment and storage medium
This application is a divisional application of the application entitled "Video synthesis method, device, equipment and storage medium", application number 201810943403.8, filed on August 17, 2018.
Technical Field
The present application relates to the field of video processing, and in particular, to a video synthesis method, apparatus, device, and storage medium.
Background
With the continued development of the internet, information transmission is no longer limited to text and pictures; video has become one of the mainstream transmission modes. Videos come in many types and can be distinguished by duration, such as short videos of 10 or 15 seconds, long videos of 1 to 5 minutes, and live online video.
At present, videos can be distributed through platforms such as video websites and social applications. Most videos on social applications are recorded by users themselves, who display or share the content they want to express through video, making the expression direct and accurate.
However, as the number of videos grows, more and more time is needed to express the displayed content accurately and completely. A video producer spends considerable time processing videos one by one (editing, adding special effects, and so on), while a viewer may spend even more time watching the complete content of each video. As a result, users spend increasing amounts of time on video processing or video viewing.
Disclosure of Invention
To overcome the problems in the related art, the present application provides a video synthesis method, apparatus, device and storage medium.
According to a first aspect of embodiments of the present application, there is provided a video composition method, including:
acquiring configuration information of a preset picture layout, wherein the configuration information comprises the display size and the display position of each area in the display container;
loading M preset local video pictures into M areas according to the configuration information, and loading N shooting video pictures acquired in real time into N areas, wherein L is a positive integer greater than 1, M and N are both positive integers, and the sum of M and N is equal to L;
and rendering the pictures of the M areas and the pictures of the N areas to generate a first playing file.
Optionally, before the obtaining configuration information of a preset screen layout, where the configuration information includes a display size and a display position of each region in the display container, the method further includes:
acquiring the resolution of the display container;
calculating an aspect ratio of the display container according to a resolution of the display container;
and when the aspect ratio is larger than a preset first threshold value, confirming that a preset first layout format is configuration information of the picture layout, wherein the first layout format is that the L areas are sequentially stacked in the width direction of the display container.
Optionally, after the calculating the aspect ratio of the display container according to the resolution of the display container, the method further includes:
and when the aspect ratio is smaller than or equal to the preset first threshold value, confirming that a preset second layout format is configuration information of the picture layout, wherein the second layout format is that the L areas are sequentially stacked in the height direction of the display container.
Optionally, when M and N are both 1, the local video picture being loaded into a first area and the shooting video picture into a second area, before the acquiring of the configuration information of the preset picture layout, the method further includes:
acquiring the resolution ratios of the local video picture and the shot video picture;
calculating the width ratio or the height ratio of the local video picture and the shot video picture according to the resolutions of the local video picture and the shot video picture;
and when the width ratio or the height ratio is smaller than a preset second threshold value, determining a preset third layout format as configuration information of the picture layout, wherein the third layout format is that the first area is nested in the second area.
Optionally, the loading M preset local video pictures into M regions according to the configuration information, and loading N shooting video pictures collected in real time into N regions includes:
acquiring the local video picture resolution;
scaling the local video picture according to the display sizes of the M areas in the display container to enable the local video picture to be completely displayed in the M areas;
and determining the resolution of the shot video pictures according to the display sizes of the N areas in the display container, and loading the N shot video pictures collected in real time into the N areas according to the resolution.
Optionally, the generating a first play file after rendering the pictures of the M regions and the pictures of the N regions includes:
decoding the pictures of the M areas and the pictures of the N areas to respectively generate L pieces of texture information;
respectively calculating texture coordinates and vertex coordinates of the L pieces of texture information according to the preset configuration information of the picture layout;
and generating the first playing file according to the texture coordinates and the vertex coordinates.
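One plausible way to compute the vertex coordinates named in these steps is shown below, assuming OpenGL-style normalized device coordinates in [-1, 1] with a flipped y axis. This is an assumption for illustration; the patent does not name a graphics API, and the function and variable names are not from the source.

```python
def region_vertex_coords(x, y, w, h, container_w, container_h):
    """Map a region rectangle (pixels, top-left origin) to OpenGL-style
    normalized device coordinates, one (x, y) pair per corner.
    Hypothetical helper; the patent only names the calculation step."""
    def to_ndc(px, py):
        # x: [0, container_w] -> [-1, 1]; y: [0, container_h] -> [1, -1]
        return (2.0 * px / container_w - 1.0, 1.0 - 2.0 * py / container_h)
    return [to_ndc(x, y), to_ndc(x + w, y),
            to_ndc(x, y + h), to_ndc(x + w, y + h)]

# The left half of a 1440 x 900 container spans x in [-1, 0], y in [-1, 1]:
corners = region_vertex_coords(0, 0, 720, 900, 1440, 900)
```

Texture coordinates would be computed analogously, typically spanning [0, 1] over each source picture.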
Optionally, after the loading of the M preset local video pictures into the M areas according to the configuration information and the loading of the N shooting video pictures acquired in real time into the N areas, the method further includes:
acquiring a preset special effect picture of a shot video;
and loading the shot video special effect picture to the N areas.
Optionally, generating a first play file after rendering the pictures of the M regions and the pictures of the N regions includes:
and superposing and rendering the pictures of the M areas and the pictures of the N areas to obtain the rendered first playing file.
According to a second aspect of the embodiments of the present application, there is provided an apparatus for video composition, which is used for picture composition of L regions in the same display container, and includes:
an acquisition unit configured to acquire configuration information of a preset screen layout, wherein the configuration information includes a display size and a display position of each area in the display container;
the processing unit is configured to load M preset local video pictures into M areas according to the configuration information, and load N shooting video pictures collected in real time into N areas, wherein L is a positive integer greater than 1, M and N are both positive integers, and the sum of M and N is equal to L;
and the execution unit is configured to render the pictures of the M areas and the pictures of the N areas and then generate a first playing file.
Optionally, the video synthesizing apparatus further includes:
a first acquisition unit configured to acquire a resolution of the display container;
a first processing unit configured to calculate an aspect ratio of the display container according to a resolution of the display container;
a first execution unit configured to confirm that a preset first layout format is configuration information of the screen layout when the aspect ratio is greater than a preset first threshold, wherein the first layout format is such that the L regions are sequentially stacked in a width direction of the display container.
Optionally, the video synthesizing apparatus further includes:
a second execution unit configured to confirm a preset second layout format as configuration information of the screen layout when the aspect ratio is less than or equal to the preset first threshold, wherein the second layout format is such that the L regions are sequentially stacked in a height direction of the display container.
Optionally, the video synthesizing apparatus further includes:
a second acquisition unit configured to acquire resolutions of the local video picture and the captured video picture;
a second processing unit configured to calculate a width ratio or a height ratio of the local video picture to the captured video picture according to resolutions of the local video picture and the captured video picture;
a third execution unit, configured to determine a preset third layout format as configuration information of the screen layout when the width ratio or the height ratio is smaller than a preset second threshold, wherein the third layout format is that the first area is nested in the second area.
Optionally, the processing unit further comprises:
a third obtaining subunit configured to obtain the local video picture resolution;
a third processing subunit, configured to perform equal scaling on the local video picture according to the display sizes of the M regions in the display container, so that the local video picture is completely displayed in the M regions;
and the fourth execution subunit is configured to determine the resolution of the shot video pictures according to the display sizes of the N areas in the display container, and load the N shot video pictures acquired in real time into the N areas according to the resolution.
Optionally, the execution unit further includes:
a first decoding subunit configured to decode the pictures of the M regions and the pictures of the N regions to respectively generate L pieces of texture information;
a fourth processing subunit, configured to calculate texture coordinates and vertex coordinates of the L pieces of texture information, respectively, according to the configuration information of the preset screen layout;
a fifth execution subunit configured to generate the first playback file according to the texture coordinates and the vertex coordinates.
Optionally, the video synthesizing apparatus further includes:
a fourth acquisition unit configured to acquire a preset captured video special effect picture;
a fifth processing unit configured to load the captured video special effect picture to the N areas.
Optionally, the execution unit further includes:
and the sixth execution subunit is configured to perform superposition rendering on the pictures of the M regions and the pictures of the N regions to obtain the rendered first play file.
According to a third aspect of embodiments herein, there is provided a video compositing device comprising a processor and a memory for storing processor-executable instructions, wherein the processor is configured to perform the steps of the video compositing method described above.
According to a fourth aspect of embodiments herein, there is provided a non-transitory computer readable storage medium having instructions stored thereon which, when executed by a processor of a mobile terminal, enable the mobile terminal to perform the steps of the above-described video composition method.
According to a fifth aspect of embodiments of the present application, there is provided a computer program product comprising computer program code, the computer program code comprising program instructions which, when executed by a computer, cause the computer to perform the steps of the above video composition method.
The technical scheme provided by the embodiments of the application can have the following beneficial effects. Configuration information of a preset picture layout is acquired, the configuration information including the display size and display position of each region in the display container; M preset local video pictures are loaded into M areas according to the configuration information, and N shooting video pictures acquired in real time are loaded into N areas, where L is a positive integer greater than 1, M and N are both positive integers, and the sum of M and N equals L; and the pictures of the M areas and the pictures of the N areas are rendered to generate a first play file. By rendering multiple real-time shot videos and multiple local videos into a composite video picture, a video producer can achieve multi-picture special-effect editing within a single display container without professional skills; and since multiple pictures are displayed in the same display container, a viewer can watch multiple videos at the same time, saving information-acquisition time.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a flow diagram illustrating a video compositing method according to an exemplary embodiment.
FIG. 2 is a flow diagram illustrating determination of a first layout format according to an exemplary embodiment.
FIG. 3 is a flow diagram illustrating determination of a third layout format according to an exemplary embodiment.
Fig. 4 is a diagram illustrating a third layout format according to an example embodiment.
FIG. 5 is a flow diagram illustrating cropping of video according to an exemplary embodiment.
Fig. 6 is a flowchart illustrating the generation of a first play file according to an exemplary embodiment.
FIG. 7 is a flowchart illustrating a captured video special effects screen loading according to an example embodiment.
Fig. 8 is a block diagram illustrating a video compositing apparatus according to an exemplary embodiment.
Fig. 9 is a block diagram illustrating a structure of a video composition apparatus according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
Fig. 1 is a flowchart illustrating a video composition method according to an exemplary embodiment, where the video composition method is used in a terminal, as shown in fig. 1, and includes the following steps:
S1100, acquiring configuration information of a preset picture layout, wherein the configuration information comprises the display size and the display position of each area in the display container;
in the embodiment of the application, the same display container is divided into two areas, and the display size and the display position of the two areas in the display container are determined through configuration information. Here, the configuration information determines the position and space occupied by each video picture on the display screen when the two video data sources output the same picture. For example, the following steps are carried out: the display container is a display screen with the resolution of 1440 x 900, and the acquired configuration information shows that the first area is the part of 720 x 900 occupied by the left half of the screen, and the second area is the part of 720 x 900 occupied by the right half of the screen. It can be understood that the number of the data sources may be multiple, and the data sources include M local video pictures and N real-time shooting pictures, the corresponding display container is divided into L regions, L is the sum of M and N, and the configuration information determines the display size and the display position of each region in the display container.
S1200, loading M preset local video pictures into M areas according to the configuration information, and loading N shooting video pictures collected in real time into N areas, wherein L is a positive integer greater than 1, M and N are both positive integers, and the sum of M and N is equal to L;
in step S1100, the size and position of each region on the display screen is determined by the configuration information. The specific loading process is that each area corresponds to a media buffer, and the original data of the circularly read video is put into the corresponding media buffer for the display of the next step. The local video picture is a locally stored video picture or a temporarily cached video picture.
S1300, generating a first playing file after rendering the pictures of the M areas and the pictures of the N areas.
In step S1200, the local video data and the real-time shooting data were loaded into the media buffer of each area of the display container; in this step, the first play file is obtained by overlay-rendering the L paths of data. One path of the play file is used for real-time playback, and the other path is written to a file for storage.
Rendering draws an image on the screen and can also realize drawing of 3D objects. Rendering is performed by a renderer, which reads the video data in the media buffers to display the video pictures on the screen. In the embodiment of the application, the renderer reads the video data in the media buffer of each area simultaneously and superimposes all the data to draw the composite video picture in the same display container.
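The superposition performed by the renderer can be illustrated with a minimal pixel-level sketch. This pure-Python stand-in for the actual GPU rendering uses one value per pixel and illustrative names; it is not the patent's implementation.

```python
def composite(container_w, container_h, layers):
    """Draw each layer's frame into a container-sized canvas.
    layers: list of ((x, y, w, h), frame) pairs, where frame is a 2-D list
    already scaled to the region size; later layers are drawn on top."""
    canvas = [[0] * container_w for _ in range(container_h)]
    for (x, y, w, h), frame in layers:
        for row in range(h):
            for col in range(w):
                canvas[y + row][x + col] = frame[row][col]
    return canvas

left = [[1] * 2 for _ in range(2)]   # 2 x 2 stand-in for a local video frame
right = [[2] * 2 for _ in range(2)]  # 2 x 2 stand-in for a captured video frame
canvas = composite(4, 2, [((0, 0, 2, 2), left), ((2, 0, 2, 2), right)])
# each canvas row is [1, 1, 2, 2]: local picture on the left, captured on the right
```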
In the embodiment of the present application, the configuration information of the picture layout in the same display container is automatically adjusted according to the resolution of the display container, please refer to fig. 2 for a specific process, and fig. 2 is a flowchart illustrating an implementation manner of determining the picture layout according to the resolution of the display container in this embodiment.
As shown in fig. 2, before step S1100, the following steps are further included:
S1101, acquiring the resolution of the display container;
the display container in the embodiment of the present application is a screen of the terminal or a part of the screen of the terminal. Reading the resolution of the display container, the resolution of the display container can be obtained by: the method comprises the steps of obtaining the model of the terminal device, searching a database according to the model of the terminal device, and obtaining the screen resolution of the terminal device, wherein the screen resolution refers to the number of pixels displayed on a screen, and the resolution is 1920 x 1080, which means that the number of pixels displayed in the horizontal direction is 1920px (pixel), and the number of pixels displayed in the vertical direction is 1080 px. Or directly reading the specification parameters of the display screen used by the terminal equipment to obtain the resolution of the terminal screen. In some embodiments, the display container may also be a predetermined partial area on the display screen.
S1102, calculating the aspect ratio of the display container according to the resolution of the display container;
and acquiring the width and the height of the display container according to the resolution of the display container. The aspect ratio is the ratio of width to height, the width corresponding to the dimension in the horizontal direction and the height corresponding to the dimension in the vertical direction. It should be noted that the width and height of the mobile terminal are related to the placement status, and the placement status of the mobile terminal needs to be determined first to determine the dimensions in the horizontal direction and the vertical direction when calculating the aspect ratio. The width and height of the mobile terminal may vary according to the placement state. In calculating the aspect ratio, the dimension in the horizontal direction is always regarded as the width of the screen, and the dimension in the vertical direction is regarded as the height of the screen.
S1103, when the aspect ratio is greater than a preset first threshold, determining that a preset first layout format is configuration information of the screen layout, where the first layout format is such that the L regions are sequentially stacked in the width direction of the display container.
In the embodiment of the application, different aspect ratios cause the picture layout to switch automatically, with correspondingly different configuration information. When the aspect ratio is greater than a set first threshold, for example 1, the corresponding first layout format is a left-right layout; that is, when the aspect ratio exceeds 1, the regions are divided left and right. The embodiment of the application uses equal division. For example, when the resolution of the display container is 1440 x 960, the horizontal dimension is 1440, the vertical dimension is 960, and the aspect ratio 1440/960 is greater than 1; with M and N both 1, the display container is divided equally left and right, so a local video picture is displayed in the 720 x 960 left portion of the screen and a real-time captured video picture in the 720 x 960 right portion. It can be understood that left and right may also be interchanged.
In the embodiment of the present application, if the aspect ratio is less than or equal to 1, a different, second layout format is adopted, namely an up-down layout. The application may also determine the configuration information of the picture layout according to the rotation angle of the shooting device and the aspect ratio of the display container. Continuing the example above, if the mobile terminal is rotated by 90°, the resolution is still 1440 x 960, but the horizontal width becomes 960 and the vertical height 1440; the aspect ratio 960/1440 is then less than 1, so the display container is divided top and bottom: the real-time captured video picture is displayed in the 960 x 720 area in the upper portion of the screen, and the local video picture in the 960 x 720 area in the lower portion. It can be understood that top and bottom may also be interchanged.
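Steps S1101 to S1103 and the rotated case can be sketched as below. The threshold value 1 is taken from the example above; the function name and return strings are illustrative, not from the patent.

```python
def choose_layout(width_px: int, height_px: int, first_threshold: float = 1.0) -> str:
    """Pick the stacking direction from the display container's aspect ratio:
    width/height above the first threshold -> regions stacked left to right,
    otherwise stacked top to bottom."""
    aspect = width_px / height_px
    return "left-right" if aspect > first_threshold else "top-bottom"

assert choose_layout(1440, 960) == "left-right"   # landscape: 1440/960 = 1.5 > 1
assert choose_layout(960, 1440) == "top-bottom"   # after a 90-degree rotation
```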
In some embodiments, the picture layout format may be selected automatically based not only on the resolution of the display container but also on the resolutions of the local video picture and the captured video picture. Please refer to fig. 3, a flowchart of an implementation that determines the picture layout format from the resolutions of the local video and the shot video when M and N are both 1, the local video picture is loaded in a first area, and the shot video picture is loaded in a second area.
As shown in fig. 3, before step S1100, the following steps are further included:
S1111, acquiring the resolutions of the local video picture and the shooting video picture;
video resolution refers to pixels across the width and height of a video frame, with a common screen width to height ratio of 4:3, corresponding to resolutions of 320 x 240 or 640 x 480, and a screen width to height ratio of 16:9, corresponding to resolutions of 640 x 360, 720 x 4805, 960 x 540, 1024 x 576, 1280 x 720. The resolution of the video is usually written into the video file as an attribute parameter, and the resolution of the existing video is obtained by reading the resolution parameter in the video file. The other path of video is collected by the video recording device in real time, and the resolution of the recorded video depends on the setting of the resolution on the video recording device, so that the resolution of the real-time shot video is obtained by reading the setting of the recording resolution on the video recording device.
S1112, calculating the width ratio or the height ratio of the local video picture to the shot video picture according to the resolutions of the local video picture and the shot video picture;
In some scenes, the resolution of the local video is much smaller than that of the real-time shot video; if the two occupy equal areas of screen space when displayed in the same picture, some display space is wasted and the display effect suffers. Therefore, the resolutions of the local video picture and the shot video picture are obtained, their width ratio or height ratio is calculated, and a layout mode different from a simple left-right or top-bottom layout is set according to that ratio.
And S1113, when the width ratio or the height ratio is smaller than a preset second threshold, determining a preset third layout format as the configuration information of the picture layout, wherein the third layout format is that the first area is nested in the second area.
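The threshold test in step S1113 can be sketched as follows. This is an illustrative sketch only; the function name is an assumption, and the 1/3 value passed in the example matches the second threshold used in the embodiment described below.

```python
# Illustrative sketch of the S1113 decision: adopt the third (nested,
# picture-in-picture) layout when the local picture's width or height is
# much smaller than the shot picture's. Names are assumptions.

def use_pip_layout(local_w, local_h, shot_w, shot_h, second_threshold):
    """True when the width ratio or height ratio of local picture to shot
    picture falls below the second threshold."""
    width_ratio = local_w / shot_w
    height_ratio = local_h / shot_h
    return width_ratio < second_threshold or height_ratio < second_threshold

print(use_pip_layout(320, 240, 1280, 720, 1 / 3))  # small local video
print(use_pip_layout(960, 540, 1280, 720, 1 / 3))  # comparable sizes
```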
When the width of the local video picture is much smaller than the width of the shot video picture, or the height of the local video picture is much smaller than the height of the shot video picture, the third layout format is set as the picture layout mode. The third layout format may be called a picture-in-picture (PiP) layout: as shown in fig. 4, the real-time shot video occupies the entire space of the display container, and the local video is displayed in a corner of the display container. If the local video picture were too large and were still displayed in the third layout format, it would cover most of the real-time shot video picture and spoil the display effect. To achieve the best display effect, a second threshold is therefore set on the width ratio or height ratio of the local video picture to the shot video picture, and the third layout format is adopted only when that width ratio or height ratio is smaller than the second threshold. In the embodiment of the application, the second threshold is set to 1/3, i.e. the third layout format is adopted when the width of the local video picture is smaller than 1/3 of the width of the shot video picture, or the height of the local video picture is smaller than 1/3 of the height of the shot video picture.

After the display container is divided, the local video resolution or the shot video resolution usually differs from the resolution of the divided display area; if the size of a video picture exceeds the size of its display area, the picture needs to be processed so that it displays well in that area. Referring to fig. 5, fig. 5 is a flowchart illustrating an embodiment of the present application for processing the local video picture and the real-time shot video picture according to the display size.
As shown in fig. 5, in step S1200, the method further includes the following steps:
S1201, acquiring the local video picture resolution;
As mentioned previously, video resolution refers to the pixels across the width and height of the video frame: a screen width-to-height ratio of 4:3 corresponds to video resolutions of 320 × 240 or 640 × 480, and a ratio of 16:9 corresponds to video resolutions of 640 × 360, 720 × 405, 960 × 540, 1024 × 576, or 1280 × 720. The resolution of a video is usually written into the video file as an attribute parameter, so the resolution of the existing video is obtained by reading that parameter from the video file.
S1202, scaling the local video picture in an equal proportion according to the display sizes of the M areas in the display container so as to enable the local video picture to be completely displayed in the M areas;
After the resolution of the local video is obtained, it is matched against the display size and scaled in equal proportion, keeping the original aspect ratio. For example, for a local video with a resolution of 600 × 600 and a display size of 720 × 720, both the width and the height of the original video are enlarged by a factor of 1.2.
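The equal-proportion scaling in step S1202 can be sketched as follows. This is an illustrative sketch under the stated assumptions; the function name is not from the application.

```python
# Illustrative sketch of S1202: scale the local picture by a single factor
# so it keeps its aspect ratio and fits inside its display area.

def fit_scale(video_w, video_h, area_w, area_h):
    """Largest factor that keeps the aspect ratio and fits inside the area."""
    return min(area_w / video_w, area_h / video_h)

# The example above: a 600 × 600 local video in a 720 × 720 display area.
s = fit_scale(600, 600, 720, 720)
print(s)  # 1.2 — both width and height enlarged by the same factor
```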
S1203, determining the resolution of the shot video pictures according to the display sizes of the N areas in the display container, and loading the N shot video pictures collected in real time into the N areas according to the resolution.
Since the other video path is shot in real time, the size of the captured picture can be limited at shooting time: the capture resolution is determined according to the display sizes of the N areas, so that the shot video picture displays well in the display container when loaded into its display area.
As shown in fig. 6, in step S1300, the method further includes the steps of:
S1301, decoding the pictures of the M regions and the pictures of the N regions to respectively generate L pieces of texture information;
and decoding the video data cached in the media buffer corresponding to each region to generate texture information. Texture is the input information that the GPU uses to render an image. In computer graphics, texture refers to a bitmap representing details of the surface of an object. Because all textures in Direct3D are bitmaps, any bitmap can be pasted to the surface of Direct3D primitives. For example, an application may create objects and make their surfaces appear to have a wood-grain pattern. The textures of grass, soil, rocks and the like can be attached to the surfaces of the primitives forming the mountain, so that the mountain slope which looks real can be obtained. The application may also create other effects with the texture.
S1302, respectively calculating texture coordinates and vertex coordinates of the L pieces of texture information according to the preset configuration information of the picture layout;
When rendering an image, not only geometric coordinates but also texture coordinates are defined for each vertex. The geometric coordinates determine where the vertex is drawn on the screen, while the texture coordinates determine which texel in the texture image is assigned to the vertex. The texture coordinates and vertex coordinates of each texture path are determined according to the configuration information of the picture layout acquired in the previous step.
And S1303, generating the first playing file according to the texture coordinates and the vertex coordinates.
Having obtained the texture coordinates and vertex coordinates, the image is drawn through a vertex shader, with vertex coordinates and texture coordinates supplied for each vertex. The vertex coordinates determine where on the screen a particular vertex is rendered; the texture coordinates determine which texture unit in the texture image is assigned to that vertex. The drawn result is displayed to the user in real time and is also written into a file for storage.
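Deriving the vertex coordinates of each area from the layout configuration can be sketched as follows. This is an illustrative sketch assuming an OpenGL-style normalized device coordinate (NDC) space; the function name and the pixel-space convention (top-left origin) are assumptions, not taken from the application.

```python
# Illustrative sketch: map an area's display position/size (pixels,
# top-left origin) to the NDC corner coordinates consumed by a vertex
# shader. The full texture (0..1) would be mapped onto each quad.

def area_to_ndc(x, y, w, h, container_w, container_h):
    """Return the four NDC corners of a pixel-space rectangle."""
    left   = 2.0 * x / container_w - 1.0
    right  = 2.0 * (x + w) / container_w - 1.0
    top    = 1.0 - 2.0 * y / container_h
    bottom = 1.0 - 2.0 * (y + h) / container_h
    return [(left, bottom), (right, bottom), (left, top), (right, top)]

# Left half of a 1440 × 960 container in the left-right layout:
print(area_to_ndc(0, 0, 720, 960, 1440, 960))
```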
In some embodiments, video special effects are added to the video shot in real time. Referring to fig. 7, fig. 7 is a flowchart illustrating an embodiment of adding a video special effect according to the present disclosure.
As shown in fig. 7, after step S1200, the following steps are further included:
S1211, acquiring a preset shot-video special effect picture;
A video special effect may be added to the video shot in real time. The video special effect files are stored locally; since there are multiple special effects corresponding to multiple local files, the user can select the video special effect to be added before starting video recording.
And S1212, loading the shot video special effect picture to the N areas.
The selected video special effect file is loaded into the N areas and, at display time, is superimposed onto the shot video image acquired in real time to achieve the preset effect.
In one embodiment, generating a first playback file after rendering the pictures of the M regions and the pictures of the N regions includes:
The local video pictures of the M areas and the shot video pictures of the N areas are superimposed; this picture superposition also constitutes a rendering effect, yielding the rendered first play file.
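The superposition of one picture over another can be sketched as follows. This is a deliberately simplified illustrative sketch: frames are modeled as 2D lists of pixel values rather than decoded video buffers, and all names are assumptions.

```python
# Illustrative sketch of picture superposition: copy the local picture
# over a region of the shot picture, which is itself a simple form of
# rendering. Frames are 2D lists of pixel values.

def superimpose(base, overlay, top, left):
    """Return a copy of `base` with `overlay` pasted at (top, left)."""
    out = [row[:] for row in base]
    for i, row in enumerate(overlay):
        for j, px in enumerate(row):
            out[top + i][left + j] = px
    return out

shot  = [[0] * 4 for _ in range(4)]  # 4 × 4 "shot video" frame
local = [[1, 1], [1, 1]]             # 2 × 2 "local video" frame
print(superimpose(shot, local, 0, 2))
```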
Fig. 8 is a block diagram illustrating a video compositing apparatus according to an exemplary embodiment. Referring to fig. 8, the apparatus includes an acquisition unit 2100, a processing unit 2200, and an execution unit 2300.
An acquisition unit 2100 configured to acquire configuration information of a preset screen layout, wherein the configuration information includes a display size and a display position of each region in the display container;
a processing unit 2200 configured to load M preset local video pictures into M regions according to the configuration information, and load N captured video pictures in real time into N regions, where L is a positive integer greater than 1, M and N are both positive integers, and a sum of M and N is equal to L;
the execution unit 2300 is configured to render the pictures of the M regions and the pictures of the N regions to generate a first play file.
In some embodiments, the video compositing apparatus further comprises: the display device comprises a first acquisition unit, a first processing unit and a first execution unit, wherein the first acquisition unit is configured to acquire the resolution of the display container; a first processing unit configured to calculate an aspect ratio of the display container according to a resolution of the display container; a first execution unit configured to confirm that a preset first layout format is configuration information of the screen layout when the aspect ratio is greater than a preset first threshold, wherein the first layout format is such that the L regions are sequentially stacked in a width direction of the display container.
In some embodiments, the video compositing apparatus further comprises: a second execution unit configured to confirm a preset second layout format as configuration information of the screen layout when the aspect ratio is less than or equal to the preset first threshold, wherein the second layout format is such that the L regions are sequentially stacked in a height direction of the display container.
In some embodiments, the video compositing apparatus further comprises: the system comprises a second acquisition unit, a second processing unit and a third execution unit, wherein the second acquisition unit is configured to acquire the resolutions of the local video picture and the shot video picture; a second processing unit configured to calculate a width ratio or a height ratio of the local video picture to the captured video picture according to resolutions of the local video picture and the captured video picture; a third execution unit, configured to determine a preset third layout format as configuration information of the screen layout when the width ratio or the height ratio is smaller than a preset second threshold, where the third layout format is to make the first area be nested in the second area.
In some embodiments, the processing unit further comprises: the system comprises a third acquisition subunit, a third processing subunit and a fourth execution subunit, wherein the third acquisition subunit is configured to acquire the local video picture resolution; a third processing subunit, configured to perform equal scaling on the local video picture according to the display sizes of the M regions in the display container, so that the local video picture is completely displayed in the M regions; and the fourth execution subunit is configured to determine the resolution of the shot video pictures according to the display sizes of the N areas in the display container, and load the N shot video pictures acquired in real time into the N areas according to the resolution.
In some embodiments, the execution unit further comprises: a first decoding subunit, a fourth processing subunit and a fifth execution subunit, where the first decoding subunit is configured to decode the pictures of the M regions and the pictures of the N regions to respectively generate L pieces of texture information; a fourth processing subunit, configured to calculate texture coordinates and vertex coordinates of the L pieces of texture information, respectively, according to the configuration information of the preset screen layout; a fifth execution subunit configured to generate the first playback file according to the texture coordinates and the vertex coordinates.
In some embodiments, the video compositing apparatus further comprises: the device comprises a fourth acquisition unit and a fifth processing unit, wherein the fourth acquisition unit is configured to acquire a preset shooting video special effect picture; a fifth processing unit configured to load the captured video special effect picture to the N areas.
In some embodiments, the execution unit further comprises: and the sixth execution subunit is configured to perform superposition rendering on the pictures of the M regions and the pictures of the N regions to obtain the rendered first play file.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 9 is a block diagram illustrating a structure of a video composition apparatus 900 according to an exemplary embodiment, the video composition apparatus 900 being a mobile terminal for video composition. For example, video compositing device 900 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and so forth.
Referring to fig. 9, the video composition apparatus 900 may include one or more of the following components: a processing component 902, a memory 904, a power component 906, a multimedia component 908, an audio component 910, an input/output (I/O) interface 912, a sensor component 914, and a communication component 916.
The processing component 902 generally controls the overall operation of the video compositing device 900, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. Processing component 902 may include one or more processors 920 to execute instructions to perform all or a portion of the steps of the methods described above. Further, processing component 902 can include one or more modules that facilitate interaction between processing component 902 and other components. For example, the processing component 902 can include a multimedia module to facilitate interaction between the multimedia component 908 and the processing component 902.
The memory 904 is configured to store various types of data to support operations at the video compositing device 900. Examples of such data include instructions for any application or method operating on video composition device 900, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 904 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power component 906 provides power to the various components of the video compositing device 900. The power components 906 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the video compositing device 900.
The multimedia component 908 includes a screen that provides an output interface between the video compositing device 900 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 908 includes a front facing camera and/or a rear facing camera. When the video composition apparatus 900 is in an operation mode, such as a photographing mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 910 is configured to output and/or input audio signals. For example, the audio component 910 includes a Microphone (MIC) configured to receive external audio signals when the video compositing device 900 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 904 or transmitted via the communication component 916. In some embodiments, audio component 910 also includes a speaker for outputting audio signals.
I/O interface 912 provides an interface between processing component 902 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 914 includes one or more sensors for providing status evaluation of various aspects of the video compositing device 900. For example, the sensor component 914 may detect an open/closed state of the video composition apparatus 900 and the relative positioning of components, such as the display and keypad of the video composition apparatus 900. The sensor component 914 may also detect a change in position of the video composition apparatus 900 or a component of the video composition apparatus 900, the presence or absence of user contact with the video composition apparatus 900, the orientation or acceleration/deceleration of the video composition apparatus 900, and a change in temperature of the video composition apparatus 900. The sensor assembly 914 may include a proximity sensor configured to detect the presence of a nearby object in the absence of any physical contact. The sensor assembly 914 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 914 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 916 is configured to facilitate communication between the video composition apparatus 900 and other devices in a wired or wireless manner. The video compositing device 900 may access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 916 receives a broadcast signal or broadcast associated information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 916 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the video composition apparatus 900 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 904 comprising instructions, executable by the processor 920 of the video compositing device 900 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In some embodiments, a computer program product is provided comprising a computer program, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the steps of the video composition method described above.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (18)

1. A video composition method for picture composition of L regions in the same display container, comprising:
acquiring configuration information of a preset picture layout, wherein the configuration information comprises the display size and the display position of each area in the display container;
loading M preset local video pictures into M areas according to the configuration information, and loading N shooting video pictures acquired in real time into N areas, wherein L is a positive integer larger than 1, M and N are both positive integers, and the sum of M and N is equal to L;
and rendering the pictures of the M areas and the pictures of the N areas to generate a first playing file.
2. The video synthesis method according to claim 1, further comprising, before the obtaining configuration information of the preset screen layout:
acquiring the resolution of the display container;
calculating an aspect ratio of the display container according to a resolution of the display container;
and when the aspect ratio is larger than a preset first threshold value, confirming that a preset first layout format is configuration information of the picture layout, wherein the first layout format is that the L areas are sequentially stacked in the width direction of the display container.
3. The video synthesis method according to claim 2, further comprising, after the calculating the aspect ratio of the display container according to the resolution of the display container:
and when the aspect ratio is smaller than or equal to the preset first threshold value, confirming that a preset second layout format is configuration information of the picture layout, wherein the second layout format is that the L areas are sequentially stacked in the height direction of the display container.
4. The video synthesis method according to claim 1, wherein when M and N are both 1, the local video picture is loaded in a first area, the captured video picture is loaded in a second area, and before the obtaining of the configuration information of the preset picture layout, the method further comprises:
acquiring the resolution ratios of the local video picture and the shot video picture;
calculating the width ratio or the height ratio of the local video picture and the shot video picture according to the resolutions of the local video picture and the shot video picture;
and when the width ratio or the height ratio is smaller than a preset second threshold value, determining a preset third layout format as configuration information of the picture layout, wherein the third layout format is that the first area is nested in the second area.
5. The video synthesis method according to claim 1, wherein the loading M preset local video pictures into M regions and loading N captured video pictures collected in real time into N regions according to the configuration information comprises:
acquiring the local video picture resolution;
scaling the local video picture according to the display sizes of the M areas in the display container to enable the local video picture to be completely displayed in the M areas;
and determining the resolution of the shot video pictures according to the display sizes of the N areas in the display container, and loading the N shot video pictures collected in real time into the N areas according to the resolution.
6. The video synthesis method according to claim 1, wherein generating a first playback file after rendering the M-region pictures and the N-region pictures comprises:
decoding the pictures of the M areas and the pictures of the N areas to respectively generate L pieces of texture information;
respectively calculating texture coordinates and vertex coordinates of the L pieces of texture information according to the preset configuration information of the picture layout;
and generating the first playing file according to the texture coordinates and the vertex coordinates.
7. The video synthesis method according to claim 1, wherein after the loading M preset local video pictures into M regions according to the configuration information and loading N captured video pictures collected in real time into N regions, the method further comprises:
acquiring a preset special effect picture of a shot video;
and loading the shot video special effect picture to the N areas.
8. The video synthesis method according to claim 1, wherein the rendering the pictures of the M regions and the pictures of the N regions to generate a first play file comprises:
and superposing and rendering the pictures of the M areas and the pictures of the N areas to obtain the rendered first playing file.
9. A video compositing apparatus for picture compositing of L regions in the same display container, comprising:
an acquisition unit configured to acquire configuration information of a preset screen layout, wherein the configuration information includes a display size and a display position of each area in the display container;
the processing unit is configured to load M preset local video pictures into M areas according to the configuration information, and load N shooting video pictures collected in real time into N areas, wherein L is a positive integer greater than 1, M and N are both positive integers, and the sum of M and N is equal to L;
and the execution unit is configured to render the pictures of the M areas and the pictures of the N areas and then generate a first playing file.
10. The video compositing device of claim 9, wherein said device further comprises:
a first acquisition unit configured to acquire a resolution of the display container;
a first processing unit configured to calculate an aspect ratio of the display container according to a resolution of the display container;
a first execution unit configured to confirm that a preset first layout format is configuration information of the screen layout when the aspect ratio is greater than a preset first threshold, wherein the first layout format is such that the L regions are sequentially stacked in a width direction of the display container.
11. The video compositing device of claim 10, wherein said device further comprises:
a second execution unit configured to confirm a preset second layout format as configuration information of the screen layout when the aspect ratio is less than or equal to the preset first threshold, wherein the second layout format is such that the L regions are sequentially stacked in a height direction of the display container.
12. The video compositing device of claim 9, wherein when M and N are both 1, said device further comprises:
a second acquisition unit configured to acquire resolutions of the local video picture and the captured video picture;
a second processing unit configured to calculate a width ratio or a height ratio of the local video picture to the captured video picture according to resolutions of the local video picture and the captured video picture;
a third execution unit, configured to determine a preset third layout format as configuration information of the screen layout when the width ratio or the height ratio is smaller than a preset second threshold, where the third layout format is to make the first area be nested in the second area.
13. The video compositing device of claim 9 wherein said processing unit further comprises:
a third obtaining subunit configured to obtain the local video picture resolution;
a third processing subunit, configured to perform equal scaling on the local video picture according to the display sizes of the M regions in the display container, so that the local video picture is completely displayed in the M regions;
and the fourth execution subunit is configured to determine the resolution of the shot video pictures according to the display sizes of the N areas in the display container, and load the N shot video pictures acquired in real time into the N areas according to the resolution.
14. The video compositing device of claim 9 wherein said execution unit further comprises:
a first decoding subunit configured to decode the pictures of the M regions and the pictures of the N regions to respectively generate L pieces of texture information;
a fourth processing subunit, configured to calculate texture coordinates and vertex coordinates of the L pieces of texture information, respectively, according to the configuration information of the preset screen layout;
a fifth execution subunit configured to generate the first playback file according to the texture coordinates and the vertex coordinates.
15. The video compositing device of claim 9, wherein said device further comprises:
a fourth acquisition unit configured to acquire a preset captured video special effect picture;
a fifth processing unit configured to load the captured video special effect picture to the N areas.
16. The video compositing device of claim 9, wherein the execution unit further comprises:
and the sixth execution subunit is configured to perform superposition rendering on the pictures of the M regions and the pictures of the N regions to obtain the rendered first play file.
17. A video compositing device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the video compositing method according to any of claims 1 to 8.
18. A non-transitory computer readable storage medium having instructions which, when executed by a processor of a mobile terminal, enable the mobile terminal to perform a video compositing method, the method comprising the steps of the video compositing method according to any of claims 1 to 8.
CN202010182771.2A 2018-08-17 2018-08-17 Video synthesis method, device, equipment and storage medium Pending CN111356000A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010182771.2A CN111356000A (en) 2018-08-17 2018-08-17 Video synthesis method, device, equipment and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810943403.8A CN109068166B (en) 2018-08-17 2018-08-17 Video synthesis method, device, equipment and storage medium
CN202010182771.2A CN111356000A (en) 2018-08-17 2018-08-17 Video synthesis method, device, equipment and storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201810943403.8A Division CN109068166B (en) 2018-08-17 2018-08-17 Video synthesis method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111356000A true CN111356000A (en) 2020-06-30

Family

ID=64687399

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202010182771.2A Pending CN111356000A (en) 2018-08-17 2018-08-17 Video synthesis method, device, equipment and storage medium
CN201810943403.8A Active CN109068166B (en) 2018-08-17 2018-08-17 Video synthesis method, device, equipment and storage medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201810943403.8A Active CN109068166B (en) 2018-08-17 2018-08-17 Video synthesis method, device, equipment and storage medium

Country Status (1)

Country Link
CN (2) CN111356000A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113727047A (en) * 2021-08-18 2021-11-30 深圳传音控股股份有限公司 Video processing method, mobile terminal and readable storage medium
CN113923351A (en) * 2021-09-09 2022-01-11 荣耀终端有限公司 Method, apparatus, storage medium, and program product for exiting multi-channel video shooting
CN114286177A (en) * 2021-12-28 2022-04-05 北京快来文化传播集团有限公司 Video splicing method and device and electronic equipment
CN115515005A (en) * 2021-06-07 2022-12-23 京东方科技集团股份有限公司 Method and device for acquiring cover of program switching and display equipment
WO2023036257A1 (en) * 2021-09-13 2023-03-16 北京字跳网络技术有限公司 Image processing method and apparatus
WO2024088141A1 (en) * 2022-10-28 2024-05-02 北京字跳网络技术有限公司 Special-effect processing method and apparatus, electronic device, and storage medium

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110121094A (en) * 2019-06-20 2019-08-13 广州酷狗计算机科技有限公司 Video is in step with display methods, device, equipment and the storage medium of template
CN110401820A (en) * 2019-08-15 2019-11-01 北京迈格威科技有限公司 Multipath video processing method, device, medium and electronic equipment
CN110572411A (en) * 2019-09-18 2019-12-13 北京云中融信网络科技有限公司 Method and device for testing video transmission quality
CN110781440A (en) * 2019-10-31 2020-02-11 北京东软望海科技有限公司 Container height adjusting method and device, computer equipment and storage medium
CN110996150A (en) * 2019-11-18 2020-04-10 咪咕动漫有限公司 Video fusion method, electronic device and storage medium
CN113497963B (en) * 2020-03-18 2023-04-18 阿里巴巴集团控股有限公司 Video processing method, device and equipment
CN111541868A (en) * 2020-03-31 2020-08-14 北京辰森世纪科技股份有限公司 Cooking state monitoring method, device and system
CN111901572B (en) * 2020-08-14 2022-03-18 广州盈可视电子科技有限公司 Multi-channel video stream synthesis method, device, equipment and storage medium
CN112004032B (en) 2020-09-04 2022-02-18 北京字节跳动网络技术有限公司 Video processing method, terminal device and storage medium
CN112235626B (en) * 2020-10-15 2023-06-13 Oppo广东移动通信有限公司 Video rendering method and device, electronic equipment and storage medium
CN114286115B (en) * 2021-11-24 2024-04-16 杭州星犀科技有限公司 Control method and system for picture display of multi-channel video
CN114429611B (en) * 2022-04-06 2022-07-08 北京达佳互联信息技术有限公司 Video synthesis method and device, electronic equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080316297A1 (en) * 2007-06-22 2008-12-25 King Keith C Video Conferencing Device which Performs Multi-way Conferencing
CN101860715A (en) * 2010-05-14 2010-10-13 中兴通讯股份有限公司 Multi-picture synthesis method and system and media processing device
CN105898133A (en) * 2015-08-19 2016-08-24 乐视网信息技术(北京)股份有限公司 Video shooting method and device
CN106165430A (en) * 2016-06-29 2016-11-23 北京小米移动软件有限公司 Net cast method and device
CN106604047A (en) * 2016-12-13 2017-04-26 天脉聚源(北京)传媒科技有限公司 Multi-video-stream video direct broadcasting method and device
JP2018037859A (en) * 2016-08-31 2018-03-08 キヤノン株式会社 Image processing system and control method
CN108156520A (en) * 2017-12-29 2018-06-12 珠海市君天电子科技有限公司 Video broadcasting method, device, electronic equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8421840B2 (en) * 2008-06-09 2013-04-16 Vidyo, Inc. System and method for improved view layout management in scalable video and audio communication systems
CN101692693B (en) * 2009-09-29 2011-09-28 北京中科大洋科技发展股份有限公司 Multifunctional integrated studio system and a method
CN103530096B (en) * 2012-07-03 2018-11-16 索尼公司 Long-range control method, remote control equipment and display equipment
CN105704424A (en) * 2014-11-27 2016-06-22 中兴通讯股份有限公司 Multi-image processing method, multi-point control unit, and video system
CN107257506A (en) * 2017-06-29 2017-10-17 徐文波 Many picture special efficacy loading methods and device

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115515005A (en) * 2021-06-07 2022-12-23 京东方科技集团股份有限公司 Method and device for acquiring cover of program switching and display equipment
CN113727047A (en) * 2021-08-18 2021-11-30 深圳传音控股股份有限公司 Video processing method, mobile terminal and readable storage medium
CN113923351A (en) * 2021-09-09 2022-01-11 荣耀终端有限公司 Method, apparatus, storage medium, and program product for exiting multi-channel video shooting
CN113923351B (en) * 2021-09-09 2022-09-27 荣耀终端有限公司 Method, device and storage medium for exiting multi-channel video shooting
WO2023036257A1 (en) * 2021-09-13 2023-03-16 北京字跳网络技术有限公司 Image processing method and apparatus
CN114286177A (en) * 2021-12-28 2022-04-05 北京快来文化传播集团有限公司 Video splicing method and device and electronic equipment
WO2024088141A1 (en) * 2022-10-28 2024-05-02 北京字跳网络技术有限公司 Special-effect processing method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN109068166B (en) 2020-02-14
CN109068166A (en) 2018-12-21

Similar Documents

Publication Publication Date Title
CN109068166B (en) Video synthesis method, device, equipment and storage medium
CN108965982B (en) Video recording method and device, electronic equipment and readable storage medium
CN110321048B (en) Three-dimensional panoramic scene information processing and interacting method and device
US11875023B2 (en) Method and apparatus for operating user interface, electronic device, and storage medium
KR101791778B1 (en) Method of Service for Providing Advertisement Contents to Game Play Video
US20240078703A1 (en) Personalized scene image processing method, apparatus and storage medium
CN109862380B (en) Video data processing method, device and server, electronic equipment and storage medium
CN112738544B (en) Live broadcast room interaction method and device, electronic equipment and storage medium
CN108737891B (en) Video material processing method and device
CN114125320B (en) Method and device for generating special effects of image
CN113115097B (en) Video playing method, device, electronic equipment and storage medium
CN111988672A (en) Video processing method and device, electronic equipment and storage medium
CN112614228B (en) Method, device, electronic equipment and storage medium for simplifying three-dimensional grid
CN111754607A (en) Picture processing method and device, electronic equipment and computer readable storage medium
CN108986117B (en) Video image segmentation method and device
CN113452929B (en) Video rendering method and device, electronic equipment and storage medium
CN107566878B (en) Method and device for displaying pictures in live broadcast
CN107767838B (en) Color gamut mapping method and device
CN114140568A (en) Image processing method, image processing device, electronic equipment and storage medium
EP3799415A2 (en) Method and device for processing videos, and medium
CN113286073A (en) Imaging method, imaging device, and storage medium
CN113989424A (en) Three-dimensional virtual image generation method and device and electronic equipment
CN110662103B (en) Multimedia object reconstruction method and device, electronic equipment and readable storage medium
CN110312117B (en) Data refreshing method and device
CN109413232B (en) Screen display method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200630