CN109413352B - Video data processing method, device, equipment and storage medium - Google Patents

Video data processing method, device, equipment and storage medium

Info

Publication number
CN109413352B
CN109413352B (Application CN201811325546.9A)
Authority
CN
China
Prior art keywords
data
video data
filling
video
picture
Prior art date
Legal status (assumed; not a legal conclusion)
Active
Application number
CN201811325546.9A
Other languages
Chinese (zh)
Other versions
CN109413352A (en)
Inventor
王延之
崔昊
Current Assignee (the listed assignee may be inaccurate)
Beijing Microlive Vision Technology Co Ltd
Original Assignee
Beijing Microlive Vision Technology Co Ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Beijing Microlive Vision Technology Co Ltd
Priority to CN201811325546.9A
Publication of CN109413352A
Application granted
Publication of CN109413352B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/47 End-user applications
    • H04N21/485 End-user interface for client configuration
    • H04N21/4858 End-user interface for client configuration for modifying screen layout parameters, e.g. fonts, size of the windows

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The disclosure relates to a method, an apparatus, a device, and a storage medium for processing video data. The method comprises: acquiring recorded video data, played video data, and filling data, the filling data being used for filling a non-video area in the picture corresponding to the played video data; and displaying the recorded video data, the played video data, and the filling data in a display area of the terminal according to a picture layout rule. By adding the filling data to the picture of the played video, the picture proportions of the recorded video and the played video become visually more coordinated, and the quality of the video picture is improved.

Description

Video data processing method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of multimedia technologies, and in particular, to a method, an apparatus, a device, and a storage medium for processing video data.
Background
With the development of network and multimedia technology, playing and recording videos on client devices has become one of the important applications of multimedia technology. As content-rich multimedia network resources spread rapidly, users who record and play videos on client devices often need to record one video and play a different video at the same time.
However, in the related art, when the client records and plays video simultaneously and combines the recorded video and the played video into a composite video, the picture proportions of the two differ, so the composite picture looks uncoordinated and the quality of the video picture is reduced.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a method, an apparatus, a device, and a storage medium for processing video data that can effectively improve the quality of a composite video picture.
A method of processing video data, the method comprising:
acquiring recorded video data, played video data and filling data; the filling data is used for filling a non-video area in a picture corresponding to the playing video data;
and displaying the recorded video data, the played video data and the filling data in a display area of the terminal according to a picture layout rule.
In one embodiment, the displaying the recorded video data, the played video data, and the padding data in a display area of a terminal according to a picture layout rule includes:
synthesizing the recorded video data, the played video data and the filling data by adopting a preset image processing method according to the picture layout rule to obtain synthesized video data;
controlling a player to play the composite video data in the display area.
In one embodiment, the synthesizing the recorded video data, the played video data, and the padding data according to the picture layout rule by using a preset image processing method to obtain synthesized video data includes:
respectively acquiring recorded video frame images, played video frame images and filled video frame images corresponding to the timestamps according to the recorded video data, the played video data and the filled data;
and synthesizing the recorded video frame images, the played video frame images and the filled video frame images corresponding to the timestamps according to the picture layout rule to generate the synthesized video data.
In one embodiment, the synthesizing, according to the picture layout rule, the recording video frame image, the playing video frame image, and the filling video frame image corresponding to each timestamp to generate the synthesized video data includes:
according to the picture layout rule, arranging the recorded video frame images, the played video frame images and the filled video frame images corresponding to the timestamps to obtain the video frame images corresponding to the timestamps;
and splicing the video frame images corresponding to the time stamps according to the sequence of the time stamps to obtain the synthesized video data.
In one embodiment, the displaying the recorded video data, the played video data, and the padding data in a display area of a terminal according to the picture layout rule includes:
and controlling a plurality of players to play the recorded video data, the played video data and the filling data at different positions in the display area respectively according to the picture layout rule.
In one embodiment, the picture layout rule includes a picture layout rule of the display area at the time of recording a video, or a layout rule generated according to a control instruction input by a user.
In one embodiment, the obtaining of padding data includes:
when a filling data acquisition instruction input by a user is received, displaying a filling data selection interface in the display area; the selection interface comprises a plurality of fill data types;
receiving a selection instruction input by the user on the filling data selection interface, wherein the selection instruction comprises an identifier of at least one filling data type selected by the user;
and acquiring the filling data according to the selection instruction.
An apparatus for processing video data, the apparatus comprising:
the acquisition module is used for acquiring recorded video data, played video data and filling data; the filling data is used for filling a non-video area in a picture corresponding to the playing video data;
and the display module is used for displaying the recorded video data, the played video data and the filling data in a display area of the terminal according to the picture layout rule.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring recorded video data, played video data and filling data; the filling data is used for filling a non-video area in a picture corresponding to the playing video data;
and displaying the recorded video data, the played video data and the filling data in a display area of the terminal according to a picture layout rule.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring recorded video data, played video data and filling data; the filling data is used for filling a non-video area in a picture corresponding to the playing video data;
and displaying the recorded video data, the played video data and the filling data in a display area of the terminal according to a picture layout rule.
The embodiment of the disclosure provides a video data processing method, apparatus, device, and storage medium. The method comprises: acquiring recorded video data, played video data, and filling data, the filling data being used for filling a non-video area in the picture corresponding to the played video data; and displaying the recorded video data, the played video data, and the filling data in a display area of the terminal according to a picture layout rule. In this method, the terminal adds the filling data to the non-video area in the picture corresponding to the played video data, so that the filling data and the played video data are combined into new video data forming a new video picture whose proportion is enlarged by the filling data. Because the filling data covers the non-video area of the played video data, the picture edge of the recorded video data and the picture edge of the played video data are displayed in a more harmonious proportion, thereby improving the quality of the composite video picture.
Drawings
Fig. 1 is a schematic application scenario diagram of a method for processing video data according to an embodiment;
fig. 2 is a flowchart illustrating a method for processing video data according to an embodiment;
FIG. 2a is a diagram illustrating an exemplary layout rule;
FIG. 2b is a diagram illustrating another exemplary layout rule according to an embodiment;
FIG. 3 is a flowchart illustrating an implementation manner of S102 in the embodiment of FIG. 2;
fig. 4 is a flowchart illustrating a method for processing video data according to an embodiment;
fig. 5 is a flowchart illustrating a method for processing video data according to an embodiment;
fig. 6 is a flowchart illustrating a method for processing video data according to an embodiment;
FIG. 6a is a diagram of a filling data setting display interface according to an embodiment;
FIG. 6b is a diagram of another filling data setting display interface according to an embodiment;
FIG. 7 is a diagram of an apparatus for processing video data according to an embodiment;
FIG. 8 is a diagram of an apparatus for processing video data according to one embodiment;
FIG. 9 is a diagram of an apparatus for processing video data according to an embodiment;
fig. 10 is a schematic internal structural diagram of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present disclosure more clearly understood, the present disclosure is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the disclosure and are not intended to limit the disclosure.
The video data processing method provided by the embodiment of the present disclosure is applicable to the application scenario shown in fig. 1, which depicts a display interface on which a terminal device plays multiple streams of video data. The displayed screen may include a recorded-video picture, a played-video picture, and a composite-video picture. After the user finishes recording a segment of video, the recorded video can be combined with another video, so that the picture of the recorded video and the picture of the played video are displayed simultaneously on the terminal device. The terminal device may be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer, a portable wearable device, or the like, which is not limited by the present disclosure.
It should be noted that, in the video data processing method provided in the embodiment of the present disclosure, the execution subject may be a terminal, such as a mobile phone, a tablet, or a computer.
In the related art of simultaneous recording and playing of video on a terminal, the display screen contains both a recorded-video picture and a played-video picture, and the played-video picture often has a black background border. As a result, the boundaries of the two pictures look visually uncoordinated, the final composite video lacks aesthetic appeal, and the quality of the playback picture is reduced.
The embodiment of the disclosure provides a method for processing video data that aims to solve the problem that, when a conventional terminal plays video, the boundary of the recorded-video picture and the boundary of the played-video picture are not in coordinated proportion, which makes the composite video unattractive and degrades the quality of the playback picture.
The technical solution of the present disclosure is explained in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 2 is a flowchart illustrating a method for processing video data according to an embodiment. The embodiment relates to a specific process of adding filling data on the basis of playing video data and then processing recorded video data, played video data and filling data so as to improve the picture quality of a composite video. As shown in fig. 2, the method comprises the steps of:
s101, acquiring recorded video data, playing video data and filling data; the filling data is used for filling the non-video area in the picture corresponding to the playing video data.
The recorded video data is obtained by a camera on the terminal capturing image data of the shot scene while the user records a video. The played video data may be video data pre-stored in the terminal, video data downloaded from the network after the terminal connects to it, or video data transmitted from another device through some other communication connection, which is not limited in this embodiment. The filling data can be one of, or a combination of, text, pictures, videos, and the like. The specific form of the filling data can be customized by the user or, optionally, determined by a preset filling-data type. The filling data may be pre-stored in the terminal cache, downloaded from a network, or acquired from another device. The non-video area may be an area of the played-video picture with no moving picture, for example the area occupied by a black or white background border, or, optionally, an area corresponding to a still picture. Accordingly, the filling data may be placed over the black or white background border of the played-video picture or, optionally, over the area corresponding to a still picture.
In this embodiment, when a user records a video using a terminal, the terminal may first obtain the played video data and obtain the recorded video data through the camera's image acquisition device. The user may also input an instruction for setting the filling data, before or after recording, so that the terminal obtains the filling data according to the instruction; alternatively, the terminal selects filling data matched with the non-video area in the picture corresponding to the played video data according to a preset rule. Finally, the terminal synthesizes the played video data, the recorded video data, and the filling data, and plays the result on its screen.
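The "non-video area" described above (for example, a black letterbox border) can be located programmatically. The sketch below, in Python with NumPy, is an illustration only and not part of the patent; the function name and the darkness threshold are assumptions:

```python
import numpy as np

def find_non_video_bands(frame: np.ndarray, threshold: int = 16):
    """Return (top, bottom) heights of near-black horizontal bands in a
    grayscale frame (H x W uint8) -- a simple proxy for the non-video area."""
    row_max = frame.max(axis=1)          # brightest pixel in each row
    dark = row_max < threshold           # rows that are essentially black
    top = 0
    while top < len(dark) and dark[top]:
        top += 1
    bottom = 0
    while bottom < len(dark) and dark[len(dark) - 1 - bottom]:
        bottom += 1
    return top, bottom

# A 100-row frame whose top 20 and bottom 10 rows are black letterbox bands.
frame = np.zeros((100, 160), dtype=np.uint8)
frame[20:90] = 128                       # the actual video content
print(find_non_video_bands(frame))       # -> (20, 10)
```

A real implementation would typically examine several frames and combine the results, since a single dark frame of content would otherwise be mistaken for a border.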
And S102, displaying the recorded video data, the played video data and the filling data in a display area of the terminal according to the picture layout rule.
The picture layout rule refers to the arrangement of the pictures of the recorded video data, the played video data, and the filling data when they are played in the display area of the terminal screen. The arrangement may specify the position, shape, size, resolution, and so on of each picture on the screen. The picture layout rule may be preset in the terminal or, optionally, set in a user-defined manner before the terminal synthesizes the recorded video data, the played video data, and the filling data into a composite video, which is not limited in this embodiment. The arrangement itself can vary: for example, as shown in fig. 2a, the recorded-video picture and the played-video picture are arranged horizontally while the filling-data picture and the played-video picture are arranged vertically; as shown in fig. 2b, the recorded-video picture and the played-video picture are arranged vertically while the filling-data picture and the played-video picture are arranged horizontally. This embodiment does not limit the arrangement positions.
In this embodiment, after the terminal acquires the recorded video, the played video, and the filling data while the user records a video, it synthesizes them into composite video data and plays the composite video on its screen. The pictures of the recorded video data, the played video data, and the filling data are arranged according to the picture layout rule and displayed in the display area of the terminal. For example, as shown in fig. 2a, the layout rule may divide the display area into left and right halves of equal size, with the right half further divided into upper, middle, and lower regions: the left half displays the recorded-video picture, the middle region of the right half displays the played-video picture, and the upper and lower regions of the right half display the filling-data pictures.
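The example layout just described can be expressed as concrete region rectangles. The following Python sketch is illustrative only; the function and region names are hypothetical and not taken from the patent:

```python
def layout_fig2a(width: int, height: int):
    """Compute region rectangles (x, y, w, h) for the example layout:
    the left half holds the recorded video; the right half is split into
    three vertical bands, the middle band for the played video and the
    outer bands for filling data."""
    half = width // 2
    band = height // 3
    return {
        "recorded":    (0, 0, half, height),
        "fill_top":    (half, 0, width - half, band),
        "played":      (half, band, width - half, band),
        "fill_bottom": (half, 2 * band, width - half, height - 2 * band),
    }

print(layout_fig2a(1280, 720))
```

For a 1280x720 display this yields a 640-pixel-wide recorded-video column and three 240-pixel-tall bands on the right, matching the proportions described for fig. 2a.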
In the above embodiment, the terminal acquires recorded video data, played video data, and filling data, the filling data being used for filling a non-video area in the picture corresponding to the played video data, and displays them in a display area of the terminal according to a picture layout rule. Because the terminal adds the filling data to the non-video area of the played-video picture, the filling data and the played video data are combined into new video data forming a new video picture whose proportion is enlarged by the filling data. The filling data covers the non-video area of the played video data, so the picture edges of the recorded video data and the played video data are displayed in a more harmonious proportion, thereby improving the quality of the composite video picture.
Fig. 3 is a schematic flowchart of an implementation of S102 in the embodiment of fig. 2. This embodiment mainly relates to the process in which the terminal synthesizes a plurality of pieces of video data and then plays the resulting composite video with a player. On the basis of the foregoing embodiment, as shown in fig. 3, S102 "displaying the recorded video data, the played video data, and the padding data in the display area of the terminal according to the picture layout rule" includes the following steps:
s201, synthesizing the recorded video data, the played video data and the filling data by adopting a preset image processing method according to the picture layout rule to obtain synthesized video data.
The preset image processing method may include: arranging the recorded-video picture, the played-video picture, and the filling-data picture according to the picture layout rule; synchronously displaying all video images of the recorded video data, the played video data, and the filling data; and splicing the recorded video frame images, the played video frame images, and the filling-data frame images. The composite video data is new video data formed by the terminal applying such image processing to the multiple video data.
In this embodiment, the user may predefine a picture layout rule on the terminal, or define one via an input instruction after the corresponding recorded video data, played video data, and filling data have been acquired. The terminal's image processing device then lays out the display positions of the pictures of the recorded video data, the played video data, and the filling data in the display area according to the user-defined picture layout rule, performs image synthesis on the arranged data, and finally generates the composite video data.
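As a minimal illustration of this synthesis step (a sketch, not the patent's actual implementation), the following Python/NumPy code pastes a recorded-video frame, a played-video frame, and two filling frames onto one composite canvas; all names and sizes here are assumptions:

```python
import numpy as np

def compose_frame(canvas_size, regions):
    """Paste each (x, y) -> image pair onto a black canvas.
    `regions` maps a top-left corner to an H x W x 3 uint8 image; this is a
    stand-in for the 'preset image processing method' in the text."""
    h, w = canvas_size
    canvas = np.zeros((h, w, 3), dtype=np.uint8)
    for (x, y), img in regions.items():
        ih, iw = img.shape[:2]
        canvas[y:y + ih, x:x + iw] = img
    return canvas

recorded = np.full((720, 640, 3), 200, dtype=np.uint8)   # recorded-video frame
played = np.full((240, 640, 3), 100, dtype=np.uint8)     # played-video frame
fill = np.full((240, 640, 3), 50, dtype=np.uint8)        # filling frame
frame = compose_frame((720, 1280), {(0, 0): recorded,
                                    (640, 240): played,
                                    (640, 0): fill,
                                    (640, 480): fill})
print(frame.shape)  # -> (720, 1280, 3)
```

Repeating this per-frame paste over a whole sequence, then encoding the result, yields the composite video data described in S201.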
Optionally, in one embodiment, the screen layout rule includes a screen layout rule of a display area when recording a video or a layout rule generated according to a control instruction input by a user.
In this embodiment, when a user records a video using a terminal, the display area of the terminal screen may include a recorded-video picture, a played-video picture, and a filling-data picture. Their positions in the display area may be set by the terminal according to a fixed picture layout rule before the user records the video, or according to a layout rule generated from a control instruction input by the user before recording. As another alternative, after recording, the terminal sets the positions according to a layout rule generated from a control instruction input by the user. Allowing the user to customize the picture layout rule strengthens the interactivity between the user and the terminal video, improves the user's experience of the video picture, and makes the video picture more watchable.
S202, controlling a player to play the synthesized video data in the display area.
Wherein, the terminal can control a player to play video data.
In this embodiment, after the terminal performs image processing on the recorded video data, the played video data, and the padding data to generate the composite video data, it controls a player to convert the composite video data into images for playback. The player displays the recorded-video picture, the played-video picture, and the filling-data picture at different positions of the display area according to the picture layout rule.
In the above embodiment, the terminal synthesizes the recorded video data, the played video data, and the padding data with a preset image processing method according to the picture layout rule to obtain composite video data, and controls a player to play the composite video data in the display area. The picture layout rule may be preset by the terminal or, optionally, set by the user before recording. The pictures of the recorded video data, the played video data, and the filling data can therefore be arranged flexibly according to the picture layout rule, which enhances the interactivity between the user and the video, and their layout positions can be adjusted by setting the rule so that the proportions of the pictures in the composite video are visually more harmonious, improving the picture quality of the composite video data.
In particular, one embodiment relates to a specific process for composing video data. As shown in fig. 4, in S201, "synthesizing the recorded video data, the played video data, and the padding data by using a preset image processing method according to the picture layout rule to obtain synthesized video data", the method may specifically include the following steps:
s301, according to the recorded video data, the played video data and the filling data, respectively acquiring recorded video frame images, played video frame images and filled video frame images corresponding to the timestamps.
A timestamp identifies a moment in time, e.g., 17:17:17. The recorded video frame image corresponding to a timestamp is the frame of the recorded video captured by the image acquisition device at that moment; the played video frame image corresponding to a timestamp is the frame of the played video at that moment; and the filling video frame image corresponding to a timestamp is the filling-data frame at that moment.
In this embodiment, while processing the acquired recorded video data, played video data, and filling data, the terminal may first determine a plurality of timestamps according to the video playing duration of the composite video data, and then extract the recorded video frame image, the played video frame image, and the filling video frame image corresponding to each timestamp. The interval covered by the timestamps equals the video playing duration of the composite video data.
S302, synthesizing the recorded video frame image, the played video frame image and the filler video frame image corresponding to each timestamp according to the picture layout rule to generate synthesized video data.
In this embodiment, when the terminal performs image processing on the recorded video data, the played video data, and the padding data by using the image processing apparatus, it first sets a plurality of timestamps according to the video playback duration of the synthesized video data, then obtains the recorded video frame image, the played video frame image, and the filler video frame image corresponding to each timestamp, and finally synthesizes the three frame images corresponding to each timestamp according to the above picture layout rule to generate the synthesized video data. For example, when the video playback duration of the synthesized video data is 5 s, 10 timestamps at 0.5 s intervals may be set; the frame images corresponding to these 10 timestamps are then located in the recorded video data, the played video data, and the filler data, and finally the recorded, played, and filler frame images corresponding to the same timestamp are synthesized, thereby completing the synthesis of all the frame images.
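The per-timestamp synthesis can be illustrated with a minimal sketch in which a frame is a plain grid of pixel values and the picture layout rule is a mapping from each source to a position on a canvas. All names and the specific geometry are illustrative assumptions, standing in for the "preset image processing method" the text leaves unspecified:

```python
# A frame is a H×W grid of pixels (integers here, for brevity).
def blank_canvas(h, w, fill=0):
    return [[fill] * w for _ in range(h)]

def paste(canvas, frame, top, left):
    """Copy `frame` into `canvas` at (top, left) — a stand-in for the
    actual image compositing operation."""
    for r, row in enumerate(frame):
        canvas[top + r][left:left + len(row)] = row
    return canvas

# Layout rule: recorded picture at top-left, played picture below it,
# filler band on the right (positions are illustrative only).
layout = {"recorded": (0, 0), "played": (2, 0), "filler": (0, 4)}

def compose_frame(frames_at_t, layout, h=4, w=6):
    """Synthesize the three frames that share one timestamp into a
    single composite frame, per the layout rule (step S302)."""
    canvas = blank_canvas(h, w)
    for name, frame in frames_at_t.items():
        top, left = layout[name]
        paste(canvas, frame, top, left)
    return canvas

frames = {
    "recorded": [[1, 1], [1, 1]],
    "played":   [[2, 2], [2, 2]],
    "filler":   [[3, 3], [3, 3]],
}
composite = compose_frame(frames, layout)
```

In a real terminal the grids would be decoded image buffers and `paste` would be a GPU or image-library blit, but the per-timestamp structure is the same.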
Optionally, one embodiment further describes a specific process of synthesizing video data. As shown in fig. 5, the synthesizing process is performed on the recorded video frame image, the played video frame image, and the filled video frame image corresponding to each timestamp according to the picture layout rule to generate synthesized video data, and the specific method may include:
S401, arranging the recorded video frame image, the played video frame image and the filler video frame image corresponding to each timestamp according to the picture layout rule to obtain the video frame image corresponding to each timestamp.
The video frame image corresponding to a timestamp is a single synthesized frame obtained after the recorded video frame image, the played video frame image, and the filler video frame image have been arranged according to the picture layout rule.
In this embodiment, after the terminal uses the image processing device to acquire the recorded video frame image, the played video frame image, and the filler video frame image corresponding to each of the set timestamps, it arranges these frame images according to the above picture layout rule, thereby generating the arranged video frame image corresponding to each timestamp.
S402, according to the sequence of the timestamps, splicing the video frame images corresponding to the timestamps to obtain the synthesized video data.
Because timestamps denote moments in time, they have a chronological order, and the span covered by a plurality of timestamps corresponds to a period of time. For example, the video playback duration of the synthesized video data in this embodiment may contain a plurality of timestamps, and the video frames corresponding to these timestamps together constitute the synthesized video data.
In this embodiment, when the terminal needs to synthesize the recorded video frame image, the played video frame image, and the filler video frame image corresponding to each timestamp, it first arranges them according to the above picture layout rule and generates the video frame image corresponding to each timestamp, so that the pictures of the recorded video data, the played video data, and the filler data are laid out in the display area of the terminal according to that rule. Then, to splice the video frame images, the terminal sorts the timestamps chronologically, orders the video frame images accordingly, and finally splices the ordered frames into a video data stream, obtaining the synthesized video data.
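The splicing step S402 can be sketched as follows; representing each composite frame as a string and the stream as a list is an illustrative simplification (a real implementation would hand the ordered frames to a video encoder):

```python
def splice(frames_by_timestamp: dict[float, str]) -> list[str]:
    """Order the per-timestamp composite frames chronologically and
    concatenate them into one stream (step S402)."""
    return [frames_by_timestamp[t] for t in sorted(frames_by_timestamp)]

# Frames may have been produced out of order; splicing restores
# timestamp order before the stream is emitted.
stream = splice({1.0: "f2", 0.5: "f1", 1.5: "f3", 0.0: "f0"})
```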
Optionally, in one embodiment, step S102 in the embodiment of fig. 2, "displaying the recorded video data, the played video data, and the padding data in the display area of the terminal according to the screen layout rule", may further include: controlling a plurality of players to respectively play the recorded video data, the played video data, and the padding data at different positions in the display area according to the picture layout rule.
In this embodiment, when the terminal needs to simultaneously display the picture of the recorded video data, the picture of the played video data, and the picture of the padding data on the display screen, it can control three players to play these three pictures respectively. Optionally, the terminal may also control two players to play the pictures of the recorded video, the played video, and the padding data. When the video corresponding to each player is played, the terminal can further control the players, according to the picture layout rule, to set the position of each picture in the display area and the size of the display area it occupies. Because this method does not require synthesizing the acquired recorded video data, played video data, and padding data before display, using it to show multiple video pictures reduces data processing, saving resources and shortening the terminal's processing time for the multiple video data.
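The multi-player arrangement can be sketched as a layout rule that assigns each player a position and size within the display area, with no frame compositing involved; the rule name and the geometry below are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class PlayerSlot:
    source: str   # which data stream this player renders
    x: float      # position and size, as fractions of the display area
    y: float
    width: float
    height: float

def apply_layout_rule(rule: str) -> list[PlayerSlot]:
    """Map a named picture layout rule to per-player positions and
    sizes; each player then renders its source independently."""
    if rule == "side_by_side_with_banner":
        return [
            PlayerSlot("recorded", 0.0, 0.0, 0.5, 0.8),
            PlayerSlot("played",   0.5, 0.0, 0.5, 0.8),
            PlayerSlot("filler",   0.0, 0.8, 1.0, 0.2),  # filler band
        ]
    raise ValueError(f"unknown layout rule: {rule}")

slots = apply_layout_rule("side_by_side_with_banner")
```

Because each slot is just a rectangle handed to an independent player, no pre-composited stream is needed, which matches the resource-saving point made above.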
In the embodiment of fig. 2, the terminal needs to acquire the recorded video data, the played video data, and the padding data before playing the composite video. When the user sets the padding data on the terminal, the subsequent composite video data includes that padding data; it should be noted that all the above embodiments are based on a scenario in which the user sets the padding data on the mobile phone. The following embodiment illustrates the process by which a terminal obtains padding data. Specifically, in one embodiment, as shown in fig. 6, acquiring the padding data includes:
S501, when a filling data acquisition instruction input by a user is received, displaying a filling data selection interface in the display area; the selection interface includes a plurality of fill data types.
The filling data acquisition instruction is instruction information input by the user on the terminal, specifically used to let the user set the type of filling data. The instruction may be input by clicking a control on the touch screen or, optionally, by voice, which is not limited in this embodiment. The display area on the terminal screen may include an area for displaying the video picture and an area for the user to input instruction information. The fill data selection interface may include tabs, buttons, and the like corresponding to the various fill data types to facilitate the user's subsequent selection. The filling data may be of various types, such as video data, text data, background color data, and image data.
In this embodiment, before using the terminal to record a video, the user may input the filling data acquisition instruction in the instruction input area of the display area; when the terminal receives this instruction, it displays a filling data selection interface in the display area so that the user can further select the type of filling data. For example, as shown in fig. 6a, the display area on the terminal is provided with a filling data setting control; when the user needs to set filling data, the user clicks this control, and the display area then displays the filling data selection interface, from which the user can select the type of filling data.
S502, receiving a selection instruction input by a user on a filling data selection interface, wherein the selection instruction comprises an identifier of at least one filling data type selected by the user.
The selection instruction is instruction information input on the filling data selection interface when the user selects a filling data type; it may include one filling data type selected by the user or, optionally, more than one. For example, the user may set the type of the padding data to text data, or to both text data and image data. The identifier of a filling data type distinguishes the different types, so that the terminal can identify the selected type from the user's selection instruction; optionally, the identifier may also carry address information of the filling data, so that the terminal can obtain the filling data corresponding to that address.
In this embodiment, when the terminal receives the filling data acquisition instruction input by the user, its display area shows a filling data selection interface on which the user can input a selection instruction containing the identifier of the chosen filling data type; the terminal then reads the identifier from the selection instruction and determines the type of filling data. For example, fig. 6b is a schematic diagram of a fill data selection interface that includes selection sub-controls for video data, text data, background color data, and image data. When the user clicks the filling data setting control in fig. 6a, the interface of fig. 6b is displayed; the user can click the text data sub-control to input a selection instruction specifying text data, and the terminal determines from this instruction that the filling data type is text data. The video data control lets the user select a video downloaded from the network or a recorded video as filling data; the text data control lets the user input a custom sentence or text; the background color data control provides a user-selected color pattern; and the image data control lets the user select an image downloaded from the network or a captured photograph as filling data.
S503, acquiring the filling data according to the selection instruction.
In this embodiment, when the user inputs a selection instruction on the filling data selection interface, the terminal may determine the type of the filling data according to the identifier of the type of the filling data in the selection instruction, and may also obtain the corresponding filling data from the terminal according to the identifier of the type of the filling data.
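The acquisition in S503 can be sketched as a dispatch on the type identifier carried by the selection instruction; the handler names and the shape of the selection instruction are assumptions, since the text only states that the identifier lets the terminal locate the corresponding data:

```python
def fetch_fill_data(selection: dict) -> dict:
    """Resolve a selection instruction (from S502) into filler content
    by dispatching on the selected fill data type."""
    handlers = {
        "video":      lambda ident: {"kind": "video", "path": ident},
        "text":       lambda ident: {"kind": "text", "value": ident},
        "background": lambda ident: {"kind": "color", "rgb": ident},
        "image":      lambda ident: {"kind": "image", "path": ident},
    }
    kind = selection["type"]
    if kind not in handlers:
        raise ValueError(f"unsupported fill data type: {kind}")
    return handlers[kind](selection["identifier"])

# A user who chose the text sub-control yields a text-type selection.
fill = fetch_fill_data({"type": "text", "identifier": "Hello!"})
```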
In this embodiment, upon receiving a filling data acquisition instruction input by the user, the terminal displays a filling data selection interface in the display area, receives the selection instruction the user inputs on that interface, and acquires the filling data according to the selection instruction. This embodiment relates to the process of the user setting the filling data on the terminal. In this process, the user can set the type of filling data by inputting instruction information according to the user's own needs, so that filling data is added on top of existing video recording and playback; the diversity of filling data settings allows the filling data to cover the non-video area during playback while also increasing the aesthetics and functionality of the displayed video picture.
It should be understood that although the various steps in the flowcharts of figs. 2-6 are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in figs. 2-6 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different times, and their order of performance is not necessarily sequential.
Fig. 7 is a schematic diagram of an apparatus for processing video data according to an embodiment, as shown in fig. 7, the apparatus includes: an obtaining module 11 and a display module 12, wherein:
the acquisition module 11 is used for acquiring recorded video data, played video data and filling data; the filling data is used for filling a non-video area in a picture corresponding to the playing video data;
and a display module 12, configured to display the recorded video data, the played video data, and the padding data in a display area of the terminal according to a picture layout rule.
In one embodiment, as shown in fig. 8, on the basis of the processing device of video data shown in fig. 7, the display module 12 includes:
a synthesizing unit 120, configured to synthesize the recorded video data, the played video data, and the padding data by using a preset image processing method according to the picture layout rule to obtain synthesized video data;
a playing unit 121, configured to control a player to play the composite video data in the display area.
In one embodiment, on the basis of the video data processing apparatus shown in fig. 8, the synthesizing unit 120 is specifically configured to obtain the recorded video frame image, the played video frame image, and the padded video frame image corresponding to each timestamp according to the recorded video data, the played video data, and the padded data, respectively; and synthesizing the recorded video frame images, the played video frame images and the filled video frame images corresponding to the timestamps according to the picture layout rule to generate the synthesized video data.
In one embodiment, on the basis of the video data processing apparatus shown in fig. 8, the synthesizing unit 120 performs synthesizing processing on the recorded video frame image, the played video frame image, and the filler video frame image corresponding to each timestamp according to the screen layout rule to generate the synthesized video data, which may include: the synthesizing unit 120 is specifically configured to arrange the recorded video frame images, the played video frame images, and the filled video frame images corresponding to the timestamps according to the picture layout rule to obtain video frame images corresponding to the timestamps; and splicing the video frame images corresponding to the timestamps according to the sequence of the timestamps to obtain the synthesized video data.
In one embodiment, on the basis of the video data processing apparatus shown in fig. 7, the display module 12 is specifically configured to control a plurality of players to play the recorded video data, the played video data, and the padding data at different positions in the display area according to the picture layout rule.
In one embodiment, the screen layout rule includes a screen layout rule of the display area when recording a video or a layout rule generated according to a control instruction input by a user.
In one embodiment, as shown in fig. 9, the obtaining module 11 may include:
a first receiving unit 110, configured to display a padding data selection interface in the display area when a padding data acquisition instruction input by a user is received; the selection interface comprises a plurality of fill data types;
a second receiving unit 111, configured to receive a selection instruction input by the user on the filler data selection interface, where the selection instruction includes an identifier of at least one filler data type selected by the user;
an obtaining unit 112, configured to obtain the padding data according to the selection instruction.
For specific limitations of the video data processing apparatus, reference may be made to the above limitations of the video data processing method, which are not described herein again. The respective modules in the video data processing apparatus may be wholly or partially implemented by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in a computer, and can also be stored in a memory in the computer in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 10. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing data such as video data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of processing video data.
Those skilled in the art will appreciate that the architecture shown in fig. 10 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring recorded video data, played video data and filling data; the filling data is used for filling a non-video area in a picture corresponding to the playing video data;
and displaying the recorded video data, the played video data and the filling data in a display area of the terminal according to a picture layout rule.
In one embodiment, the processor, when executing the computer program, further performs the steps of: synthesizing the recorded video data, the played video data and the filling data by adopting a preset image processing method according to the picture layout rule to obtain synthesized video data; controlling a player to play the composite video data in the display area.
In one embodiment, the processor, when executing the computer program, further performs the steps of: respectively acquiring recorded video frame images, played video frame images and filled video frame images corresponding to the timestamps according to the recorded video data, the played video data and the filled data; and synthesizing the recorded video frame images, the played video frame images and the filled video frame images corresponding to the timestamps according to the picture layout rule to generate the synthesized video data.
In one embodiment, the processor, when executing the computer program, further performs the steps of: according to the picture layout rule, arranging the recorded video frame images, the played video frame images and the filled video frame images corresponding to the timestamps to obtain the video frame images corresponding to the timestamps; and splicing the video frame images corresponding to the time stamps according to the sequence of the time stamps to obtain the synthesized video data.
In one embodiment, the processor, when executing the computer program, further implements: and controlling a plurality of players to play the recorded video data, the played video data and the filling data at different positions in the display area respectively according to the picture layout rule.
In one embodiment, the processor, when executing the computer program, further implements: the picture layout rule comprises a picture layout rule of the display area when the video is recorded or a layout rule generated according to a control instruction input by a user.
In one embodiment, the processor, when executing the computer program, further performs the steps of: when a filling data acquisition instruction input by a user is received, displaying a filling data selection interface in the display area; the selection interface comprises a plurality of fill data types; receiving a selection instruction input by the user on the filling data selection interface, wherein the selection instruction comprises an identifier of at least one filling data type selected by the user; and acquiring the filling data according to the selection instruction.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, the computer program, when executed by a processor, further implementing the steps of:
acquiring recorded video data, played video data and filling data; the filling data is used for filling a non-video area in a picture corresponding to the playing video data;
and displaying the recorded video data, the played video data and the filling data in a display area of the terminal according to a picture layout rule.
In one embodiment, the computer program when executed by the processor further performs the steps of: synthesizing the recorded video data, the played video data and the filling data by adopting a preset image processing method according to the picture layout rule to obtain synthesized video data; controlling a player to play the composite video data in the display area.
In one embodiment, the computer program when executed by the processor further performs the steps of: respectively acquiring recorded video frame images, played video frame images and filled video frame images corresponding to the timestamps according to the recorded video data, the played video data and the filled data; and synthesizing the recorded video frame images, the played video frame images and the filled video frame images corresponding to the timestamps according to the picture layout rule to generate the synthesized video data.
In one embodiment, the computer program when executed by the processor further performs the steps of: according to the picture layout rule, arranging the recorded video frame images, the played video frame images and the filled video frame images corresponding to the timestamps to obtain the video frame images corresponding to the timestamps; and splicing the video frame images corresponding to the time stamps according to the sequence of the time stamps to obtain the synthesized video data.
In one embodiment, the computer program when executed by the processor further implements: and controlling a plurality of players to play the recorded video data, the played video data and the filling data at different positions in the display area respectively according to the picture layout rule.
In one embodiment, the computer program when executed by the processor further implements: the picture layout rule comprises a picture layout rule of the display area when the video is recorded or a layout rule generated according to a control instruction input by a user.
In one embodiment, the computer program when executed by the processor further performs the steps of: when a filling data acquisition instruction input by a user is received, displaying a filling data selection interface in the display area; the selection interface comprises a plurality of fill data types; receiving a selection instruction input by the user on the filling data selection interface, wherein the selection instruction comprises an identifier of at least one filling data type selected by the user; and acquiring the filling data according to the selection instruction.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware; the program can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in embodiments provided by the present disclosure may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments express only several embodiments of the present disclosure, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that, for those skilled in the art, various changes and modifications can be made without departing from the concept of the present disclosure, and these changes and modifications all fall within the scope of the present disclosure. Therefore, the protection scope of the present disclosure should be subject to the appended claims.

Claims (6)

1. A method for processing video data, the method comprising:
acquiring recorded video data, played video data and filling data; the filling data is used for filling a non-video area in a picture corresponding to the playing video data;
displaying the recorded video data, the played video data and the filling data in a display area of a terminal according to a picture layout rule;
the picture layout rule is used for representing the arrangement mode of the pictures of the recorded video data, the pictures of the played video data and the pictures of the filling data in the display area of the terminal when the recorded video data, the played video data and the filling data are played in the display area of the terminal;
displaying the recorded video data, the played video data and the filling data in a display area of a terminal according to the picture layout rule, comprising:
setting the positions of the pictures of the recorded video data, the played video data and the filling data in the display area of the terminal and the size of the occupied display area according to the picture layout rule;
and controlling a plurality of players to respectively play the recorded video data, the played video data and the filling data at different positions in the display area according to the setting.
2. The method according to claim 1, wherein the screen layout rule includes a screen layout rule of the display area when recording a video or a layout rule generated according to a control instruction input by a user.
3. The method of claim 1, wherein the obtaining padding data comprises:
when a filling data acquisition instruction input by a user is received, displaying a filling data selection interface in the display area; the selection interface comprises a plurality of fill data types;
receiving a selection instruction input by the user on the filling data selection interface, wherein the selection instruction comprises an identifier of at least one filling data type selected by the user;
and acquiring the filling data according to the selection instruction.
4. An apparatus for processing video data, the apparatus comprising:
the acquisition module is used for acquiring recorded video data, played video data and filling data; the filling data is used for filling a non-video area in a picture corresponding to the playing video data;
the display module is used for displaying the recorded video data, the played video data and the filling data in a display area of the terminal according to a picture layout rule;
the picture layout rule is used for representing the arrangement mode of the pictures of the recorded video data, the pictures of the played video data and the pictures of the filling data in the display area of the terminal when the recorded video data, the played video data and the filling data are played in the display area of the terminal;
displaying the recorded video data, the played video data and the filling data in a display area of a terminal according to the picture layout rule, comprising:
setting the positions of the pictures of the recorded video data, the played video data and the filling data in the display area of the terminal and the size of the occupied display area according to the picture layout rule;
and controlling a plurality of players to respectively play the recorded video data, the played video data and the filling data at different positions in the display area according to the setting.
5. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor realizes the steps of the method of any one of claims 1 to 3 when executing the computer program.
6. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 3.
CN201811325546.9A 2018-11-08 2018-11-08 Video data processing method, device, equipment and storage medium Active CN109413352B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811325546.9A CN109413352B (en) 2018-11-08 2018-11-08 Video data processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109413352A CN109413352A (en) 2019-03-01
CN109413352B true CN109413352B (en) 2020-06-23

Family

ID=65472303

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811325546.9A Active CN109413352B (en) 2018-11-08 2018-11-08 Video data processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109413352B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110855921B (en) * 2019-11-12 2021-12-03 维沃移动通信有限公司 Video recording control method and electronic equipment
CN110996150A (en) * 2019-11-18 2020-04-10 咪咕动漫有限公司 Video fusion method, electronic device and storage medium
CN113301414B (en) * 2020-07-07 2023-06-02 阿里巴巴集团控股有限公司 Interface generation processing method and device, electronic equipment and computer storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1852403A (en) * 2006-04-11 2006-10-25 宇龙计算机通信科技(深圳)有限公司 Film converting edition apparatus and method
CN101534413A (en) * 2009-04-14 2009-09-16 深圳华为通信技术有限公司 System, method and apparatus for remote representation
CN101848346A (en) * 2009-03-27 2010-09-29 深圳市中彩联科技有限公司 Television and image display method thereof
CN102077587A (en) * 2008-06-30 2011-05-25 惠普开发有限公司 Compositing video streams
JP2016224180A (en) * 2015-05-28 2016-12-28 キヤノン株式会社 Zoom lens and image capturing device having the same
CN106604047A (en) * 2016-12-13 2017-04-26 天脉聚源(北京)传媒科技有限公司 Multi-video-stream video direct broadcasting method and device
CN106792092A (en) * 2016-12-19 2017-05-31 广州虎牙信息科技有限公司 Live video flow point mirror display control method and its corresponding device
CN108566519A (en) * 2018-04-28 2018-09-21 腾讯科技(深圳)有限公司 Video creating method, device, terminal and storage medium
CN108769561A (en) * 2018-06-22 2018-11-06 广州酷狗计算机科技有限公司 video recording method and device

Similar Documents

Publication Publication Date Title
US20230230306A1 (en) Animated emoticon generation method, computer-readable storage medium, and computer device
CN109348276B (en) Video picture adjusting method and device, computer equipment and storage medium
CN109525884B (en) Video sticker adding method, device, equipment and storage medium based on split screen
CN109413352B (en) Video data processing method, device, equipment and storage medium
CN112422831A (en) Video generation method and device, computer equipment and storage medium
CN107770626A (en) Processing method, image synthesizing method, device and the storage medium of video material
JP7038226B2 (en) Video processing methods, devices, terminals and media
CN111491174A (en) Virtual gift acquisition and display method, device, equipment and storage medium
CN109348155A (en) Video recording method, device, computer equipment and storage medium
CN113099298A (en) Method and device for changing virtual image and terminal equipment
WO2016150388A1 (en) Interface processing method, apparatus, and system
CN113747240B (en) Video processing method, apparatus and storage medium
CN109525896A (en) Comment on answering method, device, equipment and storage medium
JP2024506639A (en) Image display methods, devices, equipment and media
CN109525880A (en) Synthetic method, device, equipment and the storage medium of video data
CN111352560B (en) Screen splitting method and device, electronic equipment and computer readable storage medium
CN112558854B (en) Multi-picture split-screen mode customization method and device and computer equipment
KR20100055145A (en) Wireless communication terminal for providing function editing and compositing taken image and the method
CN112083852B (en) Recommendation bit layout method, device, equipment and medium for video application
CN114170472A (en) Image processing method, readable storage medium and computer terminal
CN109743635B (en) Comment reply method, device, equipment and storage medium
CN108846881A (en) A kind of generation method and device of facial expression image
KR101984616B1 (en) System for providing contents using images
CN114466145B (en) Video processing method, device, equipment and storage medium
CN111221444A (en) Split screen special effect processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant