CN109743635B - Comment reply method, device, equipment and storage medium


Info

Publication number: CN109743635B
Application number: CN201811324458.7A
Authority: CN (China)
Prior art keywords: video data, comment, reply, picture, video
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN109743635A
Inventor: 宋帛衡
Current Assignee: Miaozhen Tick (Beijing) Network Technology Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Beijing Microlive Vision Technology Co., Ltd.
Application filed by Beijing Microlive Vision Technology Co., Ltd.
Publication of application: CN109743635A; grant publication: CN109743635B

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The terminal can acquire a trigger instruction input by a user, where the trigger instruction is generated by the user operating a target text comment in a comment area. The terminal can then acquire first video data according to the trigger instruction, call second video data, and synthesize reply video data from the first video data, the second video data and the target text comment. By synthesizing the first video data, the second video data and the target text comment into reply video data, the terminal device uses the reply video data to make a secondary comment on the target text comment.

Description

Comment reply method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of information processing technologies, and in particular, to a comment reply method, apparatus, device, and storage medium.
Background
With the development of terminal devices, people frequently use them to publish video files, and other users often comment on the published videos.
Usually, a reply to a text comment on a published video only quotes the comment and responds in text form. For example, when other users leave a text comment on a published video, the video's publisher can only reply by quoting that text comment and responding with text of their own.
When comments on video content are replied to in this way, the readability of the reply content is poor and its form is monotonous.
Disclosure of Invention
Based on this, it is necessary to provide a comment reply method, apparatus, device, and storage medium that address the poor readability of comment reply content.
In a first aspect, a comment reply method includes:
acquiring a trigger instruction input by a user; the trigger instruction is generated by the user operating a target text comment in a comment area;
acquiring first video data according to the trigger instruction, and calling second video data; the second video data is video data for commenting on the target text comment;
synthesizing reply video data according to the first video data, the second video data and the target text comment; the reply video data comprises at least two display areas when played, and each area plays the video data corresponding to that area.
According to the comment reply method, the terminal can acquire a trigger instruction input by a user, the trigger instruction being generated by the user operating the target text comment in the comment area; the terminal can then acquire first video data according to the trigger instruction, call the currently played second video data, and synthesize reply video data according to the first video data, the second video data and the target text comment. The reply video data comprises at least two display areas when played, and each area plays the video data corresponding to that area. In this embodiment, the terminal device synthesizes the first video data, the second video data and the target text comment into the reply video data, and the reply video data is used to make a secondary comment on the target text comment.
In one embodiment, the first video data and the second video data are two views displayed in a split screen mode on a current display screen.
In one embodiment, the synthesizing reply video data according to the first video data, the second video data and the target text comment comprises:
synthesizing the first video data and the second video data to obtain third video data;
performing video conversion processing on the target text comment to obtain fourth video data;
synthesizing the third video data and the fourth video data to obtain the reply video data; and when playing, the third video data in the reply video data comprises two display areas, wherein one display area plays the first video data, and the other display area plays the second video data.
In one embodiment, synthesizing the first video data and the second video data to obtain third video data includes:
processing the first video data and the second video data by adopting a video processing algorithm, respectively acquiring a frame picture of the first video data and a frame picture of the second video data, and acquiring a playing time corresponding to each frame picture;
controlling the frame pictures of the first video data and the frame pictures of the second video data with the same playing time to be displayed in the same playing picture according to the playing time corresponding to each frame picture to obtain a plurality of synthesized frame pictures;
the third video data is generated based on a plurality of synthesized frame pictures.
In one embodiment, the method further comprises:
acquiring a target background picture;
performing video conversion processing on the target text comment to obtain fourth video data, including:
and performing video conversion processing on the target background picture and the target character comment to generate fourth video data.
In one embodiment, performing video conversion processing on the target background picture and the target text comment to generate the fourth video data includes:
converting the target text comment and the target background picture into a frame picture;
and generating the fourth video data by continuously playing the frame pictures.
In one embodiment, synthesizing the third video data and the fourth video data to obtain the reply video data includes:
processing the third video data and the fourth video data by adopting a video processing algorithm to respectively obtain a frame picture of the third video data and a frame picture converted from the fourth video data;
respectively acquiring the playing time corresponding to the frame picture of the third video data and the playing time corresponding to the frame picture of the fourth video data according to a preset playing rule;
and setting each frame picture at a position corresponding to the video file according to the respective playing time to form the reply video data.
In one embodiment, the first video data is related to the target text comment.
In one embodiment, the obtaining the target background picture includes:
acquiring the target background picture from a background picture selection interface according to a background picture selection instruction input by a user; the background picture selection interface comprises a plurality of background pictures to be selected.
In a second aspect, a comment replying apparatus includes:
the acquisition module is used for acquiring a trigger instruction input by a user; the trigger instruction is generated by the user operating a target text comment in a comment area;
the processing module is used for acquiring first video data according to the trigger instruction and calling second video data; the second video data is video data for commenting on the target text comment;
the synthesis module is used for synthesizing reply video data according to the first video data, the second video data and the target text comment; the reply video data comprises at least two display areas when played, and each area plays the video data corresponding to that area.
In a third aspect, a computer device comprises a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring a trigger instruction input by a user; the trigger instruction is generated by the user operating a target text comment in a comment area;
acquiring first video data according to the trigger instruction, and calling second video data; the second video data is video data for commenting on the target text comment;
synthesizing reply video data according to the first video data, the second video data and the target text comment; the reply video data comprises at least two display areas when played, and each area plays the video data corresponding to that area.
In a fourth aspect, a computer readable storage medium having stored thereon a computer program which when executed by a processor implements the steps of:
acquiring a trigger instruction input by a user; the trigger instruction is generated by the user operating a target text comment in a comment area;
acquiring first video data according to the trigger instruction, and calling second video data; the second video data is video data for commenting on the target text comment;
synthesizing reply video data according to the first video data, the second video data and the target text comment; the reply video data comprises at least two display areas when played, and each area plays the video data corresponding to that area.
According to the comment reply method, apparatus, device and storage medium, the terminal can acquire a trigger instruction input by a user, the trigger instruction being generated by the user operating the target text comment in the comment area. The terminal can then acquire first video data according to the trigger instruction, call the currently played second video data, and synthesize reply video data according to the first video data, the second video data and the target text comment; the reply video data comprises at least two display areas when played, and each area plays the video data corresponding to that area. The terminal device synthesizes the first video data, the second video data and the target text comment into reply video data and uses it to make a secondary comment on the target text comment. Because the reply video simultaneously displays the first video data, the second video data and the target text comment, the video form is more vivid and intuitive and the reply content is highly readable. In addition, a video comment mode is added to the comment area, which previously supported only text comments, greatly improving the diversity of comment modes.
Drawings
FIG. 1 is a schematic diagram of a comment reply interface provided by one embodiment;
FIG. 2 is a flowchart illustrating a method of comment reply in one embodiment;
FIG. 3 is a flowchart showing a method of comment reply in another embodiment;
FIG. 4 is a flowchart showing a method of comment reply in another embodiment;
FIG. 5 is a flowchart showing a method of comment reply in another embodiment;
FIG. 6 is a flowchart showing a method of comment reply in another embodiment;
FIG. 7 is a flowchart showing a method of comment reply in another embodiment;
FIG. 8 is a schematic structural diagram of a comment reply device according to an embodiment;
FIG. 9 is a schematic structural diagram of a comment reply device according to another embodiment;
FIG. 10 is a schematic structural diagram of a comment reply device according to another embodiment;
FIG. 11 is an internal block diagram of a computer device provided in one embodiment.
Detailed Description
With the development of terminal devices, people frequently use them to publish multimedia files such as videos, and other users often comment on the published files. Replies to text comments on a published video are usually made in text form, and the readability of the reply content is poor. The comment reply method, device, equipment and storage medium provided herein aim to solve this problem of poor readability of text comment replies.
Fig. 1 is a schematic diagram of a comment reply interface according to an embodiment. As shown in fig. 1, when a reply is made to a target text comment 12, a reply video 11 displays first video data 110, second video data 120, and the target text comment 12 in a split screen. It should be noted that fig. 1 is only an example.
The comment replying method provided by the embodiment can be applied to a comment replying terminal, the comment replying terminal can be an electronic device with a data processing function, such as a smart phone, a tablet computer, a notebook computer, a desktop computer or a personal digital assistant, and the specific form of the comment replying terminal is not limited by the embodiment.
It should be noted that, in the comment reply method provided in the embodiment of the present disclosure, the execution subject may be a comment reply device, and the comment reply device may be implemented by software, hardware, or a combination of software and hardware as part or all of a terminal of comment reply.
To make the objects, technical solutions and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments of the present disclosure will be described clearly and completely below with reference to the drawings; obviously, the described embodiments are some, but not all, of the embodiments of the present disclosure. All other embodiments obtained by a person skilled in the art from the disclosed embodiments without creative effort shall fall within the protection scope of the present disclosure.
FIG. 2 is a flow diagram that illustrates a method for comment reply in one embodiment. The embodiment relates to a process for performing video reply on a target text comment, wherein the process comprises two views of split screen display. As shown in fig. 2, the method comprises the steps of:
s101, acquiring a trigger instruction input by a user; the trigger instruction is an instruction generated by the user operating the target character comment in the comment area.
Specifically, the trigger instruction may be an instruction generated by a user operating a target text comment in a comment area, and may be used to indicate to reply to a selected target text comment, and may also be used to indicate a mode of replying to a target text comment, where the replying mode may be a video reply, an audio reply, or a motion picture reply, and the embodiment of the present disclosure does not limit this.
In a specific process of acquiring a trigger instruction input by a user, the trigger instruction may be an instruction generated by a user performing a touch operation on a display area of a target text comment, or an instruction generated by a user performing a selection operation on a comment selection interface, or an instruction generated by an operation of a user instructing to select a target text comment through a voice command, which is not limited in this disclosure.
Taking an example that a user performs touch operation on a display area of target text comments to generate a trigger instruction, a plurality of text comments exist below a published multimedia file (such as a video), when the user needs to select one of the text comments to reply, the touch operation is performed on the display area of the text comments to change a capacitance value of a touch position, and a terminal can determine the text comments of the touch position as the target text comments according to the changed capacitance value to further determine that the trigger instruction is an instruction for replying the target text comments.
S102, acquiring first video data according to the trigger instruction, and calling second video data; the second video data is video data for commenting on the target text comment.
Specifically, the first video data and the second video data may be displayed on the current display screen in a split-screen manner, and the second video data may be played on the current display screen in the process of acquiring the first video data.
In a specific process of acquiring the first video data according to the trigger instruction, the first video data may be acquired by recording a video on the spot, or by retrieving video data stored in a storage device of the terminal itself, which is not limited in this embodiment of the present disclosure.
Further, optionally, the first video data is related to the target text comment.
Specifically, the content of the first video data may be a reply for the target text comment, may also be a supplementary content for the target text comment, and may also be a content similar to the target text comment, which is not limited in this disclosure.
In a specific process of calling the second video data, the second video data may be obtained by calling video data stored in a storage device of the terminal itself, or may be obtained by calling multimedia data stored in other devices, or may be obtained by calling video data of a target text comment, which is not limited in this disclosure.
Optionally, the first video data and the second video data are two views displayed in a split screen mode on the current display screen.
Specifically, when the first video data and the second video data are displayed on the current display screen in a split-screen manner, the area of the view corresponding to the first video data may be equal to, greater than, or smaller than the area of the view corresponding to the second video data, which is not limited in this disclosure.
S103, synthesizing reply video data according to the first video data, the second video data and the target text comment; the reply video data comprises at least two display areas when played, and each area plays the video data corresponding to that area.
On the basis of the above embodiment, after the terminal obtains the target text comment, the first video data and the second video data, the first video data, the second video data and the target text comment are synthesized into video data, and the reply video data is obtained.
The terminal may combine the first video data and the second video data into new video data, and then combine the target text comment and the new video data into reply video data, and may also combine the first video data, the second video data, and the target text comment into reply video data at the same time, which is not limited in this disclosure.
When the reply video data is synthesized, the reply video data comprises at least two display areas when played, and each area plays the video data corresponding to the area. The reply video data may include two display areas during playing, one display area playing the first video data, and the other display area playing the second video data; the reply video data may also include three display areas during playing, a first display area playing the first video data, a second display area playing the second video data, and a third display area playing the target text comment. The disclosed embodiments are not so limited.
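The multi-area playback described above can be illustrated with a minimal Python sketch that models one composed playback picture of the reply video as a mapping from named display areas to the frame each source stream supplies at the same instant. All names here (`compose_reply_frame`, the area labels, the frame strings) are illustrative placeholders, not from the patent.

```python
# Illustrative sketch: one composed playback picture of the reply video.
# Each display area shows the frame produced by its own source stream
# at the same time t; the patent leaves the concrete layout open.

def compose_reply_frame(t, layout):
    """layout maps an area name to a function that returns that
    source's frame at time t; returns a dict area -> frame."""
    return {area: source(t) for area, source in layout.items()}

# Two-area variant: first and second video data shown split-screen.
two_area = {
    "top": lambda t: f"first_video_frame@{t}",
    "bottom": lambda t: f"second_video_frame@{t}",
}

# Three-area variant: the target text comment gets its own area.
three_area = dict(two_area, comment=lambda t: "target_text_comment")

frame2 = compose_reply_frame(0, two_area)
frame3 = compose_reply_frame(0, three_area)
```

Whether two or three areas are used only changes the `layout` mapping; the composition step itself is unchanged, which matches the patent's point that the number and content of display areas are not limited.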
According to the comment replying method, the terminal can obtain a trigger instruction input by a user, the trigger instruction is an instruction generated by the user operating the target character comment in the comment area, the terminal can obtain first video data according to the trigger instruction and call currently played second video data, the first video data and the second video data are two views displayed on a current display screen in a split screen mode, and the replying video data are synthesized according to the first video data, the second video data and the target character comment. In the embodiment, the terminal equipment synthesizes the first video data, the second video data and the target character comment into the reply video data, and utilizes the reply video data to carry out secondary comment on the target character comment, wherein the reply is carried out in a video form; in addition, in the embodiment, the first video data, the second video data and the target character comments are displayed in the reply video at the same time, so that the video form is more vivid and visual, and the display form of the playing picture of the reply video is more diversified when the target character comments are replied.
Fig. 3 is a flowchart illustrating a method of comment reply in another embodiment. This embodiment relates to a possible implementation process of synthesizing, by a terminal, reply video data according to first video data, second video data, and a target text comment, as shown in fig. 3, where S103 may include:
s201, synthesizing the first video data and the second video data to obtain third video data.
Specifically, in the process of synthesizing the first video data and the second video data, the view of the first video data and the view of the second video data may be displayed side by side, the view of the second video data may be embedded in the view of the first video data, or the view of the first video data may be embedded in the view of the second video data, to obtain the third video data.
S202, performing video conversion processing on the target text comment to obtain fourth video data.
Specifically, the video conversion processing may be processing of changing text data into a video, and may be processing of converting text into a picture and playing the picture according to a preset rule, or processing of directly shooting text to obtain a video file, which is not limited in the embodiment of the present disclosure.
In the process of performing video conversion processing on the target text comment, the text content of the target text comment can be converted into one or more pictures. If there is one picture, it is played continuously for the playing duration to obtain the video data; if there are multiple pictures, they are played in a preset sequence within the playing duration to obtain the video data. This video data is the fourth video data.
For example, the text content of the target text comment is converted into dynamic picture data through picture conversion processing; that is, the target text comment is converted into a plurality of pictures. Assuming there are three pictures, with the text displayed at a different angle in each, video data, that is, the fourth video data, is obtained by cyclically playing the three pictures.
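The text-to-video conversion just described can be sketched in a few lines of Python, treating a "picture" and a "frame" as plain values. The helper names (`comment_to_pictures`, `pictures_to_video`) and the `#variant` tagging are hypothetical conveniences, not terms from the patent.

```python
def comment_to_pictures(comment, n_pictures=1):
    # Illustrative conversion: each "picture" is the comment text
    # tagged with a variant index (e.g. a different text angle).
    return [f"{comment}#variant{i}" for i in range(n_pictures)]

def pictures_to_video(pictures, total_frames):
    # Cycle the pictures over the playing duration; one picture is
    # simply repeated, several pictures play in a fixed sequence.
    return [pictures[i % len(pictures)] for i in range(total_frames)]

# Three pictures cycled over six frames yields the fourth video data.
pics = comment_to_pictures("nice video!", n_pictures=3)
fourth_video = pictures_to_video(pics, total_frames=6)
```

With a single picture the same `pictures_to_video` call degenerates to repeating that picture for the whole duration, covering both cases in the paragraph above.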
S203, synthesizing the third video data and the fourth video data to obtain reply video data; when playing, the third video data in the reply video data comprises two display areas, wherein one display area plays the first video data, and the other display area plays the second video data.
Specifically, on the basis of the foregoing embodiment, when the third video data and the fourth video data are synthesized, the third video data may be inserted, as a whole, before the play start time of the fourth video data, into the middle of the fourth video data, or after the play completion time of the fourth video data, which is not limited in this disclosure.
The third video data in the reply video data includes two display areas when being played, where the two display areas may be disposed above and below the current display screen, or may also be disposed on the left side and the right side of the current display screen, and taking the case where the two display areas are disposed above and below the current display screen, the upper display area may play the first video data, and the lower display area plays the second video data, which is not limited in this disclosure.
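The three insertion positions described above (before, in the middle of, or after the fourth video data) amount to a simple splice on the frame timeline, as the following sketch shows with lists standing in for frame sequences. The function name and position encoding are illustrative assumptions.

```python
def insert_video(third, fourth, position):
    """Insert the whole third video into the fourth video's timeline.

    position: "before", "after", or an integer frame index within
    the fourth video at which the third video is spliced in.
    """
    if position == "before":
        return third + fourth          # plays before the fourth video starts
    if position == "after":
        return fourth + third          # plays after the fourth video ends
    return fourth[:position] + third + fourth[position:]  # mid-splice

third = ["t1", "t2"]        # frames of the synthesized third video data
fourth = ["f1", "f2", "f3"] # frames of the converted fourth video data
```

The reply video data is then whichever spliced sequence the implementation chooses; the patent leaves the choice open.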
According to the comment reply method, the terminal synthesizes the first video data and the second video data to obtain third video data, performs video conversion processing on the target text comment to obtain fourth video data, and synthesizes the third video data and the fourth video data to obtain the reply video data. The third video data in the reply video data comprises two display areas when played: one display area plays the first video data, and the other plays the second video data. In this embodiment, the terminal device synthesizes the first video data, the second video data and the target text comment into the reply video data and uses it to make a secondary comment, in video form, on the target text comment. In addition, because the first video data, the second video data and the target text comment are displayed in the reply video at the same time, the video form is more vivid and intuitive, and the playing picture of the reply video is displayed in a more diversified form when the target text comment is replied to.
Fig. 4 is a flowchart illustrating a method for replying a comment in another embodiment, where the embodiment relates to a possible implementation process in which a terminal performs a synthesis process on first video data and second video data to obtain third video data. As shown in fig. 4, on the basis of the foregoing embodiment, the foregoing S201 may include the following steps:
s301, processing the first video data and the second video data by adopting a video processing algorithm, respectively obtaining a frame picture of the first video data and a frame picture of the second video data, and obtaining a playing time corresponding to each frame picture.
Specifically, a frame picture may be a single image in the video data, and the playing time is the time at which the frame picture appears in the video data, which may be a specific moment or the position of the frame picture in the sequence of the video data. Each frame picture in the video data corresponds to a playing time. The video processing algorithm may be configured to split the video data into a plurality of frame pictures and obtain the playing time corresponding to each frame picture. When the video processing algorithm processes the first video data and the second video data, it may split out all the frame pictures in the video data, or reduce every N consecutive frame pictures to one frame picture.
For example, if the first video data includes 100 frame pictures and the second video data includes 50 frame pictures, the video processing algorithm reduces every 2 consecutive frame pictures of the first video data to 1 frame picture to obtain 50 frame pictures for the first video data, and splits the second video data into all of its frame pictures to obtain 50 frame pictures for the second video data.
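The frame-count alignment in this example reduces to keeping one frame out of every N, which in Python is a single stride slice. The helper name and the string frame labels are illustrative.

```python
def take_every_nth(frames, n):
    # Keep one frame out of every n consecutive frames so the two
    # streams end up with equal frame counts before they are paired.
    return frames[::n]

first = [f"a{i}" for i in range(100)]   # first video data: 100 frames
second = [f"b{i}" for i in range(50)]   # second video data: 50 frames

# Reduce every 2 consecutive frames of the first video to 1 frame.
first_aligned = take_every_nth(first, 2)
```

After alignment both streams have 50 frames, so each frame of one stream has a counterpart at the same playing position in the other.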
S302, controlling the frame pictures of the first video data and the frame pictures of the second video data with the same playing time to be displayed in the same playing picture according to the playing time corresponding to each frame picture, and obtaining a plurality of synthesized frame pictures.
Specifically, on the basis of the above embodiment, the frame picture corresponding to the first video data and the frame picture corresponding to the second video data are obtained, and the playing time corresponding to each frame picture is obtained, and the frame pictures of the first video data and the frame pictures of the second video data with the same playing time are controlled to be displayed in the same playing picture, that is, the synthesized frame picture is obtained.
For example, frame pictures with playing times of T1, T2, and T3 in the first video data are respectively obtained as a first frame picture 1, a first frame picture 2, and a first frame picture 3, frame pictures with playing times of T1, T2, and T3 in the second video data are respectively obtained as a second frame picture 1, a second frame picture 2, and a second frame picture 3, the first frame picture 1 and the second frame picture 1 are displayed in the same playing picture to obtain a composite frame picture 1, the first frame picture 2 and the second frame picture 2 are displayed in the same playing picture to obtain a composite frame picture 2, and the first frame picture 3 and the second frame picture 3 are displayed in the same playing picture to obtain a composite frame picture 3.
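The pairing in this example can be sketched by modeling each stream as a mapping from playing time to frame picture and joining on the shared times; each pair is one synthesized frame picture, and their ordered sequence is the third video data. The function name and the tuple representation of a composite frame are assumptions for illustration.

```python
def synthesize_frames(first_frames, second_frames):
    """Pair frames that share the same playing time into one composite
    playback picture. Each argument maps playing time -> frame; the
    result is the time-ordered list of (time, (frame1, frame2))."""
    shared_times = sorted(set(first_frames) & set(second_frames))
    return [(t, (first_frames[t], second_frames[t])) for t in shared_times]

# Playing times T1..T3 from the example, written as 1..3.
first = {1: "first1", 2: "first2", 3: "first3"}
second = {1: "second1", 2: "second2", 3: "second3"}

third_video = synthesize_frames(first, second)
```

Arranging the composite frames by playing time, as the list comprehension does, corresponds to generating the third video data from the synthesized frame pictures in S303.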
S303, third video data is generated based on the plurality of synthesized frame pictures.
Specifically, on the basis of the above embodiment, after obtaining the plurality of combined frame pictures, the plurality of combined frame pictures may be arranged according to a preset rule to generate the third video data. The third video data may be generated by arranging the playing times in order, or the third video data may be generated by repeatedly arranging the composite frame pictures within a segment of the playing time and arranging the other playing times in order. The disclosed embodiments are not so limited.
In the comment reply method, the terminal may process the first video data and the second video data with a video processing algorithm, obtain the frame pictures of the first video data and of the second video data together with the playing time corresponding to each frame picture, control the frame pictures of the two streams that share the same playing time to be displayed in the same playing picture to obtain a plurality of synthesized frame pictures, and generate the third video data from those synthesized frame pictures. In this embodiment, the terminal synthesizes the third video data and the fourth video data into the reply video data and uses it to make a secondary comment, in video form, on the target text comment. In addition, because the first video data and the second video data are displayed in a split-screen mode in the reply video, the video form is more vivid and intuitive, and the playing picture of the reply video is displayed in a more diversified form when the target text comment is replied to.
Fig. 5 is a flowchart illustrating a method of comment reply in another embodiment. This embodiment relates to another possible implementation process in which the terminal generates fourth video data by performing video conversion processing on the target text comment, as shown in fig. 5, where S202 may include:
S401, obtaining a target background picture.
Specifically, the target background picture may be a picture displayed behind the target text comment, and its content may or may not be related to the text comment. It may be a solid-color picture, a picture with a decorative border, a person picture, an animal picture, a plant picture, an artificially synthesized picture, or the like, which is not limited by the embodiments of the present disclosure. The picture may be a static picture or a dynamic picture, which is also not limited by the embodiments of the present disclosure.
Further, the target background picture may be acquired according to a selection of the user. Optionally, according to a background picture selection instruction input by a user, obtaining the target background picture from a background picture selection interface; the background picture selection interface comprises a plurality of background pictures to be selected.
Specifically, the background picture selection instruction may be used to indicate that one or more of the multiple background pictures to be selected are specified as the target background picture, and may be a voice generation instruction, a touch generation instruction, a text generation instruction, or a gesture generation instruction, which is not limited in this disclosure. The background picture selection interface may be configured with a plurality of background pictures to be selected, which may be a list of the background pictures, a list of textual descriptions of the background pictures, or a list of a combination of the background pictures and the textual descriptions thereof, and the embodiment of the present disclosure does not limit this.
When the target background picture is specifically obtained, the target background picture may be obtained according to a voice instruction input by a user, or may be obtained by an instruction input by a user through an operation on a function control on a touch screen, or may be obtained by an instruction generated by inputting characters in a text box on the display screen, or may be obtained by an instruction input by a user through a preset gesture, which is not limited in the embodiment of the present disclosure.
S402, performing video conversion processing on the target background picture and the target character comment to generate fourth video data.
Specifically, on the basis of the above embodiment, fourth video data is generated by performing video conversion processing on the target background picture and the target text comment, which may be that video conversion processing is performed on the target background picture and the target text comment respectively to obtain two video data, and the two video data are synthesized to obtain the fourth video data; or, synthesizing the target background picture and the target character comment to obtain a new picture, and performing video conversion processing on the picture to generate fourth video data. The disclosed embodiments are not so limited.
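The second approach above, synthesizing the target background picture and the target text comment into a new picture before video conversion, can be sketched in simplified form. The picture is modeled as a grid of characters; `overlay_comment` and the placement parameters are illustrative assumptions, not names from the patent.

```python
def overlay_comment(background, comment, row, col):
    """background: list of equal-length row strings modeling a picture.
    Returns a new picture with `comment` stamped at (row, col), clipped
    at the picture edges, corresponding to synthesizing the comment and
    the background picture into one combined picture."""
    out = [list(r) for r in background]
    for i, ch in enumerate(comment):
        if 0 <= row < len(out) and 0 <= col + i < len(out[row]):
            out[row][col + i] = ch
    return ["".join(r) for r in out]

bg = ["........", "........", "........"]
combined = overlay_comment(bg, "nice!", 1, 1)
print(combined[1])  # .nice!..
```

The combined picture would then be handed to the video conversion processing of S402 to produce the fourth video data.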
According to the comment replying method, the terminal may acquire the target background picture and perform video conversion processing on the target background picture and the target text comment to generate the fourth video data. In this embodiment, the content of the target background picture and the target text comment is converted into the fourth video data, and the first video data, the second video data, and the fourth video data are synthesized into one piece of reply video data used to reply to the target text comment. When the reply video data expresses the related content in the form of a dynamic picture, the target text comment and the target background picture are displayed in the dynamic picture, so the expression form is more vivid and intuitive, and the readability of the reply content is further improved.
Fig. 6 is a flowchart illustrating a method of comment reply in another embodiment. The present embodiment relates to another possible implementation process in which the terminal performs video conversion processing on a target background picture and a target text comment to generate fourth video data, as shown in fig. 6, where S402 may include:
S501, converting the target text comment and the target background picture into frame pictures.
Specifically, the target text comment may be converted into a text frame picture, the target background picture may be converted into a picture frame picture, and the text frame picture and the picture frame picture may be synthesized into one frame picture; alternatively, the target text comment and the target background picture may be synthesized into one picture, and that picture may be converted into a frame picture. The embodiments of the present disclosure are not limited in this respect.
On the basis of the above embodiment, the target background picture may be a still picture or a moving picture. When the background picture is a static picture, the target character comment and the target background picture can be converted into a frame picture through a video conversion algorithm; when the background picture is a dynamic picture, the target text comment and the target background picture can be converted into a group of frame pictures through a video conversion algorithm.
S502, continuously playing the frame pictures to generate fourth video data.
Specifically, on the basis of the above embodiment, when the target background picture is a still picture, the target text comment and the target background picture may be converted into one frame picture by a video conversion algorithm, and the fourth video data is generated by continuously playing that frame picture. When the target background picture is a dynamic picture, the target text comment and the target background picture may be converted into a group of frame pictures by the video conversion algorithm, and the fourth video data is generated by continuously playing this group of frame pictures.
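The "continuous playing" step of S502 can be modeled as filling a target duration with copies of the converted frame picture(s). This is a hedged sketch, with the function name, duration, and frame rate chosen for illustration; frames are stand-in strings rather than real image data.

```python
def picture_to_video_frames(frame_group, duration_s, fps):
    """Continuously 'play' the converted frame picture(s): a still picture
    yields a one-frame group that is simply repeated; a dynamic picture
    yields a multi-frame group that is cycled until the duration is filled."""
    total = int(duration_s * fps)
    return [frame_group[i % len(frame_group)] for i in range(total)]

# Still background picture: one synthesized frame repeated for 2 s at 30 fps
static = picture_to_video_frames(["comment-frame"], 2, 30)
print(len(static), static[0])   # 60 comment-frame

# Dynamic background picture: a 3-frame group cycled for 1 s at 30 fps
dynamic = picture_to_video_frames(["f0", "f1", "f2"], 1, 30)
print(dynamic[:4])              # ['f0', 'f1', 'f2', 'f0']
```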
According to the comment replying method, the terminal converts the target character comment and the target background picture into the frame picture, and generates the fourth video data by continuously playing the frame picture. In the embodiment, the contents of the target background picture and the target character comment are converted into the frame pictures, the fourth video data is generated by continuously playing the frame pictures, and the first video data, the second video data and the fourth video data are combined into the reply video data to reply the target character comment, so that when the reply video data expresses the related contents in the form of the dynamic pictures, the target character comment and the target background picture are displayed in the dynamic pictures, the expression form is more vivid and visual, and the readability of the reply content is further improved.
Fig. 7 is a flowchart illustrating a method of comment reply in another embodiment. This embodiment relates to another possible implementation process of the terminal performing the synthesizing process on the third video data and the fourth video data to obtain the reply video data, as shown in fig. 7, the step S203 may include:
S601, processing the third video data and the fourth video data by adopting a video processing algorithm, and respectively obtaining a frame picture of the third video data and a frame picture converted from the fourth video data.
When the video processing algorithm is used to process the third video data and the fourth video data, all frame pictures in the video data may be split out, or one frame picture may be extracted from every N consecutive frame pictures. The embodiments of the present disclosure are not limited in this respect.
For example, if the third video data includes 100 frame pictures and the fourth video data includes 200 frame pictures, all the frame pictures are split from the third video data and the fourth video data by using a video processing algorithm, so that 100 frame pictures corresponding to the third video data and 200 frame pictures corresponding to the fourth video data are obtained.
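The two splitting strategies above (all frames, or one frame from every N consecutive frames) can be sketched as a single slicing operation. The function name `split_frames` and the string frame labels are illustrative assumptions.

```python
def split_frames(video_frames, every_n=1):
    """Split frame pictures from video data: every_n=1 takes every frame;
    every_n=N takes one frame picture from every N consecutive frames."""
    return video_frames[::every_n]

third = [f"t{i}" for i in range(100)]    # 100-frame third video data
fourth = [f"f{i}" for i in range(200)]   # 200-frame fourth video data
print(len(split_frames(third)), len(split_frames(fourth)))  # 100 200
print(len(split_frames(fourth, every_n=4)))                 # 50
```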
S602, respectively acquiring the playing time corresponding to the frame picture of the third video data and the playing time corresponding to the frame picture of the fourth video data according to a preset playing rule.
Specifically, the preset playing rule may be an arrangement order of the third video data and the fourth video data, and may be that the third video data is played first and then the fourth video data is played, or that the fourth video data is played first and then the third video data is played, and then the fourth video data is played again, which is not limited in this embodiment of the present disclosure. According to a preset playing rule, after the playing sequence of the third video data and the fourth video data is determined, the playing time corresponding to the frame picture of the third video data and the playing time corresponding to the frame picture of the fourth video data are obtained.
Taking playing the fourth video data first and then the third video data as an example, the playing time corresponding to the first frame picture in the fourth video data is the starting time, and the playing times corresponding to the other frame pictures in the fourth video data are their original playing times within the fourth video data. If the playing time corresponding to the last frame in the fourth video data is T0 seconds, the playing time corresponding to the first frame in the third video data is T0 + x seconds, where x is the time interval between the fourth video data and the third video data; x may be 0 seconds or a time interval obtained from user input, which is not limited by the embodiments of the present disclosure. The playing time corresponding to each other frame in the third video data is its original playing time within the third video data plus T0 + x seconds. For example, if the playing times of frame picture 1, frame picture 2, and frame picture 3 in the third video data are T1, T2, and T3 seconds, the playing times of frame picture 1, frame picture 2, and frame picture 3 obtained according to the preset playing rule are T1 + T0 + x, T2 + T0 + x, and T3 + T0 + x seconds.
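The offset arithmetic above can be worked through as a small sketch. The function name `schedule_playing_times` and the sample timestamps are illustrative; the rule shown is the "fourth video first, then third video" order from the text.

```python
def schedule_playing_times(fourth_times, third_times, gap_x=0.0):
    """Preset playing rule: play the fourth video data first, then the
    third. T0 is the playing time of the last frame of the fourth video;
    every frame of the third video is shifted by T0 + x seconds."""
    t0 = fourth_times[-1]
    shifted_third = [t + t0 + gap_x for t in third_times]
    return fourth_times + shifted_third

# Fourth video frames at 0, 0.5, 1.0 s; third video frames at 0, 0.5, 1.0 s;
# gap x = 0.5 s between the two videos.
times = schedule_playing_times([0.0, 0.5, 1.0], [0.0, 0.5, 1.0], gap_x=0.5)
print(times)  # [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
```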
S603, setting each frame picture at a position corresponding to the video file according to each playing time to form reply video data.
Specifically, on the basis of the above embodiment, each frame picture in the third video data and the fourth video data is set at a corresponding position of the video file according to the respective playing time to form the reply video data. Since each frame corresponds to its respective playing time, each frame is arranged according to its playing time to form a video data, i.e. a reply video data.
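Arranging each frame picture at its position according to its playing time, as described in S603, amounts to ordering (playing time, frame) pairs. The sketch below uses hypothetical names and stand-in string frames rather than a real video file format.

```python
def assemble_reply_video(timed_frames):
    """timed_frames: list of (playing_time, frame_picture) pairs drawn from
    both the third and fourth video data. Sorting the pairs by playing time
    places each frame at its corresponding position, forming the frame
    sequence of the reply video data."""
    return [frame for _, frame in sorted(timed_frames, key=lambda p: p[0])]

fourth = [(0.0, "f0"), (0.5, "f1")]   # fourth video frames (played first)
third = [(1.0, "t0"), (1.5, "t1")]    # third video frames (offset by T0 + x)
reply = assemble_reply_video(fourth + third)
print(reply)  # ['f0', 'f1', 't0', 't1']
```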
In the comment replying method, the terminal processes the third video data and the fourth video data with a video processing algorithm to obtain the frame pictures of the third video data and the frame pictures converted from the fourth video data, obtains the playing time corresponding to each of these frame pictures according to a preset playing rule, and sets each frame picture at the corresponding position of the video file according to its playing time to form the reply video data. In this embodiment, the terminal synthesizes the third video data and the fourth video data into reply video data and uses the reply video data to make a secondary comment on the target text comment, so the reply is made in video form.
It should be understood that although the various steps in the flowcharts of fig. 2-7 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2-7 may include multiple sub-steps or multiple stages that are not necessarily performed at the same time but may be performed at different times, and the order in which the sub-steps or stages are performed is not necessarily sequential.
Fig. 8 is a schematic structural diagram of the comment replying device according to an embodiment. As shown in fig. 8, the comment replying device includes: an acquisition module 10, a processing module 20 and a synthesis module 30, wherein
The acquiring module 10 is used for acquiring a trigger instruction input by a user; the triggering instruction is generated by the user operating the target character comment in the comment area;
the processing module 20 is configured to obtain first video data according to the trigger instruction and call second video data; the second video data is video data for commenting on the target text comment;
a synthesizing module 30, configured to synthesize reply video data according to the first video data, the second video data, and the target text comment; the reply video data comprises at least two display areas when being played, and each area respectively plays the video data corresponding to the area.
The comment replying device provided by the embodiment of the disclosure can execute the method embodiment, and the implementation principle and the technical effect are similar, and are not described again here.
Fig. 9 is a schematic structural diagram of a comment replying device according to another embodiment. On the basis of the embodiment shown in fig. 8, the synthesis module 30 includes: a first synthesis unit 301, a conversion unit 302, and a second synthesis unit 303, wherein:
a first synthesizing unit 301, configured to synthesize the first video data and the second video data to obtain third video data;
a conversion unit 302, configured to perform video conversion processing on the target text comment to obtain fourth video data;
a second synthesizing unit 303, configured to synthesize the third video data and the fourth video data to obtain the reply video data; and when playing, the third video data in the reply video data comprises two display areas, wherein one display area plays the first video data, and the other display area plays the second video data.
In an embodiment, the first synthesizing unit 301 is specifically configured to process the first video data and the second video data by using a video processing algorithm, respectively obtain a frame picture of the first video data and a frame picture of the second video data, and obtain a playing time corresponding to each frame picture; controlling the frame pictures of the first video data and the frame pictures of the second video data with the same playing time to be displayed in the same playing picture according to the playing time corresponding to each frame picture to obtain a plurality of synthesized frame pictures; the third video data is generated based on a plurality of synthesized frame pictures.
In an embodiment, the second synthesizing unit 303 is specifically configured to process the third video data and the fourth video data by using a video processing algorithm, and separately obtain a frame picture of the third video data and a frame picture converted from the fourth video data; respectively acquiring the playing time corresponding to the frame picture of the third video data and the playing time corresponding to the frame picture of the fourth video data according to a preset playing rule; and setting each frame picture at a position corresponding to the video file according to the respective playing time to form the reply video data.
In one embodiment, the first video data is related to the target text comment.
The comment replying device provided by the embodiment of the disclosure can execute the method embodiment, and the implementation principle and the technical effect are similar, and are not described again here.
Fig. 10 is a schematic structural diagram of a comment reply device according to another embodiment. On the basis of the embodiment shown in fig. 9 described above, the conversion unit 302 includes: an acquisition subunit 3021 and a conversion subunit 3022, wherein:
an acquiring subunit 3021, configured to acquire a target background picture;
a conversion subunit 3022, configured to perform video conversion processing on the target background picture and the target text comment, and generate the fourth video data.
In one embodiment, the conversion subunit 3022 is specifically configured to convert the target text comment and the target background picture into a frame picture; and generating the fourth video data by continuously playing the frame pictures.
In an embodiment, the obtaining subunit 3021 is specifically configured to obtain the target background picture from a background picture selection interface according to a background picture selection instruction input by a user; the background picture selection interface comprises a plurality of background pictures to be selected.
The comment replying device provided by the embodiment of the disclosure can execute the method embodiment, and the implementation principle and the technical effect are similar, and are not described again here.
For the specific definition of the comment replying device, reference may be made to the above definition of the comment replying method, which is not repeated here. Each module in the comment replying device described above may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor in the computer device in hardware form, or may be stored in a memory of the computer device in software form, so that the processor can call and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 11. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the nonvolatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by the processor to implement a comment reply method. The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer device may be a touch layer covering the display screen, a key, a trackball, or a touchpad arranged on the housing of the computer device, or an external keyboard, touchpad, or mouse.
Those skilled in the art will appreciate that the architecture shown in FIG. 11 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices in which the disclosed aspects apply, as a particular computing device may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring a trigger instruction input by a user; the triggering instruction is generated by the user operating the target character comment in the comment area;
acquiring first video data according to the trigger instruction, and calling second video data; the second video data is video data for commenting on the target text comment;
synthesizing reply video data according to the first video data, the second video data and the target character comment; the reply video data comprises at least two display areas when being played, and each area respectively plays the video data corresponding to the area.
In one embodiment, the first video data and the second video data are two views displayed in a split screen mode on a current display screen.
In one embodiment, the processor, when executing the computer program, further performs the steps of: synthesizing the first video data and the second video data to obtain third video data; performing video conversion processing on the target character comment to obtain fourth video data; synthesizing the third video data and the fourth video data to obtain the reply video data; and when playing, the third video data in the reply video data comprises two display areas, wherein one display area plays the first video data, and the other display area plays the second video data.
In one embodiment, the processor, when executing the computer program, further performs the steps of: processing the first video data and the second video data by adopting a video processing algorithm, respectively acquiring a frame picture of the first video data and a frame picture of the second video data, and acquiring a playing time corresponding to each frame picture; controlling the frame pictures of the first video data and the frame pictures of the second video data with the same playing time to be displayed in the same playing picture according to the playing time corresponding to each frame picture to obtain a plurality of synthesized frame pictures; the third video data is generated based on a plurality of synthesized frame pictures.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring a target background picture; performing video conversion processing on the target text comment to obtain fourth video data, including: and performing video conversion processing on the target background picture and the target character comment to generate fourth video data.
In one embodiment, the processor, when executing the computer program, further performs the steps of: converting the target text comment and the target background picture into a frame picture; and generating the fourth video data by continuously playing the frame pictures.
In one embodiment, the processor, when executing the computer program, further performs the steps of: processing the third video data and the fourth video data by adopting a video processing algorithm to respectively obtain a frame picture of the third video data and a frame picture converted from the fourth video data; respectively acquiring the playing time corresponding to the frame picture of the third video data and the playing time corresponding to the frame picture of the fourth video data according to a preset playing rule; and setting each frame picture at a position corresponding to the video file according to the respective playing time to form the reply video data.
In one embodiment, the first video data is related to the target text comment.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring the target background picture from a background picture selection interface according to a background picture selection instruction input by a user; the background picture selection interface comprises a plurality of background pictures to be selected.
The implementation principle and technical effect of the computer device provided in this embodiment are similar to those of the method embodiments described above, and are not described herein again.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a trigger instruction input by a user; the triggering instruction is generated by the user operating the target character comment in the comment area;
acquiring first video data according to the trigger instruction, and calling second video data; the second video data is video data for commenting on the target text comment;
synthesizing reply video data according to the first video data, the second video data and the target character comment; the reply video data comprises at least two display areas when being played, and each area respectively plays the video data corresponding to the area.
In one embodiment, the first video data and the second video data are two views displayed in a split screen mode on a current display screen.
In one embodiment, the computer program when executed by the processor further performs the steps of: synthesizing the first video data and the second video data to obtain third video data; performing video conversion processing on the target character comment to obtain fourth video data; synthesizing the third video data and the fourth video data to obtain the reply video data; and when playing, the third video data in the reply video data comprises two display areas, wherein one display area plays the first video data, and the other display area plays the second video data.
In one embodiment, the computer program when executed by the processor further performs the steps of: processing the first video data and the second video data by adopting a video processing algorithm, respectively acquiring a frame picture of the first video data and a frame picture of the second video data, and acquiring a playing time corresponding to each frame picture; controlling the frame pictures of the first video data and the frame pictures of the second video data with the same playing time to be displayed in the same playing picture according to the playing time corresponding to each frame picture to obtain a plurality of synthesized frame pictures; the third video data is generated based on a plurality of synthesized frame pictures.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring a target background picture; performing video conversion processing on the target text comment to obtain fourth video data, including: and performing video conversion processing on the target background picture and the target character comment to generate fourth video data.
In one embodiment, the computer program when executed by the processor further performs the steps of: converting the target text comment and the target background picture into a frame picture; and generating the fourth video data by continuously playing the frame pictures.
In one embodiment, the computer program when executed by the processor further performs the steps of: processing the third video data and the fourth video data by adopting a video processing algorithm to respectively obtain a frame picture of the third video data and a frame picture converted from the fourth video data; respectively acquiring the playing time corresponding to the frame picture of the third video data and the playing time corresponding to the frame picture of the fourth video data according to a preset playing rule; and setting each frame picture at a position corresponding to the video file according to the respective playing time to form the reply video data.
In one embodiment, the first video data is related to the target text comment.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring the target background picture from a background picture selection interface according to a background picture selection instruction input by a user; the background picture selection interface comprises a plurality of background pictures to be selected.
The implementation principle and technical effect of the computer-readable storage medium provided by this embodiment are similar to those of the above-described method embodiment, and are not described herein again.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided by the present disclosure may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments express only several implementations of the present disclosure, and their description is specific and detailed, but should not therefore be construed as limiting the scope of the disclosure. It should be noted that, for those skilled in the art, various changes and modifications can be made without departing from the concept of the present disclosure, and these changes and modifications all fall within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure should be subject to the appended claims.

Claims (9)

1. A comment reply method, comprising:
acquiring a trigger instruction input by a user; the trigger instruction is generated by the user operating a target text comment in a comment area;
acquiring first video data according to the trigger instruction, and calling second video data; the second video data is the video data on which the target text comment was made;
synthesizing reply video data from the first video data, the second video data and the target text comment; the reply video data, when played, comprises at least two display areas, each area playing the video data corresponding to it;
wherein the reply video data comprises third video data and fourth video data, the fourth video data is obtained by performing video conversion processing on the target text comment, and the third video data is obtained by synthesizing the first video data and the second video data;
and wherein synthesizing the first video data and the second video data to obtain the third video data comprises:
processing the first video data and the second video data with a video processing algorithm to obtain frame pictures of the first video data, frame pictures of the second video data, and the playing time corresponding to each frame picture;
displaying, according to the playing time corresponding to each frame picture, frame pictures of the first video data and frame pictures of the second video data that share the same playing time in the same playing picture, obtaining a plurality of synthesized frame pictures;
and generating the third video data from the plurality of synthesized frame pictures.
2. The method of claim 1, wherein the first video data and the second video data are displayed as two split-screen views on the current display screen.
3. The method of claim 1, wherein synthesizing the reply video data from the first video data, the second video data and the target text comment comprises:
synthesizing the first video data and the second video data to obtain the third video data;
performing video conversion processing on the target text comment to obtain the fourth video data;
and synthesizing the third video data and the fourth video data to obtain the reply video data; during playback, the third video data in the reply video data comprises two display areas, one playing the first video data and the other playing the second video data.
4. The method of claim 3, further comprising:
acquiring a target background picture;
wherein performing video conversion processing on the target text comment to obtain the fourth video data comprises:
performing video conversion processing on the target background picture and the target text comment to generate the fourth video data.
5. The method of claim 4, wherein performing video conversion processing on the target background picture and the target text comment to generate the fourth video data comprises:
converting the target text comment and the target background picture into frame pictures;
and generating the fourth video data by playing the frame pictures in succession.
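The conversion in claim 5 can be sketched as rendering the comment over the background once and repeating the rendered frame for the clip's duration. A minimal sketch under stated assumptions: the function name is hypothetical, and a string stands in for real image rendering (which would use an image library in practice).

```python
# Hypothetical sketch: turn a text comment plus a background picture into
# a short sequence of identical frames (the "fourth video data").

def comment_to_frames(background, comment, fps=25, seconds=2):
    """Overlay `comment` on `background` and repeat the result as frames.

    background: any picture object (here just a placeholder string).
    Returns a list of (frame, playing_time_ms) pairs in display order.
    """
    rendered = f"{background}+text:{comment}"  # stand-in for real rendering
    n = fps * seconds
    return [(rendered, i * 1000 // fps) for i in range(n)]

frames = comment_to_frames("bg.png", "nice video!")
print(len(frames))  # 50 frames for 2 s at 25 fps
print(frames[0])    # ('bg.png+text:nice video!', 0)
```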
6. The method of claim 3, wherein synthesizing the third video data and the fourth video data to obtain the reply video data comprises:
processing the third video data and the fourth video data with a video processing algorithm to obtain the frame pictures of the third video data and the frame pictures converted from the fourth video data;
acquiring, according to a preset playing rule, the playing time corresponding to each frame picture of the third video data and each frame picture of the fourth video data;
and placing each frame picture at its corresponding position in the video file according to its playing time to form the reply video data.
7. A comment reply apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to acquire a trigger instruction input by a user; the trigger instruction is generated by the user operating a target text comment in a comment area;
a processing module, configured to acquire first video data according to the trigger instruction and to call second video data; the second video data is the video data on which the target text comment was made;
and a synthesis module, configured to synthesize reply video data from the first video data, the second video data and the target text comment; the reply video data, when played, comprises at least two display areas, each area playing the video data corresponding to it;
wherein the reply video data comprises third video data and fourth video data, the fourth video data is obtained by performing video conversion processing on the target text comment, and the third video data is obtained by synthesizing the first video data and the second video data;
the synthesis module being specifically configured to: process the first video data and the second video data with a video processing algorithm to obtain frame pictures of the first video data, frame pictures of the second video data, and the playing time corresponding to each frame picture; display, according to the playing time corresponding to each frame picture, frame pictures of the first video data and frame pictures of the second video data that share the same playing time in the same playing picture, obtaining a plurality of synthesized frame pictures; and generate the third video data from the plurality of synthesized frame pictures.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 6 when executing the computer program.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
CN201811324458.7A 2018-11-08 2018-11-08 Comment reply method, device, equipment and storage medium Active CN109743635B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811324458.7A CN109743635B (en) 2018-11-08 2018-11-08 Comment reply method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN109743635A CN109743635A (en) 2019-05-10
CN109743635B true CN109743635B (en) 2020-05-22

Family

ID=66355584

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811324458.7A Active CN109743635B (en) 2018-11-08 2018-11-08 Comment reply method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109743635B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114598893B (en) * 2020-11-19 2024-04-30 京东方科技集团股份有限公司 Text video realization method and system, electronic equipment and storage medium
CN115563320A (en) * 2021-07-01 2023-01-03 北京字节跳动网络技术有限公司 Information reply method, device, electronic equipment, computer storage medium and product

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4364176B2 (en) * 2005-06-20 2009-11-11 シャープ株式会社 Video data reproducing apparatus and video data generating apparatus
CN101521789A (en) * 2009-03-20 2009-09-02 中兴通讯股份有限公司 Video telephone, system and method for realizing word message sending function during video communication.
CN101667188A (en) * 2009-07-24 2010-03-10 刘雪英 Method and system for leaving audio/video messages and comments on blog
US20130055101A1 (en) * 2011-08-30 2013-02-28 Google Inc. System and Method for Tagging Belongings in Photos and Posts
TWI542204B (en) * 2012-09-25 2016-07-11 圓剛科技股份有限公司 Multimedia comment system and multimedia comment method
CN104572883B (en) * 2014-12-22 2019-01-25 东软集团股份有限公司 The implementation method and device of clue between quickly revert and comment


Similar Documents

Publication Publication Date Title
CN109525884B (en) Video sticker adding method, device, equipment and storage medium based on split screen
US20200195980A1 (en) Video information processing method, computer equipment and storage medium
CN109348299A (en) Comment on answering method, device, equipment and storage medium
CN112422831A (en) Video generation method and device, computer equipment and storage medium
CN111970571B (en) Video production method, device, equipment and storage medium
JP7395070B1 (en) Video processing methods and devices, electronic equipment and computer-readable storage media
CN109525896A (en) Comment on answering method, device, equipment and storage medium
US10922479B2 (en) Method and electronic device for creating an electronic signature
CN111694603B (en) Screen sharing method, device, computer equipment and storage medium
CN109348155A (en) Video recording method, device, computer equipment and storage medium
CN109743635B (en) Comment reply method, device, equipment and storage medium
CN111752535A (en) Web page development method and device, computer equipment and readable storage medium
CN109413352B (en) Video data processing method, device, equipment and storage medium
CN114253449A (en) Screen capturing method, device, equipment and medium
CN114998102A (en) Image processing method and device and electronic equipment
CN112182692B (en) Building project file processing method and device, computer equipment and storage medium
CN114564921A (en) Document editing method and device
CN112995770B (en) Video playing method and device, storage medium and computer equipment
CN114827737A (en) Image generation method and device and electronic equipment
CN114564134A (en) Application icon display method and device
CN114579233A (en) Desktop deformer display method and device, electronic equipment and storage medium
CN109525886B (en) Method, device and equipment for controlling video playing speed and storage medium
CN111291256A (en) Personalized homepage generation method, device, electronic device and storage medium
CN111047922A (en) Pronunciation teaching method, device, system, computer equipment and storage medium
CN109327732B (en) Multimedia file playing control method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220921

Address after: Room 228, 2nd Floor, Building 1, Yard 23, North Third Ring West Road, Haidian District, Beijing 100089

Patentee after: Miaozhen Tick (Beijing) Network Technology Co.,Ltd.

Address before: Room 2-5-9b, 8th floor, building 2, No. 48, Zhichun Road, Haidian District, Beijing 100089

Patentee before: BEIJING MICROLIVE VISION TECHNOLOGY Co.,Ltd.