CN113301389B - Comment processing method and device for generating video

Comment processing method and device for generating video

Info

Publication number
CN113301389B
CN113301389B (application CN202110547955.9A)
Authority
CN
China
Prior art keywords
video
pictures
content
total number
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110547955.9A
Other languages
Chinese (zh)
Other versions
CN113301389A (en)
Inventor
郑洪刚
刘伟科
李同猛
韩卫召
沈俊杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Wodong Tianjun Information Technology Co Ltd filed Critical Beijing Wodong Tianjun Information Technology Co Ltd
Priority to CN202110547955.9A
Publication of CN113301389A
Application granted
Publication of CN113301389B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY; H04 ELECTRIC COMMUNICATION TECHNIQUE; H04N PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/235 Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/234336 Reformatting operations of video signals by media transcoding, e.g. video is transformed into a slideshow of still pictures or audio is converted into text
    • H04N21/4312 Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/47815 Electronic shopping
    • H04N21/4884 Data services, e.g. news ticker for displaying subtitles

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a comment processing method and device for generating a video, and relates to the field of computer technology. One embodiment of the method comprises: obtaining comment data for generating a video, and splitting the comment data into text content, image content and/or video content; calculating the total number of frame images from the display duration of the video and a preset number of frame images per second, and converting the text content, image content and/or video content respectively into that total number of pictures; and merging the text content pictures, image content pictures and/or video content pictures that have the same number to obtain the total number of frame images, which are then integrated into the comment video. In this embodiment, an item's comment data (text, pictures and video) is synthesized into video form to be shown to consumers during the live broadcast, which enriches the live content and helps consumers better understand how the item feels in use.

Description

Comment processing method and device for generating video
Technical Field
The invention relates to the technical field of computers, in particular to a comment processing method and device for generating a video.
Background
Live-stream selling has become a new sales trend, and the demand for richer, more complete live content keeps growing. When an anchor promotes goods during a live broadcast, the main content is an on-site presentation of the item's basic information and usage effect. As for the item's after-sale review content, it is either not mentioned at all, only described orally, or published as a link to the item's reviews.
Consumers commonly consult other buyers' reviews before shopping, so existing live content has the following shortcomings with respect to item review information: if review information is not mentioned, consumers cannot obtain this reference information; if it is only described orally, consumers who enter and leave the live room at unpredictable times may not receive it, and orally delivered information is hard for them to absorb completely; and if a review link is published, consumers must jump to another window, which makes for unfriendly interaction.
Disclosure of Invention
In view of this, embodiments of the present invention provide a comment processing method and apparatus for generating a video, which can at least address the problem that consumers currently find it difficult to obtain an item's comment information during a live broadcast.
To achieve the above object, according to an aspect of an embodiment of the present invention, there is provided a comment processing method for generating a video, including:
obtaining comment data used for generating a video, and splitting the comment data into text content, image content and/or video content;
calculating the total number of frame images according to the display duration of the video and a preset number of frame images per second, and converting the text content, the image content and/or the video content respectively into that total number of pictures; wherein the pictures are numbered in increasing order according to the order in which they are generated;
and combining the text content pictures, the image content pictures and/or the video content pictures with the same number to obtain the total number of frame images, and integrating to obtain the comment video.
Optionally, the converting the text content, the image content, and/or the video content into the total number of pictures respectively includes:
calculating the sub-text content to be displayed by each frame image according to the size of the area occupied by text content in the frame image, converting each sub-text content into a picture of that size, and obtaining the total number of consecutive text content pictures according to the order of the sub-text contents; and/or
adjusting the size of each image content according to the size of the area occupied by image content in the frame image, adjusting the count of the adjusted image contents, and obtaining the total number of consecutive image content pictures in combination with the order of the image contents; wherein the image content is located within the picture; and/or
converting the video content into the total number of pictures, and adjusting the size of each picture according to the size of the area occupied by video content in the frame image, to obtain the total number of consecutive video content pictures.
Optionally, the adjusting the number of each adjusted image content includes:
judging whether the number of image contents is less than the total number;
if it is less than the total number, copying each adjusted image content according to the quotient of the total number and the number of image contents, to obtain the total number of image content pictures; or
if it is greater than the total number, extracting the total number of image content pictures according to the order of the image contents.
Optionally, the converting the video content into the total number of pictures includes:
judging the magnitude relation between the length of the video content and the display duration;
if the length is greater than the display duration, capturing video frames of the display duration from the start of the video content onward, and converting the video frames into the total number of pictures; or
if the length is equal to the display duration, converting the video content into the total number of pictures; or
if the length is less than the display duration, converting each second of video frames into the preset number of pictures per second, calculating a first tolerance according to the length and the display duration, duplicating pictures step by step according to the first tolerance, and inserting each copy immediately after the picture it was copied from, until the total number of pictures is reached.
Optionally, the calculating a first tolerance according to the length and the display duration includes:
and determining a difference value between the display duration and the length, calculating a quotient of the length and the difference value, and if a remainder exists, rounding the quotient downwards to obtain the first tolerance.
Optionally, the capturing of video frames of the display duration from the start of the video content onward and converting them into the total number of pictures includes:
judging the magnitude relation between the length of the video content and a first display duration; wherein the first display duration is greater than the display duration;
if the length is greater than the first display duration, capturing video frames of the display duration from the start of the video content onward, and converting the video frames into the total number of pictures; or
if the length is less than or equal to the first display duration, converting each second of video frames into the preset number of pictures per second, calculating a second tolerance according to the length and the display duration, and removing pictures step by step according to the second tolerance until the total number of pictures is reached.
Optionally, the converting the video content into the total number of pictures includes:
determining the length of the video content, calculating the quotient of the total number and the length, and converting each second of video frames into that quotient of pictures;
if the quotient was obtained by rounding up, calculating a third tolerance according to the length and the display duration, and removing pictures step by step according to the third tolerance until the total number of pictures is reached; or
if the quotient was obtained by rounding down, duplicating pictures step by step according to the third tolerance, and inserting each copy immediately after the picture it was copied from, until the total number of pictures is reached.
Optionally, the upper left of each frame image shows a text content picture, the upper right shows an image content picture, and the lower part shows a video content picture.
Optionally, the areas in which the text content and the image content are located in the frame image are of the same size.
Optionally, the obtaining of comment data for generating a video includes:
receiving input of the identification of the target object, inquiring comment data of the target object based on the identification, and further screening out high-quality comment data marked as high quality.
Optionally, after obtaining the comment video, the method further includes:
responding to the live broadcast operation of the target object, and triggering to display the comment video; wherein the triggering presentation is performed by any one of anchor selection, user selection or automatic triggering.
Optionally, the triggering and displaying the comment video further includes:
covering a transparent layer on a live broadcast interface, and displaying the comment video in a small window form on the transparent layer; and responding to the closing operation of the comment video, and closing the transparent layer; or
Dividing a display screen interface into two parts, wherein one part is used for displaying live content, and the other part is used for displaying the comment video; and in response to a closing operation of the comment video, closing the other part to display only live content on a display screen interface.
Optionally, after obtaining the comment video, the method further includes:
and determining a user identifier corresponding to the high-quality comment data, and naming the comment video by using the identifier and the user identifier.
To achieve the above object, according to another aspect of an embodiment of the present invention, there is provided a comment processing apparatus for generating a video, including:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring comment data used for generating a video and splitting the comment data into text content, image content and/or video content;
the conversion module is used for calculating the total number of the frame images according to the display duration of the video and the number of the frame images preset per second, and respectively converting the text content, the image content and/or the video content into the total number of images; the number of the pictures is sequentially increased according to the picture generation sequence;
and the integration module is used for merging the text content pictures, the image content pictures and/or the video content pictures with the same number to obtain the total number of frame images and integrating the frame images to obtain the comment video.
Optionally, the conversion module is configured to:
calculating the sub-text content displayed by each frame image according to the size of the area where the text content in the frame image is located, converting each sub-text content into a picture with the size, and obtaining the total number of continuous text content pictures according to the sequence of the sub-text contents; and/or
Adjusting the size of each image content according to the size of the area where the image content in the frame image is located, adjusting the quantity of each adjusted image content, and obtaining the total quantity of continuous image content pictures by combining the sequence of the image content; wherein the image content is located within the picture; and/or
And converting the video content into the total number of pictures, and adjusting the size of each picture according to the size of the area where the video content is located in the frame image to obtain the total number of continuous video content pictures.
Optionally, the conversion module is configured to: judging whether the number of image contents is less than the total number;
if it is less than the total number, copying each adjusted image content according to the quotient of the total number and the number of image contents, to obtain the total number of image content pictures; or
if it is greater than the total number, extracting the total number of image content pictures according to the order of the image contents.
Optionally, the conversion module is configured to:
judging the magnitude relation between the length of the video content and the display duration;
if the length is greater than the display duration, capturing video frames of the display duration from the start of the video content onward, and converting the video frames into the total number of pictures; or
if the length is equal to the display duration, converting the video content into the total number of pictures; or
if the length is less than the display duration, converting each second of video frames into the preset number of pictures per second, calculating a first tolerance according to the length and the display duration, duplicating pictures step by step according to the first tolerance, and inserting each copy immediately after the picture it was copied from, until the total number of pictures is reached.
Optionally, the conversion module is configured to: and determining a difference value between the display duration and the length, calculating a quotient of the length and the difference value, and if a remainder exists, rounding the quotient downwards to obtain the first tolerance.
Optionally, the conversion module is configured to: judging the magnitude relation between the length of the video content and the first display duration; wherein the first display duration is greater than the display duration;
if the length is greater than the first display duration, capturing video frames of the display duration from the start of the video content onward, and converting the video frames into the total number of pictures; or
if the length is less than or equal to the first display duration, converting each second of video frames into the preset number of pictures per second, calculating a second tolerance according to the length and the display duration, and removing pictures step by step according to the second tolerance until the total number of pictures is reached.
Optionally, the conversion module is configured to: determining the length of the video content, calculating the quotient of the total number and the length, and converting each second of video frames into that quotient of pictures;
if the quotient was obtained by rounding up, calculating a third tolerance according to the length and the display duration, and removing pictures step by step according to the third tolerance until the total number of pictures is reached; or
if the quotient was obtained by rounding down, duplicating pictures step by step according to the third tolerance, and inserting each copy immediately after the picture it was copied from, until the total number of pictures is reached.
Optionally, the upper left of each frame image shows a text content picture, the upper right shows an image content picture, and the lower part shows a video content picture.
Optionally, the areas in which the text content and the image content are located in the frame image are of the same size.
Optionally, the obtaining module is configured to: receiving an input identifier of the target object, querying comment data of the target object based on the identifier, and further screening out the comment data marked as high quality.
Optionally, the display device further comprises a display module, configured to: responding to the live broadcast operation of the target object, and triggering and displaying the comment video; wherein the triggering presentation is performed by any one of anchor selection, user selection or automatic triggering.
Optionally, the display module is further configured to:
covering a transparent layer on a live broadcast interface, and displaying the comment video in a small window form on the transparent layer; and responding to the closing operation of the comment video, and closing the transparent layer; or
Dividing a display screen interface into two parts, wherein one part is used for displaying live content, and the other part is used for displaying the comment video; and in response to a closing operation of the comment video, closing the other part to display only live content on a display screen interface.
Optionally, the system further includes a naming module, configured to:
and determining a user identifier corresponding to the high-quality comment data, and naming the comment video by using the identifier and the user identifier.
To achieve the above object, according to still another aspect of embodiments of the present invention, there is provided a comment processing electronic device for generating a video.
The electronic device of the embodiment of the invention comprises: one or more processors; a storage device, configured to store one or more programs, which when executed by the one or more processors, cause the one or more processors to implement any of the above comment processing methods for generating a video.
To achieve the above object, according to a further aspect of the embodiments of the present invention, there is provided a computer-readable medium on which a computer program is stored, the program implementing any of the above comment processing methods for generating a video when executed by a processor.
According to the scheme provided by the invention, the above embodiment has the following advantages or beneficial effects: the comment data of the items being live-streamed are collected and converted, and high-quality comment videos suitable for direct playback in the live room are finally output, which consumers can watch as needed; during the live broadcast of an item, even if the anchor does not mention the comment data, consumers can still receive the item's high-quality comment data of reference value, without having to jump to another window, so the interaction is friendly; and the conversion of the video content can be carried out in several ways, making the conversion diversified.
Further effects of the above-mentioned non-conventional alternatives will be described below in connection with the embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
fig. 1 is a main flow diagram of a comment processing method for generating a video according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a frame image;
FIG. 3 is a schematic diagram of a process for converting video content into pictures;
fig. 4 is a schematic flow chart of converting video content into pictures based on fig. 3;
FIG. 5 is a schematic diagram of another process for converting video content into pictures;
FIG. 6 is a diagram of three groups of pictures integrated into a complete frame image;
FIG. 7 is a schematic flow chart of an alternative comment processing method for generating a video in accordance with an embodiment of the present invention;
FIG. 8 is a schematic diagram of the main modules of a comment processing apparatus for generating a video according to an embodiment of the present invention;
FIG. 9 is an exemplary system architecture diagram in which embodiments of the present invention may be applied;
FIG. 10 is a schematic block diagram of a computer system suitable for use with a mobile device or server implementing an embodiment of the invention.
Detailed Description
Exemplary embodiments of the invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The words involved in the present scheme are explained as follows:
1. RESTful API: an architectural style of network communication based on the HTTP protocol.
2. BufferedImage class: the class used to manipulate pictures in the Java platform. It is an image class with a data buffer whose main functions include loading a picture into memory, obtaining a drawing (Graphics) object, scaling the picture and selecting image smoothing; it is generally used for resizing pictures, graying them, and setting transparency or opacity.
3. JavaCV class library: a class library for manipulating video in the Java ecosystem.
4. IO streams: the class libraries for operating on streamed data in the Java platform.
It should be noted that the scheme aims to convert and integrate text, pictures, video data and the like into one video, through which a viewer can intuitively obtain more information, so the scheme is applicable not only to the conversion of live-stream scenarios and item comment information but also to other areas. For example, in a trending-news scenario, the videos, pictures and text of a news story and of its related comment data can be integrated into one video, making it convenient for viewers to learn both the news content and the public opinion about it.
Referring to fig. 1, shown is a main flowchart of a comment processing method for generating a video according to an embodiment of the present invention, including the following steps:
s101: obtaining comment data used for generating a video, and splitting the comment data into text content, image content and/or video content;
s102: calculating the total number of frame images according to the display duration of the video and the number of the preset frame images per second, and respectively converting the text content, the image content and/or the video content into the total number of images; the number of the pictures is sequentially increased according to the picture generation sequence;
s103: and combining the text content pictures, the image content pictures and/or the video content pictures with the same number to obtain the total number of frame images, and integrating to obtain the comment video.
In the above embodiment, for step S101, the item identifier of the target item of the video to be generated, which is input by the merchant, is received and then stored in the memory in the form of a JAVA object set, that is, the item identifier is input to the acquisition and conversion system. The acquisition and conversion system first retrieves the comment data, specifically, the high-quality comment data, of the target item according to the item identifier, specifically, refer to the description shown in subsequent fig. 3.
First, the comment data is divided by type (text, image and video) to obtain the text content, image content and video content. The specific implementation code is as follows:
Figure BDA0003074332090000101 (implementation code provided as an embedded image)
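The embedded code itself is not reproduced in the text, so the following is a minimal, illustrative Java sketch of the splitting step; the CommentItem structure, its field names and the splitter class are assumptions for illustration, not the patent's actual classes.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical data holder; the patent's actual comment object is not disclosed.
class CommentItem {
    enum Type { TEXT, IMAGE, VIDEO }
    Type type;
    Object payload;          // String for TEXT, file path or URL for IMAGE and VIDEO
    CommentItem(Type type, Object payload) { this.type = type; this.payload = payload; }
}

class CommentSplitter {
    // Split one piece of comment data into text, image and video content by type.
    static void split(List<CommentItem> comment,
                      StringBuilder textContent,
                      List<Object> imageContent,
                      List<Object> videoContent) {
        for (CommentItem item : comment) {
            switch (item.type) {
                case TEXT:  textContent.append(item.payload); break;
                case IMAGE: imageContent.add(item.payload);   break;
                case VIDEO: videoContent.add(item.payload);   break;
            }
        }
    }
}
```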
for step S102, assuming that the display duration of the generated video is 10 seconds, and 24 frames of images are set in 1 second, 240 frames of images need to be synthesized, and the size of each frame of image is 240 × 270. The final comment video generated by the scheme adopts a mode of displaying text content at the upper left, displaying image content at the upper right and displaying video content at the lower part, wherein the size of a text part is 120 × 90, the size of a picture part is 120 × 90 and the size of a video part is 240 × 180, and is specifically shown in fig. 2.
1. Converting the text content into a text content picture form, wherein the specific implementation codes are as follows:
Figure BDA0003074332090000102, Figure BDA0003074332090000111 (implementation code provided as embedded images)
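As a hedged illustration of this step as described below (each frame's sub-text rendered into a 120 × 90 jpg-ready picture via BufferedImage, numbered in generation order), a minimal Java sketch might look as follows; the even split of characters per frame is an assumption, since the patent computes the sub-text from the size of the text area.

```java
import java.awt.Color;
import java.awt.Font;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.List;

class TextToPictures {
    // Render the sub-text of each frame into a 120 x 90 picture, numbered 1..240 by generation order.
    static List<BufferedImage> convert(String text, int totalFrames) {
        List<BufferedImage> pictures = new ArrayList<>();
        // Naive split: each frame shows an equal share of the text
        // (the patent derives this from the size of the text area instead).
        int charsPerFrame = Math.max(1, (int) Math.ceil(text.length() / (double) totalFrames));
        for (int i = 0; i < totalFrames; i++) {
            int start = Math.min(i * charsPerFrame, text.length());
            int end = Math.min(start + charsPerFrame, text.length());
            String sub = text.substring(start, end);

            BufferedImage img = new BufferedImage(120, 90, BufferedImage.TYPE_INT_RGB);
            Graphics2D g = img.createGraphics();        // Graphics object drawn on the image
            g.setColor(Color.WHITE);
            g.fillRect(0, 0, 120, 90);
            g.setColor(Color.BLACK);
            g.setFont(new Font("SansSerif", Font.PLAIN, 12));
            g.drawString(sub, 5, 45);                   // single line for brevity; real code would wrap text
            g.dispose();
            pictures.add(img);                          // index i corresponds to numeric label i + 1
        }
        return pictures;
    }
}
```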
the sub-text content that each frame of image can exhibit is calculated to reasonably distribute the text content over the 240 frames of images. Converting the sub-text content on each frame of image into a picture with the attribute of size, format and the like meeting the requirement through a buffer image class library, wherein the size requirement of the picture is 120 × 90, and a jpg format is adopted. The buffer image is an implementation class thereof, such as public abstract Graphics getGraphics (); obtaining Graphics objects drawn on the image.
In addition, continuity between pictures needs to be guaranteed, the continuity of the pictures is guaranteed through the generation sequence of the pictures, the pictures are stored in the JAVA object set according to the generation sequence, and the pictures are taken out according to the generation sequence. Further, in the picture generation order, a numeric flag is set for each picture, and the numeric flag is sequentially incremented from 1 to 240.
2. Converting the image content into an image content picture form, wherein the specific implementation codes are as follows:
Figure BDA0003074332090000112 (implementation code provided as an embedded image)
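A possible Java sketch of this step, resizing each review picture to 120 × 90 with BufferedImage and then amplifying or sampling the count to reach the required total, is shown below; the helper class and the extra padding for totals that do not divide evenly are assumptions for illustration.

```java
import java.awt.Graphics2D;
import java.awt.Image;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import javax.imageio.ImageIO;

class ImagesToPictures {
    // Resize each review picture to 120 x 90, then replicate (or sample) so that
    // exactly `total` pictures are produced, preserving the original order.
    static List<BufferedImage> convert(List<File> sources, int total) throws IOException {
        List<BufferedImage> resized = new ArrayList<>();
        for (File f : sources) {
            BufferedImage src = ImageIO.read(f);
            BufferedImage dst = new BufferedImage(120, 90, BufferedImage.TYPE_INT_RGB);
            Graphics2D g = dst.createGraphics();
            Image scaled = src.getScaledInstance(120, 90, Image.SCALE_SMOOTH);
            g.drawImage(scaled, 0, 0, null);
            g.dispose();
            resized.add(dst);
        }
        List<BufferedImage> out = new ArrayList<>();
        int n = resized.size();
        if (n <= total) {
            int copies = total / n;                         // e.g. 240 / 8 = 30 copies of each picture
            for (BufferedImage img : resized)
                for (int i = 0; i < copies; i++) out.add(img);
            while (out.size() < total) out.add(resized.get(n - 1)); // assumed pad when total is not divisible
        } else {
            for (int i = 0; i < total; i++)                 // sample `total` pictures in order
                out.add(resized.get(i * n / total));
        }
        return out;
    }
}
```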
firstly, editing the pictures through the buffer image class library to enable the attributes such as the size, the format and the like of the pictures to meet the requirements, and then reasonably amplifying or deleting the number of each picture to enable the total number of the pictures to be the total number of the frame images and ensure the continuity of the pictures.
Generally, the number of pictures in one comment data is limited, and generally does not exceed 10, so that the number needs to be amplified in an equal ratio when each image content is converted into a frame image. For example, there are 8 pictures in the comment data, and for any one picture, the size and format of the picture are processed first, the size is 120 × 90, and the format is jpg, and then 29 pictures are copied, that is, the number of each picture is enlarged to 30, and the total number of the pictures meets the requirement of 240.
However, in some cases, it is necessary to delete the picture, and it is assumed that 1 frame image is set every second, that is, 10 images are required in total. And the comment data has 11 pictures, and 10 pictures are preferably extracted according to the sequence of the image contents in consideration of the characteristic that the pictures do not have continuity.
It should be noted that the above process may be to increase or extract the number of picture contents, and then perform processing on the picture contents according to attributes such as size and format.
3. Converting the video content into a video content picture, wherein the specific implementation codes are as follows:
Figure BDA0003074332090000121 (implementation code provided as an embedded image)
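The following is a simplified Java sketch of this step using JavaCV's FFmpegFrameGrabber and Java2DFrameConverter; it assumes the source video already plays at the preset frame rate, while the embodiments described next handle other lengths.

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.List;
import org.bytedeco.javacv.FFmpegFrameGrabber;
import org.bytedeco.javacv.Frame;
import org.bytedeco.javacv.Java2DFrameConverter;

class VideoToPictures {
    // Grab the first `seconds` seconds of the review video and turn them into
    // 240 x 180 pictures (seconds * fps frames in total).
    static List<BufferedImage> convert(String videoPath, int seconds, int fps) throws Exception {
        List<BufferedImage> pictures = new ArrayList<>();
        FFmpegFrameGrabber grabber = new FFmpegFrameGrabber(videoPath);
        Java2DFrameConverter converter = new Java2DFrameConverter();
        grabber.start();
        int wanted = seconds * fps;                     // e.g. 10 * 24 = 240 pictures
        Frame frame;
        while (pictures.size() < wanted && (frame = grabber.grabImage()) != null) {
            BufferedImage src = converter.convert(frame);
            BufferedImage dst = new BufferedImage(240, 180, BufferedImage.TYPE_INT_RGB);
            Graphics2D g = dst.createGraphics();
            g.drawImage(src, 0, 0, 240, 180, null);     // resize to the video region of the frame image
            g.dispose();
            pictures.add(dst);
        }
        grabber.stop();
        return pictures;
    }
}
```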
10 s of video frames are extracted from the video content via the JavaCV class library, the video frames are then converted into 240 pictures via the BufferedImage class, and the pictures' size, format and other attributes are edited to meet the requirements while the continuity of the pictures is ensured.
In practice, the length of the video content may not be exactly 10 s, and several cases must be considered:
embodiment one, see the description shown in fig. 3:
s301: judging the magnitude relation between the length of the video content and the display duration;
s302: if the length is greater than the display duration, capturing video frames of the display duration from the start of the video content onward, and converting the video frames into the total number of pictures;
s303: if the length is equal to the display duration, converting the video content into the total number of pictures;
s304: if the length is less than the display duration, converting each second of video frames into the preset number of pictures per second, and calculating a first tolerance according to the length and the display duration;
s305: duplicating pictures step by step according to the first tolerance, and inserting each copy immediately after the picture it was copied from, until the total number of pictures is reached.
If the length t of the video content is greater than 10 s (only an example), then to preserve the continuity of the video content, 10 s of video frames are preferably captured from the start of the video onward by clipping, and these are converted into 240 pictures of size 240 × 180 in jpg format. If t equals 10 s, the video frames are directly converted into t × 24 jpg pictures of size 240 × 180. If t is less than 10 s, the video frames are preferably first converted into t × 24 jpg pictures of size 240 × 180, and the padding interval (i.e. the first tolerance) is then calculated as n = t × 24 / ((10 − t) × 24) = t / (10 − t); if n has a fractional part it is discarded and only the integer part is kept. Every n pictures, the current picture is copied once and inserted immediately after it; copying stops once (10 − t) × 24 pictures have been copied in total, and all the pictures are then used for the subsequent video conversion.
It should be noted that the video content may also be converted into t × 24 pictures first, and the format and size conversion performed on all the pictures only after the number of supplementary pictures has been calculated and the copies made.
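A minimal sketch of this padding logic (every n-th picture duplicated, with n the first tolerance rounded down) could look like the following; the trailing pad for rounding slack is an assumption rather than part of the patent text.

```java
import java.util.ArrayList;
import java.util.List;

class PicturePadder {
    // If the video length t (seconds) is shorter than the display duration (10 s here),
    // duplicate every n-th picture and insert the copy right after it until `total` is reached.
    static <T> List<T> pad(List<T> pictures, int total) {
        int have = pictures.size();                     // t * 24 pictures
        int missing = total - have;                     // (10 - t) * 24 pictures to add
        if (missing <= 0) return new ArrayList<>(pictures);
        int n = Math.max(1, have / missing);            // first tolerance, rounded down
        List<T> out = new ArrayList<>();
        int copied = 0;
        for (int i = 0; i < have; i++) {
            out.add(pictures.get(i));
            if (copied < missing && (i + 1) % n == 0) {
                out.add(pictures.get(i));               // copy inserted right after its source
                copied++;
            }
        }
        while (out.size() < total) out.add(pictures.get(have - 1)); // assumed pad for rounding slack
        return out;
    }
}
```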
Example two, see the description shown in fig. 4, mainly for the case where t is greater than 10 s:
s401: judging the magnitude relation between the length of the video content and a first display duration; wherein the first display duration is greater than the display duration;
s402: if the length is greater than the first display duration, capturing video frames of the display duration from the start of the video content onward, and converting the video frames into the total number of pictures;
s403: if the length is less than or equal to the first display duration, converting each second of video frames into the preset number of pictures per second;
s404: calculating a second tolerance according to the length and the display duration, and removing pictures step by step according to the second tolerance until the total number of pictures is reached.
In actual operation the video content may be too long. If the length t of the video content is greater than 20 s (only an example), then to preserve the continuity of the video content, 10 s of video frames may be captured from the start onward by clipping and converted into 240 pictures of size 240 × 180 in jpg format. If the length t of the video content is between 10 s and 20 s, the video content is preferably converted into t × 24 jpg pictures of size 240 × 180, and the removal interval (i.e. the second tolerance) is then calculated as n = t × 24 / ((t − 10) × 24) = t / (t − 10); if n is fractional it is rounded down. One picture is removed every n pictures, (t − 10) × 24 pictures are removed in total, and the remaining pictures are the ones used for the subsequent conversion.
It should be noted that the video content may also be converted into t × 24 pictures first, and the format and size conversion performed on the remaining pictures only after the number of pictures to remove has been calculated and the removal completed.
Further, both the first tolerance and the second tolerance are rounded down. Taking the second tolerance as an example, if rounding up were used, more pictures than needed could still remain after the removal pass, whereas with rounding down the required count is reached before the end of the picture sequence, so rounding down is preferred.
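Correspondingly, a sketch of the removal logic (every n-th picture dropped, with n the second tolerance rounded down as just described) might be:

```java
import java.util.ArrayList;
import java.util.List;

class PictureTrimmer {
    // If the video length t is longer than the display duration, drop every n-th picture
    // until only `total` pictures remain (n = second tolerance, rounded down).
    static <T> List<T> trim(List<T> pictures, int total) {
        int have = pictures.size();                     // t * 24 pictures
        int excess = have - total;                      // (t - 10) * 24 pictures to remove
        if (excess <= 0) return new ArrayList<>(pictures);
        int n = Math.max(1, have / excess);             // second tolerance, rounded down
        List<T> out = new ArrayList<>();
        int removed = 0;
        for (int i = 0; i < have; i++) {
            if (removed < excess && (i + 1) % n == 0) { removed++; continue; } // drop this picture
            out.add(pictures.get(i));
        }
        while (out.size() > total) out.remove(out.size() - 1);     // assumed trim for rounding slack
        return out;
    }
}
```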
Example three, see description shown in fig. 5:
s501: determining the length of the video content, calculating the quotient of the total number and the length, and converting each second of video frames into that quotient of pictures;
s502: if the quotient was obtained by rounding up, calculating a third tolerance according to the length and the display duration, and removing pictures step by step according to the third tolerance until the total number of pictures is reached;
s503: if the quotient was obtained by rounding down, duplicating pictures step by step according to the third tolerance, and inserting each copy immediately after the picture it was copied from, until the total number of pictures is reached.
Unlike the foregoing fig. 3 and fig. 4, this embodiment preferably converts each second of video frames in the video content into m pictures, where m = 240/t. If the video content is only 5 s, each second of video frames must be converted into 48 pictures, whereas if the video content is 20 s, each second of video frames is converted into 12 pictures. In some cases the division is not exact, for example when the video content is 7 s, m = 240/7 ≈ 34.3:
1) If rounding up is used, m = 35, so each second of video frames must be converted into 35 pictures; but since 35 × 7 = 245 > 240, pictures must then be removed. Specifically, the removal interval (i.e. the third tolerance) n = t / (10 − t) ≈ 2.3 is calculated and rounded down to 2; one picture is removed every 2 pictures until 5 pictures have been removed in total, and the remaining pictures are the ones used for the subsequent conversion.
2) If rounding down is used, m = 34, so each second of video frames is converted into 34 pictures; but since 34 × 7 = 238 < 240, pictures must be supplemented. Specifically, every 2 pictures the current picture is copied once and inserted immediately after it; copying stops once 2 pictures have been copied in total, and all the pictures are then used for the subsequent conversion.
For step S103, three groups of pictures are generated for the text content, the image content and the video content, each group containing 240 pictures, and each picture carries a numeric label increasing from 1 to 240. Based on the picture number, the text content picture, the image content picture and the video content picture with the same number are synthesized into one picture via the BufferedImage class, finally yielding 240 high-quality comment pictures whose continuity is ensured. Via the JavaCV class library, these 240 pictures are synthesized into video form to obtain the high-quality comment video, as shown in FIG. 6.
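A simplified Java sketch of this final step, merging same-numbered 120 × 90 text, 120 × 90 image and 240 × 180 video pictures into 240 × 270 frame images and recording them at 24 fps with JavaCV's FFmpegFrameRecorder, is given below; encoder options are omitted and the class name is illustrative.

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.util.List;
import org.bytedeco.javacv.FFmpegFrameRecorder;
import org.bytedeco.javacv.Java2DFrameConverter;

class CommentVideoBuilder {
    // Merge the i-th text (top-left), image (top-right) and video (bottom) pictures into
    // one 240 x 270 frame image, then record all frames into an mp4 at 24 fps.
    static void build(List<BufferedImage> text, List<BufferedImage> image, List<BufferedImage> video,
                      String outputPath) throws Exception {
        FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(outputPath, 240, 270);
        recorder.setFormat("mp4");
        recorder.setFrameRate(24);
        recorder.start();
        Java2DFrameConverter converter = new Java2DFrameConverter();
        for (int i = 0; i < text.size(); i++) {         // pictures with the same number are merged
            BufferedImage frame = new BufferedImage(240, 270, BufferedImage.TYPE_INT_RGB);
            Graphics2D g = frame.createGraphics();
            g.drawImage(text.get(i), 0, 0, null);       // upper left: text content picture
            g.drawImage(image.get(i), 120, 0, null);    // upper right: image content picture
            g.drawImage(video.get(i), 0, 90, null);     // bottom: video content picture
            g.dispose();
            recorder.record(converter.convert(frame));
        }
        recorder.stop();
        recorder.release();
    }
}
```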
With the method of this embodiment, an item's comment data is synthesized into video form from its text, pictures and video and shown to consumers during the live broadcast. This enriches the live content, serves consumers' habit of consulting review information before purchasing, helps them better understand how the item feels in use, removes purchasing doubts and promotes transactions on the item.
Referring to fig. 7, a schematic flow chart of an optional comment processing method for generating a video according to an embodiment of the present invention is shown, including the following steps:
s701: receiving input of an identifier of a target object, inquiring comment data of the target object based on the identifier, and further screening out high-quality comment data marked as high quality;
s702: the method comprises the steps of obtaining comment data used for generating a video, and splitting the comment data into text content, image content and/or video content;
s703: calculating the total number of the frame images according to the display duration of the video and the number of the frame images preset per second, and respectively converting the text content, the image content and/or the video content into the total number of images; the number of the pictures is sequentially increased according to the picture generation sequence;
s704: combining the text content pictures, the image content pictures and/or the video content pictures with the same number to obtain the total number of frame images and integrating the frame images to obtain a comment video;
s705: responding to the live broadcast operation of the target object, and triggering to display the comment video; wherein the triggering presentation is performed by any one of anchor selection, user selection or automatic triggering.
In the above embodiment, the descriptions of steps S702 to S704 shown in fig. 1 to 6 can be referred to, and are not repeated herein.
In the above embodiment, as for step S701, the purpose of the scheme is to generate a video that helps consumers learn about the item; the producer of the video is not the consumer but the merchant or the platform, and the consumer only has the ability to view it. The platform provides a RESTful API externally for receiving the item identifiers input by the merchant. An item identifier is the code of an item involved in the merchant's live broadcast and generally consists of a string of digits, so it can quickly and accurately retrieve the item's comment data from the item database; inputting several item identifiers at once is supported.
Because the amount of comment data is large, the comment data can be preprocessed and the high-quality comments screened out, so that valuable comments are provided to the user and the user's page turns and browsing time are reduced. It should be noted that high-quality comment data is not simple praise of the item but a truthful comment written from the standpoint of objective fact, accompanied by text, pictures and video, so that the item's advantages and shortcomings are reported factually.
Each platform defines its own review standards: the amount and content of the text, the number, size, clarity, format and content of the pictures, and the length, size, clarity, format and content of the videos all require manual review; only comment data that passes the review is marked as high quality, and the right of review normally rests with the platform. For example, platform A sets review standards for high-quality comments in advance and, through certain reward mechanisms, encourages buyers to publish high-quality comments after purchasing goods, which makes it convenient for more consumers to learn the item's real situation through the high-quality comment data.
The screened high-quality comment data take three forms: text, pictures and video. A correspondence between the target item's identifier and its high-quality comment data is established and temporarily stored in the acquisition and conversion system, so that subsequent programs can easily identify which item each piece of comment data belongs to.
It should be noted that the acquisition and conversion system temporarily stores the high-quality comment data awaiting conversion; since conversion is the key and also the time-consuming step, the data is preferably processed asynchronously, i.e. acquisition and conversion of the comment data are carried out separately, so that a problem occurring during data acquisition does not affect the data conversion process.
For step S705, there may be many comment videos for the same item, so they are distinguished by the user identifier to avoid confusion. After a comment video is generated, it is therefore named with the item identifier and the user identifier, e.g. "12345_flying higher.mp4", and can be retrieved by that name. The comment video is then output via IO streams to a designated storage location reserved in advance for comment videos, such as C:/online/show, so that it can be retrieved by item identifier when a later live broadcast needs to use it.
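As an illustration of this naming and output step (item identifier plus user identifier as the file name, written via IO streams to the reserved directory), a minimal Java sketch might be:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

class CommentVideoStore {
    // Name the comment video "itemId_userId.mp4" and copy it to the reserved
    // storage directory (C:/online/show in the example) via IO streams.
    static void store(File generatedVideo, String itemId, String userId) throws IOException {
        File target = new File("C:/online/show", itemId + "_" + userId + ".mp4");
        try (FileInputStream in = new FileInputStream(generatedVideo);
             FileOutputStream out = new FileOutputStream(target)) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
    }
}
```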
While the target item is being live-streamed, the anchor can trigger the comment-video play button in the live interface so that the comment video is played to all users in the live room. Alternatively, in the client interface the user chooses whether to play it, or it is triggered automatically; the specific embodiment is not limited, and one or more of these ways may be used in parallel.
Furthermore, to provide a better interaction mode and keep the display of the comment video from interfering with playback of the live interface, a transparent layer can be overlaid on the live interface and the comment video shown on it in a small window; if the user clicks to close the comment video, the transparent layer is closed at the same time. Alternatively, the display interface is divided into two parts, for example left and right or top and bottom, one showing the live content and the other showing the comment video; if the user chooses to close the comment video, the latter part is closed so that only the live content is displayed.
The method provided by the embodiment converts the comment data of the live item into the comment video with specified duration through acquisition and conversion, is suitable for being directly played in the live broadcasting room, and becomes live broadcasting content which can be directly watched by a consumer as required.
Aiming at the problem that consumers cannot easily obtain an item's comment information during the live broadcast, and therefore cannot effectively use high-quality comments on the item to support their purchase decisions, the embodiment of the invention provides a comment video generation scheme:
1. the method comprises the steps of collecting and converting comment data of live broadcasting articles, and finally outputting high-quality comment videos which are suitable for being directly played in a live broadcasting room, so that consumers can watch the comment videos as required;
2. in the live broadcast process of the article, even if the anchor does not refer to comment data, the consumer can receive the high-quality comment data with reference value of the article, and the method does not need to skip a window, and is friendly to interaction;
3. the conversion of the video content can be performed in various ways, and the conversion is diversified.
Referring to fig. 8, a schematic diagram illustrating main modules of a comment processing apparatus 800 for generating a video according to an embodiment of the present invention is shown, including:
an obtaining module 801, configured to obtain comment data used for generating a video, and split the comment data into text content, image content, and/or video content;
a conversion module 802, configured to calculate the total number of frame images according to the display duration of the video and the number of frame images preset per second, and respectively convert text content, image content, and/or video content into the total number of pictures; the number of the pictures is sequentially increased according to the picture generation sequence;
and an integration module 803, configured to merge the text content pictures, the image content pictures, and/or the video content pictures with the same number to obtain the total number of frame images, and integrate the frame images to obtain the comment video.
In the device for implementing the present invention, the conversion module 802 is configured to:
calculating the sub-text content displayed by each frame image according to the size of the area where the text content in the frame image is located, converting each sub-text content into a picture with the size, and obtaining the total number of continuous text content pictures according to the sequence of the sub-text contents; and/or
Adjusting the size of each image content according to the size of the area where the image content in the frame image is located, adjusting the quantity of each adjusted image content, and obtaining the total quantity of continuous image content pictures by combining the sequence of the image content; wherein the image content is located within the picture; and/or
And converting the video content into the total number of pictures, and adjusting the size of each picture according to the size of the area where the video content is located in the frame image to obtain the total number of continuous video content pictures.
In the device for implementing the present invention, the conversion module 802 is configured to:
judging whether the number of image contents is less than the total number;
if it is less than the total number, copying each adjusted image content according to the quotient of the total number and the number of image contents, to obtain the total number of image content pictures; or
if it is greater than the total number, extracting the total number of image content pictures according to the order of the image contents.
In the implementation apparatus of the present invention, the conversion module 802 is configured to:
judging the magnitude relation between the length of the video content and the display duration;
if the length is greater than the display duration, capturing video frames of the display duration from the start of the video content onward, and converting the video frames into the total number of pictures; or
if the length is equal to the display duration, converting the video content into the total number of pictures; or
if the length is less than the display duration, converting each second of video frames into the preset number of pictures per second, calculating a first tolerance according to the length and the display duration, duplicating pictures step by step according to the first tolerance, and inserting each copy immediately after the picture it was copied from, until the total number of pictures is reached.
In the device for implementing the present invention, the conversion module 802 is configured to:
and determining a difference value between the display duration and the length, calculating a quotient of the length and the difference value, and if a remainder exists, rounding the quotient downwards to obtain the first tolerance.
In the device for implementing the present invention, the conversion module 802 is configured to:
judging the magnitude relation between the length of the video content and the first display duration; wherein the first display duration is greater than the display duration;
if the length is greater than the first display duration, capturing video frames of the display duration from the start of the video content onward, and converting the video frames into the total number of pictures; or
if the length is less than or equal to the first display duration, converting each second of video frames into the preset number of pictures per second, calculating a second tolerance according to the length and the display duration, and removing pictures step by step according to the second tolerance until the total number of pictures is reached.
In the implementation apparatus of the present invention, the conversion module 802 is configured to:
determining the length of the video content, calculating the quotient of the total number and the length, and converting each second of video frames into that quotient of pictures;
if the quotient was obtained by rounding up, calculating a third tolerance according to the length and the display duration, and removing pictures step by step according to the third tolerance until the total number of pictures is reached; or
if the quotient was obtained by rounding down, duplicating pictures step by step according to the third tolerance, and inserting each copy immediately after the picture it was copied from, until the total number of pictures is reached.
In the implementation device of the invention, the upper left of each frame image shows the text content picture, the upper right shows the image content picture and the lower part shows the video content picture.
In the implementation device of the invention, the size of the area where the text content and the image content are located in the frame image is the same.
In the implementation apparatus of the present invention, the obtaining module 801 is configured to:
receiving input of the identification of the target object, inquiring comment data of the target object based on the identification, and further screening out high-quality comment data marked as high quality.
The device also comprises a display module used for:
responding to the live broadcast operation of the target object, and triggering and displaying the comment video; wherein the triggering presentation is performed by any one of anchor selection, user selection or automatic triggering.
In the implementation apparatus of the present invention, the display module is further configured to:
covering a live broadcast interface with a transparent layer, and displaying the comment video in a small window on the transparent layer; and, in response to an operation of closing the comment video, closing the transparent layer; or
dividing a display screen interface into two parts, one part displaying the live content and the other part displaying the comment video; and, in response to an operation of closing the comment video, closing the other part so that only the live content is displayed on the display screen interface.
The apparatus for implementing the invention further comprises a naming module configured to: determine the user identifier corresponding to the high-quality comment data, and name the comment video using the identifier and the user identifier.
In addition, the detailed implementation of the apparatus in the embodiment of the present invention has already been described in the method above, so it is not repeated here.
Fig. 9 shows an exemplary system architecture 900 in which embodiments of the invention may be applied, including terminal devices 901, 902, 903, a network 904 and a server 905 (by way of example only).
The terminal devices 901, 902, 903 may be various electronic devices having display screens and supporting web browsing, and installed with various communication client applications, and users may interact with the server 905 through the network 904 using the terminal devices 901, 902, 903 to receive or send messages and the like.
The network 904 is the medium used to provide communication links between the terminal devices 901, 902, 903 and the server 905. The network 904 may include various connection types, such as wired or wireless communication links, or fiber optic cables.
The server 905 may be a server providing various services, for example one that splits the comment data, converts it into pictures, merges the pictures into frame images, and integrates the frame images into a comment video.
It should be noted that the method provided by the embodiment of the present invention is generally executed by the server 905, and accordingly, the apparatus is generally disposed in the server 905.
It should be understood that the number of terminal devices, networks, and servers in fig. 9 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 10, a block diagram of a computer system 1000 suitable for use with a terminal device implementing an embodiment of the invention is shown. The terminal device shown in fig. 10 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 10, the computer system 1000 includes a Central Processing Unit (CPU) 1001 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 1002 or a program loaded from a storage section 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data necessary for the operation of the system 1000 are also stored. The CPU 1001, ROM 1002, and RAM 1003 are connected to each other via a bus 1004. An input/output (I/O) interface 1005 is also connected to bus 1004.
The following components are connected to the I/O interface 1005: an input section 1006 including a keyboard, a mouse, and the like; an output section 1007 including a display such as a cathode ray tube (CRT) or liquid crystal display (LCD), and a speaker; a storage section 1008 including a hard disk and the like; and a communication section 1009 including a network interface card such as a LAN card or a modem. The communication section 1009 performs communication processing via a network such as the Internet. A drive 1010 is also connected to the I/O interface 1005 as necessary. A removable medium 1011, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1010 as necessary, so that a computer program read from it is installed into the storage section 1008 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication part 1009 and/or installed from the removable medium 1011. The computer program performs the above-described functions defined in the system of the present invention when executed by a Central Processing Unit (CPU) 1001.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present invention may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor comprises an acquisition module, a conversion module and an integration module. Where the names of these modules do not in some cases constitute a limitation on the module itself, for example, an integrated module may also be described as an "integrated commentary video module".
As another aspect, the present invention also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by a device, cause the device to perform the following:
obtaining comment data used for generating a video, and splitting the comment data into text content, image content and/or video content;
calculating the total number of frame images according to the display duration of the video and the preset number of frame images per second, and respectively converting the text content, the image content and/or the video content into the total number of pictures; wherein the pictures are numbered sequentially in the order in which they are generated;
combining the text content pictures, the image content pictures and/or the video content pictures having the same number to obtain the total number of frame images, and integrating the frame images to obtain the comment video.
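By way of example only, the final integration step could be sketched as follows; the codec, the output file name, and the assumption that the merged frame images arrive as equal-sized BGR arrays are illustrative choices, not requirements of the text above.

```python
import cv2

def integrate_comment_video(frames, fps, duration_s, out_path="comment_video.mp4"):
    """Write duration * fps merged frame images (equal-sized BGR numpy arrays,
    one per picture number) into a single comment video file."""
    total = duration_s * fps                     # total number of frame images
    assert len(frames) == total, "expected one merged frame image per picture number"
    height, width = frames[0].shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))
    for frame in frames:
        writer.write(frame)
    writer.release()
    return out_path
```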
According to the technical solution of the embodiment of the present invention, the comment data of a live-broadcast item is collected and converted, and a high-quality comment video suitable for being played directly in the live broadcast room is output, so that consumers can watch it on demand. During the live broadcast of the item, consumers can still receive high-quality, reference-worthy comment data even if the anchor never mentions the comments; no window switching is required, so the interaction remains friendly; and the video content can be converted in a variety of ways.
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (16)

1. A comment processing method for generating a video, comprising:
obtaining comment data used for generating a video, and splitting the comment data into text content, image content and/or video content;
calculating the total number of frame images according to the display duration of the video and the preset number of frame images per second, and respectively converting the text content, the image content and/or the video content into the total number of pictures; wherein the pictures are numbered sequentially in the order in which they are generated;
combining the text content pictures, the image content pictures and/or the video content pictures having the same number to obtain the total number of frame images, and integrating the frame images to obtain a comment video; wherein each frame image comprises a text part, an image part and a video part that are independent of one another.
2. The method of claim 1, wherein converting the text content, the image content, and/or the video content into the total number of pictures respectively comprises:
calculating the sub-text content displayed in each frame image according to the size of the area where the text content is located in the frame image, converting each piece of sub-text content into a picture of that size, and obtaining the total number of consecutive text content pictures according to the order of the sub-text contents; and/or
adjusting the size of each image content according to the size of the area where the image content is located in the frame image, adjusting the quantity of the image contents, and obtaining the total number of consecutive image content pictures according to the order of the image contents; wherein the image content is located within the picture; and/or
converting the video content into the total number of pictures, and adjusting the size of each picture according to the size of the area where the video content is located in the frame image, to obtain the total number of consecutive video content pictures.
3. The method of claim 2, wherein the quantity adjusting for each image content comprises:
determining whether the number of image contents is less than the total number;
if the number of image contents is less than the total number, expanding the quantity of each image content according to the quotient of the total number and the number of image contents, to obtain the total number of image content pictures; or
if the number of image contents is greater than the total number, extracting the total number of image content pictures according to the order of the image contents.
4. The method of claim 2, wherein converting the video content into the total number of pictures comprises:
judging the size relationship between the length of the video content and the display duration;
if the length is greater than the display duration, capturing video frames covering the display duration starting from the beginning of the video content, and converting those video frames into the total number of pictures; or
if the length is equal to the display duration, converting the video content into the total number of pictures; or
if the length is less than the display duration, converting each second of video into the preset number of frame-image pictures, calculating a first tolerance according to the length and the display duration, copying pictures step by step according to the first tolerance, and inserting each copied picture at the position immediately behind and adjacent to the picture it was copied from, until the total number of pictures is reached.
5. The method of claim 4, wherein calculating the first tolerance according to the length and the display duration comprises:
determining the difference between the display duration and the length, calculating the quotient of the length and the difference, and, if a remainder exists, rounding the quotient down to obtain the first tolerance.
6. The method according to claim 4, wherein capturing the video frames covering the display duration starting from the beginning of the video content and converting the video frames into the total number of pictures comprises:
comparing the length of the video content with a first display duration; wherein the first display duration is greater than the display duration;
if the length is greater than the first display duration, capturing video frames covering the display duration starting from the beginning of the video content, and converting those video frames into the total number of pictures; or
if the length is less than or equal to the first display duration, converting each second of video into the preset number of frame-image pictures, calculating a second tolerance according to the length and the display duration, and eliminating pictures step by step according to the second tolerance until only the total number of pictures remains.
7. The method of claim 2, wherein converting the video content into the total number of pictures comprises:
determining the length of the video content, calculating the quotient of the total number and the length, and converting each second of video into that quotient of pictures;
if the quotient is rounded up, calculating a third tolerance according to the length and the display duration, and eliminating pictures step by step according to the third tolerance until only the total number of pictures remains; or
if the quotient is rounded down, copying pictures step by step according to the third tolerance, and inserting each copied picture at the position immediately behind and adjacent to the picture it was copied from, until the total number of pictures is reached.
8. The method according to any one of claims 2-7, wherein the upper left of each frame image shows a text content picture, the upper right shows an image content picture, and the lower part shows a video content picture.
9. The method according to any one of claims 2-7, characterized in that the areas in which the text content and the image content are located in the frame image are of the same size.
10. The method of claim 1, wherein obtaining comment data for generating a video comprises:
receiving an input identifier of a target object, querying the comment data of the target object based on the identifier, and screening out, from the queried comment data, the high-quality comment data marked as high quality.
11. The method of claim 10, further comprising, after obtaining the comment video:
responding to a live broadcast operation on the target object and triggering display of the comment video; wherein the display is triggered by any one of anchor selection, user selection, or automatic triggering.
12. The method of claim 11, wherein the triggering presentation of the commentary video further comprises:
covering a live broadcast interface with a transparent layer, and displaying the comment video in a small window on the transparent layer; and, in response to an operation of closing the comment video, closing the transparent layer; or
dividing a display screen interface into two parts, one part displaying live content and the other part displaying the comment video; and, in response to an operation of closing the comment video, closing the other part so that only the live content is displayed on the display screen interface.
13. The method of any of claims 10-12, further comprising, after obtaining the comment video: determining a user identifier corresponding to the high-quality comment data, and naming the comment video using the identifier and the user identifier.
14. A comment processing apparatus for generating a video, comprising:
an acquisition module, configured to obtain comment data used for generating a video and to split the comment data into text content, image content and/or video content;
a conversion module, configured to calculate the total number of frame images according to the display duration of the video and the preset number of frame images per second, and to respectively convert the text content, the image content and/or the video content into the total number of pictures; wherein the pictures are numbered sequentially in the order in which they are generated;
an integration module, configured to combine the text content pictures, the image content pictures and/or the video content pictures having the same number to obtain the total number of frame images, and to integrate the frame images to obtain a comment video; wherein each frame image comprises a text part, an image part and a video part that are independent of one another.
15. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-13.
16. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-13.
CN202110547955.9A 2021-05-19 2021-05-19 Comment processing method and device for generating video Active CN113301389B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110547955.9A CN113301389B (en) 2021-05-19 2021-05-19 Comment processing method and device for generating video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110547955.9A CN113301389B (en) 2021-05-19 2021-05-19 Comment processing method and device for generating video

Publications (2)

Publication Number Publication Date
CN113301389A CN113301389A (en) 2021-08-24
CN113301389B true CN113301389B (en) 2023-04-07

Family

ID=77322932

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110547955.9A Active CN113301389B (en) 2021-05-19 2021-05-19 Comment processing method and device for generating video

Country Status (1)

Country Link
CN (1) CN113301389B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114938473B (en) * 2022-05-16 2023-12-12 上海幻电信息科技有限公司 Comment video generation method and comment video generation device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108322837A (en) * 2018-01-10 2018-07-24 链家网(北京)科技有限公司 Video generation method based on picture and device
US10121187B1 (en) * 2014-06-12 2018-11-06 Amazon Technologies, Inc. Generate a video of an item
CN110309351A (en) * 2018-02-14 2019-10-08 阿里巴巴集团控股有限公司 Video image generation, device and the computer system of data object
CN111327960A (en) * 2020-03-05 2020-06-23 北京字节跳动网络技术有限公司 Article processing method and device, electronic equipment and computer storage medium
CN112291614A (en) * 2019-07-25 2021-01-29 北京搜狗科技发展有限公司 Video generation method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9066145B2 (en) * 2011-06-30 2015-06-23 Hulu, LLC Commenting correlated to temporal point of video data


Also Published As

Publication number Publication date
CN113301389A (en) 2021-08-24


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant