CN115623280A - Bullet screen generation method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN115623280A
CN115623280A (application CN202110796495.3A)
Authority
CN
China
Prior art keywords
sub
target
pattern
patterns
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110796495.3A
Other languages
Chinese (zh)
Inventor
张怡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Hode Information Technology Co Ltd
Original Assignee
Shanghai Hode Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Hode Information Technology Co Ltd filed Critical Shanghai Hode Information Technology Co Ltd
Priority to CN202110796495.3A
Publication of CN115623280A

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 — Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 — Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 — Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations; Client middleware
    • H04N21/44008 — Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/4355 — Processing of additional data involving reformatting operations of additional data, e.g. HTML pages on a television screen
    • H04N21/47 — End-user applications
    • H04N21/4788 — Supplemental services communicating with other users, e.g. chatting
    • H04N21/4884 — Data services for displaying subtitles

Abstract

The disclosure provides a bullet screen generation method and device, an electronic device, and a storage medium, and relates to the technical field of video files. The implementation scheme is as follows: acquire a target image, wherein the target image comprises a target pattern; acquire a plurality of sub-patterns of the target pattern and the position of each sub-pattern in the target pattern based on the image features of the target image; and, based on the plurality of sub-patterns, obtain a plurality of target bullet screens and a configuration position for each of them, wherein each target bullet screen corresponds to one or more of the sub-patterns, and wherein the configuration positions are configured to enable the target bullet screens to be combined into a combined bullet screen based on their corresponding configuration positions, the position of each target bullet screen in the combined bullet screen corresponding to the position of its one or more sub-patterns in the target pattern.

Description

Bullet screen generation method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of video file technology, and in particular, to a barrage generation method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
Background
With the development of internet and video technology, video playing has become a common form of everyday entertainment. By publishing text comments onto the display interface while watching a video file, users express their own reactions, so that bullet screens are displayed on the interface during playback. This gives viewers a sense of real-time interaction and livens the atmosphere of watching the video file. Meanwhile, the bullet screens that appear during playback help the video file attract more viewers and increase its popularity.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, the problems mentioned in this section should not be considered as having been acknowledged in any prior art, unless otherwise indicated.
Disclosure of Invention
The disclosure provides a bullet screen generating method and device, electronic equipment, a computer readable storage medium and a computer program product.
According to an aspect of the present disclosure, there is provided a bullet screen generating method, including: acquiring a target image, wherein the target image comprises a target pattern; acquiring a plurality of sub-patterns of the target pattern and the position of each of the plurality of sub-patterns in the target pattern based on the image features of the target image; and obtaining a plurality of target bullet screens and a configuration position of each of the plurality of target bullet screens based on the plurality of sub-patterns, wherein each of the plurality of target bullet screens corresponds to one or more sub-patterns of the plurality of sub-patterns, and wherein the configuration positions are configured to enable the target bullet screens to be combined into a combined bullet screen based on their corresponding configuration positions, wherein the position of each target bullet screen in the combined bullet screen corresponds to the position of its corresponding one or more sub-patterns in the target pattern.
According to another aspect of the present disclosure, there is also provided a bullet screen generating device, including: a first acquisition unit configured to acquire a target image including a target pattern; a second acquisition unit configured to acquire a plurality of sub-patterns of the target pattern and the position of each of the plurality of sub-patterns in the target pattern based on the image features of the target image; and a third acquisition unit configured to obtain a plurality of target bullet screens and a configuration position of each of the plurality of target bullet screens based on the plurality of sub-patterns, wherein each of the plurality of target bullet screens corresponds to one or more sub-patterns of the plurality of sub-patterns, and wherein the configuration positions are configured to enable the target bullet screens to be combined into a combined bullet screen based on their corresponding configuration positions, wherein the position of each target bullet screen in the combined bullet screen corresponds to the position of its corresponding one or more sub-patterns in the target pattern.
According to another aspect of the present disclosure, there is also provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores a computer program which, when executed by the at least one processor, implements a method according to the above.
According to another aspect of the present disclosure, there is also provided a non-transitory computer readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method according to the above.
According to another aspect of the present disclosure, there is also provided a computer program product comprising a computer program, wherein the computer program when executed by a processor implements the method according to the above.
According to one or more embodiments of the present disclosure, a plurality of target bullet screens and a configuration position for each of them are acquired according to the image features of a target image including a target pattern. Each target bullet screen corresponds to one or more of the sub-patterns obtained from the target pattern, and the target bullet screens can be combined into a combined bullet screen according to their configuration positions. The combined bullet screen has the shape of the target pattern, so the generated combined bullet screen is visually similar to the target pattern, which makes it interesting and improves the bullet screen effect.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the embodiments and, together with the description, serve to explain the exemplary implementations of the embodiments. The illustrated embodiments are for purposes of example only and do not limit the scope of the claims. Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements.
Fig. 1 shows a schematic flow diagram of a bullet screen generation method according to some embodiments of the present disclosure;
fig. 2 illustrates a flow chart of a process of acquiring a target image in a bullet screen generation method according to some embodiments of the present disclosure;
fig. 3A illustrates a schematic diagram of a target image in a bullet screen generation method according to some embodiments of the present disclosure;
fig. 3B illustrates a schematic diagram of a gray value distribution matrix of a target image in a bullet screen generation method according to some embodiments of the present disclosure;
fig. 4 shows a schematic flow diagram of a process of acquiring a plurality of sub-patterns in a target pattern and a position of each of the plurality of sub-patterns in the target pattern in a bullet screen generating method according to some embodiments of the present disclosure;
fig. 5 shows a schematic flow chart of a process of obtaining a target bullet screen in a bullet screen generating method according to some embodiments of the present disclosure;
fig. 6 illustrates an exemplary flow chart of a process of obtaining a target bullet screen from at least one matching bullet screen in a bullet screen generation method according to some embodiments of the present disclosure;
fig. 7 illustrates a schematic diagram of a combined bullet screen displayed in a video file in a bullet screen generating method according to some embodiments of the present disclosure;
fig. 8 shows a schematic block diagram of a bullet screen generating device according to some embodiments of the present disclosure; and
FIG. 9 illustrates a block diagram of an exemplary electronic device that can be used to implement some embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of embodiments of the present disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, unless otherwise specified, the use of the terms "first", "second", etc. to describe various elements is not intended to limit the positional relationship, the timing relationship, or the importance relationship of the elements, and such terms are used only to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, while in some cases they may refer to different instances based on the context of the description.
The terminology used in the description of the various described examples in this disclosure is for the purpose of describing the particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of elements is not specifically limited, the element may be one or a plurality of. Furthermore, the term "and/or" as used in this disclosure is intended to encompass any and all possible combinations of the listed items.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
According to an aspect of the present disclosure, a bullet screen generating method is provided. Referring to fig. 1, a bullet screen generating method according to some embodiments of the present disclosure is schematically illustrated. The bullet screen generating method 100 includes:
step S110: acquiring a target image, wherein the target image comprises a target pattern;
step S120: acquiring a plurality of sub-patterns of the target pattern and a position of each sub-pattern of the plurality of sub-patterns in the target pattern based on image characteristics of the target image;
step S130: obtaining a plurality of target bullet screens and a configuration position of each of the plurality of target bullet screens based on the plurality of sub-patterns, wherein each of the plurality of target bullet screens corresponds to one or more sub-patterns of the plurality of sub-patterns, and wherein the configuration positions are configured to enable the target bullet screens to be combined into a combined bullet screen based on their corresponding configuration positions, wherein the position of each target bullet screen in the combined bullet screen corresponds to the position of its corresponding one or more sub-patterns in the target pattern.
According to the method, a plurality of target bullet screens and a configuration position for each of them are obtained according to the image features of a target image comprising a target pattern. Each target bullet screen corresponds to one or more of the sub-patterns obtained from the target pattern, and the target bullet screens can be combined into a combined bullet screen according to their configuration positions. The combined bullet screen has the shape of the target pattern, so it is visually similar to the target pattern, which makes it interesting and improves the bullet screen effect. For example, for a target image containing a rose pattern, the generated combined bullet screen has the shape of a rose, so that the combined bullet screen visually presents a rose.
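As a minimal sketch of the combination step described here (the character-grid canvas and the (row, col, text) placement format are assumptions for illustration, not the patented implementation):

```python
def compose_combined_barrage(height, width, placements):
    """Render target bullet screens onto a character grid.

    placements: list of (row, col, text) tuples, where (row, col) is the
    configuration position of a target bullet screen and text is its
    content. The filled grid visually approximates the target pattern.
    """
    canvas = [[" "] * width for _ in range(height)]
    for row, col, text in placements:
        for i, ch in enumerate(text):
            canvas[row][col + i] = ch
    return ["".join(line) for line in canvas]

# Two target bullet screens placed at their configuration positions.
combined = compose_combined_barrage(2, 5, [(0, 1, "ab"), (1, 0, "xyz")])
```

Each target bullet screen lands at the grid position assigned to its sub-patterns, so the union of the placed texts traces the target pattern's outline.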
In some embodiments, the target image is obtained based on a video file.
In some embodiments, as shown in fig. 2, the step S110 of acquiring the target image includes:
step S210: acquiring the target pattern based on a video file; and
step S220: acquiring the target image based on the target pattern, wherein the target image further comprises a background area, and the background area has a preset color different from the color of the target pattern.
The target pattern is obtained based on the video file, so the obtained pattern is related to the video content: it is typical of the video and serves as a prominent mark representing it. The combined bullet screen generated from this pattern therefore takes the typical shape of a prominent mark associated with the video content, which relates the combined bullet screen to the video file and further improves its interestingness.
Meanwhile, a target image is obtained based on the target pattern; the target image comprises the target pattern and a background area, and the background area has a preset color different from the color of the target pattern, so that the difference between the target pattern and the background area is obvious. Because the image features of the target pattern and the background area are clearly distinguished, the sub-patterns and the position of each sub-pattern obtained from the image features of the target image are more accurate. This improves the match between each target bullet screen and its one or more sub-patterns, brings the shape and distribution of the combined bullet screen closer to those of the target pattern, and improves the bullet screen effect.
It should be understood that the "target pattern" according to the present disclosure may be a graphic or a character; the graphic may have any planar shape, such as a heart, a rectangle, or a rose, and the character may be a Chinese character, a letter, a symbol, etc., without limitation.
In some embodiments, in step S210, the video file is any type of video file and the target pattern is a pattern of a symbolic icon of the video file. According to some embodiments, the video file is an anime video file and the target pattern is a symbolic icon associated with that anime. For example, for a "Naruto" anime video file, the target pattern is the "Konoha leaf" icon; for a "One Piece" anime video file, the target pattern is a "One Piece" icon. According to other embodiments, the video file is a game video and the target pattern is a score pattern of the game video. For example, in a soccer match, the target pattern is the score of the two competing teams, or the logo or (abbreviated) name of either team.
In some embodiments, in step S220, the target pattern is converted into a target image. Referring to fig. 3A, according to some embodiments, the "Konoha leaf" icon in a "Naruto" anime video file is captured as the target pattern 310, and the target pattern 310 is transformed into a target image 300 with a background area 320 whose preset color is white. The size of the target image 300 can be obtained from the video file.
In other embodiments, the target image may be obtained directly from the video file, for example, an image of a frame in the video file having the target pattern may be captured to obtain the target image. The regions of the target image except the region where the target pattern is located are all background regions.
It should be understood that obtaining the target image by first obtaining the target pattern and then transforming it, or obtaining the target image directly from the video file, is merely exemplary; those skilled in the art may achieve the technical effects of the present disclosure with any method of obtaining the target image.
Meanwhile, it should be understood that the setting of the background area of the target image to have the preset color is also only an example, and those skilled in the art will understand that the background area may also be set to have a plurality of colors or patterns.
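As an illustrative sketch of obtaining a target image from a target pattern (the centering of the pattern and the white preset background color are assumptions for illustration; as noted above, any method of obtaining the target image works):

```python
import numpy as np

def make_target_image(pattern: np.ndarray, out_h: int, out_w: int,
                      bg_color=(255, 255, 255)) -> np.ndarray:
    """Center `pattern` (an H x W x 3 uint8 array) on a canvas filled
    with the preset background color `bg_color` (white by default)."""
    canvas = np.full((out_h, out_w, 3), bg_color, dtype=np.uint8)
    ph, pw = pattern.shape[:2]
    top, left = (out_h - ph) // 2, (out_w - pw) // 2
    canvas[top:top + ph, left:left + pw] = pattern
    return canvas

# Example: a 4x4 dark pattern centered on an 8x8 white target image.
pattern = np.zeros((4, 4, 3), dtype=np.uint8)
image = make_target_image(pattern, 8, 8)
```

Because the background has a uniform preset color that differs from the pattern, later steps can distinguish pattern regions from background by color alone.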
In some embodiments, the image features comprise color features. In step S120, a plurality of sub-patterns of the target pattern and a position of each sub-pattern of the plurality of sub-patterns in the target pattern are obtained based on the color feature of the target image.
The position of each of the plurality of sub-patterns in the target pattern can be obtained by processing the pixel data of the target image, so the processing is simple and the amount of data to process is small.
According to other embodiments, in step S120, the target image is input into a trained neural network model, so that the trained neural network computes the plurality of sub-patterns of the target pattern and the position of each sub-pattern in the target pattern based on image features (e.g., shape features) of the target image. For example, the neural network model is trained using target images containing target patterns, where the target images serve as training inputs and the sub-patterns of each target pattern and their positions serve as outputs.
In some embodiments, as shown in fig. 4, the step S120 of obtaining the plurality of sub-patterns of the target pattern and the position of each sub-pattern of the plurality of sub-patterns in the target pattern based on the image feature of the target image includes:
step S410: dividing the target image into a plurality of sub-images, wherein the plurality of sub-images have the same size, and the plurality of sub-images comprise a plurality of pattern sub-images, each pattern sub-image of the plurality of pattern sub-images is at least partially located in an area where the target pattern is located and has a unique corresponding sub-pattern in the plurality of sub-patterns;
step S420: acquiring a gray value of each of the plurality of sub-images;
step S430: determining a gray value distribution matrix based on the plurality of sub-images and the gray value of each sub-image in the plurality of sub-images, wherein for each element in the gray value distribution matrix, the value of the element represents the gray value of the sub-image corresponding to the element, and the position of the element in the gray value distribution matrix represents the position of the sub-image corresponding to the element in the target image; and
step S440: and acquiring an element corresponding to each pattern sub-image in the gray value distribution matrix and the position of the element in the gray value distribution matrix for each pattern sub-image in the plurality of pattern sub-images.
According to the embodiment of the disclosure, the target image is divided into a plurality of sub-images, wherein the plurality of sub-images comprise pattern sub-images corresponding to the sub-patterns on the target pattern, the gray value distribution matrix is obtained through the gray value of the sub-images in the target image, and the positions of the pattern sub-images in the target image are obtained through the positions of the elements corresponding to the pattern sub-images in the gray value distribution matrix, so that the positions of the sub-patterns corresponding to the pattern sub-images in the target pattern can be obtained. In the above processing mode, the position of the sub-pattern in the target pattern can be obtained only by acquiring the gray scale value of each sub-image in the image, so that the method for acquiring the position of the sub-pattern in the target pattern is simpler, and the amount of processed data is small.
In some embodiments, in step S410, the target image is divided into a plurality of sub-images of the same size. According to some embodiments, the size of the sub-images is set according to the preset size of the target bullet screens to be presented. For example, as shown in fig. 3B, given that each character of the target bullet screens is preset to a size-five font (about 0.37 cm), the sub-image size is determined as a square cell with a side length of 0.37 cm, so that the target image in fig. 3A is divided into square cells with a side length of 0.37 cm.
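A minimal sketch of this grid division (NumPy-based; the cell size in pixels is an assumption for illustration, standing in for the 0.37 cm character cell):

```python
import numpy as np

def split_into_subimages(image: np.ndarray, cell: int):
    """Split an H x W x 3 image into a grid of cell x cell sub-images.

    Assumes H and W are multiples of `cell`, as when the grid size is
    derived from the preset bullet-screen character size."""
    h, w = image.shape[:2]
    rows, cols = h // cell, w // cell
    return [[image[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell]
             for c in range(cols)] for r in range(rows)]

# A 6x9 image split into 3x3 cells: 2 rows x 3 columns of sub-images.
img = np.zeros((6, 9, 3), dtype=np.uint8)
grid = split_into_subimages(img, 3)
```

Each grid cell later yields one element of the gray value distribution matrix, at the same row/column position.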
In some embodiments, in step S420, the gray value of each sub-image is obtained by the average value method. For example, if the values of the red, green and blue components of the pixels in a sub-image are R, G and B respectively, the gray value Gray is calculated by the following formula (1).
Gray=(R+G+B)/3 (1)
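Formula (1), applied per pixel and averaged over a sub-image, can be sketched as:

```python
import numpy as np

def mean_gray(subimage: np.ndarray) -> float:
    """Gray = (R + G + B) / 3 (formula (1)), computed per pixel and
    averaged over all pixels of the sub-image."""
    r = subimage[..., 0].astype(float)
    g = subimage[..., 1].astype(float)
    b = subimage[..., 2].astype(float)
    return float(np.mean((r + g + b) / 3.0))

# A pure-white cell averages to 255, a pure-black cell to 0.
white = np.full((2, 2, 3), 255, dtype=np.uint8)
black = np.zeros((2, 2, 3), dtype=np.uint8)
```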
In some embodiments, in step S430, the gray value distribution matrix is formed from the gray values of the sub-images calculated in step S420, and the position of each sub-image in the target image coincides with the position of its corresponding element in the gray value distribution matrix. In some embodiments, a coordinate distribution matrix is obtained together with the gray value distribution matrix, wherein the elements of the two matrices correspond one to one; the position of each sub-image among the plurality of sub-images can thus be obtained from the position of the corresponding element in the coordinate distribution matrix.
In some embodiments, in step S440, the elements of the grayscale value distribution matrix corresponding to the pattern sub-image are acquired according to the grayscale value distribution matrix acquired in step S430.
According to some embodiments, the background area of the target image has a preset color with a preset gray value. Then, in step S440, the elements obtained as corresponding to the pattern sub-images are those whose gray values in the gray value distribution matrix differ from the preset gray value.
Referring to fig. 3A, a schematic diagram of the sub-images obtained after dividing the target image 300 is shown, wherein the sub-images include background sub-images 321 in the background area 320 and pattern sub-images 311 in the area where the target pattern 310 is located. Referring to fig. 3B, a schematic diagram of the gray value distribution matrix of the target image 300 obtained from these sub-images (background sub-images 321 and pattern sub-images 311) is shown, wherein the element 321A corresponding to the gray value of a background sub-image 321 in the background area 320 is 255, and the element 311A corresponding to the gray value of a pattern sub-image 311 in the area of the target pattern 310 is smaller than 255. Thus, the positions of the elements with gray values less than 255 in the gray value distribution matrix can be obtained; these positions give the positions of the pattern sub-images in the target image, which in turn represent the positions of the sub-patterns in the target pattern.
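Steps S430 and S440 can be sketched as follows (assuming a white background whose preset gray value is 255, as in the figure description above):

```python
import numpy as np

BG_GRAY = 255  # preset gray value of the white background (an assumption)

def gray_matrix(grid):
    """Gray value distribution matrix: one element per sub-image, placed
    at the same row/column that the sub-image occupies in the target
    image (step S430)."""
    return np.array([[cell.mean() for cell in row] for row in grid])

def pattern_positions(matrix):
    """Positions (row, col) of elements whose gray value differs from
    the preset background gray, i.e. the pattern sub-images (S440)."""
    return [tuple(int(v) for v in p) for p in np.argwhere(matrix != BG_GRAY)]

# 2x2 grid: three white background cells, one dark pattern cell.
white = np.full((2, 2, 3), 255, dtype=np.uint8)
dark = np.zeros((2, 2, 3), dtype=np.uint8)
matrix = gray_matrix([[white, white], [white, dark]])
```

The returned positions locate each sub-pattern within the target pattern, which is exactly what step S440 needs.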
In some embodiments, as shown in fig. 5, the step S130 of obtaining a plurality of target bullet screens and a configuration position of each of the plurality of target bullet screens based on the plurality of sub-patterns includes:
step S510: obtaining a plurality of sub-pattern groups and the position of each sub-pattern group in the plurality of sub-pattern groups in the target pattern based on the plurality of sub-patterns, wherein each sub-pattern group in the plurality of sub-pattern groups comprises one or more sub-patterns;
step S520: for each sub-pattern group in the plurality of sub-pattern groups, acquiring at least one matched bullet screen corresponding to the sub-pattern group; and
step S530: for each sub-pattern group in the plurality of sub-pattern groups, determining the target bullet screen from the at least one matching bullet screen corresponding to the sub-pattern group, and determining the configuration position of the target bullet screen based on the position of the sub-pattern group in the target pattern.
Bullet screens tend to be of different sizes, and when added to a video file they occupy areas of different sizes. For example, when a bullet screen sentence composed of one or more characters is added to a video file, the area it occupies is determined by the size and the number of its characters. A three-character bullet screen such as "Tai shuai le" ("so handsome"), composed of the characters "Tai", "Shuai" and "Le", occupies the area of three characters when added to a video file; if each character is displayed in size-five font, it occupies an area of 3 × 0.37 cm × 0.37 cm. Likewise, bullet screens with different numbers of characters occupy areas of different sizes; for example, the bullet screen "you have coins" and the bullet screen "too beautiful" occupy areas of different sizes.
Therefore, according to embodiments of the present disclosure, a plurality of sub-pattern groups are obtained, at least one matching bullet screen corresponding to each sub-pattern group is acquired, and a target bullet screen is determined from those matching bullet screens. The size of the area occupied by each target bullet screen in the combined bullet screen thus corresponds to the size of the area its sub-pattern group occupies in the target pattern. In the final combined bullet screen, each target bullet screen takes up its proper proportion, so that target bullet screens of various sizes are rendered consistently, each is displayed well, and the overall bullet screen effect is further improved.
In some embodiments, in step S510, the plurality of sub-images obtained based on the target image are divided into a plurality of sub-image groups, and the sub-images of each group that are located in the area where the target pattern is located correspond to the sub-patterns constituting one sub-pattern group, thereby yielding the plurality of sub-pattern groups. In some embodiments, in step S510, for each sub-pattern group of the plurality of sub-pattern groups, the one or more sub-patterns of the sub-pattern group are located at the same pixel height. For example, by dividing the sub-images located in the same row of the target image into one image group, the sub-patterns corresponding to the sub-images of that row that are located in the area where the target pattern is located are obtained as one sub-pattern group.
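The row-wise grouping described above can be sketched as follows. This is a minimal illustration, not the patent's mandated implementation; `pattern_tiles` is a hypothetical list of (row, col) tile coordinates of the pattern sub-images.

```python
# Sketch: group the pattern sub-images of a target image by row, so that
# each resulting sub-pattern group's tiles share the same pixel height.
from collections import defaultdict

def group_by_row(pattern_tiles):
    """Return sub-pattern groups, one per image row, tiles sorted left-to-right."""
    rows = defaultdict(list)
    for row, col in pattern_tiles:
        rows[row].append((row, col))
    # Groups ordered top-to-bottom; tiles within a group ordered by column.
    return [sorted(tiles) for _, tiles in sorted(rows.items())]

groups = group_by_row([(0, 1), (0, 2), (1, 0), (1, 1), (1, 2)])
# Two groups: row 0 contributes 2 sub-patterns, row 1 contributes 3.
```

Each group then drives the lookup of a matching bullet screen of the corresponding length in step S520.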
In other embodiments, in step S510, different sub-pattern groups may be divided according to the number of sub-patterns. For example, 2 sub-patterns, 3 sub-patterns, 4 sub-patterns, 5 sub-patterns, ..., 10 sub-patterns are respectively divided into one sub-pattern group, so that the plurality of sub-pattern groups include respective numbers of sub-patterns, whereby target bullet screens of various sizes corresponding to the respective numbers of sub-patterns can be obtained.
In some embodiments, in step S520, a matching bullet screen corresponding to each of the plurality of sub-pattern groups is selected from a bullet screen library. The bullet screen library may consist of bullet screens sent by users while watching the video file, bullet screens generated by the server according to the video file, or both, which is not limited herein.
In some embodiments, in step S520, a matching bullet screen is obtained based on the length of the group of sub-patterns.
In some embodiments, in step S520, each of the at least one acquired matching bullet screen includes at least one character, and the at least one character is in one-to-one correspondence with the one or more pattern sub-images in the corresponding sub-pattern group. For example, for a sub-pattern group including 2 sub-patterns, a bullet screen including two characters is obtained; for a sub-pattern group including 3 sub-patterns, a bullet screen including three characters is obtained.
Since the at least one character of each matching bullet screen is in one-to-one correspondence with the one or more pattern sub-images in the corresponding sub-pattern group, the position configuration of each character can be obtained based on the position, in the target pattern, of the sub-pattern to which it corresponds; that is, one character corresponds to one sub-pattern.
In some embodiments, the coordinate matrix of each sub-image is acquired while obtaining the gray value distribution matrix of the target image based on each sub-image of the plurality of sub-images of the target image. Thus, the coordinates of the pattern sub-image corresponding to each sub-pattern in the plurality of sub-patterns of the target pattern can be obtained through the gray value distribution matrix, and the coordinates of the pattern sub-image corresponding to each sub-pattern are the position configuration of each corresponding character.
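The gray value distribution matrix described above can be sketched as follows, assuming the target image is a 2-D array of pixel gray values and each sub-image is a square tile of `tile` × `tile` pixels; the per-tile averaging is an illustrative choice, not the patent's mandated method.

```python
# Sketch: average gray value per tile. Element (i, j) of the result
# corresponds to the sub-image at tile-row i, tile-column j of the target
# image, so the element's position encodes the sub-image's position.
def gray_distribution_matrix(image, tile):
    h, w = len(image), len(image[0])
    matrix = []
    for i in range(0, h, tile):
        row = []
        for j in range(0, w, tile):
            block = [image[y][x]
                     for y in range(i, min(i + tile, h))
                     for x in range(j, min(j + tile, w))]
            row.append(sum(block) / len(block))
        matrix.append(row)
    return matrix

m = gray_distribution_matrix([[0, 0, 255, 255],
                              [0, 0, 255, 255]], 2)
# m == [[0.0, 255.0]]: a dark left tile and a light right tile.
```

The (i, j) index of each element doubles as the coordinate of the corresponding pattern sub-image, i.e. the position configuration of the character mapped to it.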
In some embodiments, as shown in fig. 6, the step S530 of determining the target bullet screen from the at least one matching bullet screen includes:
step S610: for each matching bullet screen in the at least one matching bullet screen, calculating the difference value between the gray value of each character in the at least one character of the matching bullet screen and the gray value of the corresponding pattern sub-image; and
step S620: and acquiring the target bullet screen based on the corresponding difference value of each character in the at least one character of each matched bullet screen.
By comparing the gray value of each character of each matching bullet screen with the gray value of the corresponding pattern sub-image, and selecting as the target bullet screen the matching bullet screen whose gray values are close to those of the corresponding pattern sub-images, the target bullet screen matches the gray values of the corresponding pattern sub-images. After the target bullet screens are combined into the combined bullet screen, the gray value distribution of the combined bullet screen is similar to that of the target pattern in the target image, which further improves the bullet screen effect of the combined bullet screen. For example, because the colors of the sub-patterns in the target pattern differ, the gray values of the corresponding sub-images are distributed differently: for a sub-pattern with a darker color, the gray value of the corresponding sub-image is smaller, and for a sub-pattern with a lighter color, the gray value is larger. When the target bullet screen is determined from the at least one matching bullet screen, a character with a smaller gray value is obtained for a darker sub-pattern and a character with a larger gray value is obtained for a lighter sub-pattern, so that the gray value of each character of each target bullet screen in the finally obtained combined bullet screen is close to the gray value of the corresponding sub-image in the target image, the visual effect of the combined bullet screen is close to the target pattern, and the bullet screen effect is improved.
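Steps S610 and S620 can be sketched as a minimum-cost selection over the candidates. This is an assumption-laden illustration: `char_gray` is a hypothetical lookup from character to gray value, and absolute difference is one reasonable choice of "difference value".

```python
# Sketch: among candidate matching bullet screens (all of the same length
# as the sub-pattern group), pick the one whose per-character gray values
# are closest, by summed absolute difference, to the gray values of the
# corresponding pattern sub-images.
def pick_target_barrage(candidates, tile_grays, char_gray):
    """candidates: strings with len == len(tile_grays); returns the best one."""
    def cost(barrage):
        return sum(abs(char_gray[c] - g) for c, g in zip(barrage, tile_grays))
    return min(candidates, key=cost)

char_gray = {"a": 40, "b": 200, "c": 120}
best = pick_target_barrage(["ab", "cc"], [50, 110], char_gray)
# "cc" wins: |120-50| + |120-110| = 80 < |40-50| + |200-110| = 100
```

Selecting the minimum sum corresponds to the embodiment in which the target bullet screen is the matching bullet screen with the smallest total gray value difference.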
It should be understood that "characters" as referred to in the present disclosure means Chinese characters, symbols, numerals, etc. that may be displayed in text form, and each character has a uniform size when displayed. For example, Chinese characters may all be displayed in a size-five font. Meanwhile, it is to be understood that the gray value of a character in the present disclosure represents the gray value of the preset area occupied by the character, calculated based on the pixels of the image obtained by converting the character into an image. For example, the gray value of a Chinese character displayed in a size-five font may be calculated as follows: after the character in the size-five font is converted into a 0.37 cm × 0.37 cm image, the gray value of that 0.37 cm × 0.37 cm image is calculated, and this value is the gray value of the character in the size-five font.
In some embodiments, for each of the at least one matching bullet screen, the individual characters of the at least one character in the matching bullet screen have a uniform color, in which case the characters have different gray values according to their numbers of strokes. For example, a character with a greater number of strokes has a smaller gray value than a character with fewer strokes, since more of its cell is covered by dark stroke pixels.
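The character-to-gray-value conversion can be illustrated with toy bitmaps. This is purely a sketch of the averaging described above: a rendered character is treated as a small grid of pixels (0 = black stroke, 255 = white background), and the 4 × 4 "glyphs" below are stand-ins, not real rendered characters.

```python
# Sketch: the gray value of a character's cell is the mean of its pixels
# after the character is converted into an image, as the disclosure
# describes. Denser glyphs (more stroke pixels) yield smaller gray values.
def bitmap_gray(bitmap):
    pixels = [p for row in bitmap for p in row]
    return sum(pixels) / len(pixels)

# A "dense" toy glyph covers most of its cell with stroke pixels; a
# "sparse" one covers only a single horizontal stroke.
dense  = [[0, 0, 0, 0], [0, 255, 255, 0], [0, 255, 255, 0], [0, 0, 0, 0]]
sparse = [[255] * 4, [0, 0, 0, 0], [255] * 4, [255] * 4]
```

In practice one would rasterize each character at the fixed display size with an imaging library before averaging, but the averaging step itself is as shown.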
In some embodiments, for each of the at least one matching bullet screen, the respective characters of the at least one character in the matching bullet screen have different colors, and characters with different gray values are obtained by setting different colors.
In some embodiments, the target bullet screen acquired in step S620 is the matching bullet screen with the smallest sum of the differences between the gray values of its respective characters and the gray values of the corresponding pattern sub-images.
Because the sum of the differences between the gray value of each character of the target bullet screen and the gray value of the corresponding pattern sub-image is the smallest, the characters of the target bullet screen are closest in gray value to the sub-images corresponding to the sub-patterns in the corresponding sub-pattern group. The characters of the target bullet screens in the resulting combined bullet screen are therefore closest to the gray values of the corresponding sub-images of the target image, the combined bullet screen finally obtained is the one closest in visual effect to the target pattern, and the bullet screen effect is further improved.
In some embodiments, after step S130 is completed, the plurality of target bullet screens are further arranged based on the configuration position of each target bullet screen to obtain a combined bullet screen for the client to pull and display, where the position of each of the plurality of target bullet screens in the combined bullet screen corresponds to the position of the corresponding one or more sub-patterns in the target pattern.
In some embodiments, after step S130 is completed, the plurality of target bullet screens and the configuration position of each target bullet screen are sent to the client, so that the client can combine the plurality of target bullet screens into a combined bullet screen based on the configuration positions, where the position of each target bullet screen in the combined bullet screen corresponds to the position of the corresponding sub-patterns in the target pattern. The configuration position is configuration information for configuring the position of the target bullet screen at the client.
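A client-side placement step might look like the following hedged sketch. The names and the choice of taking the (row, col) tile coordinate of each bullet screen's first sub-pattern as its configuration position are illustrative assumptions, not the patent's fixed format.

```python
# Sketch: convert each target bullet screen's configuration position
# (row, col tile coordinate) into pixel coordinates on the video overlay,
# so the combined bullet screen reproduces the target pattern's layout.
def layout_combined_barrage(barrages, tile_px):
    """barrages: list of (text, (row, col)) pairs -> list of (text, x, y)."""
    return [(text, col * tile_px, row * tile_px)
            for text, (row, col) in barrages]

placed = layout_combined_barrage([("abc", (0, 1)), ("de", (1, 0))], 16)
# placed == [("abc", 16, 0), ("de", 0, 16)]
```

With 16-pixel tiles, a bullet screen anchored at tile (1, 0) is drawn one tile down from the top-left corner, mirroring its sub-patterns' position in the target pattern.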
Referring to fig. 7, a schematic diagram of a combined bullet screen according to an embodiment of the present disclosure is shown. A combined bullet screen 700 generated according to the "wood leaf" icon of "fire fighter" has the shape of the "wood leaf" icon, so that the generated combined bullet screen 700 is visually similar to the "wood leaf" icon, thereby adding interest.
According to another aspect of the present disclosure, a bullet screen generating device is also provided. As shown in fig. 8, the apparatus 800 may include: a first acquiring unit 810 configured to acquire a target image, the target image including a target pattern; a second acquiring unit 820 configured to acquire a plurality of sub-patterns of the target pattern and a position of each of the plurality of sub-patterns in the target pattern based on an image feature of the target image; and a third acquiring unit 830 configured to acquire a plurality of target bullet screens and a configuration position of each of the plurality of target bullet screens based on the plurality of sub-patterns, wherein each of the plurality of target bullet screens corresponds to one or more sub-patterns of the plurality of sub-patterns, and wherein the configuration position is configured to enable the plurality of target bullet screens to be combined into a combined bullet screen based on the corresponding configuration positions, the position of each of the plurality of target bullet screens in the combined bullet screen corresponding to the position of the corresponding one or more sub-patterns in the target pattern.
According to another aspect of the present disclosure, there is also provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores a computer program which, when executed by the at least one processor, implements a method according to the above.
According to another aspect of the present disclosure, there is also provided a non-transitory computer readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method according to the above.
According to another aspect of the present disclosure, there is also provided a computer program product comprising a computer program, wherein the computer program realizes the method according to the above when executed by a processor.
Referring to fig. 9, a block diagram of an electronic device 900, which may be a server or a client of the present disclosure and is an example of a hardware device that may be applied to aspects of the present disclosure, will now be described. Electronic devices may be different types of computer devices, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 9, the electronic device 900 may include at least one processor 910, a working memory 920, an input unit 940, a display unit 950, a speaker 960, a storage unit 970, a communication unit 980, and other output units 990, which can communicate with each other through a system bus 930.
Processor 910 may be a single processing unit or multiple processing units, all of which may include single or multiple computing units or multiple cores. The processor 910 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitry, and/or any devices that manipulate signals based on operational instructions. Processor 910 may be configured to retrieve and execute computer readable instructions stored in working memory 920, storage unit 970, or other computer readable medium, such as program code for operating system 920a, program code for application 920b, and so forth.
Working memory 920 and storage unit 970 are examples of computer-readable storage media for storing instructions that are executed by processor 910 to implement the various functions described above. The working memory 920 may include both volatile and non-volatile memory (e.g., RAM, ROM, etc.). Further, storage unit 970 may include a hard disk drive, solid state drive, removable media, including external and removable drives, memory cards, flash memory, floppy disks, optical disks (e.g., CDs, DVDs), storage arrays, network attached storage, storage area networks, and so forth. Both the working memory 920 and the storage unit 970 may be collectively referred to herein as memory or computer-readable storage media, and may be non-transitory media capable of storing computer-readable, processor-executable program instructions as computer program code, which may be executed by the processor 910 as a particular machine configured to implement the operations and functions described in the examples herein.
The input unit 940 may be any type of device capable of inputting information to the electronic device 900; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a track pad, a track ball, a joystick, a microphone, and/or a remote controller. The output units may be any type of device capable of presenting information and may include, but are not limited to, the display unit 950, the speaker 960, and other output units 990, which may include, but are not limited to, a video/audio output terminal, a vibrator, and/or a printer. The communication unit 980 allows the electronic device 900 to exchange information/data with other devices via a computer network, such as the Internet, and/or various telecommunication networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as Bluetooth™ devices, 802.11 devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
The application 920b in the working memory 920 may be loaded to perform the various methods and processes described above, such as steps S110-S130 in fig. 1. For example, in some embodiments, the bullet screen generation method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 970. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 900 via the storage unit 970 and/or the communication unit 980. When the computer program is loaded and executed by the processor 910, one or more steps of the bullet screen generation method described above may be performed. Alternatively, in other embodiments, the processor 910 may be configured to perform the bullet screen generation method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user may provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be understood that various forms of the flows shown above, reordering, adding or deleting steps, may be used. For example, the steps described in the present disclosure may be performed in parallel, sequentially or in different orders, and are not limited herein as long as the desired results of the technical aspects of the present disclosure can be achieved.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the above-described methods, systems, and apparatus are merely exemplary embodiments or examples, and that the scope of the present disclosure is not limited by these embodiments or examples, but only by the claims as issued and their equivalents. Various elements in the embodiments or examples may be omitted or replaced with equivalents thereof. Further, the steps may be performed in an order different from that described in the present disclosure, and various elements in the embodiments or examples may be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced with equivalent elements that appear after the present disclosure.

Claims (14)

1. A bullet screen generation method comprises the following steps:
acquiring a target image, wherein the target image comprises a target pattern;
acquiring a plurality of sub-patterns of the target pattern and a position of each sub-pattern of the plurality of sub-patterns in the target pattern based on image characteristics of the target image;
acquiring a plurality of target bullet screens and a configuration position of each of the plurality of target bullet screens based on the plurality of sub-patterns, wherein,
each of the plurality of target bullet screens corresponds to one or more of the plurality of sub-patterns, and wherein,
the configuration position is configured to: enabling the respective bullet screens to be combined into a combined bullet screen based on the corresponding configuration positions, wherein the position of each of the target bullet screens in the combined bullet screen corresponds to the position of the corresponding one or more sub-patterns in the target pattern.
2. The method of claim 1, wherein the acquiring a target image comprises:
acquiring the target pattern based on a video file; and
acquiring the target image based on the target pattern, wherein the target image further comprises a background area, and the background area has a preset color different from the color of the target pattern.
3. The method of claim 1 or 2, wherein the image features comprise color features.
4. The method of claim 3, wherein the obtaining the target pattern's plurality of sub-patterns and the position of each of the plurality of sub-patterns in the target pattern based on the image characteristics of the target image comprises:
dividing the target image into a plurality of sub-images, wherein the plurality of sub-images have the same size and comprise a plurality of pattern sub-images, each pattern sub-image of the plurality of pattern sub-images is at least partially located in an area where the target pattern is located and has a unique corresponding sub-pattern in the plurality of sub-patterns;
acquiring a gray value of each of the plurality of sub-images;
determining a gray value distribution matrix based on the plurality of sub-images and the gray value of each sub-image in the plurality of sub-images, wherein for each element in the gray value distribution matrix, the value of the element represents the gray value of the sub-image corresponding to the element, and the position of the element in the gray value distribution matrix represents the position of the sub-image corresponding to the element in the target image; and
and acquiring an element corresponding to each pattern sub-image in the gray value distribution matrix and the position of the element in the gray value distribution matrix for each pattern sub-image in the plurality of pattern sub-images.
5. The method according to claim 4, wherein the predetermined color has a predetermined gray value, and wherein the element corresponding to the pattern sub-image is an element in the gray value distribution matrix which has a corresponding gray value different from the predetermined gray value.
6. The method of claim 4 or 5, wherein the obtaining a plurality of target bullet screens and a configuration position of each of the plurality of target bullet screens based on the plurality of sub-patterns comprises:
obtaining a plurality of sub-pattern groups based on the plurality of sub-patterns, wherein each sub-pattern group in the plurality of sub-pattern groups comprises one or more sub-patterns;
for each sub-pattern group in the plurality of sub-pattern groups, acquiring at least one matched bullet screen corresponding to the sub-pattern group; and
for each sub-pattern group in the plurality of sub-pattern groups, determining the target barrage from the at least one matching barrage corresponding to the sub-pattern group.
7. The method of claim 6, wherein for each of the plurality of sub-pattern groups, one or more sub-patterns in the sub-pattern group are located at a same pixel height.
8. The method of claim 6, wherein each of the at least one matching barrage includes at least one character in a one-to-one correspondence with the one or more sub-patterns in the corresponding sub-pattern group.
9. The method of claim 8, wherein determining the target bullet screen from the at least one matching bullet screen comprises:
for each matching bullet screen in the at least one matching bullet screen, calculating a difference value between the gray value of each character in the at least one character of the matching bullet screen and the gray value of the corresponding pattern sub-image; and
and acquiring the target bullet screen based on the corresponding difference value of each character in the at least one character of each matched bullet screen.
10. The method of claim 9, wherein the target bullet screen is the matching bullet screen, among the at least one matching bullet screen, for which the sum of the respective difference values of the at least one character is the smallest.
11. A bullet screen generating device comprising:
a first acquisition unit configured to acquire a target image including a target pattern;
a second obtaining unit configured to obtain a plurality of sub-patterns of the target pattern and a position of each of the plurality of sub-patterns in the target pattern based on an image feature of the target image;
a third acquisition unit configured to acquire a plurality of target bullet screens and an arrangement position of each of the plurality of target bullet screens based on the plurality of sub-patterns, wherein,
each of the plurality of target bullet screens corresponds to one or more of the plurality of sub-patterns, and wherein,
the configuration position is configured to: enabling the respective bullet screens to be combined into a combined bullet screen based on the corresponding configuration positions, wherein the position of each of the target bullet screens in the combined bullet screen corresponds to the position of the corresponding one or more sub-patterns in the target pattern.
12. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
The memory stores a computer program which, when executed by the at least one processor, implements the method according to any one of claims 1-10.
13. A non-transitory computer readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method of any of claims 1-10.
14. A computer program product comprising a computer program, wherein the computer program realizes the method according to any one of claims 1-10 when executed by a processor.
CN202110796495.3A 2021-07-14 2021-07-14 Bullet screen generation method and device, electronic equipment and storage medium Pending CN115623280A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110796495.3A CN115623280A (en) 2021-07-14 2021-07-14 Bullet screen generation method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115623280A true CN115623280A (en) 2023-01-17

Family

ID=84854995

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110796495.3A Pending CN115623280A (en) 2021-07-14 2021-07-14 Bullet screen generation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115623280A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination