CN112995742A - Barrage publishing method, equipment and storage medium


Info

Publication number
CN112995742A
Authority
CN
China
Prior art keywords
target
bullet screen
video image
target video
designated
Prior art date
Legal status
Pending
Application number
CN202010006219.8A
Other languages
Chinese (zh)
Inventor
高英虎
朱雪岩
卢京池
张仁伟
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Publication of CN112995742A

Classifications

    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD] (parent class of all entries below)
    • H04N 21/4312: Generation of visual interfaces for content selection or interaction, involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/234: Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N 21/235: Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N 21/435: Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N 21/4788: Supplemental services communicating with other users, e.g. chatting
    • H04N 21/81: Monomedia components of content generated or processed by the content creator independently of the distribution process

Abstract

Embodiments of the present application provide a bullet screen publishing method, device, and storage medium. In these embodiments, for a currently displayed target video image of a target video that is being played, when the target video image contains a designated part, contour feature data of the designated part of a target object contained in the target video image can be acquired; a first adjacent area of the designated part of the target object is determined according to the contour feature data, and a bullet screen editing control is displayed in that area. Through the bullet screen editing control, the user can publish bullet screen data that can be displayed in a second adjacent area of the designated part of the target object. This publishing mode improves the diversity and flexibility of bullet screen publishing and helps improve user experience.

Description

Barrage publishing method, equipment and storage medium
Technical Field
The present application relates to the field of internet technologies, and in particular, to a bullet screen publishing method, device, and storage medium.
Background
With the continuous development of Internet technology, users can interact in a variety of ways; the bullet screen (danmaku) technique in video is one such way.
While watching a video, a user can publish bullet screen data through his or her terminal device, but the existing bullet screen publishing mode takes only a single form and offers poor flexibility.
Disclosure of Invention
Aspects of the present application provide a bullet screen publishing method, device, system, and storage medium, so as to offer a new way of publishing bullet screens and thereby improve the diversity and flexibility of bullet screen publishing.
An embodiment of the present application provides a bullet screen publishing method, comprising the following steps:
during playback of a target video, displaying a target video image of the target video;
when the target video image contains a designated part, acquiring contour feature data of the designated part of a target object contained in the target video image;
determining a first adjacent area of the designated part of the target object according to the contour feature data of the designated part of the target object;
displaying a bullet screen editing control in the first adjacent area so that a user can submit bullet screen data through the bullet screen editing control;
in response to a publishing operation on the bullet screen editing control, publishing the bullet screen data submitted by the user; the bullet screen data submitted by the user can be displayed in a second adjacent area of the designated part of the target object.
An embodiment of the present application further provides a bullet screen publishing method, comprising:
detecting whether a video image included in a target video being played contains a designated part;
for a target video image containing the designated part, determining a first adjacent area of the designated part based on contour feature data of the designated part in the target video image;
displaying a bullet screen editing control in the first adjacent area so that a user can publish bullet screen data based on the bullet screen editing control;
in response to a publishing operation on the bullet screen editing control, publishing the bullet screen data submitted by the user; the bullet screen data submitted by the user can be displayed in a second adjacent area of the designated part.
An embodiment of the present application further provides a terminal device, including: a memory, a processor and a display screen; wherein the memory is used for storing a computer program;
the processor is coupled to the memory for executing the computer program for:
in the process of playing a target video, displaying a target video image of the target video on the display screen;
when the target video image contains a designated part of a target object, acquiring contour feature data of the designated part of the target object contained in the target video image;
determining a first adjacent area of the designated part of the target object according to the contour feature data of the designated part of the target object;
displaying a bullet screen editing control in the first adjacent area so that a user can submit bullet screen data through the bullet screen editing control;
in response to a publishing operation on the bullet screen editing control, publishing the bullet screen data submitted by the user; the bullet screen data submitted by the user can be displayed in a second adjacent area of the designated part of the target object.
An embodiment of the present application further provides a terminal device, including: a memory, a processor and a display screen; wherein the memory is used for storing a computer program;
the processor is coupled to the memory for executing the computer program for:
playing a target video through the display screen;
detecting whether a video image included in the target video contains a designated part;
for a target video image containing the designated part, determining a first adjacent area of the designated part based on contour feature data of the designated part in the target video image;
displaying a bullet screen editing control in the first adjacent area so that a user can publish bullet screen data based on the bullet screen editing control;
in response to a publishing operation on the bullet screen editing control, publishing the bullet screen data submitted by the user; the bullet screen data submitted by the user can be displayed in a second adjacent area of the designated part.
Embodiments of the present application also provide a computer-readable storage medium storing computer instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the methods described above.
In the embodiments of the present application, for a currently displayed target video image of a target video that is being played, when the target video image contains a designated part, contour feature data of the designated part of a target object contained in the target video image can be acquired; a first adjacent area of the designated part of the target object is determined according to the contour feature data, and a bullet screen editing control is displayed in that area. Through the bullet screen editing control, the user can publish bullet screen data that can be displayed in a second adjacent area of the designated part of the target object. This publishing mode improves the diversity and flexibility of bullet screen publishing and helps improve user experience.
On the other hand, a bullet screen published in this mode avoids the designated part in the video picture when displayed and can follow the target object as it moves, so that the user can watch the video and the bullet screens at the same time without switching back and forth between them, further improving the viewing experience.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1a is a schematic flowchart of a bullet screen issuing method according to an embodiment of the present application;
fig. 1b is a schematic illustration showing a bullet screen editing control provided in an embodiment of the present application;
fig. 1c is a schematic diagram illustrating a function display of a bullet screen editing control according to an embodiment of the present application;
fig. 1d is a schematic view illustrating a bullet screen displaying effect according to an embodiment of the present application;
fig. 1e is a schematic diagram illustrating an operation of a bullet screen issuing mode according to an embodiment of the present application;
fig. 2a is a schematic flowchart of another bullet screen issuing method according to an embodiment of the present application;
fig. 2b is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
To address the technical problem that existing bullet screen publishing takes only a single form and offers poor flexibility, some embodiments of the present application provide a new bullet screen publishing mode. Its main principle is: for a currently displayed target video image of a target video that is being played, when the target video image contains a designated part, acquire contour feature data of the designated part of a target object contained in the target video image; determine a first adjacent area of the designated part of the target object according to the contour feature data, and display a bullet screen editing control in that area. Through the bullet screen editing control, the user can publish bullet screen data that can be displayed in a second adjacent area of the designated part of the target object. This publishing mode improves the diversity and flexibility of bullet screen publishing and helps improve user experience. On the other hand, a bullet screen published in this mode avoids the designated part in the video picture when displayed and can follow the target object as it moves, so that the user can watch the video and the bullet screens at the same time without switching back and forth between them, further improving the viewing experience.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
It should be noted that: like reference numerals refer to like objects in the following figures and embodiments, and thus, once an object is defined in one figure or embodiment, further discussion thereof is not required in subsequent figures and embodiments.
Fig. 1a is a schematic flowchart of a bullet screen issuing method according to an embodiment of the present application. As shown in fig. 1a, the method mainly comprises:
101. During playback of the target video, display a target video image of the target video.
102. When the target video image contains a designated part, acquire contour feature data of the designated part of a target object contained in the target video image.
103. Determine a first adjacent area of the designated part of the target object according to the contour feature data of the designated part of the target object.
104. Display a bullet screen editing control in the first adjacent area so that a user can submit bullet screen data through the bullet screen editing control.
105. In response to a publishing operation on the bullet screen editing control, publish the bullet screen data submitted by the user; the bullet screen data submitted by the user can be displayed in a second adjacent area of the designated part of the target object.
In this embodiment, the target video is the video currently being played by the terminal device; its source is not limited in this embodiment. Optionally, the target video may be a video the user watches online, a video downloaded or cached locally in advance, or a locally recorded video. The specific content of the video data is also not limited in the embodiments of the present application: it may be, for example, a TV series, a variety show, or a short web video, but is not limited thereto.
In this embodiment, the terminal device playing the target video is a computer device used by the user that provides the computing, Internet access, communication, video playback, and similar functions the user requires; it may be, for example, a smartphone, a tablet computer, a personal computer, or a wearable device. Alternatively, the terminal device may be a video playing device formed by a set-top box (STB) and a television, where the digital video converter box is commonly called a set-top box.
In this embodiment, the target video image is the video frame currently being played by the terminal device, and it may or may not contain an image of the designated part. Target objects differ across embodiments, and so do designated parts; even for the same target object the designated part may differ. For example, in some embodiments, for a person, the designated part may be any one or more of the person's face, hands, arms, legs, and so on, where "multiple" means 2 or more. In other embodiments, for a plant, the designated part may be one or more of the plant's root, stem, leaves, flowers, etc. In still other embodiments, for an animal, the designated part may be the animal's head, limbs, body, or the like; and so on, but the examples are not limited thereto.
Further, for a target video image containing a designated part, the contour feature data of the designated part of the target object in that image can be acquired. Optionally, the contour feature data may include the position of the designated part in the target video image, characterized by pixel coordinates or the like, but not limited thereto. For brevity, in some embodiments below, the contour feature data of the designated part of the target object in the target video image is simply called the contour feature data of the designated part of the target object.
An adjacent area of the designated part of the target object can then be determined from this contour feature data; this area is used to display the bullet screen editing control. For ease of description and distinction, the adjacent area used to display the bullet screen editing control is defined as the first adjacent area, which is an area adjacent to the designated part of the target object in the target video image.
Further, as shown in fig. 1b, a bullet screen editing control may be displayed in the first adjacent area, through which the user can submit bullet screen data. The implementation form of the control is not limited in this embodiment: it may be a bubble (as shown in fig. 1b), a text box, an editing pattern, or the like, but is not limited thereto.
Correspondingly, the terminal device can publish the bullet screen data submitted by the user in response to a publishing operation on the bullet screen editing control. In this embodiment, the bullet screen data submitted for the control can be displayed in a second adjacent area of the designated part of the target object, that is, another area adjacent to the designated part in the target video image. The first and second adjacent areas may be the same area or different areas; optionally, the second adjacent area is larger than the first, and its size can be determined by the content of the target bullet screen data to be displayed.
In this embodiment, for a currently displayed target video image of a target video that is being played, when the target video image contains a designated part, contour feature data of the designated part of a target object contained in the target video image can be acquired; a first adjacent area of the designated part is determined according to the contour feature data, and a bullet screen editing control is displayed in that area. Through the control, the user can publish bullet screen data that can be displayed in a second adjacent area of the designated part of the target object. This publishing mode improves the diversity and flexibility of bullet screen publishing and helps improve user experience.
On the other hand, a bullet screen published in this mode avoids the designated part in the video picture when displayed and can follow the target object as it moves, so that the user can watch the video content and the bullet screen content at the same time without switching back and forth between them, further improving the viewing experience.
For example, in some embodiments, the designated part may be a human face. A bullet screen published in the mode provided by this embodiment avoids the face image in the video picture when displayed, and by being shown in an area adjacent to the face image it achieves the effect of a bullet screen that "moves with the person": the bullet screen data of each video frame is displayed adjacent to the face image contained in that frame, so during playback the bullet screen follows the movement of the face image, which can further improve user experience.
In the embodiments of the present application, an adjacent area of the designated part is an area adjacent to the designated part in the target video image; it may be at a set distance from the contour edge of the designated part, such as 1 mm or 0.5 mm, but is not limited thereto. The relative position of the first and second adjacent areas with respect to the designated part in the target video image is not restricted: either may be located above, below, to the left of, or to the right of the designated part, among other placements.
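As a concrete illustration of the geometry described above, the following Python sketch derives a first adjacent area from the bounding box of a designated part. The function and field names, the pixel units, and the offset and size values are illustrative assumptions rather than part of the disclosed method.

```python
# A minimal sketch, under assumed names and pixel units, of placing a
# "first adjacent area" a set distance above a designated part's contour.
from collections import namedtuple

Box = namedtuple("Box", "left top right bottom")  # pixel coordinates

def first_adjacent_area(part: Box, frame_w: int,
                        offset: int = 8, height: int = 40) -> Box:
    """Control strip directly above the part, `offset` px from its contour edge."""
    bottom = max(0, part.top - offset)
    top = max(0, bottom - height)
    return Box(part.left, top, min(part.right, frame_w), bottom)

# Example: a face bounding box inside a 1280x720 frame.
print(first_adjacent_area(Box(500, 200, 620, 360), 1280))
# Box(left=500, top=152, right=620, bottom=192)
```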
In this embodiment, the target video image may contain one or more target objects. Optionally, one target object corresponds to one designated part; for example, when the target object is a person, the designated part may be the person's face. The contour feature data of the designated part in the target video image contains the position feature of the designated part in that image, such as the position feature of a face image in the target video image.
In this embodiment, the positional relationship between the first adjacent area of each designated part and the image of that part can be determined from the positions of the designated parts contained in the target video image and the positions of the first adjacent areas already assigned. Optionally, the first adjacent areas of the designated parts can be determined sequentially according to the positions of the designated parts contained in the target video image. The following takes a first designated part as an example, where the first designated part is any designated part whose adjacent area has not yet been determined.
For the first designated part, the first adjacent areas of the designated parts whose adjacent areas have already been determined (referred to as second designated parts; there may be 0, 1, or more of them) can be obtained. The first adjacent area of the first designated part is then determined with the goal that it covers neither the positions of the designated parts nor the first adjacent areas of the second designated parts; that is, the determined area has no overlap, i.e., no area intersection, with the areas where the designated parts are located or with the already determined first adjacent areas. In this way, the first adjacent area of each designated part can be determined in turn, and the corresponding bullet screen editing control can be displayed in each, as the sketch below illustrates.
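A hedged sketch of this sequential placement follows: each new first adjacent area is accepted only if it intersects neither the designated parts themselves nor the areas already assigned to second designated parts. The candidate positions, sizes, and fallback behavior are assumptions made for illustration.

```python
# Sequential, overlap-free placement of control areas (illustrative only).
from collections import namedtuple

Box = namedtuple("Box", "left top right bottom")  # pixel coordinates

def overlaps(a: Box, b: Box) -> bool:
    return not (a.right <= b.left or b.right <= a.left or
                a.bottom <= b.top or b.bottom <= a.top)

def candidates(p: Box, off: int = 8, w: int = 120, h: int = 40):
    cx = (p.left + p.right - w) // 2
    yield Box(cx, p.top - off - h, cx + w, p.top - off)            # above
    yield Box(cx, p.bottom + off, cx + w, p.bottom + off + h)      # below
    yield Box(p.left - off - w, p.top, p.left - off, p.top + h)    # left
    yield Box(p.right + off, p.top, p.right + off + w, p.top + h)  # right

def place_controls(parts):
    """Assign one control area per designated part, in order, avoiding overlaps."""
    placed = {}
    for i, part in enumerate(parts):
        for cand in candidates(part):
            blockers = list(parts) + list(placed.values())
            if not any(overlaps(cand, b) for b in blockers):
                placed[i] = cand
                break  # a real layout would relax constraints if all candidates fail
    return placed

print(place_controls([Box(100, 100, 220, 260), Box(260, 100, 380, 260)]))
```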
Correspondingly, for the second adjacent area of a designated part contained in the target video image, the second adjacent area of the designated part of the target object can be determined according to the contour feature data of the designated part, and the target bullet screen data for that designated part can be displayed in it. The target bullet screen data displayed in the second adjacent area is described in the embodiments below and is not detailed here. For the specific determination of the second adjacent area, refer to the description of the determination process of the first adjacent area, which is not repeated here.
It should be noted that, in the embodiments of the present application, the target video image may contain one or more target objects, and each target object corresponds to one or more designated parts. For example, if the target object is a person and the designated part is the face, one target object corresponds to one face image in the target video image. Optionally, as shown in fig. 1b, for the designated parts of multiple target objects, a bullet screen editing control may be displayed in the first adjacent area of each designated part.
Optionally, when the target video image contains multiple designated parts and the first adjacent area of each corresponds to one bullet screen editing control, the user can publish bullet screen data for a particular designated part by triggering the corresponding control: triggering it calls up an editing page through which bullet screen data can be input. Correspondingly, as shown in fig. 1c, the terminal device can, in response to a selection operation among the multiple bullet screen editing controls, take the control selected by the user as the target editing control, and display an editing page in response to a triggering operation on the target editing control. The user inputs bullet screen data through the editing page and can trigger a publishing operation on the target editing control to publish it. Optionally, as shown in fig. 1d, a corresponding submission control may be provided, and the terminal device publishes the bullet screen data submitted by the user in response to a triggering operation on the submission control.
Further, if the target bullet screen data for the designated part in the target video image is bullet screen data submitted by the user, the target bullet screen data can be displayed in the second adjacent area in the target video image. As shown in fig. 1d, the bullet screen data submitted by the user, "a good lover doll...", may be presented in the second adjacent area of the target face image.
In this embodiment, the implementation form of the editing page is not limited. Optionally, it may be a cover layer that partially or completely covers the target video image, or an opaque floating frame suspended over the target video image.
Optionally, to prevent the playing of the next video frame from disturbing the user while the user is editing bullet screen data, the target video may be placed in a pause state during editing, that is, playback of the target video is paused. The target video image then does not disappear while the user edits the bullet screen data, which improves user experience.
Based on this, the terminal device can, in response to a triggering operation on the bullet screen editing control, place the target video in the pause state and display the bullet screen data editing page for the user to input bullet screen data. Further, in response to a publishing operation on the bullet screen editing control, the terminal device restores the target video to the playing state and publishes the bullet screen data submitted by the user.
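The following sketch shows one way a client could wire this pause/resume behavior together; the player and server interfaces are hypothetical stand-ins, not an API taken from the disclosure.

```python
# Illustrative client-side flow: tap control -> pause + edit page;
# publish -> resume playback + submit the bullet screen data.
class BarrageEditSession:
    def __init__(self, player, server):
        self.player = player  # assumed to expose pause() and play()
        self.server = server  # assumed to expose submit(barrage: dict)

    def on_edit_control_tapped(self):
        self.player.pause()    # target video enters the pause state
        self.show_edit_page()  # cover layer or floating frame over the frame

    def on_publish(self, text: str, part_id: str, time_node_ms: int):
        self.player.play()     # restore the playing state
        # bullet screen data is bound to a designated part and a play time node
        self.server.submit({"text": text, "part": part_id,
                            "time_ms": time_node_ms})

    def show_edit_page(self):
        pass  # UI-framework specific; out of scope for this sketch
```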
The specific way in which the bullet screen data submitted by the user is published is not limited in the embodiments of the present application. Optionally, the terminal device may send the bullet screen data submitted by the user to the server device, which receives the data and can schedule it.
Here, the server device is a computer device that manages video data, responds to service requests from terminal devices, and provides users with services related to video data processing; it generally has the corresponding service-bearing and service-guarantee capabilities. It may be a single server device, a cloud server array, or a virtual machine (VM) running in a cloud server array, and may also be another computing device with the corresponding service capability, such as a computer (running a service program). Optionally, the server device may be a server of a video website.
Optionally, the server device may audit the bullet screen data submitted by the user to determine whether it meets the display condition. If it does, the data can be issued to terminal devices as target bullet screen data of the designated part; if it does not, the data can be discarded.
Further, when bullet screen data submitted by a user meets the display condition, the server device adds it to a bullet screen pool to await scheduling, and sends it to terminal devices as target bullet screen data when it is scheduled. The specific way the server device schedules the bullet screen data in the pool is not limited: it may schedule according to the submission time of the bullet screen data, the class of the submitting user, the content of the bullet screen data, and so on, but is not limited thereto.
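Since the disclosure leaves the scheduling policy open, the sketch below picks one plausible policy (higher user class first, then earlier submission time) purely for illustration, with a placeholder audit check.

```python
# Illustrative server-side bullet screen pool with a simple priority policy.
import heapq
import itertools
import time

class BarragePool:
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker keeps heap ordering stable

    def add(self, barrage: dict, user_class: int) -> None:
        if not barrage.get("text", "").strip():
            return  # fails the placeholder audit: does not meet the display condition
        # Assumed policy: higher user class first, then earlier submission time.
        key = (-user_class, barrage.get("submitted_at", time.time()))
        heapq.heappush(self._heap, (key, next(self._seq), barrage))

    def schedule_next(self):
        """Pop the next barrage to issue to terminals as target bullet screen data."""
        return heapq.heappop(self._heap)[2] if self._heap else None
```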
It should be noted that, in this embodiment, the bullet screen data displayed by a terminal device may be data submitted by the user of that device for a designated part in the target video image, or data submitted by other users for that designated part. The specific target bullet screen data displayed for a designated part in the target video image can be determined by the server device.
In the embodiments of the present application, the target video image may or may not contain the designated part, so designated-part detection can be performed on the target video image. Optionally, the terminal device may detect the designated part according to its image features to determine whether the target video image contains it. However, such real-time detection places a high demand on terminal performance and may affect the playback speed of the video.
To avoid affecting terminal performance, the server device can instead perform designated-part detection on the target video image and add the detection result to the image as a designated label, which indicates whether the target video image contains the designated part. Accordingly, the terminal device can read the designated label carried by the target video image and judge from it whether the image contains the designated part; if so, the terminal device acquires the contour feature data of the designated part contained in the target video image.
Further, in the embodiments of the present application, the contour feature data of the designated part of the target object in the target video image may be determined by the terminal device or by the server device. If the terminal device determines it, the demand on the terminal's data processing performance is high and its power consumption increases. Moreover, determining the contour feature data in real time while displaying the target video image affects the playback speed of the target video, and the bullet screen editing control displayed afterwards may even appear on the wrong frame, that is, a control intended for the designated part of the target video image is displayed on other video images.
To improve the accuracy of the display position of the bullet screen editing control and preserve the speed at which the terminal device plays the target video, in the embodiments of the present application the server device may determine the contour feature data of the designated part(s) contained in the target video image. Optionally, after acquiring the target video image, the server device performs image processing on it to determine whether it contains a designated part and, if so, to determine the contour feature data of that part. Alternatively, the server device may perform this image processing on the target video image in advance. The contour feature data of the designated part in the target video image may be its position feature in that image, such as its pixel coordinates.
In one embodiment, if the designated part is a human face, face detection may be performed on the target video image to determine whether it contains a face image; if it does, the contour feature data of the face image in the target video image is determined, for example the position feature of the face image contained in the target video image.
Alternatively, the server device may perform face detection on the target video image in advance, determine at least one face image, and determine the position of each face image in the target video image, which may be its pixel coordinates.
The specific implementation of face detection on the target video image is not limited in the embodiments of the present application. Optionally, the server device may perform face detection using Haar feature extraction with an Adaboost classifier, or a Multi-task Cascaded Convolutional Network (MTCNN) algorithm, among others, but is not limited thereto.
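As a sketch of the first option named above, a Haar cascade face detector from OpenCV returns exactly the kind of pixel-coordinate bounding boxes the contour feature data needs; MTCNN or an Adaboost-based detector could be substituted without changing the surrounding flow.

```python
# Face detection sketch using OpenCV's bundled Haar cascade.
import cv2

def detect_faces(frame_bgr):
    """Return face bounding boxes as (x, y, w, h) in pixel coordinates."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```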
Further, the server device may pre-store a contour feature set corresponding to the target video; the set contains the correspondence between the image identifiers of the video images included in the target video and the positions of the designated parts in the corresponding video images. Accordingly, based on the image identifier included in a video request, the server device can obtain the contour feature data of the designated part in the target video image from this correspondence and issue it to the terminal device.
Here, an image identifier is information that uniquely identifies a video frame. Optionally, it may be the play time node of the video image or the frame number of the video image. For example, for an episode of a TV series, the play time node of a video image may be the second, or even the millisecond or microsecond, at which that image occurs in the episode. Optionally, the play time node may be accurate to the period corresponding to the frame rate of the video data: for example, if the frame rate is 25 frames per second, the play time node can be accurate to 40 ms.
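A small worked example of that arithmetic: at 25 frames per second each frame lasts 40 ms, so a raw timestamp can be snapped to a 40 ms boundary and used as the image identifier.

```python
# Quantize a play time node to the frame period (40 ms at 25 fps).
def time_node_ms(raw_ms: int, fps: int = 25) -> int:
    period = 1000 // fps            # 40 ms per frame at 25 fps
    return (raw_ms // period) * period

assert time_node_ms(1234) == 1200   # the frame spanning 1200-1239 ms
```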
Optionally, the server device may issue the contour feature set to the terminal device. The terminal device can then acquire the contour feature data of the designated part in the target video image according to the identifier of that image: it matches the identifier against the correspondence between image identifiers and positions of designated parts, and filters out of the contour feature set the contour feature data of the designated part corresponding to the target video image.
Optionally, in the embodiments of the present application, within the same video frame one designated part corresponds to one target object. The server device can extract multiple reference video images from the target video at a set time interval, where the set time interval is a positive integer multiple of the interval between two adjacent frames of the video file containing the target video image. For example, if the interval between two adjacent frames is 40 ms, the set time interval may be 40 ms, 80 ms, 120 ms, etc., but is not limited thereto.
Further, the server device may perform designated-part recognition on the reference video images according to the designated-part features of the target object(s), so as to determine the designated part corresponding to each target object and the contour feature data of each designated part in the reference video images. Optionally, for any reference video image, the server device may detect whether it contains a designated part; if it does, determine the contour feature data of the designated part contained in that reference video image; and then recognize each designated part according to the designated-part features of the target object(s) to determine which target object it corresponds to.
Further, the server device may generate the contour feature set of the designated part of each target object according to the contour feature data of that part in the reference video images and the play time nodes of those images. The contour feature set (or track) of a target object refers to the distribution of the contour feature data of the designated part of the target object over the playing time period corresponding to the reference video images.
Further, consider that after the designated part of a target object has appeared continuously in the reference video images for some period of time, it may stop appearing because of a shot switch or scene change. Based on this, to keep the contour feature set of the designated part of a target object smooth, the set can be segmented. The following takes a first target object as an example, where the first target object is any one of the target objects. For the first target object, the first video image in which its designated part first appears can be identified from those reference video images that have not yet undergone designated-part recognition; designated-part recognition is then performed on the frames after the first video image to determine whether each contains the designated part of the first target object. In the embodiments of the present application, N consecutive video images in which the designated part of the first target object no longer appears serve as a break point for segmenting the contour feature set, where N is a positive integer, preferably N ≥ 2; the specific value can be set flexibly and is not limited here.
Based on the above analysis, if the designated part of the first target object is absent from N consecutive video frames after the first video image, a contour feature set of the first target object at the target play time nodes is generated from the contour feature data of its designated part in a target image set and the target play time nodes corresponding to that set. The target image set includes: the first video image; the frame immediately preceding the N consecutive frames in which the designated part of the first target object is absent (for ease of distinction and description, defined here as the second video image); and the video images between the first and second video images, that is, those whose play time nodes lie between the play time nodes of the first and second video images. Correspondingly, the target play time nodes corresponding to the target image set are the play time nodes of the video images in the set. In this way, the server device can generate the contour feature set of the first target object over the playing time period corresponding to the reference video images, and likewise obtain the contour feature set of every other target object over that period.
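The segmentation rule above can be sketched as follows; the input format, a list of (play time node, contour-or-None) samples in playback order, is an assumption for illustration.

```python
# Cut a contour feature track wherever the designated part is absent from
# n_break consecutive reference frames (illustrative sketch).
def segment_track(samples, n_break: int = 2):
    segments, current, gap = [], [], 0
    for time_node, contour in samples:
        if contour is None:
            gap += 1
            if gap >= n_break and current:
                segments.append(current)  # segment ends at the second video image
                current = []
        else:
            gap = 0
            current.append((time_node, contour))
    if current:
        segments.append(current)
    return segments

track = [(0, "c0"), (40, "c1"), (80, None), (120, None), (160, "c2")]
print(segment_track(track))  # two segments: nodes 0-40 and node 160
```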
Optionally, after generating the contour feature sets of the first target object over the playing time period corresponding to the reference video images, the server device may further perform noise reduction and anti-shake processing on each contour feature set and merge the generated sets, so as to obtain optimized contour feature sets.
Further, the server device may send the contour feature set of each target object over the playing time period corresponding to the reference video images to the terminal device. The terminal device can then use the target play time node of the target video image to obtain the contour feature data of the face image(s) at that node, as the contour feature data of the face image(s) in the target video image.
Optionally, in an application scenario where the terminal device downloads or caches the video file in advance, the server device may send these contour feature sets to the terminal device during the video download. In an online viewing scenario, the server device may send them to the terminal device during playback.
Further, considering that the contour feature sets of the target object(s) over the full playing time period can be large, they can be distributed to the terminal device in batches, optionally according to the terminal's playback progress. For example, the target video can be divided into multiple video segments by playing time, and while one segment is playing, the contour feature set of the playing time period corresponding to the next segment is sent to the terminal device. The terminal device can then match the target play time node of the target video image within the received contour feature set and obtain the contour features of the designated part contained in the target video image.
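The batching step could look like the sketch below: the feature set is bucketed by fixed-length video segments, and while segment k plays, the batch for segment k+1 is pushed. The segment length and tuple layout are assumptions.

```python
# Batch a contour feature set by playing-time segment (illustrative).
from collections import defaultdict

SEGMENT_MS = 60_000  # assumed segment length: one minute of playing time

def batch_by_segment(feature_set):
    """feature_set: iterable of (time_node_ms, part_id, contour) tuples."""
    batches = defaultdict(list)
    for time_node, part_id, contour in feature_set:
        batches[time_node // SEGMENT_MS].append((time_node, part_id, contour))
    return dict(batches)

def next_batch_for(play_position_ms, batches):
    # Prefetch the batch for the segment after the one currently playing.
    return batches.get(play_position_ms // SEGMENT_MS + 1, [])
```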
In the embodiments of the present application, for ease of description and distinction, the contour feature set that the terminal device has already received is defined as the first contour feature set; it contains the contour feature data of the face image of the target object within a first playing time period. Accordingly, the terminal device can acquire the first contour feature set and obtain the contour feature data of the target object from it according to the identifier of the target video image.
In some embodiments, the image identifier of a video image is the play time node corresponding to that image. The correspondence between image identifiers and the contour features of the designated part is then a correspondence between play time nodes and the designated part in the video images corresponding to those nodes, and the image identifier of the target video image is the target play time node corresponding to the target video image. On this basis, the contour feature data of the designated part at the target play time node can be obtained from the target play time node of the target video image and used as the contour feature data of the designated part in the target video image.
Further, it can be judged whether the target play time node corresponding to the target video image falls within the first playing time period; if it does, the terminal device matches the target play time node within the first contour feature set to obtain the contour feature data of the designated part of the target object at the target play time node.
Correspondingly, if the target play time node corresponding to the target video image does not fall within the first playing time period, the terminal device may send a first data request, containing the target play time node, to the server device. The server device receives the first data request and determines the contour feature set of the target object within a second playing time period according to the target play time node; the second playing time period contains the target play time node and is later than the first playing time period. The server device then issues this second contour feature set of the target object to the terminal device.
The terminal device receives the second contour feature set and can match the target play time node within it to obtain the position of the designated part of the target object at the target play time node.
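Putting the client-side lookup together, a cache like the one sketched below first checks whether the target play time node lies in the playing time period it already holds and otherwise issues the first data request; the request/response shape is a hypothetical stand-in.

```python
# Client-side contour feature lookup with fetch-on-miss (illustrative).
class ContourFeatureCache:
    def __init__(self, server):
        self.server = server   # assumed to expose fetch_features(time_node_ms)
        self.period = (0, 0)   # [start_ms, end_ms) of the cached feature set
        self.features = {}     # time_node_ms -> contour feature data

    def contour_at(self, time_node_ms: int):
        start, end = self.period
        if not (start <= time_node_ms < end):
            # first data request: carries the target play time node
            payload = self.server.fetch_features(time_node_ms)
            self.period = (payload["start_ms"], payload["end_ms"])
            self.features = payload["features"]
        return self.features.get(time_node_ms)
```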
It should be noted that the first contour feature set of the first target object may be actively pushed to the terminal device by the server device, or obtained by the terminal device through a data request to the server device. Optionally, the terminal device may send a second data request, containing the start time node of the first playing time period, to the server device at that start time node.
Correspondingly, the server device receives the second data request, determines the first contour feature set of the target object within the first playing time period according to the start time node, and sends the first contour feature set to the terminal device.
It should also be noted that other bullet screen publishing modes may be provided in the embodiments of the present application, and the user can choose which mode to use to publish a bullet screen. Correspondingly, as shown in fig. 1e, the terminal device may display at least one bullet screen publishing mode in response to a triggering operation on a bullet screen launch control; the displayed modes include the publishing mode provided by the above embodiments, defined here as the "A mode" (in fig. 1e the modes may be, for example, an A mode, a B mode, and a C mode). The user can select a publishing mode from those displayed; if the user selects the A mode, the terminal device starts the corresponding publishing mode in response to the triggering operation on the "A mode". Optionally, the publishing mode provided in the embodiments of the present application may be presented as the prop card shown in fig. 1e, or implemented as a tab or the like, but is not limited thereto.
The foregoing embodiments describe bullet screen publishing for a single video frame of the target video; the embodiments of the present application can also perform bullet screen publishing for the whole target video, which is illustrated below with reference to the accompanying drawings.
Fig. 2a is a schematic flow chart of another bullet screen issuing method according to an embodiment of the present application. As shown in fig. 2a, the method comprises:
201. Detect whether a video image included in the target video being played contains a designated part.
202. For a target video image containing the designated part, determine a first adjacent area of the designated part based on contour feature data of the designated part in the target video image.
203. Display a bullet screen editing control in the first adjacent area so that a user can publish bullet screen data based on the bullet screen editing control.
204. In response to a publishing operation on the bullet screen editing control, publish the bullet screen data submitted by the user; the bullet screen data submitted by the user can be displayed in a second adjacent area of the designated part.
In this embodiment, the target video refers to a video currently being played by the terminal device, and for the description of the implementation form of the target video and the implementation form of the terminal device playing the target video, reference may be made to relevant contents of the above embodiments, and details are not described here again.
In this embodiment, for a target video being played, it is possible to detect whether a video image included in the target video includes a specified portion. For the description of the designated portion, reference may be made to the relevant contents of the above embodiments, which are not repeated herein.
Further, for a target video image containing a specified portion in the target video, in this embodiment, the contour feature data of the specified portion of the target object in the target video image in the frame image may be acquired. Optionally, the data of the contour feature of the specified region in the target video image may include: the position of the designated location in the target video image is characterized by, but not limited to, pixel coordinates, etc. For convenience of description, in some embodiments described below, the contour feature data of the designated portion in the target video image is simply referred to as the contour feature data of the designated portion.
Furthermore, an adjacent area of the designated portion can be determined according to its contour feature data, and this adjacent area can be used for displaying the bullet screen editing control. For convenience of description and distinction, in the embodiment of the present application, the adjacent area used for showing the bullet screen editing control is defined as the first adjacent area. In the embodiment of the present application, the first adjacent area is an adjacent area of the designated portion of the target object in the target video image, that is, an area at a set distance from the contour edge of the designated portion, such as 1 mm or 0.5 mm, but not limited thereto.
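Since the contour feature data can be pixel coordinates of the designated portion, one way to realize the "set distance from the contour edge" rule is to reduce the contour to a bounding box and offset from it, as in the hedged sketch below; the 6-pixel gap and 28-pixel control height are assumptions of the example, and the document's millimetre examples would be converted through the display density.

```typescript
// Sketch: deriving the first adjacent area from contour feature data given
// as pixel coordinates of the designated part's contour.

interface Point { x: number; y: number; }
interface Rect { x: number; y: number; width: number; height: number; }

// Bounding box of the contour points (the position of the designated part).
function boundingBox(contour: Point[]): Rect {
  const xs = contour.map(p => p.x);
  const ys = contour.map(p => p.y);
  const x = Math.min(...xs), y = Math.min(...ys);
  return { x, y, width: Math.max(...xs) - x, height: Math.max(...ys) - y };
}

// First adjacent area: a strip separated from the contour edge by gapPx.
function firstAdjacentArea(contour: Point[], gapPx = 6, controlH = 28): Rect {
  const part = boundingBox(contour);
  return { x: part.x, y: part.y - gapPx - controlH, width: part.width, height: controlH };
}
```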
Further, as shown in fig. 1b above, a bullet screen editing control may be displayed in the first adjacent area, through which the user can submit bullet screen data. For the implementation form of the bullet screen editing control, reference may be made to the relevant contents in the above embodiments, and details are not described here.
For the terminal device, the bullet screen data submitted by the user can be published in response to the publishing operation on the bullet screen editing control. In this embodiment, the bullet screen data submitted through the bullet screen editing control can be displayed in a second adjacent area of the designated portion, where the second adjacent area is an adjacent area of the designated portion in the target video image. Optionally, the first adjacent area and the second adjacent area may be the same area or different areas.
In the present embodiment, the relative positional relationship of the first and second adjacent areas to the designated portion in the target video image is not limited. Optionally, each of them may be located above, below, to the left of, or to the right of the designated portion, but is not limited thereto. Optionally, the area of the second adjacent region is larger than that of the first adjacent region, and can be determined by the content of the displayed target bullet screen data.
In this embodiment, for a target video being played, when a target video image in the target video contains a designated portion, a first adjacent area of the designated portion of the target object is determined based on the contour feature data of the designated portion in the target video image, and a bullet screen editing control is displayed in that area. The user can thus publish, through the bullet screen editing control, bullet screen data that can be displayed in the second adjacent area of the designated portion of the target object. The bullet screen publishing mode provided by the embodiment of the application improves the diversity and flexibility of bullet screen publishing and is beneficial to improving user experience.
On the other hand, a bullet screen published in this manner avoids the designated portion in the video picture when displayed, and can also follow the target object as it is displayed. The user can therefore watch the video content and the bullet screen content at the same time without switching back and forth between them, which further improves the viewing experience.
For example, in some embodiments, the designated portion may be a human face. A bullet screen published in the manner provided by this embodiment avoids the facial image in the video picture when displayed, and is shown in an adjacent area of the facial image, achieving the effect that the bullet screen "moves with the person". In this way, the bullet screen data of each frame of video image can be displayed in an adjacent area of the facial image it relates to, and during video playing the bullet screen moves as the facial image moves, which can further improve user experience.
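As a concrete illustration of the "moves with the person" effect, the sketch below repositions an absolutely positioned bullet element once per rendered frame in a browser playback context. The Map from frame identifier to face rectangle, the 30-pixel offset standing in for the second adjacent area, and the element-based rendering are all assumptions of the example.

```typescript
// Sketch: keep the displayed bullet anchored to the face as it moves.

interface Rect { x: number; y: number; width: number; height: number; }

function followFace(
  bullet: HTMLElement,                    // absolutely positioned overlay
  faceByFrame: Map<string, Rect>,         // frame identification -> face box
  currentFrameId: () => string,
): void {
  const tick = () => {
    const face = faceByFrame.get(currentFrameId());
    if (face) {
      // Re-anchor to an area above the face so the bullet avoids the
      // facial image and moves as the face moves.
      bullet.style.left = `${face.x}px`;
      bullet.style.top = `${face.y - 30}px`;
      bullet.style.display = "block";
    } else {
      bullet.style.display = "none";      // frame without the designated part
    }
    requestAnimationFrame(tick);
  };
  requestAnimationFrame(tick);
}
```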
In the embodiment of the present application, each frame of video image may or may not contain the designated portion. Based on this, designated-portion detection may be performed on the video images. Optionally, the terminal device may perform this detection on each frame of video image according to the image features of the designated portion, to determine whether the image contains it. However, such real-time detection places high demands on the performance of the terminal device and may affect the playing speed of the video.
In order not to affect the performance of the terminal device, the server device may instead perform designated-portion detection on the video images included in the target video and determine the contour feature data of the designated portion contained in each target video image, thereby obtaining a contour feature set corresponding to the target video. For specific embodiments of the server device obtaining the contour feature data and generating the contour feature set, reference may be made to the relevant contents of the foregoing embodiments, and details are not described herein again.
Further, the server device can prestore the contour feature set corresponding to the target video; the set comprises a correspondence between the image identifications of the video images and the positions of the designated portion in the corresponding video images. Here, the video images are those included in the target video, and for the description of the image identification, reference may be made to the relevant contents of the above embodiments. Accordingly, the server device can send the contour feature set corresponding to the target video to the terminal device according to the terminal device's video request.
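A sketch of what such a prestored contour feature set can look like, assuming the image identification is the frame's play time node in milliseconds; the field names and the offline builder function are invented for the example and only illustrate the identification-to-position correspondence.

```typescript
// Sketch: server-side contour feature set for one target video.

interface Rect { x: number; y: number; width: number; height: number; }

interface ContourFeatureSet {
  videoId: string;
  periodStartMs: number;                 // play time period covered by the set
  periodEndMs: number;
  // image identification (play time node) -> designated-part positions
  features: Map<number, Rect[]>;
}

// The server device fills the set offline, so the terminal device only
// performs lookups while the video plays.
function buildContourFeatureSet(
  videoId: string,
  detections: Array<[number, Rect[]]>,   // (play time node, detected parts)
  startMs: number,
  endMs: number,
): ContourFeatureSet {
  return { videoId, periodStartMs: startMs, periodEndMs: endMs, features: new Map(detections) };
}
```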
Correspondingly, the terminal device can, according to a set query period and the identification of each frame of video image in turn, query whether the contour feature set contains contour feature data for the video image corresponding to the current query period; further, if such data is queried in the current query period, it is determined that contour feature data of the target object contained in that video image has been detected in the contour feature set.
The query period is 1/K of the display time of each frame of video image in the target video, where K is a positive integer. Preferably, K is 1, that is, the query period equals the display time of one frame. The display time of each frame is determined by the frame rate of the target video, i.e. it equals the inverse of the frame rate. For example, if the frame rate of the target video is 25 frames/s, the display time of each frame of video image is 40 ms.
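For instance, at 25 frames/s one frame is displayed for 1000/25 = 40 ms, so with K = 1 the set is queried every 40 ms. A sketch of this polling loop follows; the lookup and callback signatures are assumed for the example.

```typescript
// Sketch: query the contour feature set once per query period.

function queryPeriodMs(frameRate: number, k = 1): number {
  const frameDisplayMs = 1000 / frameRate;  // inverse of the frame rate
  return frameDisplayMs / k;                // 1/K of the display time
}

// Returns a stop function; onHit fires when the current frame's
// identification is found in the set (i.e. it contains the designated part).
function startQueryLoop<T>(
  frameRate: number,
  currentFrameId: () => number,
  lookup: (frameId: number) => T | undefined,
  onHit: (data: T) => void,
): () => void {
  const timer = setInterval(() => {
    const data = lookup(currentFrameId());
    if (data !== undefined) onHit(data);
  }, queryPeriodMs(frameRate));
  return () => clearInterval(timer);
}
```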
In this embodiment, the number of designated portions in the target video image may be one or more. Optionally, each designated portion corresponds to one target object; for example, the target object is a person and the designated portion is the person's face. Optionally, the contour feature data of the designated portion in the target video image includes the position feature of the designated portion in the target video image, for example the position feature of the facial image in the target video image.
In this embodiment, when the target video image contains at least one designated portion, the first adjacent area of each designated portion may be determined according to the positions of the designated portions in the target video image and the positions of the first adjacent areas already corresponding to them. Optionally, the first adjacent areas of the at least one designated portion may be determined sequentially according to those positions. The following takes a first designated portion as an example, where the first designated portion is any designated portion of the at least one designated portion for which an adjacent area has not yet been determined.
For the first designated portion, the first adjacent areas of the second designated portions, i.e. those designated portions among the at least one designated portion whose adjacent areas have already been determined, may be obtained; the number of second designated portions may be 0, 1 or more. Further, the first adjacent area of the first designated portion may be determined with the goal that it covers neither the positions of the designated portions nor the first adjacent areas of the second designated portions. For an explanation of this non-covering condition, reference may be made to the relevant contents of the foregoing embodiments, and details are not described here. Furthermore, the corresponding bullet screen editing control can be displayed in the first adjacent area of each designated portion.
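A sketch of this sequential placement rule, assuming rectangular designated portions and four candidate positions tried in order; the candidate geometry and the fallback when every candidate collides are choices made for the example, not mandated by the embodiment.

```typescript
// Sketch: place each first adjacent area so it covers neither any
// designated part nor any first adjacent area placed earlier.

interface Rect { x: number; y: number; width: number; height: number; }

function intersects(a: Rect, b: Rect): boolean {
  return a.x < b.x + b.width && b.x < a.x + a.width &&
         a.y < b.y + b.height && b.y < a.y + a.height;
}

function placeAdjacentAreas(parts: Rect[], gap = 6, h = 28): Rect[] {
  const placed: Rect[] = [];               // areas of the "second designated parts"
  for (const p of parts) {                 // p plays the "first designated part"
    const candidates: Rect[] = [
      { x: p.x, y: p.y - gap - h, width: p.width, height: h },          // above
      { x: p.x, y: p.y + p.height + gap, width: p.width, height: h },   // below
      { x: p.x - gap - p.width, y: p.y, width: p.width, height: h },    // left
      { x: p.x + p.width + gap, y: p.y, width: p.width, height: h },    // right
    ];
    const ok = candidates.find(c =>
      parts.every(q => !intersects(c, q)) &&   // covers no designated part
      placed.every(q => !intersects(c, q)));   // covers no earlier area
    placed.push(ok ?? candidates[0]);          // fall back to "above"
  }
  return placed;
}
```

Processing the portions one by one means each new area only has to be checked against the designated portions and the areas already placed, which mirrors the first/second designated portion distinction above.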
Correspondingly, for the designated portion contained in the target video image, its second adjacent area can be determined according to the contour feature data of the designated portion of the target object, and the target bullet screen data for that designated portion can be displayed in the second adjacent area. For specific implementation forms of the target bullet screen data and of the bullet screen data submitted by the user, reference may be made to the relevant contents of the above embodiments, which are not repeated here. For a specific implementation of determining the second adjacent area, reference may be made to the determination process of the first adjacent area, which is likewise not repeated here.
Optionally, to prevent the next frame of video image from being played, and the viewing effect from being affected, while the user is editing bullet screen data, the state of the target video may be adjusted to a pause state during editing, that is, playing of the target video is paused. This keeps the target video image on screen throughout editing and improves user experience.
Based on this, the terminal device can, in response to the trigger operation on the bullet screen editing control, adjust the state of the target video to the pause state and display a bullet screen data editing page for the user to input bullet screen data. Further, in response to the publishing operation on the bullet screen editing control, the terminal device can readjust the state of the target video to the playing state and publish the bullet screen data submitted by the user.
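A sketch of this pause/resume wiring in a browser playback context with a standard HTMLVideoElement; the editor-opening callback and the publish function that posts the bullet data to the server device are assumed names.

```typescript
// Sketch: pause the target video while the bullet is edited, resume on publish.

function wireEditPause(
  video: HTMLVideoElement,
  editControl: HTMLElement,
  openEditor: (onPublish: (text: string) => void) => void,
  publish: (text: string) => Promise<void>,   // e.g. send to the server device
): void {
  editControl.addEventListener("click", () => {
    video.pause();                  // adjust the target video to the pause state
    openEditor(async (text) => {
      await publish(text);          // bullet data submitted by the user
      void video.play();            // readjust to the playing state
    });
  });
}
```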
It should be noted that the execution subjects of the steps of the methods provided in the above embodiments may be the same device, or different devices may serve as the execution subjects of different steps. For example, the execution subject of steps 101 and 102 may be device A; or the execution subject of step 101 may be device A and that of step 102 may be device B; and so on.
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations are included in a specific order, but it should be clearly understood that the operations may be executed out of the order presented herein or in parallel, and the sequence numbers of the operations, such as 101, 102, etc., are merely used for distinguishing different operations, and the sequence numbers do not represent any execution order per se. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel.
Accordingly, embodiments of the present application also provide a computer-readable storage medium storing computer instructions, which, when executed by one or more processors, cause the one or more processors to perform the steps of the bullet screen issuing method.
Fig. 2b is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 2b, the terminal device includes: a memory 20a, a processor 20b and a display screen 20c; the memory 20a is used for storing a computer program.
The processor 20b is coupled to the memory 20a for executing the computer program for: in the process of playing the target video, displaying the target video image in the target video on the display screen 20c; under the condition that the target video image contains the designated part of the target object, acquiring contour feature data of the designated part of the target object contained in the target video image; determining a first adjacent area of the designated part of the target object according to the contour feature data of the designated part of the target object; displaying the bullet screen editing control in the first adjacent area so that a user can submit bullet screen data through the bullet screen editing control; and responding to the publishing operation on the bullet screen editing control, publishing the bullet screen data submitted by the user, where the bullet screen data can be displayed in a second adjacent area of the designated part of the target object.
In some embodiments, the processor 20b, when acquiring the contour feature data of the specified portion of the target object in the target video image, is specifically configured to: acquiring a first contour feature set of a target object contained in a target video; the first contour feature set comprises a corresponding relation between video image identification and contour feature data; and acquiring contour characteristic data of the target object from the first contour characteristic set according to the identification of the target video image.
Optionally, the identification of the target video image is its target playing time node. Accordingly, when acquiring the contour feature data of the target object in the target video image from the first contour feature set, the processor 20b is specifically configured to: acquire, according to the target playing time node of the target video image, the contour feature data of the target object at the target playing time node from the first contour feature set, as the contour feature data of the target object in the target video image; wherein the first contour feature set contains the contour features of the target object within the first playing time period.
Further, when acquiring the contour feature data of the target object at the target playing time node from the first contour feature set, the processor 20b is specifically configured to: judge whether the target playing time node belongs to the first playing time period; and if so, match the target playing time node in the first contour feature set to obtain the contour feature data of the target object at the target playing time node.
Correspondingly, if the judgment result is negative, the processor 20b sends a data request to the server device through the communication component 20d, where the data request includes the target playing time node; receives, through the communication component 20d, a second contour feature set issued by the server device for the target playing time node, where the second playing time period corresponding to the second contour feature set is later than the first playing time period and includes the target playing time node; and matches the target playing time node in the second contour feature set to obtain the contour feature data of the facial image of the target object at the target playing time node.
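A sketch of this segmented lookup on the terminal side, assuming play time periods expressed in milliseconds and a hypothetical server endpoint that returns the contour feature set whose period contains the requested node; the URL and JSON shape are invented for the example.

```typescript
// Sketch: match locally within the first set, otherwise fetch the set
// covering the target play time node from the server device.

interface FeatureSet {
  startMs: number;                         // play time period of this set
  endMs: number;
  features: Record<number, number[]>;      // play time node -> contour data
}

async function contourAt(
  nodeMs: number,
  firstSet: FeatureSet,
  videoId: string,
): Promise<number[] | undefined> {
  if (nodeMs >= firstSet.startMs && nodeMs < firstSet.endMs) {
    return firstSet.features[nodeMs];      // node belongs to the first period
  }
  // Node lies in a later period: request the covering set (assumed endpoint).
  const resp = await fetch(`/contours?video=${videoId}&node=${nodeMs}`);
  const secondSet: FeatureSet = await resp.json();
  return secondSet.features[nodeMs];
}
```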
In other embodiments, the contour feature data of the designated part of the target object includes the position feature of the designated part in the target video image, and the number of target objects is at least one. Accordingly, when determining the first adjacent area of the designated part of the target object, the processor 20b is specifically configured to: for a first designated part, determine its first adjacent area with the goal that this area covers neither the designated part of any of the at least one target object nor the first adjacent area of a second designated part; wherein the first designated part is any designated part, among the designated parts of the at least one target object, whose adjacent area has not yet been determined, and the second designated part is a designated part whose adjacent area has been determined.
In still other embodiments, the target video image carries a designated tag indicating whether it contains the designated part. Accordingly, the processor 20b is further configured to: acquire the designated tag carried by the target video image; judge, according to the tag, whether the target video image contains the designated part; and if so, acquire the contour feature data of the designated part of the target object in the target video image.
In some other embodiments, when the processor 20b issues the bullet screen data submitted by the user, it is specifically configured to: responding to the trigger operation aiming at the bullet screen editing control, and adjusting the state of the target video to be a pause state; displaying a bullet screen data editing page for a user to submit bullet screen data; responding to the release operation aiming at the barrage editing control, and readjusting the state of the target video to be a playing state; and the bullet screen data submitted by the user is sent to the server device through the communication component 20d for the server device to schedule.
Optionally, the processor 20b is further configured to: determining a second adjacent area of the designated part of the target object according to the contour feature data of the designated part of the target object; and displaying the target bullet screen data aiming at the designated part of the target object in a second adjacent area of the designated part of the target object.
Optionally, the target barrage data is barrage data submitted by the user.
The terminal device provided by the embodiment of the application can, in addition to the bullet screen publishing mode for any single video image in the target video, also realize bullet screen publishing for the target video as a whole, which is exemplarily described below.
In some embodiments, the processor 20b is further configured to: playing the target video through the display screen;
detecting whether a video image included in a target video contains a designated part or not;
for a target video image containing a specified part, determining a first adjacent area of the specified part based on the contour feature data of the specified part in the video image;
displaying a bullet screen editing control in the first adjacent area so that a user can release bullet screen data based on the bullet screen editing control;
responding to the release operation aiming at the barrage editing control, and releasing barrage data submitted by a user; and the bullet screen data submitted by the user can be displayed in a second adjacent area of the designated part.
Optionally, when detecting whether the video image included in the target video being played contains the designated portion, the processor 20b is specifically configured to: according to a set query period, sequentially querying whether a video image corresponding to the current query period contains profile feature data or not in a profile feature set according to the identification of each frame of video image; if yes, determining that the video image corresponding to the current query period contains the designated part; the query period is 1/K times of the display time of each frame of video image in the target video, wherein K is a positive integer.
Further, the processor 20b is further configured to: if the query succeeds, use the queried contour feature data as the contour feature data of the designated part in the target video image.
In some embodiments, the contour feature data of the designated part includes the position feature of the designated part in the target video image, and the number of designated parts is at least one. Accordingly, when determining the first adjacent area of the designated part, the processor 20b is specifically configured to: for a first designated part, determine its first adjacent area according to the position features of the designated parts in the target video image, with the goal that this area covers neither any of the at least one designated part nor the first adjacent area of a second designated part; wherein the first designated part is any designated part of the at least one designated part whose adjacent area has not yet been determined, and the second designated part is a designated part of the at least one designated part whose adjacent area has been determined.
In some embodiments, the processor 20b is further configured to: responding to the trigger operation aiming at the bullet screen editing control, and adjusting the state of the target video to be a pause state;
displaying a bullet screen data editing page for a user to submit bullet screen data; responding to the release operation aiming at the barrage editing control, and readjusting the state of the target video to be a playing state; and sending the bullet screen data submitted by the user to the server side equipment for scheduling by the server side equipment.
In some optional embodiments, as shown in fig. 2b, the terminal device may further include: a power supply component 20e, an audio component 20f, and the like. Only some components are schematically shown in fig. 2b; this does not mean that the terminal device must contain all of the components shown in fig. 2b, nor that it can only include the components shown in fig. 2b.
In embodiments of the present application, the memory is used to store computer programs and may be configured to store various other data to support operations on the device on which the memory is located. Wherein the processor may execute a computer program stored in the memory to implement the corresponding control logic. The memory may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
In the embodiments of the present application, the processor may be any hardware processing device that can execute the above-described method logic. Alternatively, the processor may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or a Micro Controller Unit (MCU); programmable devices such as Field-Programmable Gate Arrays (FPGAs), Programmable Array Logic devices (PALs), Generic Array Logic devices (GALs), or Complex Programmable Logic Devices (CPLDs) may also be used; or an Advanced RISC Machine (ARM) processor, a System on Chip (SoC), or the like, but is not limited thereto.
In embodiments of the present application, the communication component is configured to facilitate wired or wireless communication between the device in which it is located and other devices. The device in which the communication component is located can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, 4G, 5G or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component may also be implemented based on Near Field Communication (NFC) technology, Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, or other technologies.
In the embodiment of the present application, the display screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the display screen includes a touch panel, the display screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
In embodiments of the present application, a power supply component is configured to provide power to various components of the device in which it is located. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
In embodiments of the present application, the audio component may be configured to output and/or input audio signals. For example, the audio component includes a Microphone (MIC) configured to receive an external audio signal when the device in which the audio component is located is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in a memory or transmitted via a communication component. In some embodiments, the audio assembly further comprises a speaker for outputting audio signals. For example, for an electronic device with language interaction functionality, voice interaction with a user may be enabled through an audio component, and so forth.
For a currently displayed target video image of a target video being played, the terminal device provided in this embodiment can, when the target video image contains a designated portion, acquire the contour feature data of the designated portion of the target object contained in the target video image; determine, according to the contour feature data, the first adjacent area of the designated portion of the target object; and display the bullet screen editing control in that area. The user can thus publish, through the bullet screen editing control, bullet screen data that can be displayed in the second adjacent area of the designated portion of the target object. The bullet screen publishing mode provided by the embodiment of the application improves the diversity and flexibility of bullet screen publishing and is beneficial to improving user experience.
On the other hand, a bullet screen published in this manner avoids the designated portion in the video picture when displayed, and can also follow the target object as it is displayed. The user can therefore watch the video content and the bullet screen content at the same time without switching back and forth between them, which further improves the viewing experience.
For example, in some embodiments, the designated portion may be a human face. A bullet screen published in the manner provided by this embodiment avoids the facial image in the video picture when displayed, and is shown in an adjacent area of the facial image, achieving the effect that the bullet screen "moves with the person". In this way, the bullet screen data of each frame of video image can be displayed in an adjacent area of the facial image it relates to, and during video playing the bullet screen moves as the facial image moves, which can further improve user experience.
It should be noted that the descriptions of "first", "second", etc. herein are used to distinguish different messages, devices, modules, and the like; they neither represent a sequential order nor require the "first" and "second" items to be of different types.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (18)

1. A bullet screen publishing method is characterized by comprising the following steps:
in the process of playing a target video, displaying a target video image in the target video;
under the condition that the target video image contains a designated part, acquiring contour feature data of the designated part of a target object contained in the target video image;
determining a first adjacent area of the designated part of the target object according to the contour feature data of the designated part of the target object;
displaying a bullet screen editing control in the first adjacent area so that a user can submit bullet screen data for the bullet screen editing control;
responding to the release operation aiming at the bullet screen editing control, and releasing bullet screen data submitted by the user; and the bullet screen data submitted by the user can be displayed in a second adjacent area of the designated part of the target object.
2. The method according to claim 1, wherein the acquiring contour feature data of the designated part of the target object contained in the target video image comprises:
acquiring a first contour feature set of a target object contained in the target video; wherein the first set of profile features comprises a correspondence between video image identifications and profile feature data;
and acquiring the contour feature data of the target object from the first contour feature set according to the identification of the target video image.
3. The method of claim 2, wherein the identification of the target video image is its target play time node; the acquiring, according to the identifier of the target video image, the contour feature data of the target object in the target video image from the first contour feature set includes:
acquiring, according to the target playing time node of the target video image, contour feature data of the target object at the target playing time node from the first contour feature set, as the contour feature data of the target object in the target video image;
the first contour feature set is a contour feature of the target object in a first playing time period.
4. The method according to claim 3, wherein the obtaining contour feature data of the target object at the target playing time node from the first contour feature set according to the target playing time node of the target video image comprises:
judging whether the target playing time node belongs to a first playing time period or not;
and if so, matching the target playing time node in the first contour feature set to obtain contour feature data of the target object at the target playing time node.
5. The method of claim 4, further comprising:
if the judgment result is negative, sending a data request to the server device, wherein the data request comprises the target playing time node;
receiving a second contour feature set issued by the server device for the target playing time node; wherein a second playing time period corresponding to the second contour feature set is later than the first playing time period and comprises the target playing time node;
and matching the target playing time node in the second contour feature set to obtain contour feature data of the facial image of the target object at the target playing time node.
6. The method of claim 1, wherein the contour feature data of the designated portion of the target object comprises: a position feature of a designated part of the target object in the target video image; the number of the target objects is at least one;
the determining a first adjacent area of the designated part of the target object according to the contour feature data of the designated part of the target object comprises:
for a first designated part, determining a first adjacent area of the first designated part with the goal that the adjacent area of the first designated part covers neither the designated part of the at least one target object nor the first adjacent area of a second designated part;
wherein the first designated part is any designated part, among the designated parts of the at least one target object, for which an adjacent area has not yet been determined, and the second designated part is a designated part of the at least one target object for which an adjacent area has been determined.
7. The method according to claim 1, wherein the target video image carries a designated tag indicating whether the target video image contains a designated part; the method further comprises:
acquiring the designated tag carried by the target video image;
judging, according to the designated tag, whether the target video image contains a designated part;
if the judgment result is yes, acquiring the contour feature data of the designated part of the target object in the target video image.
8. The method of any of claims 1-7, wherein publishing the user-submitted barrage data in response to the publishing operation for the barrage editing control comprises:
responding to the triggering operation aiming at the bullet screen editing control, and adjusting the state of the target video to be a pause state;
displaying a bullet screen data editing page for the user to submit bullet screen data; and
responding to the release operation aiming at the bullet screen editing control, and readjusting the state of the target video to be a playing state;
and sending the bullet screen data submitted by the user to server equipment for scheduling by the server equipment.
9. The method of any one of claims 1-7, further comprising:
determining a second adjacent area of the designated part of the target object according to the contour feature data of the designated part of the target object;
and displaying target bullet screen data aiming at the designated part of the target object in a second adjacent area of the designated part of the target object.
10. The method of claim 9, wherein the target barrage data is barrage data submitted by the user.
11. A bullet screen publishing method is characterized by comprising the following steps:
detecting whether a video image included in a target video being played contains a designated part;
for a target video image containing a specified part, determining a first adjacent area of the specified part based on contour feature data of the specified part in the target video image;
displaying a bullet screen editing control in the first adjacent area so that a user can issue bullet screen data based on the bullet screen editing control;
responding to the release operation aiming at the bullet screen editing control, and releasing bullet screen data submitted by the user; wherein the bullet screen data submitted by the user can be displayed in a second adjacent area of the designated part.
12. The method according to claim 11, wherein the detecting whether the video image included in the target video being played contains the designated portion comprises:
according to a set query period and the identification of each frame of video image in turn, querying a contour feature set for contour feature data of the video image corresponding to the current query period;
if yes, determining that the video image corresponding to the current query period contains the designated part;
the query period is 1/K times of the display time of each frame of video image in the target video, wherein K is a positive integer.
13. The method of claim 12, further comprising:
if the contour feature data is queried, using the queried contour feature data as the contour feature data of the designated part in the target video image.
14. The method of claim 11, wherein the contour feature data of the designated location comprises: the position feature of the designated part in the target video image; the number of the designated parts is at least one;
the determining a first adjacent area of the designated part based on the contour feature data of the designated part in the target video image comprises:
for a first designated part, determining a first adjacent area of the first designated part according to the position features of the designated parts in the target video image, with the goal that the adjacent area of the first designated part covers neither the at least one designated part nor the first adjacent area of a second designated part;
wherein the first designated part is any designated part of the at least one designated part for which an adjacent area has not yet been determined, and the second designated part is a designated part of the at least one designated part for which an adjacent area has been determined.
15. The method according to any one of claims 11-14, further comprising:
responding to the triggering operation aiming at the bullet screen editing control, and adjusting the state of the target video to be a pause state;
displaying a bullet screen data editing page for the user to submit bullet screen data; and
responding to the release operation aiming at the bullet screen editing control, and readjusting the state of the target video to be a playing state;
and sending the bullet screen data submitted by the user to server equipment for scheduling by the server equipment.
16. A terminal device, comprising: a memory, a processor and a display screen; wherein the memory is used for storing a computer program;
the processor is coupled to the memory for executing the computer program for:
in the process of playing a target video, displaying a target video image in the target video on the display screen;
under the condition that the target video image contains the designated part of the target object, acquiring contour feature data of the designated part of the target object contained in the target video image;
determining a first adjacent area of the designated part of the target object according to the contour feature data of the designated part of the target object;
displaying a bullet screen editing control in the first adjacent area so that a user can submit bullet screen data for the bullet screen editing control;
responding to the release operation aiming at the bullet screen editing control, and releasing bullet screen data submitted by the user; and the bullet screen data submitted by the user can be displayed in a second adjacent area of the designated part of the target object.
17. A terminal device, comprising: a memory, a processor and a display screen; wherein the memory is used for storing a computer program;
the processor is coupled to the memory for executing the computer program for:
playing the target video through the display screen;
detecting whether a video image included in the target video contains a specified part or not;
for a target video image containing the designated part, determining a first adjacent area of the designated part based on contour feature data of the designated part in the video image;
displaying a bullet screen editing control in the first adjacent area so that a user can issue bullet screen data based on the bullet screen editing control;
responding to the release operation aiming at the bullet screen editing control, and releasing bullet screen data submitted by the user; wherein the bullet screen data submitted by the user can be displayed in a second adjacent area of the designated part.
18. A computer-readable storage medium having stored thereon computer instructions, which, when executed by one or more processors, cause the one or more processors to perform the steps of the method of any one of claims 1-15.
CN202010006219.8A 2019-12-13 2020-01-03 Barrage publishing method, equipment and storage medium Pending CN112995742A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2019112864939 2019-12-13
CN201911286493 2019-12-13

Publications (1)

Publication Number Publication Date
CN112995742A true CN112995742A (en) 2021-06-18

Family

ID=76344249

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010006219.8A Pending CN112995742A (en) 2019-12-13 2020-01-03 Barrage publishing method, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112995742A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108668175A (en) * 2018-05-02 2018-10-16 北京奇艺世纪科技有限公司 A kind of dissemination method and device of barrage word
CN110351593A (en) * 2019-06-28 2019-10-18 维沃移动通信有限公司 Information processing method, device, terminal device and computer readable storage medium

Similar Documents

Publication Publication Date Title
JP6857251B2 (en) Video content switching and synchronization system, and how to switch between multiple video formats
US10645332B2 (en) Subtitle displaying method and apparatus
WO2020083021A1 (en) Video recording method and apparatus, video playback method and apparatus, device, and storage medium
US10741215B1 (en) Automatic generation of video playback effects
US20130007787A1 (en) System and method for processing media highlights
CN107147939A (en) Method and apparatus for adjusting net cast front cover
US20220130077A1 (en) Content modification in a shared session among multiple head-mounted display devices
CN112995740A (en) Barrage display method, equipment, system and storage medium
US8837912B2 (en) Information processing apparatus, information processing method and program
KR20190024249A (en) Method and electronic device for providing an advertisement
CN103576848A (en) Gesture operation method and gesture operation device
CN106851326B (en) Playing method and device
CN108965981B (en) Video playing method and device, storage medium and electronic equipment
CN109089128A (en) A kind of method for processing video frequency, device, equipment and medium
CN112169320B (en) Method, device, equipment and storage medium for starting and archiving application program
US20240129576A1 (en) Video processing method, apparatus, device and storage medium
CN112169318B (en) Method, device, equipment and storage medium for starting and archiving application program
CN113822972A (en) Video-based processing method, device and readable medium
CN113556603B (en) Method and device for adjusting video playing effect and electronic equipment
US9525854B2 (en) Information processing method and electronic device
CN113965665A (en) Method and equipment for determining virtual live broadcast image
CN113301413B (en) Information display method and device
CN112995742A (en) Barrage publishing method, equipment and storage medium
CN115941869A (en) Audio processing method and device and electronic equipment
CN110971924A (en) Method, device, storage medium and system for beautifying in live broadcast process

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination