CN112312190A - Video picture display method and device, electronic equipment and storage medium - Google Patents

Info

Publication number
CN112312190A
Authority
CN
China
Prior art keywords
target
picture
area
video
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011080616.6A
Other languages
Chinese (zh)
Inventor
韩旭 (Han Xu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Amusement Starcraft Beijing Technology Co ltd
Original Assignee
Amusement Starcraft Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Amusement Starcraft Beijing Technology Co ltd filed Critical Amusement Starcraft Beijing Technology Co ltd
Priority to CN202011080616.6A priority Critical patent/CN112312190A/en
Publication of CN112312190A publication Critical patent/CN112312190A/en
Priority to PCT/CN2021/113055 priority patent/WO2022073389A1/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/752Contour matching
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4884Data services, e.g. news ticker for displaying subtitles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure relates to a method and an apparatus for displaying a video picture, an electronic device, and a storage medium. The method includes: in response to an area designation operation performed by a user on an original picture of a target video, determining a target area in the original picture, wherein the video picture in the target area is drawn according to a picture layer and a bullet screen layer located above the picture layer; adjusting the bullet screen layer in the target area to be located below the picture layer; and drawing and displaying a target picture corresponding to the target area according to the adjusted layers.

Description

Video picture display method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of video display, and in particular, to a method and an apparatus for displaying a video frame, an electronic device, and a storage medium.
Background
Mainstream video playing platforms usually provide a bullet screen (danmaku) display function for the audience, that is, bullet screens related to the video are displayed while the video is playing.
To prevent the bullet screen from blocking the video picture, in the related art the server typically determines the position of a target object in the video in advance and then provides the video and the object position to the client, so that the client draws the video picture accordingly: at the object position only the target object is displayed and the corresponding bullet screen is not, achieving an anti-blocking display effect for the target object. However, because the target object and the corresponding anti-blocking position are extracted and determined in advance by the server, the resulting anti-blocking display effect often fails to meet the viewing needs of audience users, and the user experience is poor.
Disclosure of Invention
The disclosure provides a method and an apparatus for displaying a video picture, an electronic device, and a storage medium, which at least solve the technical problem in the related art that the anti-blocking display effect of a bullet screen is difficult to match the viewing needs of users. The technical solution of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, a method for displaying a video frame is provided, including:
in response to an area designation operation performed by a user on an original picture of a target video, determining a target area in the original picture, wherein the video picture in the target area is drawn according to a picture layer and a bullet screen layer located above the picture layer;
adjusting the bullet screen layer in the target area to be positioned below the picture layer;
and drawing and displaying a target picture corresponding to the target area according to the adjusted picture layer.
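The three steps above amount to a per-pixel change of compositing order inside the user-chosen area. A minimal sketch of this idea follows; it is not from the patent, and the grid representation and names are assumptions for illustration only:

```python
# Illustrative sketch (names and pixel representation are hypothetical):
# inside the target area the bullet screen layer is treated as if it sat
# below the picture layer, so the picture pixel always wins there.

def compose_frame(picture, bullets, target_area):
    """picture / bullets: 2D grids of pixel values (None = transparent
    bullet pixel); target_area: set of (row, col) coordinates where the
    bullet screen layer is moved below the picture layer."""
    rows, cols = len(picture), len(picture[0])
    out = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if (r, c) in target_area:
                # Bullet layer below: the picture pixel covers the bullet.
                out[r][c] = picture[r][c]
            else:
                # Normal order: bullet screen layer above the picture layer.
                b = bullets[r][c]
                out[r][c] = b if b is not None else picture[r][c]
    return out
```

Real clients would of course composite textures or canvas layers rather than Python lists; the sketch only captures the ordering rule.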
Optionally, the determining a target area in the original picture in response to the area designation operation performed by the user on the original picture of the target video includes at least one of:
detecting the target area drawn in an original picture by a user;
determining a target contour template selected by a user from alternative contour templates, and determining a corresponding area of the target contour template in a display area of an original picture as the target area after detecting that the user places the target contour template in the display area;
determining a target object template selected by a user from alternative object templates, and determining a corresponding area of the target object in an original picture as a target area under the condition that the target object matched with the target object template in the original picture is detected;
determining a target object template selected by a user from alternative object templates, providing the target object template and a picture image corresponding to the original picture to a server, and receiving area coordinates of a target area returned by the server, wherein the target area corresponds to a target object matched with the target object template in the picture image.
Optionally, the method further includes:
determining other areas in the original picture that are different from the target area;
and splicing the video pictures of the other areas and the target picture into a complete video picture, and displaying the complete video picture.
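The splicing step can be sketched as below under the simplifying assumption that the target area is rectangular; the function and argument names are hypothetical:

```python
# Hypothetical sketch of splicing the separately drawn target picture back
# into the rest of the frame to form the complete video picture.

def splice(full_frame, target_patch, top, left):
    """Overlay target_patch (a 2D list) onto full_frame at (top, left),
    returning the complete video picture without mutating the input."""
    frame = [row[:] for row in full_frame]  # copy; leave input intact
    for r, patch_row in enumerate(target_patch):
        for c, px in enumerate(patch_row):
            frame[top + r][left + c] = px
    return frame
```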
Optionally, the original picture and the target picture are displayed in an HTML5 page, and the drawing of the target picture corresponding to the target area includes:
drawing the target picture corresponding to the target area using the native canvas capability of the HTML5 page.
According to a second aspect of the embodiments of the present disclosure, a method for displaying a video frame is provided, including:
receiving a target object template and a picture image corresponding to an original picture, wherein the target object template is selected by a user from preset alternative object templates;
determining a target object in the picture image that matches the target object template;
and returning, to the client, the area coordinates of the target area corresponding to the target object in the original picture, so that after adjusting the bullet screen layer in the target area to be located below the picture layer, the client draws and displays the target picture corresponding to the target area according to the adjusted layers.
Optionally, the determining a target object in the picture image that matches the target object template includes:
identifying all picture objects in the picture image;
sequentially calculating the matching degree between each picture object and the target object template;
and determining the picture object with the highest matching degree or the picture object with the matching degree higher than a preset matching degree threshold value as the target object.
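The selection logic described above can be sketched as follows; the matching function itself is a stand-in, since the disclosure does not fix a particular similarity measure, and all names are assumptions:

```python
# Sketch of the server-side selection step: pick the picture object with
# the highest matching degree, optionally requiring it to exceed a preset
# matching degree threshold (otherwise a matching failure is signalled).

def pick_target_object(objects, match_degree, threshold=None):
    """objects: candidate picture objects; match_degree(obj) -> float.
    Returns the best-matching object, or None if a threshold is given
    and no candidate's matching degree exceeds it."""
    scored = [(match_degree(obj), obj) for obj in objects]
    if not scored:
        return None
    best_score, best_obj = max(scored, key=lambda s: s[0])
    if threshold is not None and best_score <= threshold:
        return None  # caller would return a matching-failure message
    return best_obj
```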
Optionally, the method further includes:
and under the condition that the matching degrees corresponding to all the picture objects are not higher than the threshold value of the matching degrees, returning a matching failure message to the client, wherein the matching failure message is used for indicating that the target object does not exist in the picture image.
According to a third aspect of the embodiments of the present disclosure, an apparatus for displaying a video frame is provided, including:
the device comprises an area determining module, a display module and a display module, wherein the area determining module is configured to respond to an area specifying operation which is carried out by a user aiming at an original image of a target video, and determine a target area in the original image, and a video image in the target area is drawn according to an image layer and a bullet screen image layer positioned above the image layer;
the layer adjusting module is configured to adjust the bullet screen layer in the target area to be located below the picture layer;
and the drawing and displaying module is configured to draw and display a target picture corresponding to the target area according to the adjusted picture layer.
Optionally, the region determining module is further configured to at least one of:
detecting the target area drawn in an original picture by a user;
determining a target contour template selected by a user from alternative contour templates, and determining a corresponding area of the target contour template in a display area of an original picture as the target area after detecting that the user places the target contour template in the display area;
determining a target object template selected by a user from alternative object templates, and determining a corresponding area of the target object in an original picture as a target area under the condition that the target object matched with the target object template in the original picture is detected;
determining a target object template selected by a user from alternative object templates, providing the target object template and a picture image corresponding to the original picture to a server, and receiving area coordinates of a target area returned by the server, wherein the target area corresponds to a target object matched with the target object template in the picture image.
Optionally, the method further includes:
a further region determination module configured to determine a further region of the original picture distinct from the target region;
and the picture splicing module is configured to splice the video pictures of the other areas and the target picture into a complete video picture and display the complete video picture.
Optionally, the original picture and the target picture are displayed in an HTML5 page, and the drawing and displaying module is further configured to:
draw the target picture corresponding to the target area using the native canvas capability of the HTML5 page.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a display apparatus for video pictures, including:
the template receiving module is configured to receive a target object template and a picture image corresponding to an original picture, wherein the target object template is selected by a user from preset alternative object templates;
an object determination module configured to determine a target object in the screen image that matches the target object template;
and the coordinate returning module is configured to return the area coordinates of the target area corresponding to the target object in the original picture to the client, so that after the client adjusts the bullet screen layer in the target area to be positioned below the picture layer, the target picture corresponding to the target area is drawn and displayed according to the adjusted picture layer.
Optionally, the object determination module is further configured to:
identifying all picture objects in the picture image;
sequentially calculating the matching degree between each picture object and the target object template;
and determining the picture object with the highest matching degree or the picture object with the matching degree higher than a preset matching degree threshold value as the target object.
Optionally, the method further includes:
a failure message returning module configured to return a matching failure message to the client when the matching degrees corresponding to all the screen objects are not higher than the matching degree threshold, where the matching failure message is used to indicate that the target object does not exist in the screen image.
According to a fifth aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method for displaying a video frame according to any one of the embodiments of the first aspect or the second aspect.
According to a sixth aspect of the embodiments of the present disclosure, a storage medium is provided, where instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method for displaying a video picture according to any one of the first aspect or the second aspect.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
according to the embodiment of the disclosure, the client responds to the region designation operation implemented by the user for the original picture, and determines the target object needing to be blocked and the corresponding blocking prevention region, so that the user is allowed to set the blocking prevention region according to own will, the video picture displayed by the client and the display effect of the corresponding barrage thereof are ensured to better meet the watching requirement of the user, and the user experience is improved to a certain extent.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is an architectural diagram of a video service platform provided by an exemplary embodiment;
FIG. 2 is a schematic diagram illustrating a video picture rendering principle according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating a method of presenting a video frame according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating another method of presenting video pictures according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram illustrating still another method of presenting video pictures according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram illustrating one type of mapping a target region in accordance with an embodiment of the present disclosure;
FIG. 7 is a schematic diagram illustrating a further method of presenting a video frame according to an embodiment of the present disclosure;
FIG. 8 is a schematic diagram illustrating a determination of a target area using an area template according to an embodiment of the present disclosure;
FIG. 9 is a schematic diagram illustrating a determination of a target region using an object template according to an embodiment of the present disclosure;
FIG. 10 is an interactive flow chart illustrating a method of presenting video frames according to an embodiment of the present disclosure;
FIG. 11 is a schematic block diagram of a video picture presentation apparatus shown in accordance with an embodiment of the present disclosure;
FIG. 12 is a schematic block diagram of another video picture presentation apparatus shown in accordance with an embodiment of the present disclosure;
FIG. 13 is a block diagram illustrating an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a schematic diagram of an architecture of a video service platform according to an exemplary embodiment. As shown in fig. 1, the system may include a network 10, a server 11, a number of electronic devices such as a cell phone 12, a cell phone 13, a cell phone 14, and so on.
The server 11 may be a physical server comprising a separate host, or a virtual server carried by a host cluster. In operation, the server 11 may run the server-side program of an application to implement the related service functions of the application; for example, when the server 11 runs the program of a video service platform, it may be implemented as the server of the video service platform. In the technical solution of one or more embodiments of the present specification, the server 11 may cooperate with the clients running on the mobile phones 12 to 14 to implement a scheme for presenting a video picture that includes a bullet screen.
In this embodiment, the video service platform may not only implement a video service function, but also be an integrated functional platform with many other functions, such as detection of area drawing operation, displaying and selecting of an alternative outline template, displaying and selecting of an alternative object template, determining of a target area, drawing of a target picture, and the like, which is not limited in one or more embodiments of this specification.
Handsets 12 to 14 are just one type of electronic device that a user may use. In practice, users may also use devices such as tablet devices, notebook computers, personal digital assistants (PDAs), and wearable devices (e.g., smart glasses, smart watches), which is not limited by one or more embodiments of the present disclosure. In operation, the electronic device may run the client-side program of an application to implement the related service functions of the application; for example, when running the program of the video service platform, it may be implemented as a client of that platform: the mobile phone 12 may act as a video providing client, and the mobile phones 13 and 14 as video playing clients.
It should be noted that: an application program of a client of the video service platform can be pre-installed on the electronic equipment, so that the client can be started and run on the electronic equipment; of course, when an online "client" such as HTML5 technology is employed, the client can be obtained and run without installing a corresponding application on the electronic device.
The network 10 used for interaction between the handsets 12 to 14 and the server 11 may include various types of wired or wireless networks.
As noted above, mainstream video playing platforms usually provide a bullet screen display function for the audience, that is, bullet screens related to the video are displayed while the video is playing. The original picture usually includes a picture layer and a bullet screen layer; that is, the displayed original picture of the target video is drawn according to the picture layer and the bullet screen layer located above the picture layer.
As shown in fig. 2(a), a bullet screen 201a is displayed in a video picture 202a corresponding to a target video played by a client. As shown in fig. 2(b), the video picture 202a viewed by the user corresponds to an overlaid display of the bullet screen layer 201b on top and the picture layer 202b underneath. Since the bullet screen layer 201b is located above the picture layer 202b, in the video picture 202a drawn from these two layers the bullet screen is displayed on top of the video picture, which may cause the bullet screen to block the picture.
To prevent the bullet screen from blocking the video picture, in the related art the server typically determines the position of a target object in the video in advance and then provides the video and the object position to the client, so that when drawing the video picture the client draws the bullet screen below the object at that position: only the target object is shown at the object position and the corresponding bullet screen is not, achieving an anti-blocking display effect for the target object. However, since the target object and the corresponding anti-blocking position are extracted and determined in advance by the server, without regard to user behavior, the resulting anti-blocking display effect often fails to meet the viewing needs of audience users, and the user experience is poor.
Fig. 3 is a flowchart illustrating a method for displaying a video frame according to an exemplary embodiment of the present disclosure. As shown in fig. 3, the method applied to the client may include the following steps:
step 302, in response to a region designation operation performed by a user for an original picture of a target video, determining a target region in the original picture, wherein a video picture in the target region is obtained by drawing according to a picture layer and a bullet screen layer located above the picture layer.
In this embodiment, the target area set by the user is an area in which, in the final display, only the video picture is shown and no bullet screen is displayed; that is, the bullet screen is displayed in an anti-blocking manner with respect to the target area. For convenience of description, this area is simply referred to below as the "anti-blocking area".
In fact, the client may determine the target area in various ways. In one embodiment, the user may draw the target area in the original picture while watching it; correspondingly, the client can detect the target area drawn by the user in the original picture. Because the corresponding target mask area is specified by the user through drawing, the final anti-blocking area better matches the user's viewing intent, which facilitates a better anti-blocking display effect. Specifically, the user may draw the target area directly in the original picture; alternatively, the client may display a blank mask above the original picture, on which the user draws, in which case the client detects the target mask area drawn by the user in the blank mask and determines the corresponding area of the target mask area in the original picture as the target area. The blank mask may be the same size as the original picture and aligned with it, in which case the user may freely draw a target mask area anywhere above the original picture. The blank mask may also include a preset non-drawing area, in which case the user may draw the target mask area only in the drawable region outside the non-drawing area; the non-drawing area forcibly excludes regions that the user is not allowed to designate as the target area, and it may be preset in the client by the user or set uniformly on the server side and delivered to the client. The target mask area drawn by the user may be of any shape, such as a rectangle, circle, ellipse, trapezoid, or an irregular figure, and its size may be set freely by the user (for example, by dragging the area boundary with a mouse).
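When mask pixels are modeled as coordinates, the non-drawing-area rule in the paragraph above reduces to a set difference. A minimal sketch follows; the representation is an assumption, not specified by the patent:

```python
# Illustrative sketch: validate a user-drawn target mask region against a
# preset non-drawing area before adopting it as the anti-blocking area.

def resolve_target_area(drawn_region, non_drawing_area):
    """drawn_region / non_drawing_area: sets of (row, col) mask pixels.
    Pixels falling inside the non-drawing area are discarded, so the
    user cannot designate them as part of the target area."""
    return drawn_region - non_drawing_area
```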
Of course, the user can trigger the client to display the blank mask by triggering a preset anti-blocking setting switch.
In an embodiment, the client may provide alternative contour templates for the user to select from. For example, the client may determine a target contour template selected by the user from the candidate contour templates and, after detecting that the user has placed the target contour template in the display area of the original picture, determine the corresponding area of the target contour template in the display area as the target area. The user can make the client display the candidate contour templates by triggering the preset anti-blocking setting switch, select from them the contour template that best matches their viewing intention or the edge contour of the target object (the display object in the video picture for which the user wants the anti-blocking display effect) as the target contour template, and then drag the target contour template to a suitable position in the original picture, such as the position of the target object, thereby designating the target area. The target contour template may be an object contour region corresponding to a common video object such as a person, food, an animal, a building, a book, or a screen, and the user may also zoom the target contour template to control its size, which is not limited in this disclosure. In this way, the user only needs to select the contour template that best matches the contour of the target object from the displayed candidates and place it at the corresponding position in the original picture with a simple drag-and-adjust operation, which greatly simplifies the steps for specifying the target area and improves the efficiency of doing so.
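Placing and zooming a contour template amounts to scaling and translating its outline points into picture coordinates; a hypothetical sketch (names and coordinate convention assumed):

```python
# Sketch: map a contour template's outline into picture coordinates after
# the user's zoom and drag operations.

def place_contour(template_points, drop_x, drop_y, scale=1.0):
    """template_points: (x, y) outline points in template-local coordinates.
    Returns the outline in picture coordinates, scaled and translated to
    the drop position; the enclosed region becomes the target area."""
    return [(drop_x + x * scale, drop_y + y * scale)
            for (x, y) in template_points]
```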
In an embodiment, the client may provide alternative object templates for the user to select from. For example, the client may determine a target object template selected by the user from the displayed candidate object templates, detect a target object in the original picture that matches the target object template, and determine the corresponding region of the detected target object in the original picture as the target area. The user can make the client display the candidate object templates by triggering the preset anti-blocking setting switch and then select the target object template from them. The target object determined according to the target object template is the video object the user is interested in and wants the anti-blocking display effect for. In this way, the client can determine the corresponding target object according to the target object template specified by the user. Alternatively, the client may show a target object selection control so that the user selects the corresponding target object in the current video picture in a custom manner; this avoids the problem of the user being unable to choose when the desired object is absent from the candidate object templates, and ensures that the selection result better matches the user's intention. Through local real-time detection of the target video, the client can efficiently and accurately identify the target object, dynamically track it throughout the target video, and thereby achieve a dynamic anti-blocking display effect for the target object.
In an embodiment, in order to reduce the computing pressure on the client when identifying the target object, the client may also request the server to determine the target object corresponding to the target object template selected by the user; that is, the server determines the target object according to the target object template. For example, the client may first determine a target object template selected by the user from preset alternative object templates, then provide the target object template and a picture image corresponding to the original picture to the server, and finally receive the area coordinates of the target area returned by the server, where the target area corresponds to a target object in the picture image that matches the target object template. Similarly, the user may control the client to display the alternative object templates by triggering a preset anti-blocking setting switch, and then select the target object template corresponding to the target object from the alternative object templates displayed by the client. The picture image corresponding to the original picture may be the current video frame image, or an image snapshot of the current video frame image; alternatively, in order to reduce the data transmission pressure between the client and the server, the client may instead provide a video frame identifier (such as a frame image sequence number) of the current video frame image, in which case the server may determine the corresponding video frame image in the locally stored target video according to the video frame identifier, which is not limited in this disclosure. In addition, the area coordinates of the target area may be the pixel coordinates of each pixel on the contour line of the target area.
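The three "picture image" variants above can be sketched as a tagged union; a minimal TypeScript illustration in which every type and field name is an assumption of this example rather than anything specified by the disclosure:

```typescript
// Hypothetical encoding of the three "picture image" options: the full
// current frame, a snapshot of it, or just a frame identifier.
type PictureRef =
  | { kind: "frame"; data: Uint8Array }     // current video frame image
  | { kind: "snapshot"; data: Uint8Array }  // image snapshot of the frame
  | { kind: "frameId"; seq: number };       // frame sequence number only

// Sending only the frame identifier keeps the request small when the
// server already stores the target video locally.
function payloadBytes(ref: PictureRef): number {
  return ref.kind === "frameId" ? 8 : ref.data.length;
}
```

The design choice the disclosure hints at is visible here: the `frameId` variant trades a lookup on the server for far less data on the wire.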
In this way, the computation-heavy object identification and matching tasks are completed by the server, which reduces the computing pressure on the client, gives full play to the computing advantages of the server, reduces playback stuttering on the client, and further improves the user's viewing experience.
Step 204, adjusting the bullet screen layer in the target area to be located below the picture layer.
As described above, the original picture is drawn according to the picture layer and the bullet screen layer located above it, so after the target area is determined, the client may adjust the bullet screen layer within the target area to sit below the picture layer, while the other areas outside the target area are left unadjusted.
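The per-area reordering can be sketched as a tiny pure function; a minimal TypeScript illustration whose layer names are chosen for this example only:

```typescript
// Bottom-to-top draw order for a screen area: inside the target area the
// bullet screen (barrage) layer is moved below the picture layer, while
// areas outside it keep the normal order with the barrage on top.
function drawOrder(insideTargetArea: boolean): string[] {
  return insideTargetArea
    ? ["barrage", "picture"]  // barrage hidden beneath the picture
    : ["picture", "barrage"]; // normal order: barrage above the picture
}
```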
Step 206, drawing and displaying a target picture corresponding to the target area according to the adjusted picture layer.
In this embodiment, the client may draw the target picture corresponding to the target area according to the adjusted picture layer (i.e., the layer now on top); for the specific drawing process, reference may be made to descriptions of drawing picture elements and page rendering in the related art, which are not repeated here.
In an embodiment, the client may further determine the other regions of the original picture distinct from the target region, then splice the video pictures of those regions together with the target picture into a complete video picture and display it. In the complete video picture displayed in this way, the target picture shows only the picture content, without the corresponding barrage, while the video pictures of the other regions show both the picture content and the corresponding barrage, where the barrage can still block the picture content; a targeted anti-blocking display effect is thereby achieved for the picture content in the target region.
In an embodiment, the client may be built on HTML5 (HyperText Markup Language 5) technology, in which case the original picture and the target picture are also displayed in an HTML5 page and the target picture may be drawn using the page's native canvas capability (such as the canvas element), giving full play to the better compatibility and adaptability of a native page canvas; for the specific drawing process, reference may be made to records in the related art, which are not repeated here.
According to the embodiments of the present disclosure, the client responds to the region designation operation performed by the user on the original picture to determine the target object for which anti-blocking is desired and the corresponding anti-blocking region, thereby allowing the user to set the anti-blocking region at will, so that the display effect of the video picture and the corresponding bullet screen better meets the user's actual viewing needs and the user experience is improved to a certain extent.
Correspondingly, the disclosure also provides a display method of the video picture at the server side. Fig. 2 is a flowchart illustrating another method for displaying a video frame according to an exemplary embodiment of the present disclosure. As shown in fig. 2, the method applied to the server may include the following steps:
Step 402, receiving a target object template provided by a client and a picture image corresponding to an original picture, wherein the target object template is selected by a user from preset alternative object templates.
Step 404, determining a target object in the picture image, which matches the target object template.
Step 406, returning the area coordinates of the target area corresponding to the target object in the original picture to the client, so that after the client adjusts the bullet screen layer in the target area to be located below the picture layer, drawing and displaying the target picture corresponding to the target area according to the adjusted picture layer.
In an embodiment, the server may first identify all the picture objects in the picture image, then calculate in turn the matching degree between each picture object and the target object template, and determine as the target object the picture object with the highest matching degree, or a picture object whose matching degree is higher than a preset matching-degree threshold. The picture image corresponding to the original picture may be the current video frame image, or an image snapshot of it; it may also be a video frame identifier (such as a frame image sequence number) of the current video frame image, in which case the server may determine the corresponding video frame image in the locally stored target video according to the identifier, which is not limited in this disclosure. Through this identification and matching process, the server can determine at least one target object in the video picture and return the area coordinates of the target area corresponding to each target object to the client, thereby giving full play to its own computing advantages and reducing the computing pressure on the client.
In addition, when none of the identified picture objects has a matching degree above the threshold, the server may return a matching failure message to the client, indicating that no target object exists in the picture image. Correspondingly, after receiving the matching failure message, the client can directly display the original picture without performing the layer adjustment. In this case the displayed video picture contains no anti-blocking area, i.e., no anti-blocking display effect is produced.
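The selection rule above — the highest matching degree wins, but only if it clears the threshold, otherwise matching fails — can be sketched as follows; the names and the [0, 1] score scale are illustrative assumptions:

```typescript
interface Candidate { id: string; score: number } // matching degree in [0, 1]

// Returns the best-matching picture object, or null when every candidate's
// matching degree is not higher than the threshold (the "matching failure"
// case, in which the client simply displays the original picture).
function pickTarget(cands: Candidate[], threshold: number): Candidate | null {
  let best: Candidate | null = null;
  for (const c of cands) {
    if (best === null || c.score > best.score) best = c;
  }
  return best !== null && best.score > threshold ? best : null;
}
```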
Since the target video is composed of a number of video frame images, displaying the target video is a process of displaying each video frame image in turn. In fact, the processing in the embodiments shown in fig. 3 and fig. 4 may be the processing performed by the client or the server on the video frame image being displayed at the current time; of course, to achieve a near-real-time anti-blocking display effect, the video frame image used by the client or the server when identifying the target object may instead be the one corresponding to a time a preset duration (e.g., 1 s or 2 s) after the current time, so that the target picture with the anti-blocking display effect can be displayed in real time once that preset duration has elapsed.
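The look-ahead reduces to a small frame-index computation; a sketch under the assumption of a constant frame rate (the disclosure does not specify how frames are indexed):

```typescript
// Index of the frame submitted for detection: the frame a preset duration
// ahead of the current playback time, so the detection result is ready by
// the time that frame is actually displayed.
function lookAheadFrame(currentSec: number, lookAheadSec: number, fps: number): number {
  return Math.floor((currentSec + lookAheadSec) * fps);
}
```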
The following describes the method for displaying the video frame in detail with reference to the embodiments shown in fig. 5 to 10. Fig. 5 is a flowchart illustrating a method for displaying a video frame according to an exemplary embodiment of the present disclosure. As shown in fig. 5, the method is applied to a client, and the process of drawing and displaying a video frame corresponding to the method may include the following steps:
Step 502, detecting that the user turns on the anti-blocking setting switch.
In an embodiment, the client may display the anti-blocking setting switch in the video display interface of the target video, so that the user can trigger it there to start specifying the anti-blocking area in the original picture.
As shown in fig. 6(a), when the bullet screen switch 604a of the video display interface 601a corresponding to the target video (e.g., video V) is in the on state, the picture content 602a and the bullet screen 603a are displayed on the video display interface 601a, and the anti-blocking setting switch 605a is displayed below it. Of course, the label "anti-blocking setting" shown in fig. 6(a) is merely exemplary; the switch may be presented under another name in practice, which is not limited in this disclosure.
Step 504, displaying a blank mask above the original picture.
Optionally, when the target video is in the playing state or the paused state, the user of the client (a viewer of the target video) may turn on the anti-blocking setting switch 605a by triggering it. As shown in fig. 6(b), with the anti-blocking setting switch 603b in the on state, upon detecting that the switch has been triggered the client may display a blank mask 601b above the video display interface 601a. The blank mask 601b may be semi-transparent, and its transparency may be preset in the client by the user or set by the system of the client or the server.
Step 506, detecting a region drawing operation performed by the user.
Through the semi-transparent blank mask 601b, the user can observe the picture content of the original picture displayed beneath it, and can then draw the target mask area 602b at the corresponding position in the blank mask 601b by mouse or touch control.
As shown in fig. 6(b), the user can draw a rectangular target mask region 602b in a blank mask 601b above a portrait in the original screen. In fact, the user may adjust the parameters of the position, size, angle, etc. of the target mask area 602b at will, for example, the shape of the target mask area 602b may be a rectangle, a circle, an ellipse, a trapezoid, or an irregular figure, etc., and the size of the target mask area may be adjusted at will by the user dragging the area boundary with the mouse, and the present disclosure does not limit the specific style of the target mask area and the drawing manner thereof.
In addition, while the user is drawing the target mask area 602b, the client may display a real-time anti-blocking preview effect, so that the user can adjust position parameters of the target mask area 602b, such as its starting point, size and angle, according to the preview, achieving a better anti-blocking display effect. The user can draw several target mask areas 602b in the blank mask 601b, and after the drawing is completed, trigger (e.g., click) the confirmation control 604b to notify the client that the target mask areas 602b have been drawn.
Step 508, determining the target area corresponding to the target mask area drawn by the user.
After detecting the trigger operation performed by the user on the confirmation control, the client may determine the target area in the original picture corresponding to the target mask area drawn by the user; determining the target area means determining its area coordinates in the original picture. For example, when the user draws the target area directly in the original picture, the actual position of the target area is simply its position in that picture.
For another example, in the case where the blank mask has the same size as the original screen and is overlaid on the original screen (see fig. 6 (b)), since the blank mask completely corresponds to the original screen, the area coordinates of the target mask area in the blank mask can be set as the area coordinates of the target area in the original screen; and under the condition that the blank mask and the original picture do not completely correspond, the area coordinate of the target mask area in the blank mask can be determined firstly, and then the area coordinate of the target area in the original picture is correspondingly calculated according to the coordinate offset and/or the zoom amount of the blank mask relative to the original picture, and the specific process is not repeated.
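The coordinate conversion for the non-corresponding case can be sketched as follows; the offset and zoom parameters are assumptions about how the mask relates to the original picture, not the disclosure's actual representation:

```typescript
interface Point { x: number; y: number }

// Maps a point drawn in the blank mask back into original-picture
// coordinates, given the mask's offset and zoom relative to the picture.
// When the mask fully corresponds to the picture (offset 0, scale 1),
// the coordinates pass through unchanged.
function maskToPicture(p: Point, offsetX: number, offsetY: number, scale: number): Point {
  return { x: (p.x - offsetX) / scale, y: (p.y - offsetY) / scale };
}
```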
Step 510, adjusting the bullet screen layer in the target area to be below the picture layer.
Because the original picture is drawn according to the picture layer and the bullet screen layer above it, in order to achieve the anti-blocking display effect on the picture content, after the target area is determined the client may adjust the bullet screen layer within the target area to sit below the picture layer, while the other areas outside the target area are left unadjusted.
Step 512, drawing the target picture corresponding to the target area.
In this embodiment, the client may draw the target picture corresponding to the target area according to the adjusted picture layer (i.e., the layer now on top); for the specific process, reference may be made to descriptions of drawing picture elements and page rendering in the related art, which are not repeated here. In addition, the client may also determine the other areas of the original picture distinct from the target area, and then splice the video pictures of those areas together with the target picture into a complete video picture. In fact, drawing the target picture for the target area and drawing the video pictures of the other areas may proceed simultaneously; that is, the two are not independent but together constitute one complete picture-drawing process.
In an embodiment, the client may be built on HTML5 technology, in which case the original picture and the target picture are also displayed in an HTML5 page and the target picture may be drawn using the native canvas provided by HTML5; for the specific drawing process, reference may be made to records in the related art, which are not repeated here.
Step 514, displaying the target picture.
After the drawing is finished, the drawn target picture can be displayed in the video display interface corresponding to the target video. It can be understood that, similarly to the drawing process, displaying the target picture and displaying the video pictures of the other areas proceed simultaneously; that is, the client displays the complete video picture formed by the target picture and the video pictures of the other areas, rather than displaying them separately.
As shown in fig. 6(c), the video display interface 602c displays the target picture 601c corresponding to the target area, in which only the picture content (such as the portrait) is shown without the corresponding barrage, while the video picture and the barrage 603c are displayed in the other area distinct from the target area. During playback, a moving bullet screen is automatically hidden after it enters the target area, as if it were shielded beneath the target picture and could not be observed, thereby achieving the anti-blocking display effect for the picture content in the target area.
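The hide-on-entry behaviour reduces to a rectangle-overlap test; a sketch in which both the moving bullet and the target area are approximated by axis-aligned boxes (an assumption made only for this illustration):

```typescript
type Rect = { x: number; y: number; w: number; h: number };

// Axis-aligned rectangle overlap test.
function overlaps(a: Rect, b: Rect): boolean {
  return a.x < b.x + b.w && a.x + a.w > b.x &&
         a.y < b.y + b.h && a.y + a.h > b.y;
}

// A moving bullet stays on top only while it is outside the target area;
// once it overlaps the area it is drawn beneath the picture layer.
function bulletOnTop(bullet: Rect, target: Rect): boolean {
  return !overlaps(bullet, target);
}
```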
In the complete video pictures displayed through the process, the target picture only displays the picture content, but does not display the corresponding barrage; and the picture content and the corresponding barrage are displayed in the video pictures corresponding to other areas at the same time, and the barrage can still shield the picture content, so that the targeted anti-blocking display effect on the picture content in the target area is realized.
Fig. 7 is a schematic diagram illustrating a further method for displaying a video frame according to an embodiment of the present disclosure, where as shown in fig. 7, the method is applied to a client, and a process of drawing and displaying a video frame corresponding to the method may include the following steps:
Step 702, detecting that the user turns on the anti-blocking setting switch.
In one embodiment, the client may display the anti-blocking setting switch in the video display interface of the target video, so that the user can trigger it there to start specifying the anti-blocking area in the original picture.
Upon detecting that the anti-blocking setting switch has been triggered, the client may display a corresponding setting-mode selection control so that the user can choose how to perform the anti-blocking setting. For example, the user may choose to set the anti-blocking area by area setting (in which case the flow goes to step 704a), or by object setting (in which case it goes to step 704b).
Step 704a, presenting the alternative contour templates.
Upon detecting that the user chooses to set the anti-blocking area by area setting, the client may present the alternative contour templates to the user, so that the user can select the target contour template corresponding to the target object (the picture object for which the user wants to achieve the anti-blocking display effect).
As shown in fig. 8, with the anti-blocking setting switch 803 in the on state, the client may present a contour template selection interface 802 above the video display interface corresponding to the target video (video V), where the interface 802 may include at least one alternative contour template. The alternative contour templates shown may correspond to a variety of objects, such as template A corresponding to a woman's frontal face contour, template B corresponding to a man's profile contour, template C corresponding to a tableware (plate) contour, template D corresponding to an open-book contour, and so on. The alternative contour templates may be integrated in the client's installation program, so that they can be displayed once the client is installed; or, to keep the templates up to date, they may be acquired from the server in advance, before the target video is displayed or while it is being displayed. The alternative contour templates may be extracted by the server from the massive videos in its video library through a preset model algorithm; they may also be classified in advance, with the corresponding templates provided according to the category of the target video, which is not repeated here.
Step 706a, determining the target contour template selected by the user.
Step 708a, detecting the placement operation performed by the user.
The user may select the target contour template from the presented alternative contour templates, for example by mouse click or touch operation. As shown in fig. 8, if the target object for which the user wants the anti-blocking display effect is the woman's frontal face 801 shown in the original picture, the user may select alternative contour template A, whose shape is similar to the face contour of the woman's frontal face (template A thus being the target contour template), drag it directly to the corresponding position in the original picture (after dragging, the contour template selection interface 802 may hide itself), and then adjust its size and position by dragging the contour template boundary so that it covers the woman's frontal face 801, achieving a better anti-blocking display effect. Of course, the user may select another alternative contour template as the target contour template; the specific adjustment is up to the user's actual operation, which is not limited in this disclosure.
Step 710a, determining the placement area of the placed target contour template in the original picture.
Further, the client may determine the placed target contour template by detecting the placement operation, and then determine its placement area in the original picture. The placement area may be represented by the coordinate values, in the original picture, of each pixel point on the area boundary of the target contour template; of course, when the contour boundary is a standard shape, the placement area may also be represented by features such as the center-point coordinates and the template size of the target contour template, which is not limited in this disclosure.
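The two representations mentioned above — boundary pixels versus center point plus size — can be sketched for the rectangular case; the type and field names are illustrative only:

```typescript
type BoundaryArea = Array<[number, number]>; // pixel coordinates on the area boundary
interface CompactArea { cx: number; cy: number; w: number; h: number } // center + size

// For a standard rectangular contour the compact form expands back to its
// four boundary corners (clockwise from top-left), showing the two
// representations are interchangeable for standard shapes.
function corners(a: CompactArea): BoundaryArea {
  const hw = a.w / 2, hh = a.h / 2;
  return [
    [a.cx - hw, a.cy - hh], [a.cx + hw, a.cy - hh],
    [a.cx + hw, a.cy + hh], [a.cx - hw, a.cy + hh],
  ];
}
```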
Step 704b, presenting the alternative object templates.
Upon detecting that the user chooses to set the anti-blocking area by object setting, the client may present the alternative object templates to the user, so that the user can select the target object template corresponding to the target object (the picture object for which the user wants to achieve the anti-blocking display effect).
As shown in fig. 9, with the anti-blocking setting switch 903 in the on state, the client may present an object template selection interface 902 above the video display interface corresponding to the target video (video V), where the interface 902 may include at least one alternative object template. The alternative object templates shown may correspond to a variety of objects, for example template A corresponding to a person, template B corresponding to food, template C corresponding to an animal, template D corresponding to an open book, and so on. The alternative object templates may be integrated in the client's installation program, so that they can be displayed once the client is installed; or, to keep the templates up to date, they may be obtained from the server in advance, before the target video is displayed or while it is being displayed. They may be extracted by the server from the massive videos in its video library, and may also be classified in advance, with the corresponding templates provided according to the category of the target video, which is not repeated here.
Step 706b, determining the target object template selected by the user.
The user may select the target object template from the displayed alternative object templates, for example by mouse click or touch operation. As shown in fig. 9, if the target object for which the user wants the anti-blocking display effect is the woman's frontal face 901 shown in the original picture, the user may select alternative object template A, which corresponds to a person. The user may select at least one alternative object template and trigger the corresponding determination control 904 after the selection is completed, whereupon the selected template is determined as the target object template.
Step 708b, detecting the picture objects in the picture image.
At this time, the client may determine the picture image at the current time first, for example, the video frame image at the current time may be used as the picture image; or performing picture snapshot on the video frame image at the current moment, and taking the obtained snapshot image as a picture image; or, in order to realize a near-real-time anti-blocking display effect, the video frame image or the snapshot image thereof corresponding to the time after the current time by the preset duration may be determined as the picture image.
Further, the picture image may be processed through a preset object recognition algorithm, so as to recognize the picture object in the picture image. For example, a clustering algorithm and a deep learning algorithm in the related art may be adopted for implementation, and certainly, the above-mentioned picture object may also be identified by using a self-defined image identification model, which is not described again.
Step 710b, determining a target object in the screen objects that matches the target object template.
After identifying the frame objects in the frame image, the client may sequentially calculate a matching degree between each frame object and the target object template, for example, the matching degree may be calculated by using various feature parameters such as color, contour, motion trajectory, and the like. It can be understood that, the closer the feature parameter of any picture object is to the feature parameter of the target object template, the higher the matching degree between the two, that is, the matching degree between the picture object and the target object template is positively correlated to the proximity degree between the two corresponding feature parameters. Further, the screen object with the highest matching degree or the screen object with the matching degree higher than a preset matching degree threshold value may be determined as the target object; of course, if the matching degrees of all the picture objects are lower than the preset threshold, it indicates that no target object exists in the current picture image, and at this time, the client may directly end the processing process, and may start the processing process for the next video frame image instead of performing subsequent processing on the current video frame image.
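One way to realize the "closer features, higher degree" relation described above is a weighted per-feature similarity; a sketch assuming features normalized to [0, 1] — the actual feature set and weighting are not specified by this disclosure:

```typescript
// Weighted matching degree between a picture object and a template over
// normalized feature parameters (e.g. color, contour, trajectory): each
// feature contributes 1 - |difference|, so closer values score higher,
// making the degree positively correlated with feature proximity.
function matchingDegree(obj: number[], tmpl: number[], weights: number[]): number {
  let total = 0, wsum = 0;
  for (let i = 0; i < obj.length; i++) {
    total += weights[i] * (1 - Math.abs(obj[i] - tmpl[i]));
    wsum += weights[i];
  }
  return total / wsum;
}
```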
In step 712, a corresponding target area of the target object in the original image is determined.
And determining the target area, namely determining the area coordinates corresponding to the target area. Corresponding to the above-described steps 704a-710a, the client may directly determine the area coordinates of the placement area as the area coordinates of the target area. Corresponding to the above steps 704b-710b, the client may determine contour coordinates corresponding to the object contour of the target object in the original screen as the area coordinates of the target area.
And 714, adjusting the bullet screen layer in the target area to be below the picture layer.
In step 716, the target picture corresponding to the target area is drawn.
Step 718, displaying the target picture.
The above-mentioned steps 714-718 are not substantially different from the above-mentioned steps 510-514 in the embodiment shown in fig. 5, so the specific process of the steps 714-718 can be referred to the above description, and will not be described herein again.
In fact, corresponding to the embodiment of steps 510-514, the steps of identifying the picture objects and determining the target object may also be executed by the server, which will be described below with reference to fig. 10. Fig. 10 is an interaction flowchart of a method for displaying a video picture according to an embodiment of the present disclosure; as shown in fig. 10, the process of drawing and displaying the target picture corresponding to the target area may include the following steps:
In step 1002, the client detects that the user turns on the anti-blocking setting switch.
In step 1004, the client presents the alternative object template.
In step 1006, the client determines the target object template selected by the user.
In step 1008, the client determines the picture image.
The steps 1002-1008 are not substantially different from the steps 702-706b in the embodiment shown in fig. 7, and therefore the specific process of the steps 1002-1008 can be referred to the above description, and will not be described herein again.
Step 1010, the client provides the target object template and the picture image to the server in an associated manner.
In step 1012, the server detects the picture objects in the picture image.
In step 1014, the server determines the target object among the picture objects.
In step 1016, the server determines the target area in the original frame.
In step 1018, the server returns the area coordinates of the target area to the client.
After the target object template and the picture image are determined, the client can provide the target object template and the picture image to the server, correspondingly, the server can detect the picture object in the picture image through an object recognition algorithm, determine the target object in the picture object through matching degree calculation, finally determine the target area of the target object in the original picture, and return the area coordinates of the target area to the client. The specific process of the above identification and matching can be referred to the records of the above steps 706b-712, and is not described herein again.
And step 1020, adjusting the bullet screen layer in the target area to be below the picture layer.
In step 1022, the target picture corresponding to the target area is drawn.
Step 1024, displaying the target picture.
The steps 1020-1024 are not substantially different from the steps 510-514 in the embodiment shown in fig. 5, and therefore the specific process of the steps 1020-1024 can be referred to the above description, and will not be described herein again.
Through the above process, the computation-heavy object identification and matching are performed by the server, giving full play to the server's computing advantages, reducing the computing pressure on the client, preventing client-side stuttering to a certain extent, and improving the user experience.
Fig. 11 is a schematic block diagram illustrating a video picture display apparatus according to an embodiment of the present disclosure. The video picture display apparatus shown in this embodiment may be applied to a client of a video playing application, where the application runs on a terminal including, but not limited to, electronic devices such as a mobile phone, a tablet computer, a wearable device, and a personal computer. The video playing application may be an application installed in the terminal, or a web application integrated in a browser; the user can play videos through the video playing application, and the played video may be a long video, such as a movie or a TV series, or a short video, such as a video clip or a short sitcom.
As shown in fig. 11, the video picture presentation apparatus may include:
the region determining module 1101 is configured to determine a target area in an original picture of a target video in response to a region specifying operation performed by a user for the original picture, where the video picture in the target area is drawn according to a picture layer and a bullet screen layer located above the picture layer;
the layer adjusting module 1102 is configured to adjust the bullet screen layer in the target area to be located below the picture layer;
a drawing and displaying module 1103 configured to draw and display a target picture corresponding to the target area according to the adjusted picture layer.
Optionally, the region determining module 1101 is further configured to perform at least one of the following:
detecting the target area drawn in an original picture by a user;
determining a target contour template selected by a user from alternative contour templates, and determining a corresponding area of the target contour template in a display area of an original picture as the target area after detecting that the user places the target contour template in the display area;
determining a target object template selected by a user from alternative object templates, and determining a corresponding area of the target object in an original picture as a target area under the condition that the target object matched with the target object template in the original picture is detected;
determining a target object template selected by a user from alternative object templates, providing the target object template and a picture image corresponding to the original picture to a server, and receiving area coordinates of a target area returned by the server, wherein the target area corresponds to a target object matched with the target object template in the picture image.
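For the last option above, where matching is offloaded to the server, a hypothetical request/response shape between client and server might look like the following; every field name here is an illustrative assumption, not taken from the disclosure.

```typescript
// Hypothetical wire format for the server-assisted option: the client
// uploads the chosen template and the current frame, and the server
// answers with the target area's coordinates or a matching failure.

interface LocateRequest {
  templateId: string; // identifies the target object template the user chose
  frame: string;      // picture image of the original picture, e.g. base64 JPEG
}

interface LocateResponse {
  matched: boolean;   // false corresponds to the matching-failure message
  region?: { x: number; y: number; width: number; height: number };
}

// Validates a response before the client adjusts its layers with it.
function parseLocateResponse(json: string): LocateResponse {
  const res = JSON.parse(json) as LocateResponse;
  if (res.matched && !res.region) {
    throw new Error("matched response must carry area coordinates");
  }
  return res;
}
```

The `matched: false` case corresponds to the matching-failure message described for the server-side apparatus below, telling the client that no target object exists in the picture image.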
Optionally, the apparatus further includes:
an other-area determining module 1104 configured to determine other areas in the original picture that are distinct from the target area;
and a picture splicing module 1105 configured to splice the video pictures of the other areas and the target picture into a complete video picture, and display the complete video picture.
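Splicing works because the other areas and the target area tile the original picture exactly. A sketch of computing those other areas as up to four rectangular strips, under the simplifying assumption that the target area is rectangular:

```typescript
// Sketch of determining the "other areas": subtract the target
// rectangle from the original picture's rectangle, yielding up to four
// strips (top, bottom, left, right). Together with the target area they
// tile the full picture, so the spliced result is a complete video
// picture. Assumes a rectangular target area.

interface Rect { x: number; y: number; w: number; h: number }

function otherAreas(frame: Rect, target: Rect): Rect[] {
  const strips: Rect[] = [];
  // Top strip across the full width of the frame.
  if (target.y > frame.y) {
    strips.push({ x: frame.x, y: frame.y, w: frame.w, h: target.y - frame.y });
  }
  // Bottom strip across the full width of the frame.
  const targetBottom = target.y + target.h;
  if (targetBottom < frame.y + frame.h) {
    strips.push({ x: frame.x, y: targetBottom, w: frame.w, h: frame.y + frame.h - targetBottom });
  }
  // Left and right strips at the target's own height.
  if (target.x > frame.x) {
    strips.push({ x: frame.x, y: target.y, w: target.x - frame.x, h: target.h });
  }
  const targetRight = target.x + target.w;
  if (targetRight < frame.x + frame.w) {
    strips.push({ x: targetRight, y: target.y, w: frame.x + frame.w - targetRight, h: target.h });
  }
  return strips;
}
```

Because the strips never overlap the target area, the splicing module can draw each part with its own layer order and still display one seamless video picture.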
Optionally, the original picture and the target picture are displayed in an HTML5 page, and the drawing and displaying module 1103 is further configured to:
draw the target picture corresponding to the target area by using the native canvas function of the HTML5 page.
Fig. 12 is a schematic block diagram illustrating a video picture display apparatus according to an embodiment of the present disclosure. The video picture display apparatus shown in this embodiment may be applied to the server side of a video playing application, where the server includes, but is not limited to, a physical server containing an independent host, a virtual server carried by a host cluster, a cloud server, and the like. The played video may be a long video, such as a movie or a TV series, or a short video, such as a video clip or a short drama.
As shown in fig. 12, the video picture presentation apparatus may include:
a template receiving module 1201, configured to receive a target object template and a picture image corresponding to an original picture provided by a client, where the target object template is selected by a user from preset candidate object templates;
an object determination module 1202 configured to determine a target object in the screen image that matches the target object template;
a coordinate returning module 1203, configured to return the area coordinates of the target area corresponding to the target object in the original picture to the client, so that after the client adjusts the bullet screen layer in the target area to be located below the picture layer, the target picture corresponding to the target area is drawn and displayed according to the adjusted picture layer.
Optionally, the object determination module 1202 is further configured to:
identifying all picture objects in the picture image;
sequentially calculating the matching degree between each picture object and the target object template;
and determining the picture object with the highest matching degree or the picture object with the matching degree higher than a preset matching degree threshold value as the target object.
Optionally, the apparatus further includes:
a failure message returning module 1204, configured to, if the matching degrees corresponding to all the picture objects are not higher than the matching degree threshold, return a matching failure message to the client, where the matching failure message is used to indicate that the target object does not exist in the picture image.
An embodiment of the present disclosure also provides an electronic device, including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video picture display method according to any of the above embodiments.
Embodiments of the present disclosure also provide a storage medium, where when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to execute the method for displaying a video picture according to any of the above embodiments.
Embodiments of the present disclosure further provide a computer program product configured to execute the video picture display method according to any of the above embodiments.
Fig. 13 is a schematic block diagram illustrating an electronic device in accordance with an embodiment of the present disclosure. For example, the electronic device 1300 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and so forth.
Referring to fig. 13, electronic device 1300 may include one or more of the following components: processing component 1302, memory 1304, power component 1306, multimedia component 1308, audio component 1310, input/output (I/O) interface 1312, sensor component 1314, image acquisition component 1316, and communication component 1318.
The processing component 1302 generally controls overall operation of the electronic device 1300, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 1302 may include one or more processors 1320 to execute instructions to perform all or part of the steps of the method for presenting video frames described above. Further, the processing component 1302 can include one or more modules that facilitate interaction between the processing component 1302 and other components. For example, the processing component 1302 may include a multimedia module to facilitate interaction between the multimedia component 1308 and the processing component 1302.
The memory 1304 is configured to store various types of data to support operation at the electronic device 1300. Examples of such data include instructions for any application or method operating on the electronic device 1300, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1304 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 1306 provides power to the various components of the electronic device 1300. Power components 1306 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for electronic device 1300.
The multimedia component 1308 includes a screen that provides an output interface between the electronic device 1300 and a user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1308 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the electronic device 1300 is in an operating mode, such as a shooting mode or a video mode. Each front-facing camera and rear-facing camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 1310 is configured to output and/or input audio signals. For example, the audio component 1310 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 1300 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 1304 or transmitted via the communication component 1318. In some embodiments, the audio component 1310 also includes a speaker for outputting audio signals.
The I/O interface 1312 provides an interface between the processing component 1302 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 1314 includes one or more sensors for providing state assessments of various aspects of the electronic device 1300. For example, the sensor component 1314 may detect an open/closed state of the electronic device 1300 and the relative positioning of components, such as the display and keypad of the electronic device 1300. The sensor component 1314 may also detect a change in the position of the electronic device 1300 or of a component of the electronic device 1300, the presence or absence of user contact with the electronic device 1300, the orientation or acceleration/deceleration of the electronic device 1300, and a change in the temperature of the electronic device 1300. The sensor component 1314 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 1314 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 1314 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The image acquisition component 1316 may be used to acquire image data of a subject to form an image of the subject, and may perform the necessary processing on the image. The image acquisition component 1316 may include a camera module in which an image sensor senses light from the subject through a lens and provides the resulting exposure data to an image signal processor (ISP), which generates an image corresponding to the subject from the exposure data. The image sensor may be a CMOS sensor or a CCD sensor, or may be an infrared sensor, a depth sensor, or the like. The camera module may be built into the electronic device 1300 or may be an external module of the electronic device 1300; the ISP may be built into the camera module or provided in the electronic device 1300 outside the camera module.
The communication component 1318 is configured to facilitate communications between the electronic device 1300 and other devices in a wired or wireless manner. The electronic device 1300 may access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 1318 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communications component 1318 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an embodiment of the present disclosure, the electronic device 1300 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components, for performing the above-mentioned video display method.
In an embodiment of the present disclosure, a non-transitory computer-readable storage medium including instructions, such as the memory 1304 including instructions, which are executable by the processor 1320 of the electronic device 1300 to perform the method for displaying the video frame is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
It is noted that, in the present disclosure, relational terms such as first and second are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The method and apparatus provided by the embodiments of the present disclosure are described in detail above. Specific examples are used herein to explain the principles and implementations of the present disclosure, and the above description of the embodiments is only intended to help understand the method and core idea of the present disclosure. Meanwhile, for those skilled in the art, there may be variations in the specific implementation and application scope based on the idea of the present disclosure. In summary, the content of this specification should not be construed as limiting the present disclosure.

Claims (10)

1. A method for displaying a video picture, comprising:
determining, in response to an area designation operation performed by a user on an original picture of a target video, a target area in the original picture, wherein the video picture in the target area is drawn according to a picture layer and a bullet screen layer positioned above the picture layer;
adjusting the bullet screen layer in the target area to be positioned below the picture layer;
and drawing and displaying a target picture corresponding to the target area according to the adjusted picture layer.
2. The method according to claim 1, wherein the determining the target area in the original picture of the target video in response to the area specifying operation performed by the user for the original picture comprises at least one of:
detecting the target area drawn in an original picture by a user;
determining a target contour template selected by a user from alternative contour templates, and determining a corresponding area of the target contour template in a display area of an original picture as the target area after detecting that the user places the target contour template in the display area;
determining a target object template selected by a user from alternative object templates, and determining a corresponding area of the target object in an original picture as a target area under the condition that the target object matched with the target object template in the original picture is detected;
determining a target object template selected by a user from alternative object templates, providing the target object template and a picture image corresponding to the original picture to a server, and receiving area coordinates of a target area returned by the server, wherein the target area corresponds to a target object matched with the target object template in the picture image.
3. The method of claim 1, further comprising:
determining other areas in the original picture, which are different from the target area;
and splicing the video pictures of the other areas and the target picture into a complete video picture, and displaying the complete video picture.
4. The method of any of claims 1-3, wherein the original picture and the target picture are presented in an HTML5 page, and wherein the drawing the target picture corresponding to the target area comprises:
drawing the target picture corresponding to the target area by using the native canvas function of the HTML5 page.
5. A method for displaying a video picture, comprising:
receiving, from a client, a target object template and a picture image corresponding to an original picture, wherein the target object template is selected by a user from preset alternative object templates;
determining a target object in the picture image that matches the target object template;
and returning the area coordinates of the target area corresponding to the target object in the original picture to the client so that the client can draw and display the target picture corresponding to the target area according to the adjusted picture layer after adjusting the bullet screen layer in the target area to be positioned below the picture layer.
6. The method of claim 5, wherein the determining the target object in the picture image that matches the target object template comprises:
identifying all picture objects in the picture image;
sequentially calculating the matching degree between each picture object and the target object template;
and determining the picture object with the highest matching degree or the picture object with the matching degree higher than a preset matching degree threshold value as the target object.
7. The method of claim 6, further comprising:
and if the matching degrees corresponding to all the picture objects are not higher than the matching degree threshold value, returning a matching failure message to the client, wherein the matching failure message is used for indicating that the target object does not exist in the picture image.
8. A video picture presentation apparatus, comprising:
an area determining module configured to determine a target area in an original picture of a target video in response to an area designation operation performed by a user on the original picture, wherein the video picture in the target area is drawn according to a picture layer and a bullet screen layer positioned above the picture layer;
the layer adjusting module is configured to adjust the bullet screen layer in the target area to be located below the picture layer;
and the drawing and displaying module is configured to draw and display a target picture corresponding to the target area according to the adjusted picture layer.
9. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method of presenting a video picture according to any one of claims 1 to 7.
10. A computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform a method of presenting a video picture according to any one of claims 1 to 7.
CN202011080616.6A 2020-10-10 2020-10-10 Video picture display method and device, electronic equipment and storage medium Pending CN112312190A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011080616.6A CN112312190A (en) 2020-10-10 2020-10-10 Video picture display method and device, electronic equipment and storage medium
PCT/CN2021/113055 WO2022073389A1 (en) 2020-10-10 2021-08-17 Video picture display method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011080616.6A CN112312190A (en) 2020-10-10 2020-10-10 Video picture display method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112312190A true CN112312190A (en) 2021-02-02

Family

ID=74488325

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011080616.6A Pending CN112312190A (en) 2020-10-10 2020-10-10 Video picture display method and device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN112312190A (en)
WO (1) WO2022073389A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113766339A (en) * 2021-09-07 2021-12-07 网易(杭州)网络有限公司 Bullet screen display method and device
WO2022073389A1 (en) * 2020-10-10 2022-04-14 游艺星际(北京)科技有限公司 Video picture display method and electronic device

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117201856A (en) * 2022-05-31 2023-12-08 北京字跳网络技术有限公司 Information interaction method, device, electronic equipment and storage medium

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104484868A (en) * 2014-10-08 2015-04-01 浙江工业大学 Moving object aerial photograph tracking method with template matching and picture contour being combined
CN106462774A (en) * 2014-02-14 2017-02-22 河谷控股Ip有限责任公司 Object ingestion through canonical shapes, systems and methods
CN107147941A (en) * 2017-05-27 2017-09-08 努比亚技术有限公司 Barrage display methods, device and the computer-readable recording medium of video playback
CN107330447A (en) * 2017-06-05 2017-11-07 三峡大学 The outline identifying system that a kind of reaction type ICM neutral nets and FPF are combined
CN107705240A (en) * 2016-08-08 2018-02-16 阿里巴巴集团控股有限公司 Virtual examination cosmetic method, device and electronic equipment
CN107809658A (en) * 2017-10-18 2018-03-16 维沃移动通信有限公司 A kind of barrage content display method and terminal
US20180191987A1 (en) * 2017-01-04 2018-07-05 International Business Machines Corporation Barrage message processing
CN108989870A (en) * 2017-06-02 2018-12-11 中国电信股份有限公司 Control the method and system in barrage region
CN109089170A (en) * 2018-09-11 2018-12-25 传线网络科技(上海)有限公司 Barrage display methods and device
CN109309861A (en) * 2018-10-30 2019-02-05 广州虎牙科技有限公司 A kind of media processing method, device, terminal device and storage medium
CN109862380A (en) * 2019-01-10 2019-06-07 北京达佳互联信息技术有限公司 Video data handling procedure, device and server, electronic equipment and storage medium
CN110392293A (en) * 2019-06-21 2019-10-29 平安普惠企业管理有限公司 Barrage control method, device, equipment and storage medium based on canvas
CN110784755A (en) * 2019-11-18 2020-02-11 上海极链网络科技有限公司 Bullet screen information display method and device, terminal and storage medium
US20200058270A1 (en) * 2017-04-28 2020-02-20 Huawei Technologies Co., Ltd. Bullet screen display method and electronic device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160342287A1 (en) * 2015-05-19 2016-11-24 Vipeline, Inc. System and methods for video comment threading
CN107135415A (en) * 2017-04-11 2017-09-05 青岛海信电器股份有限公司 Video caption processing method and processing device
CN111277910B (en) * 2020-03-07 2022-03-22 咪咕互动娱乐有限公司 Bullet screen display method and device, electronic equipment and storage medium
CN111580729B (en) * 2020-04-22 2021-07-13 江西博微新技术有限公司 Processing method and system for selecting overlapped graphics primitives, readable storage medium and electronic equipment
CN111698533A (en) * 2020-06-12 2020-09-22 上海极链网络科技有限公司 Video processing method, device, equipment and storage medium
CN112312190A (en) * 2020-10-10 2021-02-02 游艺星际(北京)科技有限公司 Video picture display method and device, electronic equipment and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022073389A1 (en) * 2020-10-10 2022-04-14 游艺星际(北京)科技有限公司 Video picture display method and electronic device
CN113766339A (en) * 2021-09-07 2021-12-07 网易(杭州)网络有限公司 Bullet screen display method and device
CN113766339B (en) * 2021-09-07 2023-03-14 网易(杭州)网络有限公司 Bullet screen display method and device

Also Published As

Publication number Publication date
WO2022073389A1 (en) 2022-04-14

Similar Documents

Publication Publication Date Title
CN110662083B (en) Data processing method and device, electronic equipment and storage medium
CN112153400B (en) Live broadcast interaction method and device, electronic equipment and storage medium
CN106791893A (en) Net cast method and device
CN112312190A (en) Video picture display method and device, electronic equipment and storage medium
CN111866596A (en) Bullet screen publishing and displaying method and device, electronic equipment and storage medium
CN111343476A (en) Video sharing method and device, electronic equipment and storage medium
CN111970456B (en) Shooting control method, device, equipment and storage medium
CN109660873B (en) Video-based interaction method, interaction device and computer-readable storage medium
EP3260998A1 (en) Method and device for setting profile picture
CN112153396B (en) Page display method, device, system and storage medium
CN111770381A (en) Video editing prompting method and device and electronic equipment
CN112511779B (en) Video data processing method and device, computer storage medium and electronic equipment
CN114009003A (en) Image acquisition method, device, equipment and storage medium
CN113099297A (en) Method and device for generating click video, electronic equipment and storage medium
CN109521938B (en) Method and device for determining data evaluation information, electronic device and storage medium
CN108986803B (en) Scene control method and device, electronic equipment and readable storage medium
CN108829473B (en) Event response method, device and storage medium
US11252341B2 (en) Method and device for shooting image, and storage medium
CN104133553B (en) Webpage content display method and device
CN107105311B (en) Live broadcasting method and device
CN112115341A (en) Content display method, device, terminal, server, system and storage medium
CN113709571B (en) Video display method and device, electronic equipment and readable storage medium
CN113989424A (en) Three-dimensional virtual image generation method and device and electronic equipment
CN110769282A (en) Short video generation method, terminal and server
WO2021237744A1 (en) Photographing method and apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination