WO2022073389A1 - Method for displaying video picture, and electronic device - Google Patents

Method for displaying video picture, and electronic device

Info

Publication number: WO2022073389A1
Application number: PCT/CN2021/113055
Authority: WO (WIPO, PCT)
Prior art keywords: target, picture, area, image, layer
Other languages: English (en), French (fr)
Inventor: 韩旭
Original Assignee: 游艺星际(北京)科技有限公司
Application filed by 游艺星际(北京)科技有限公司
Publication of WO2022073389A1


Classifications

    • H04N 21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/485 End-user interface for client configuration
    • H04N 21/4884 Data services, e.g. news ticker, for displaying subtitles
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 2207/10016 Video; Image sequence
    • G06V 10/752 Contour matching

Definitions

  • the present disclosure relates to the field of video display, and in particular, to a display method and electronic device of a video picture.
  • the server determines the object position of the target object in the video in advance and then provides the video and the object position to the client; the client renders the video according to the object position, so that only the target object is displayed at the object position and the corresponding bullet screen is not displayed there.
  • the present disclosure provides a display method and an electronic device for a video picture.
  • the technical solutions of the present disclosure are as follows:
  • a method for displaying a video picture, which includes: in response to a region designation operation on an original picture of a target video, determining a target region in the original picture, where the original picture includes a picture layer of the target video and a bullet screen layer located above the picture layer; adjusting the bullet screen layer in the target region to be below the picture layer; and, based on the adjusted picture layer in the target region, rendering a target picture corresponding to the target region and displaying the target picture.
  • a method for displaying a video picture, which includes: receiving a target object template sent by a client and a picture image corresponding to an original picture, where the target object template is selected from candidate object templates; determining a target object in the picture image that matches the target object template; and returning area coordinates of a target area to the client, where the target area is the area corresponding to the target object in the original picture, and the client is used to adjust the bullet screen layer in the target area to be below the picture layer, render, based on the adjusted picture layer in the target area, a target picture corresponding to the target area, and display the target picture.
  • an apparatus for displaying a video picture, including: a region determination module configured to determine a target region in the original picture in response to a region designation operation performed on an original picture of a target video, where the original picture includes a picture layer and a bullet screen layer located above the picture layer; a layer adjustment module configured to adjust the bullet screen layer in the target region to be below the picture layer; and a drawing and display module configured to render a target picture corresponding to the target region based on the adjusted picture layer in the target region, and display the target picture.
  • an apparatus for displaying a video picture, including: a template receiving module configured to receive a target object template sent by a client and a picture image corresponding to an original picture, where the target object template is selected from candidate object templates; an object determination module configured to determine a target object in the picture image that matches the target object template; and a coordinate return module configured to return area coordinates of a target area to the client, where the target area is the area corresponding to the target object in the original picture, and the client is used to adjust the bullet screen layer in the target area to be below the picture layer, render a target picture corresponding to the target area based on the adjusted picture layer in the target area, and display the target picture.
  • an electronic device, comprising: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the following steps: in response to a region designation operation on an original picture of a target video, determining a target region in the original picture, where the original picture includes a picture layer of the target video and a bullet screen layer located above the picture layer; adjusting the bullet screen layer in the target region to be below the picture layer; and rendering a target picture corresponding to the target region based on the adjusted picture layer in the target region, and displaying the target picture.
  • an electronic device, comprising: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the following steps: receiving a target object template sent by a client and a picture image corresponding to an original picture, where the target object template is selected from candidate object templates; determining a target object in the picture image that matches the target object template; and returning area coordinates of a target area to the client, where the target area is the area corresponding to the target object in the original picture, and the client is used to adjust the bullet screen layer in the target area to be below the picture layer, render a target picture corresponding to the target area based on the adjusted picture layer in the target area, and display the target picture.
  • a storage medium, wherein, when the instructions in the storage medium are executed by a processor of an electronic device, the electronic device can perform the following steps: in response to a region designation operation on an original picture of a target video, determining a target region in the original picture, where the original picture includes a picture layer of the target video and a bullet screen layer located above the picture layer; adjusting the bullet screen layer in the target region to be below the picture layer; and, based on the adjusted picture layer in the target region, rendering a target picture corresponding to the target region and displaying the target picture.
  • a storage medium, wherein, when the instructions in the storage medium are executed by a processor of an electronic device, the electronic device can perform the following steps: receiving a target object template sent by a client and a picture image corresponding to an original picture, where the target object template is selected from candidate object templates; determining a target object in the picture image that matches the target object template; and returning area coordinates of a target area to the client, where the target area is the area corresponding to the target object in the original picture, and the client is used to adjust the bullet screen layer in the target area to be below the picture layer, render a target picture corresponding to the target area based on the adjusted picture layer in the target area, and display the target picture.
  • the client determines the target object that needs anti-blocking and the corresponding blocking area in response to the region designation operation performed by the user on the original picture, thereby improving the display effect of the video picture shown by the client and its corresponding bullet screen.
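  • A minimal client-side sketch of the flow summarized above, written for an HTML5 page. It is not the disclosed implementation: here an overlay canvas stacked above the bullet screen element plays the role of "the picture layer adjusted above the bullet screen layer inside the target area", and all element IDs, the rectangular region shape, and the assumption that the displayed video size equals its intrinsic size are illustrative.

```typescript
interface Region { x: number; y: number; width: number; height: number; }

const video = document.getElementById('video') as HTMLVideoElement;      // picture layer source (assumed id)
const danmakuLayer = document.getElementById('danmaku') as HTMLElement;  // bullet screen layer (assumed id)

// Called once the user's region-designation operation has produced a target region.
function onRegionDesignated(region: Region): void {
  // Cover the region with a canvas stacked above the bullet screen layer and keep
  // redrawing the corresponding part of the picture, so bullets are hidden there.
  const overlay = document.createElement('canvas');
  overlay.width = region.width;
  overlay.height = region.height;
  const danmakuZ = parseInt(getComputedStyle(danmakuLayer).zIndex, 10) || 0;
  Object.assign(overlay.style, {
    position: 'absolute',
    left: `${region.x}px`,
    top: `${region.y}px`,
    zIndex: String(danmakuZ + 1),
    pointerEvents: 'none',
  });
  video.parentElement!.appendChild(overlay);       // assumes a positioned parent container

  const ctx = overlay.getContext('2d')!;
  const redraw = () => {
    // Draw only the target region of the current video frame (the "target picture").
    ctx.drawImage(video, region.x, region.y, region.width, region.height,
                  0, 0, region.width, region.height);
    requestAnimationFrame(redraw);
  };
  requestAnimationFrame(redraw);
}
```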
  • FIG. 1 is a schematic diagram of the architecture of a video service platform according to an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of a drawing principle of a video picture according to an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of a method for displaying a video picture according to an embodiment of the present disclosure
  • FIG. 4 is a schematic diagram of another method for displaying a video picture according to an embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram of yet another method for displaying a video picture according to an embodiment of the present disclosure
  • FIG. 6 is a schematic diagram of a drawing target area according to an embodiment of the present disclosure.
  • FIG. 7 is a schematic diagram of still another method for displaying a video picture according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram of determining a target area by using an area template according to an embodiment of the present disclosure
  • FIG. 9 is a schematic diagram of determining a target area by using an object template according to an embodiment of the present disclosure.
  • FIG. 10 is an interactive flowchart of a method for displaying a video picture according to an embodiment of the present disclosure
  • FIG. 11 is a schematic block diagram of an apparatus for displaying a video picture according to an embodiment of the present disclosure
  • FIG. 12 is a schematic block diagram of another apparatus for displaying video pictures according to an embodiment of the present disclosure.
  • FIG. 13 is a structural diagram of an electronic device according to an embodiment of the present disclosure.
  • FIG. 1 is a schematic structural diagram of a video service platform according to an embodiment of the present disclosure.
  • the system includes a network 10 , a server 11 , and several electronic devices, such as a mobile phone 12 , a mobile phone 13 , and a mobile phone 14 .
  • the server 11 is a physical server including an independent host, or a virtual server hosted by a host cluster. During operation, the server 11 runs the server-side program of an application to realize the relevant business functions of the application; for example, when the server 11 runs the program of the video service platform, it is realized as the server of the video service platform. In the technical solutions of one or more embodiments of the present disclosure, the server 11 cooperates with the clients running on the mobile phones 12-14 to realize the display solution for a video picture containing a bullet screen.
  • the video service platform can not only realize the video service function, but also serve as an integrated platform for many other functions, such as detection of the area drawing operation, display and selection of the candidate contour templates, display and selection of the candidate object templates, determination of the target area, and rendering of the target picture, which are not limited in one or more embodiments of the present disclosure.
  • Cell phones 12-14 are just one type of electronic device used by users.
  • the user can also use electronic devices of the following types: tablet devices, notebook computers, PDAs (Personal Digital Assistants), wearable devices (such as smart glasses, smart watches, etc.), and the like, which are not limited in one or more embodiments of the present disclosure.
  • the electronic device runs the program on the client side of an application to realize the relevant business functions of the application.
  • when the electronic device runs the program of the video service platform, it is realized as the client of the video service platform.
  • the mobile phone 12 is implemented as a video providing client
  • the mobile phone 13 and the mobile phone 14 are implemented as video playback clients.
  • the client application of the video service platform is installed on the electronic device, and the client is started and run on the electronic device; alternatively, the client can be obtained and run by installing the corresponding application on a computer.
  • the current video playback platform can provide viewers with a bullet screen display function, that is, the bullet screen related to the video is simultaneously displayed during the video playback process.
  • the original picture of the target video includes a picture layer and a bullet screen layer, that is, the original picture of the displayed target video is rendered based on the picture layer and the bullet screen layer located above the picture layer.
  • a video screen 201 corresponding to the target video played by the client terminal displays a bullet screen 202 .
  • the video picture 201 viewed by the user is equivalent to a superimposed display of the bullet screen layer 203 located above and the picture layer 204 located below. Because the bullet screen layer 203 is located above the picture layer 204, in the video picture 201 rendered according to the bullet screen layer 203 and the picture layer 204, the bullet screen is displayed above the video picture, causing the bullet screen to occlude the video picture.
  • the server determines the object position of the target object in the video in advance, and then provides the video and the position of the object to the client.
  • the bullet screen is rendered below the object, so that only the target object is displayed at the position of the object and the corresponding bullet screen is not displayed, so as to realize the anti-block display effect of the target object.
  • Because the target object and the corresponding blocking position in the above manner are determined in advance by the server and have nothing to do with the user's behavior, the final blocking display effect can hardly meet the viewing needs of audience users.
  • FIG. 3 is a flowchart of a method for displaying a video picture according to an embodiment of the present disclosure. As shown in Figure 3, the method is applied to the client and includes the following steps:
  • Step 302: In response to a region designation operation performed by the user on the original picture of the target video, determine a target region in the original picture, where the video picture in the target region is rendered according to the picture layer and the bullet screen layer located above the picture layer.
  • the video picture in the target area is the original picture of the target video in the target area. That is, the client determines the target region in the original picture in response to the region specifying operation on the original picture of the target video.
  • the original picture includes the picture layer of the target video and the bullet screen layer located above the picture layer, the picture layer and the bullet screen layer are the complete layers corresponding to the target video, and the region designation operation refers to determining the original The operation of the target area in the screen.
  • the target area set by the user is the area in which, in the final display, only the picture content is displayed and the bullet screen is not displayed; that is, the anti-blocking display function for the bullet screen is realized in the target area.
  • the target area is also referred to as "blocking area” hereinafter, which is hereby explained.
  • the client determines the target area in a number of ways.
  • the user performs the operation of drawing the target area in the original image during the process of viewing the displayed original image.
  • the client detects the target area drawn by the user in the original picture; that is, in response to the area drawing operation in the original picture, the client determines the target area drawn in the original picture, so that the user can customize the corresponding target area by drawing, so as to achieve a better anti-blocking display effect.
  • the client determines a movement trajectory corresponding to the area drawing operation in response to the area drawing operation in the original picture, and determines the area enclosed by the movement trajectory as the target area. For example, the client displays multiple candidate patterns, and the user selects a candidate pattern and draws the target area in the original picture based on the candidate pattern, where the candidate patterns include patterns such as lines, rectangles, circles, and triangles.
  • For example, if the user needs to draw a rectangle, the user selects the rectangle pattern, moves the mouse or a touch point in the original picture, frames a rectangular area based on the movement trajectory, and uses the framed rectangular area as the target area; or, if the user needs to draw a humanoid pattern, since the humanoid pattern is irregular, the user selects the line pattern, moves the mouse or a touch point in the original picture, frames the required humanoid area based on the movement trajectory, and uses the framed humanoid area as the target area.
  • the drawn movement trajectory is either closed or not closed; when the movement trajectory is not closed, the client connects the start point and the end point of the movement trajectory with a straight line to obtain the closed area.
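  • A sketch of one way to capture the movement trajectory of a region-drawing operation and close it when the start and end points do not coincide, as described above; the pointer-event wiring and the closure threshold are assumptions, not part of the disclosure.

```typescript
// Collect pointer positions while the user draws, then close the path with a
// straight segment from the end point back to the start point if needed.
type Point = { x: number; y: number };

function trackRegionDrawing(surface: HTMLElement, onDone: (path: Point[]) => void): void {
  const path: Point[] = [];
  let drawing = false;

  surface.addEventListener('pointerdown', (e) => {
    drawing = true;
    path.length = 0;
    path.push({ x: e.offsetX, y: e.offsetY });
  });
  surface.addEventListener('pointermove', (e) => {
    if (drawing) path.push({ x: e.offsetX, y: e.offsetY });
  });
  surface.addEventListener('pointerup', () => {
    drawing = false;
    if (path.length < 3) return;                     // not enough points to enclose an area
    const start = path[0];
    const end = path[path.length - 1];
    const gap = Math.hypot(end.x - start.x, end.y - start.y);
    if (gap > 1) path.push({ ...start });            // connect end to start with a straight line
    onDone(path);                                    // the closed polygon is the target area
  });
}
```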
  • the user draws the target area directly in the original picture.
  • the client displays a blank mask above the original screen, and the user draws the target area on the blank mask.
  • the client detects the target mask area drawn by the user in the blank mask; that is, in response to the area drawing operation in the blank mask, the client determines the target mask area drawn in the blank mask, and determines the area corresponding to the target mask area in the original picture as the target area.
  • the size of the blank mask is the same as the size of the original screen and the position is coincident, and the user freely draws a target mask area that conforms to his or her wishes on the top of the original screen.
  • the blank mask further includes a set non-drawing area
  • the user draws the target mask area in the drawing area other than the non-drawing area
  • the non-drawing area is used to set the area that the user cannot specify as the target area
  • the non-drawing area is preset by the user in the client or uniformly set by the server and then delivered to the client.
  • the target mask area drawn by the user in the blank mask is of any shape, such as a rectangle, circle, ellipse, trapezoid, or irregular shape, and the size of the target mask area is set arbitrarily by the user (for example, by dragging the area boundary with the mouse).
  • the user can trigger the client to display the blank mask by triggering the anti-block setting switch.
  • the client provides candidate contour templates for the user to select. For example, the client first determines the target contour template selected by the user from the candidate contour templates, and after detecting that the user places the target contour template in the display area of the original picture, determines the area corresponding to the target contour template in the display area as the target area. That is, in response to a selection operation on any candidate contour template, the client determines the selected contour template as the target contour template, moves the target contour template into the original picture, and determines the area corresponding to the target contour template in the original picture as the target area.
  • the user controls the client to display the candidate contour templates by triggering the anti-blocking setting switch, and then selects a candidate contour template from the displayed candidate contour templates as the target contour template, wherein the user can select the contour template that best matches the viewing intention or the edge contour of the target object; the target object refers to the display object in the video picture for which the user wants to achieve the anti-blocking display effect.
  • the target contour template is directly dragged and placed in the appropriate position of the original screen, such as the position of the target object, etc., so as to realize the designation of the target area.
  • the target contour template is the object contour area corresponding to common video objects such as people, food, animals, buildings, books, screens, etc.
  • the user can also perform a zoom operation on the target contour template to control its size, which is not limited in the present disclosure.
  • By selecting a contour template, the user only needs to select, from the displayed candidate contour templates, the target contour template that best fits the contour of the target object, and then place it at the corresponding position in the original picture through simple dragging and adjustment. This simplifies the operation steps for the user to designate the target area and improves the efficiency of designating the target area.
  • the client provides candidate object templates for the user to select. For example, the client first determines the target object template selected by the user from the displayed candidate object templates, then detects the target object in the original picture that matches the target object template, and determines the area corresponding to the detected target object in the original picture as the target area. That is, in response to a selection operation on any candidate object template, the client determines the selected candidate object template as the target object template, and in response to the original picture having a target object matching the target object template, determines the area corresponding to the target object in the original picture as the target area.
  • the user controls the client to display the candidate object template by triggering the blocking setting switch, and then selects the target object template from the candidate object templates displayed by the client.
  • the target object determined according to the target object template is the video object that the user is interested in and wants to achieve the effect of preventing blocking.
  • the client can determine the corresponding target object according to the target object template specified by the user.
  • the client terminal displays the target object selection control for the user, so that the user can customize the selection of the corresponding target object in the current video screen, so as to avoid the problem that the user cannot select the target object when there is no target object in the candidate object template.
  • the client can realize the efficient and accurate identification of the target object through the local real-time detection of the target video, so as to realize the dynamic tracking of the target object in the target video, and then realize the dynamic anti-blocking display effect for the target object.
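  • A sketch of the local real-time detection and dynamic tracking mentioned above: detection is re-run on every displayed frame so the target area keeps following the object. The `detect` callback stands in for whatever local recognition the client uses and is purely illustrative.

```typescript
type Rect = { x: number; y: number; width: number; height: number };
type Detector = (video: HTMLVideoElement) => Rect | null;   // local recognition, supplied elsewhere

// Re-run detection on every animation frame so the target area tracks the moving object.
function trackTargetObject(
  video: HTMLVideoElement,
  detect: Detector,
  onRegion: (region: Rect | null) => void,
): void {
  const step = () => {
    onRegion(detect(video));          // null means the object was not found in this frame
    requestAnimationFrame(step);
  };
  requestAnimationFrame(step);
}
```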
  • the client requests the server to obtain the target object corresponding to the target object template selected by the user; that is, the server determines the target object according to the target object template. For example, the client first determines the target object template selected by the user from the candidate object templates, then provides the target object template and the picture image corresponding to the original picture to the server, and finally receives the area coordinates of the target area returned by the server. That is, in response to a selection operation on any candidate object template, the client determines the selected candidate object template as the target object template, sends the target object template and the picture image corresponding to the original picture to the server, receives the area coordinates returned by the server, and determines the target area based on the area coordinates. The target area corresponds to the target object in the picture image, and the target object matches the target object template.
  • the user controls the client to display the candidate object template by triggering the blocking setting switch, and then selects the target object template corresponding to the target object from the candidate object templates displayed by the client.
  • the picture image corresponding to the original picture is the current video frame image; or the picture image is an image snapshot of the current video frame image; or, in order to reduce the data transmission pressure between the client and the server, the picture image is the video frame identifier of the current video frame image (such as a frame image serial number), and correspondingly, the server determines the corresponding video frame image in the locally stored target video according to the video frame identifier, which is not limited in the present disclosure.
  • the area coordinates of the target area are the pixel point coordinates of each pixel point in the contour line of the target area.
  • the computation-intensive task of object recognition and matching is handed over to the server, which not only reduces the computing pressure on the client but also makes full use of the computing advantages of the server, helping to reduce playback stuttering on the client.
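  • A sketch of how the client might hand the matching task to the server, sending the selected object template together with a lightweight frame identifier rather than the full frame image; the endpoint, field names, and response shape are assumptions, not part of the disclosure.

```typescript
interface RegionCoordinates { points: Array<{ x: number; y: number }>; }

// Send the selected target object template plus a frame identifier; the server
// matches the template against that frame and returns the target-area coordinates.
async function requestTargetArea(
  templateId: string,
  videoId: string,
  frameIndex: number,
): Promise<RegionCoordinates | null> {
  const resp = await fetch('/api/anti-block/match', {       // hypothetical endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ templateId, videoId, frameIndex }),
  });
  if (!resp.ok) return null;                                 // e.g. a matching-failure response
  return (await resp.json()) as RegionCoordinates;
}
```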
  • Step 304: Adjust the bullet screen layer in the target area to be below the picture layer.
  • the original picture is rendered according to the picture layer and the bullet screen layer above the picture layer. Therefore, after determining the target area, the client adjusts the bullet screen layer located in the target area to be below the picture layer, and the above adjustment is not made in other areas outside the target area.
  • the bullet screen layer and the picture layer have corresponding levels, and a layer with a higher level is located above a layer with a lower level.
  • After the client determines the target area, it adjusts the level of the bullet screen layer in the target area to be lower than that of the picture layer; based on the level of the picture layer and the adjusted level of the bullet screen layer in the target area, the bullet screen layer in the target area is adjusted to be below the picture layer.
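  • A minimal sketch of the level-based ordering described above, assuming a canvas compositor in which each layer carries a numeric level and the target area carries a per-layer level override; the layer interface and override table are illustrative assumptions.

```typescript
// Each layer has a default level; inside the target area an override table may
// lower the bullet screen layer's level below that of the picture layer.
interface Layer {
  name: 'picture' | 'danmaku';
  level: number;
  draw(ctx: CanvasRenderingContext2D): void;
}

function compositeRegion(
  ctx: CanvasRenderingContext2D,
  layers: Layer[],
  overrides: Map<string, number>,   // e.g. inside the target area: overrides.set('danmaku', -1)
): void {
  const effective = layers
    .map((layer) => ({ layer, level: overrides.get(layer.name) ?? layer.level }))
    .sort((a, b) => a.level - b.level);           // lower level drawn first, i.e. underneath
  for (const { layer } of effective) layer.draw(ctx);
}
```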
  • Step 306: Render and display the target picture corresponding to the target area according to the adjusted picture layer. That is, based on the adjusted picture layer in the target area, the target picture corresponding to the target area is rendered, and the target picture is displayed.
  • the client renders the target picture corresponding to the target area according to the adjusted picture layer in the target area (that is, the layer now located at the top); for the rendering process, refer to the related art on rendering picture elements and page rendering, which is not repeated here.
  • the client determines other areas in the original picture that are different from the target area, and then splices the original pictures in the other areas and the target picture into a video picture, and displays the video picture.
  • the other area can be called the first area.
  • the target picture displays only the picture content and does not display the corresponding bullet screen, while the original picture corresponding to the other areas displays the picture content and the corresponding bullet screen at the same time, so the bullet screen there will still occlude the picture content; in this way, a targeted anti-blocking display effect is achieved for the picture content in the target area.
  • the client is built based on HTML5 (Hyper Text Markup Language, the fifth generation of hypertext markup language) technology.
  • the original picture and the target picture can be displayed in the HTML5 page.
  • the client, based on the adjusted picture layer in the target area, uses the native canvas capability of the HTML5 page (such as the canvas element) to render the target picture, and displays the target picture in the HTML5 page.
  • the technical advantages of the native canvas technology of the HTML5 page and the high degree of compatibility and adaptation of the HTML5 page can be fully utilized.
  • For the rendering process refer to the records in the related technologies, which will not be repeated here.
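  • A sketch of rendering the target picture with the native HTML5 canvas when the target area is an arbitrary closed contour rather than a rectangle: the canvas is clipped to the contour and only the picture layer is drawn inside it. The contour format and the assumption that the canvas and the video share the same coordinate space are illustrative.

```typescript
// Clip the canvas to the target-area contour and draw the current video frame,
// so only the picture content (no bullet screen) appears inside the contour.
function renderTargetPicture(
  canvas: HTMLCanvasElement,
  video: HTMLVideoElement,
  contour: Array<{ x: number; y: number }>,   // closed target-area contour, canvas coordinates
): void {
  const ctx = canvas.getContext('2d')!;
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.save();
  ctx.beginPath();
  contour.forEach((p, i) => (i === 0 ? ctx.moveTo(p.x, p.y) : ctx.lineTo(p.x, p.y)));
  ctx.closePath();
  ctx.clip();                                  // restrict drawing to the target area
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  ctx.restore();
}
```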
  • the client determines the target object to be protected from blocking and the corresponding blocking area in response to the region designation operation performed by the user on the original picture, thereby improving the display effect of the displayed video picture and its corresponding bullet screen.
  • FIG. 4 is a flowchart of another method for displaying a video picture according to an embodiment of the present disclosure. As shown in Figure 4, the method is applied to the server and includes the following steps:
  • Step 402: Receive the target object template provided by the client and the picture image corresponding to the original picture, where the target object template is selected from the candidate object templates.
  • Step 404: Determine the target object in the picture image that matches the target object template.
  • Step 406: Return the area coordinates of the target area to the client, where the target area is the area corresponding to the target object in the original picture; the client is used to adjust the bullet screen layer in the target area to be below the picture layer, and, based on the adjusted picture layer in the target area, render and display the target picture corresponding to the target area.
  • the server first identifies all picture objects in the picture image, then sequentially determines the matching degree between each picture object and the target object template, and then determines the target object according to the matching degree corresponding to each picture object. That is, the server determines the image object with the highest matching degree as the target object; or determines the image object whose matching degree is higher than the matching degree threshold as the target object.
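  • A sketch of the selection rule described above: compute a matching degree for every recognized picture object and take either the single best match or every match above a threshold. The matching-degree function itself is supplied by the caller and left abstract here.

```typescript
interface PictureObject { id: string; /* a recognized object in the picture image */ }

// Pick the target object(s): all objects above a matching-degree threshold when a
// threshold is given, otherwise only the object with the highest matching degree.
function selectTargetObjects(
  objects: PictureObject[],
  matchingDegree: (obj: PictureObject) => number,
  threshold?: number,
): PictureObject[] {
  const scored = objects.map((o) => ({ o, score: matchingDegree(o) }));
  if (threshold !== undefined) {
    return scored.filter((s) => s.score >= threshold).map((s) => s.o);
  }
  scored.sort((a, b) => b.score - a.score);        // highest matching degree first
  return scored.length ? [scored[0].o] : [];
}
```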
  • the picture object refers to the object in the picture image, and the picture image corresponding to the original picture is the current video frame image; or the picture image is the image snapshot of the current video frame image; or the picture image is the video frame identifier of the current video frame image (such as frame image serial number, etc.), correspondingly, the server determines the corresponding video frame image in the target video stored locally according to the video frame identifier, which is not limited in the present disclosure.
  • the server can determine at least one target object in the video screen, and return the regional coordinates of the target area corresponding to each target object to the client, so as to give full play to its own computing advantages and reduce the client's operational pressure.
  • the server returns a matching failure message to the client, where the matching failure message indicates that the target object does not exist in the screen image.
  • the client does not perform layer adjustment after receiving the matching failure message, but directly displays the original image.
  • the displayed video picture does not have a blocking area, that is, no blocking display effect is produced at this time.
  • the process of displaying the target video is a process of sequentially displaying each video frame image.
  • the processing process of the above-mentioned embodiment shown in FIG. 3 and FIG. 4 is the processing process of the client or server for the video frame image being displayed at the current moment.
  • When the server recognizes the target object, it performs recognition on the video frame image corresponding to a target time in the video picture, where the target time is a time after the current time and separated from the current time by a first duration, and the first duration is any duration. After the target time, the target picture with the anti-blocking display effect can be displayed in real time.
  • FIG. 5 is a flow chart of yet another method for displaying a video picture according to an embodiment of the present disclosure. As shown in Figure 5, the method is applied to the client, and the rendering and display process of the video image corresponding to the method includes the following steps:
  • Step 502: It is detected that the user turns on the anti-blocking setting switch.
  • the client displays the blocking setting switch in the video presentation interface of the target video, so the user can trigger the blocking setting switch in the video presentation interface to control the start of specifying the blocking area in the original image.
  • The video display interface 611 corresponding to the target video (e.g., video V) displays the picture content 612 and the bullet screen 613, and an anti-blocking setting switch 615 is also displayed below the video display interface 611.
  • the "blocking setting" shown in FIG. 6 is only exemplary, and the switch can be displayed with other names when actually displayed, which is not limited in the present disclosure.
  • Step 504: Display a blank mask above the original picture.
  • the user corresponding to the client turns on the anti-blocking setting by triggering the anti-blocking setting switch 615.
  • When it is detected that the anti-blocking setting switch 623 is triggered, the client displays a blank mask 621 above the video display interface 611; the blank mask 621 is in semi-transparent form, and its transparency is set by the user in the client or determined by the system settings of the client or server.
  • Step 506: Detect the region drawing operation performed by the user.
  • Through the blank mask 621 in semi-transparent form, the user observes the picture content of the original picture displayed below the blank mask 621, and then draws the target mask area 622 at the corresponding position in the blank mask 621 by mouse or touch control.
  • the user draws a rectangular target mask area 622 in the blank mask 621 above the person's portrait in the original screen.
  • the user can freely adjust the position, size, angle and other parameters of the target mask area 622.
  • the shape of the target mask area 622 is a rectangle, a circle, an ellipse, a trapezoid or an irregular shape, etc.
  • The size of the target mask area 622 can be adjusted arbitrarily by the user by dragging the area boundary with the mouse, and the present disclosure does not limit the style or drawing method of the target mask area.
  • When the user draws the target mask area 622, the client displays a real-time anti-blocking preview effect, so that the user can appropriately adjust the starting point, size, angle and other position parameters of the target mask area 622 according to the preview effect, so as to achieve a better anti-blocking display effect.
  • the user draws several target mask areas 622 in the blank mask 621.
  • After drawing, the user triggers (e.g., clicks) the confirmation control 624, and the client, in response to the triggering operation on the confirmation control 624, determines that drawing of the target mask area 622 is finished.
  • Step 508: Determine the target area corresponding to the target mask area drawn by the user.
  • After the client detects the trigger operation performed by the user on the confirmation control, it determines the target area corresponding to the target mask area drawn by the user in the original picture; the process of determining the target area is the process of determining the area coordinates of the target area in the original picture. For example, when the user draws the target area directly in the original picture, the actual position of the target area is its position in the original picture.
  • When the target mask area is drawn on the blank mask, if the blank mask completely corresponds to the original picture, the area coordinates of the target mask area in the blank mask are taken as the area coordinates of the target area in the original picture; if the blank mask does not completely correspond to the original picture, the area coordinates of the target mask area in the blank mask are determined first, and then the area coordinates of the target area in the original picture are calculated according to the coordinate offset and/or zoom amount of the blank mask relative to the original picture, and the specific process is not repeated here.
  • Step 510: Adjust the bullet screen layer in the target area to be below the picture layer.
  • The client adjusts the bullet screen layer in the target area to be below the picture layer, and the other areas outside the target area are not adjusted in this way.
  • Step 512: Render the target picture corresponding to the target area.
  • the client renders the target picture corresponding to the target area according to the adjusted picture layer in the target area (i.e., the layer now located at the top); for the process of rendering the target picture, refer to the related art on rendering picture elements and page rendering, which is not repeated here.
  • the client determines a first area in the original image that is different from the target area, and then splices the original image in the first area and the target image into a video image.
  • the above process of rendering the target image corresponding to the target area and the process of rendering the original image of the first area are performed simultaneously, that is, the two are not performed independently, but constitute a complete video image rendering process.
  • the client renders both the adjusted picture layer and the bullet screen layer in the target area to obtain the target picture of the target area; or, since the bullet screen layer in the target area is located below the picture layer, even if the bullet screen layer in the target area is rendered, the rendered bullet screen will be occluded in the target area, so the client can render only the picture layer in the target area to obtain the target picture of the target area.
  • the client is built based on HTML5 technology, and correspondingly, the original picture and the target picture are displayed on the HTML5 page.
  • the native canvas provided by the HTML5 technology is used to render the above-mentioned target picture; for the rendering process, refer to the records in the related art, which are not repeated here.
  • Step 514: Display the target picture.
  • The rendered target picture is displayed in the video display interface corresponding to the target video. It can be understood that, similar to the process of rendering the target picture, the display of the target picture and the display of the video picture corresponding to the first area are also performed simultaneously; that is, the client displays the video picture formed by the two, rather than separately displaying the target picture and the original picture corresponding to the first area.
  • A target picture 631 corresponding to the target area is displayed in the video display interface 632; the target picture 631 displays only the picture content (such as a portrait in the picture), while outside the target area the picture content and the bullet screen 633 are displayed at the same time.
  • the scrolling bullet screen is automatically hidden after entering the target area, as if blocked beneath the target picture and no longer observable, so as to achieve the effect of preventing the picture content in the target area from being occluded.
  • the target picture displays only the picture content and does not display the corresponding bullet screen, while the video picture corresponding to the first area displays the picture content and the corresponding bullet screen at the same time, so the bullet screen there will still occlude the picture content; in this way, a targeted anti-blocking display effect is achieved for the picture content in the target area.
  • FIG. 7 is a schematic diagram of yet another method for displaying a video image according to an embodiment of the present disclosure. As shown in FIG. 7 , the method is applied to a client, and the process of rendering and displaying a video image corresponding to the method includes the following steps:
  • Step 702: It is detected that the user turns on the anti-blocking setting switch.
  • the client displays the blocking setting switch in the video presentation interface of the target video, so the user triggers the blocking setting switch in the video presentation interface to start specifying the blocking area in the original picture.
  • When detecting that the anti-blocking setting switch is triggered, the client displays a corresponding setting mode selection control, so that the user can select the mode of performing the anti-blocking setting. For example, the user chooses to set the blocking area by means of area setting, and then step 7041 is performed; or the user chooses to set the blocking area by means of object setting, and then step 7042 is performed.
  • Step 7041: Display the candidate contour templates.
  • When it is detected that the user chooses to set the anti-blocking area by means of area setting, the client displays the candidate contour templates to the user for the user to select the target contour template corresponding to the target object.
  • the client displays an outline template selection interface 802 above the video display interface corresponding to the target video (video V), and the outline template selection interface 802 includes At least one alternative contour template.
  • the displayed candidate contour templates correspond to a variety of objects, such as candidate contour template A corresponding to the outline of a woman's front face, candidate contour template B corresponding to the outline of a man's side face, candidate contour template C corresponding to the outline of tableware (a plate), candidate contour template D corresponding to the outline of an open book, and so on.
  • the above candidate contour templates are integrated in the installation program of the client, so that each candidate contour template can be displayed after the client is installed; or, in order to ensure timely update of the templates, each candidate contour template is obtained from the server before or during the display of the target video; the above candidate contour templates are extracted by the server through a model algorithm from the massive videos in the video library.
  • the candidate contour templates are classified, and corresponding candidate contour templates are provided according to the category to which the target video belongs, which will not be repeated.
  • Step 7061: Determine the target contour template selected by the user.
  • Step 7081: Detect the user's placing operation.
  • the user selects the target contour template from among the displayed multiple candidate contour templates, for example, by means of a mouse click, a touch operation, and the like.
  • For example, if the target object for which the user wants to achieve the anti-blocking display effect is the woman's front face 801 shown in the original picture, the user selects, from the candidate contour templates, candidate contour template A whose shape is similar to the face contour of the woman's front face; that is, candidate contour template A is used as the target contour template. The user directly drags the target contour template A to the corresponding position in the original picture, and then adjusts its size and position by dragging the border of the contour template so that it covers the woman's front face 801, so as to achieve a better anti-blocking display effect.
  • the contour template selection interface 802 can be directly hidden.
  • the user can also select other candidate contour templates as the target contour template, and the adjustment method is subject to the actual operation of the user, which is not limited in the present disclosure.
  • Step 7101: Determine the placement area of the placed target contour template in the original picture.
  • the client determines the placed target contour template by detecting the placement operation, and then determines the placement area of the target contour template in the original image.
  • the placement area is represented by the coordinate value of each pixel corresponding to the area boundary of the target contour template in the original image.
  • When the contour boundary of the target contour template is a standard shape, the placement area can also be represented by features such as the center point coordinates and the template size of the target contour template, which is not limited in the present disclosure.
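  • A sketch of the two representations mentioned above: an explicit list of boundary pixel coordinates for an arbitrary contour, or center point plus size for a standard-shaped template from which the boundary can be reconstructed; a rectangle is used here as the standard shape, purely for illustration.

```typescript
type Point = { x: number; y: number };

// Standard-shaped template placement: center point plus template size.
interface StandardPlacement { centerX: number; centerY: number; width: number; height: number; }

// Expand the compact representation back into boundary coordinates (rectangle case).
function placementToBoundary(p: StandardPlacement): Point[] {
  const hw = p.width / 2;
  const hh = p.height / 2;
  return [
    { x: p.centerX - hw, y: p.centerY - hh },
    { x: p.centerX + hw, y: p.centerY - hh },
    { x: p.centerX + hw, y: p.centerY + hh },
    { x: p.centerX - hw, y: p.centerY + hh },
  ];
}
```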
  • Step 7042: Display the candidate object templates.
  • When it is detected that the user chooses to set the anti-blocking area by means of object setting, the client displays the candidate object templates to the user for the user to select the target object template corresponding to the target object, where the target object refers to the picture object for which the user wants to achieve the anti-blocking display effect.
  • the client displays an object template selection interface 902 above the video display interface corresponding to the target video (video V), and the object template selection interface 902 includes At least one candidate object template, the displayed candidate object template corresponds to a variety of objects, such as candidate object template A corresponding to characters, candidate object template B corresponding to food, and candidate object template C corresponding to animals , the candidate object template D corresponding to the opened book, and so on.
  • the above-mentioned candidate object templates are integrated in the installation program of the client, so that each candidate object template can be displayed after the client is installed; or, in order to ensure timely update of the templates, each candidate object template is obtained from the server before or during the display of the target video.
  • the candidate object templates are classified, and corresponding candidate object templates are provided according to the category to which the target video belongs, which is not repeated here.
  • Step 7062: Determine the target object template selected by the user.
  • the user selects the target object template from the displayed several candidate object templates, for example, selecting by means of a mouse click, a touch operation, and the like.
  • For example, if the target object for which the user wants to achieve the anti-blocking display effect is the lady's front face 901 shown in the original picture, the user selects candidate object template A corresponding to characters from the candidate object templates.
  • the user selects at least one candidate object template as the target object template, and triggers the corresponding determination control 904 after the selection is completed, thereby determining the selected candidate object template as the target object template.
  • Step 7082: Detect picture objects in the picture image.
  • the client first determines the picture image at the current moment; for example, it takes the video frame image at the current moment as the picture image, or takes a screen snapshot of the video frame image at the current moment and uses the obtained snapshot image as the picture image; or, in order to achieve a real-time anti-blocking display effect, it determines the video frame image or snapshot image corresponding to the target moment as the picture image.
  • the target time is a time after the current time and separated from the current time by a first duration, and the first duration is any duration.
  • the picture image is processed by an object recognition algorithm, so as to identify the picture object in the picture image.
  • Object recognition can be realized by using clustering algorithms and deep learning algorithms in the related art; of course, the above picture objects can also be recognized by using a custom image recognition model, which is not described again.
  • Step 7102: Determine the target object, among the picture objects, that matches the target object template.
  • the client calculates the matching degree between each picture object and the target object template in turn, for example, calculating the matching degree through various characteristic parameters such as color, contour and motion trajectory. It can be understood that the closer the characteristic parameters of a picture object are to the characteristic parameters of the target object template, the higher the matching degree between that picture object and the target object template; that is, the matching degree is positively correlated with the closeness between the corresponding characteristic parameters of the two.
  • the picture object with the highest matching degree, or a picture object whose matching degree is higher than the matching degree threshold, is determined as the target object. Of course, when the matching degrees of all the picture objects are lower than the matching degree threshold, there is no target object; in this case, the client directly ends this processing, performs no subsequent processing on the current video frame image, and starts processing the next video frame image.
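  • A sketch of combining the characteristic parameters mentioned above (color, contour, motion trajectory) into a single matching degree; each closeness term is assumed to be already normalized to [0, 1], and the weights and threshold value are illustrative assumptions rather than values from the disclosure.

```typescript
// Closeness of each characteristic parameter, each normalized to [0, 1]: the closer a
// picture object's parameter is to the template's, the nearer the value is to 1.
interface FeatureCloseness { color: number; contour: number; trajectory: number; }

// Weighted combination into a single matching degree (weights are illustrative).
function matchingDegree(c: FeatureCloseness): number {
  const weights = { color: 0.3, contour: 0.5, trajectory: 0.2 };
  return c.color * weights.color + c.contour * weights.contour + c.trajectory * weights.trajectory;
}

// A picture object qualifies as the target object when its matching degree exceeds
// the matching-degree threshold; otherwise it is skipped for this frame.
const MATCHING_THRESHOLD = 0.75;  // assumed value
function isTargetObject(c: FeatureCloseness): boolean {
  return matchingDegree(c) >= MATCHING_THRESHOLD;
}
```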
  • Step 712: Determine the target area corresponding to the target object in the original picture.
  • the process of determining the target area is the process of determining the area coordinates corresponding to the target area.
  • the client directly determines the area coordinates of the placement area as the area coordinates of the target area.
  • the client determines the contour coordinates corresponding to the object contour of the target object in the original screen as the area coordinates of the target area.
  • Step 714: Adjust the bullet screen layer in the target area to be below the picture layer.
  • Step 716: Render the target picture corresponding to the target area.
  • Step 718: Display the target picture.
  • steps 714 to 718 are not substantially different from the foregoing steps 510 to 514 in the embodiment shown in FIG. 5 . Therefore, the implementation process of the steps 714 to 718 refers to the foregoing description, and is not repeated here.
  • FIG. 10 is an interactive flowchart of a method for displaying a video image according to an embodiment of the present disclosure.
  • the above-mentioned process of rendering a target image corresponding to a target area and displaying it includes the following steps:
  • Step 1002: The client detects that the user has turned on the anti-blocking setting switch.
  • Step 1004: The client displays the candidate object templates.
  • Step 1006: The client determines the target object template selected by the user.
  • Step 1008: The client determines the picture image.
  • steps 1002-1008 are not substantially different from the steps 702-7062 in the aforementioned embodiment shown in FIG. 7 . Therefore, the implementation process of the steps 1002-1008 can be referred to the foregoing description, and details are not repeated here.
  • Step 1010: The client provides the target object template and the picture image to the server in association.
  • Step 1012: The server detects picture objects in the picture image.
  • Step 1014: The server determines the target object among the picture objects.
  • Step 1016: The server determines the target area in the original picture.
  • Step 1018: The server returns the area coordinates of the target area to the client.
  • After determining the target object template and the picture image, the client provides the target object template and the picture image to the server.
  • the server detects the picture objects in the picture image through the object recognition algorithm, determines the target object among the picture objects through matching degree calculation, then determines the target area corresponding to the target object in the original picture, and returns the area coordinates of the target area to the client.
  • For the above identification and matching process, refer to the records of the aforementioned steps 7062-712, which are not repeated here.
  • Step 1020: The client adjusts the bullet screen layer in the target area to be below the picture layer.
  • Step 1022: The client renders the target picture corresponding to the target area.
  • Step 1024: The client displays the target picture.
  • steps 1020-1024 are not substantially different from the foregoing steps 510-514 in the embodiment shown in FIG. 5, so the implementation process of the steps 1020-1024 can be referred to the foregoing description, and details are not repeated here.
  • Having the server perform the computation-intensive object recognition and matching not only gives full play to its computing advantages, but also reduces the computing pressure on the client and avoids client stuttering to a certain extent.
  • the above-mentioned method for displaying video images can be applied to a live broadcast scenario.
  • the above-mentioned video display method can be used to determine the target area corresponding to the host in the live picture of the live video, adjust the bullet screen layer in the target area to be below the picture layer, render the target picture corresponding to the target area based on the adjusted picture layer in the target area, and display the target picture, so that the host is displayed in the target picture without displaying the bullet screen.
  • Fig. 11 is a schematic block diagram of an apparatus for displaying a video picture according to an embodiment of the present disclosure.
  • The apparatus for displaying a video picture shown in this embodiment of the present disclosure is applicable to a client of a video playback application; the video playback application runs on a terminal, and the terminal includes but is not limited to electronic devices such as mobile phones, tablet computers, wearable devices, and personal computers.
  • The video playback application is an application installed in the terminal, or a web application integrated in a browser.
  • The user plays videos through the video playback application; the played video may be a long video, such as a movie or a TV series, or a short video, such as a video clip or a short sitcom.
  • The apparatus for displaying a video picture includes:
  • the region determination module 1101 is configured to determine the target area in the original picture in response to a region designation operation on the original picture of a target video, where the original picture includes the picture layer of the target video and the bullet screen layer located above the picture layer;
  • the layer adjustment module 1102 is configured to adjust the bullet screen layer in the target area to be below the picture layer;
  • the drawing and displaying module 1103 is configured to render the target picture corresponding to the target area based on the adjusted picture layer in the target area, and to display the target picture.
  • In some embodiments, the region determination module 1101 is configured to determine, in response to a region drawing operation in the original picture, the target area drawn in the original picture.
  • In some embodiments, the region determination module 1101 is configured to: in response to the region drawing operation, determine the movement trajectory corresponding to the region drawing operation, and determine the area framed by the movement trajectory as the target area.
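As a minimal illustration of the trajectory-based drawing described for the region determination module, the following TypeScript sketch collects pointer events on a transparent overlay and closes the trajectory into a polygon when the pointer is released; the overlay wiring and the Point type are assumptions made for the example.

```ts
type Point = { x: number; y: number };

// Track a region-drawing gesture on an overlay placed above the original picture.
// When the pointer is released, the trajectory is closed by joining its end point
// back to its start point, and the resulting polygon is reported as the target area.
function trackRegionDrawing(overlay: HTMLElement, onDone: (region: Point[]) => void): void {
  const trajectory: Point[] = [];
  let drawing = false;

  overlay.addEventListener("pointerdown", (e: PointerEvent) => {
    drawing = true;
    trajectory.length = 0;
    trajectory.push({ x: e.offsetX, y: e.offsetY });
  });

  overlay.addEventListener("pointermove", (e: PointerEvent) => {
    if (drawing) trajectory.push({ x: e.offsetX, y: e.offsetY });
  });

  overlay.addEventListener("pointerup", () => {
    drawing = false;
    if (trajectory.length > 2) onDone([...trajectory, { ...trajectory[0] }]);
  });
}
```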
  • In some embodiments, the region determination module 1101 is configured to:
  • in response to a selection operation on any candidate contour template, determine the selected candidate contour template as the target contour template;
  • move the target contour template into the original picture, and determine the area corresponding to the target contour template in the original picture as the target area.
  • In some embodiments, the region determination module 1101 is configured to:
  • in response to a selection operation on any candidate object template, determine the selected candidate object template as the target object template;
  • in response to the original picture containing a target object that matches the target object template, determine the area corresponding to the target object in the original picture as the target area.
  • In some embodiments, the region determination module 1101 is configured to:
  • in response to a selection operation on any candidate object template, determine the selected candidate object template as the target object template;
  • send the target object template and the picture image corresponding to the original picture to the server;
  • receive the area coordinates returned by the server, and determine the target area based on the area coordinates, where the target area corresponds to a target object in the picture image and the target object matches the target object template.
  • the apparatus further includes:
  • the other-area determination module 1104 is configured to determine a first area in the original picture that is different from the target area;
  • the picture stitching module 1105 is configured to stitch the original picture of the first area and the target picture into a video picture, and to display the video picture.
  • In some embodiments, the original picture is displayed in an HTML5 page, and the drawing and displaying module 1103 is further configured to:
  • render the target picture using the native canvas capability of the HTML5 page, based on the adjusted picture layer in the target area.
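A minimal sketch of the canvas-based rendering mentioned above, assuming the canvas is positioned over the bullet screen layer: the canvas is clipped to the target-area contour and redrawn from the <video> element on every animation frame, so bullet comments appear to pass behind the picture content inside that area. Function and parameter names are illustrative.

```ts
// Render the target picture corresponding to the target area on a canvas placed
// above the bullet screen layer; only the clipped region shows the video frame.
function renderTargetPicture(
  video: HTMLVideoElement,
  canvas: HTMLCanvasElement,
  region: Array<{ x: number; y: number }> // target-area contour in video pixel coordinates
): void {
  const ctx = canvas.getContext("2d");
  if (!ctx || region.length < 3) return;

  canvas.width = video.videoWidth;   // resizing also clears the canvas
  canvas.height = video.videoHeight;

  ctx.save();
  ctx.beginPath();
  ctx.moveTo(region[0].x, region[0].y);
  for (const p of region.slice(1)) ctx.lineTo(p.x, p.y);
  ctx.closePath();
  ctx.clip();                                        // restrict drawing to the target area
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  ctx.restore();

  // Keep the target picture in sync with the playing video.
  requestAnimationFrame(() => renderTargetPicture(video, canvas, region));
}
```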
  • Fig. 12 is a schematic block diagram of an apparatus for displaying a video picture according to an embodiment of the present disclosure.
  • The apparatus for displaying a video picture shown in this embodiment of the present disclosure is applicable to the server side of a video playback application, and the video playback application runs on a server.
  • The server includes, but is not limited to, a physical server with an independent host, a virtual server carried by a host cluster, a cloud server, and the like.
  • The video to be played may be a long video, such as a movie or a TV series, or a short video, such as a video clip or a short sitcom.
  • The apparatus for displaying a video picture includes:
  • the template receiving module 1201 is configured to receive the target object template sent by the client and the picture image corresponding to the original picture, and the target object template is selected from the candidate object template;
  • an object determination module 1202 configured to determine a target object in the screen image that matches the target object template
  • the coordinate returning module 1203 is configured to return the area coordinates of the target area to the client, where the target area is the area corresponding to the target object in the original picture, and the client is used to adjust the bullet screen layer in the target area to be below the picture layer and, based on the adjusted picture layer in the target area, render the target picture corresponding to the target area and display the target picture.
  • In some embodiments, the object determination module 1202 is further configured to:
  • identify all picture objects in the picture image;
  • determine, in turn, the matching degree between each picture object and the target object template;
  • determine the target object according to the matching degree corresponding to each picture object.
  • In some embodiments, the object determination module 1202 is further configured to:
  • determine the picture object with the highest matching degree as the target object; or,
  • determine a picture object whose matching degree is higher than a matching degree threshold as the target object.
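The matching-degree selection in the object determination module can be sketched as follows. The feature representation, the cosine-similarity measure, and the threshold value are assumptions made for illustration; the disclosure only states that features such as colour, contour, and motion trajectory may be compared and that the highest-scoring or above-threshold object is chosen.

```ts
// A detected picture object with some feature vector produced by the recognition step.
interface DetectedObject {
  id: string;
  features: number[];
  contour: Array<{ x: number; y: number }>;
}

// Cosine similarity used here as a stand-in matching-degree measure.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Return the best-matching object, or null when no object clears the threshold
// (the case in which a matching failure message is returned to the client).
function selectTargetObject(
  objects: DetectedObject[],
  templateFeatures: number[],
  threshold = 0.8 // assumed value; no threshold is fixed by the disclosure
): DetectedObject | null {
  let best: DetectedObject | null = null;
  let bestScore = -Infinity;
  for (const obj of objects) {
    const score = cosineSimilarity(obj.features, templateFeatures);
    if (score > bestScore) {
      best = obj;
      bestScore = score;
    }
  }
  return bestScore >= threshold ? best : null;
}
```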
  • the apparatus further includes:
  • the failure message returning module 1204 is configured to return a matching failure message to the client when the matching degree corresponding to each picture object is not higher than the matching degree threshold, where the matching failure message indicates that the target object does not exist in the picture image.
  • Embodiments of the present disclosure also provide an electronic device, comprising: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the following steps:
  • in response to a region designation operation on an original picture of a target video, determining the target area in the original picture, where the original picture includes the picture layer of the target video and the bullet screen layer located above the picture layer;
  • adjusting the bullet screen layer in the target area to be below the picture layer;
  • based on the adjusted picture layer in the target area, rendering the target picture corresponding to the target area, and displaying the target picture.
  • the processor is configured to execute the instructions to implement the method for displaying video pictures provided by other embodiments of the foregoing method embodiments.
  • Embodiments of the present disclosure also provide an electronic device, comprising: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the following steps: receiving a target object template sent by a client and a picture image corresponding to an original picture, where the target object template is selected from candidate object templates; determining a target object in the picture image that matches the target object template; and returning the area coordinates of a target area to the client, where the target area is the area corresponding to the target object in the original picture, and the client is used to adjust the bullet screen layer in the target area to be below the picture layer and, based on the adjusted picture layer in the target area, render the target picture corresponding to the target area and display the target picture.
  • the processor is configured to execute the instructions to implement the method for displaying video pictures provided by other embodiments of the foregoing method embodiments.
  • Embodiments of the present disclosure also provide a computer-readable storage medium; when the instructions in the storage medium are executed by the processor of an electronic device, the electronic device can perform the following steps: in response to a region designation operation on an original picture of a target video, determining the target area in the original picture, where the original picture includes the picture layer of the target video and the bullet screen layer above the picture layer; adjusting the bullet screen layer in the target area to be below the picture layer; and, based on the adjusted picture layer in the target area, rendering the target picture corresponding to the target area and displaying the target picture.
  • In some embodiments, when the instructions in the storage medium are executed by the processor of the electronic device, the electronic device is enabled to execute the method for displaying video pictures provided by the other method embodiments above.
  • Embodiments of the present disclosure also provide a computer-readable storage medium; when the instructions in the storage medium are executed by the processor of an electronic device, the electronic device can perform the following steps: receiving a target object template sent by a client and a picture image corresponding to an original picture, where the target object template is selected from candidate object templates; determining a target object in the picture image that matches the target object template; and returning the area coordinates of a target area to the client, where the target area is the area corresponding to the target object in the original picture, and the client is used to adjust the bullet screen layer in the target area to be below the picture layer and, based on the adjusted picture layer in the target area, render the target picture corresponding to the target area and display the target picture.
  • In some embodiments, when the instructions in the storage medium are executed by the processor of the electronic device, the electronic device is enabled to execute the method for displaying video pictures provided by the other method embodiments above.
  • Embodiments of the present disclosure also provide a computer program product, the computer program product being configured to perform the following steps: in response to a region designation operation on an original picture of a target video, determining the target area in the original picture, where the original picture includes the picture layer of the target video and the bullet screen layer above the picture layer; adjusting the bullet screen layer in the target area to be below the picture layer; and rendering the target picture corresponding to the target area based on the adjusted picture layer in the target area, and displaying the target picture.
  • the computer program product is further configured to execute the method for displaying video pictures provided by other embodiments of the above method embodiments.
  • An embodiment of the present disclosure also provides a computer program product, the computer program product being configured to perform the following steps: receiving a target object template sent by a client and a picture image corresponding to an original picture, where the target object template is selected from candidate object templates; determining a target object in the picture image that matches the target object template; and returning the area coordinates of a target area to the client, where the target area is the area corresponding to the target object in the original picture, and the client is used to adjust the bullet screen layer in the target area to be below the picture layer and, based on the adjusted picture layer in the target area, render the target picture corresponding to the target area and display the target picture.
  • the computer program product is further configured to execute the method for displaying video pictures provided by other embodiments of the above method embodiments.
  • Fig. 13 is a schematic block diagram of an electronic device according to an embodiment of the present disclosure.
  • electronic device 1300 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, fitness device, personal digital assistant, and the like.
  • an electronic device 1300 may include one or more of the following components: a processing component 1302, a memory 1304, a power supply component 1306, a multimedia component 1308, an audio component 1310, an input/output (I/O) interface 1312, a sensor component 1314 , and the communication component 1318.
  • the processing component 1302 generally controls the overall operation of the electronic device 1300, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 1302 may include one or more processors 1320 to execute the instructions, so as to complete all or part of the steps of the above-mentioned method for displaying video images.
  • processing component 1302 may include one or more modules that facilitate interaction between processing component 1302 and other components.
  • processing component 1302 may include a multimedia module to facilitate interaction between multimedia component 1308 and processing component 1302.
  • the memory 1304 is configured to store various types of data to support operation at the electronic device 1300 . Examples of such data include instructions for any application or method operating on electronic device 1300, contact data, phonebook data, messages, pictures, videos, and the like.
  • Memory 1304 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
  • Power supply assembly 1306 provides power to various components of electronic device 1300 .
  • Power supply components 1306 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power to electronic device 1300 .
  • Multimedia component 1308 includes a screen that provides an output interface between electronic device 1300 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user.
  • the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundaries of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action.
  • the multimedia component 1308 includes a front-facing camera and/or a rear-facing camera. When the electronic device 1300 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each of the front and rear cameras can be a fixed optical lens system or have focal length and optical zoom capability.
  • Audio component 1310 is configured to output and/or input audio signals.
  • audio component 1310 includes a microphone (MIC) that is configured to receive external audio signals when electronic device 1300 is in operating modes, such as call mode, recording mode, and voice recognition mode. The received audio signal may be further stored in memory 1304 or transmitted via communication component 1318.
  • audio component 1310 also includes a speaker for outputting audio signals.
  • the I/O interface 1312 provides an interface between the processing component 1302 and a peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to: home button, volume buttons, start button, and lock button.
  • Sensor assembly 1314 includes one or more sensors for providing status assessment of various aspects of electronic device 1300 .
  • the sensor component 1314 can detect the open/closed state of the electronic device 1300 and the relative positioning of components (for example, the display and keypad of the electronic device 1300); the sensor component 1314 can also detect a change in the position of the electronic device 1300 or of one of its components, the presence or absence of user contact with the electronic device 1300, the orientation or acceleration/deceleration of the electronic device 1300, and changes in the temperature of the electronic device 1300.
  • Sensor assembly 1314 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact.
  • Sensor assembly 1314 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor assembly 1314 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 1318 is configured to facilitate wired or wireless communication between electronic device 1300 and other devices.
  • Electronic device 1300 may access wireless networks based on communication standards, such as WiFi, carrier networks (e.g., 2G, 3G, 4G, or 5G), or a combination thereof.
  • the communication component 1318 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 1318 also includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • the electronic device 1300 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for executing the above-mentioned method for displaying video pictures.
  • a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 1304 including instructions, and the above-mentioned instructions can be executed by the processor 1320 of the electronic device 1300 to complete the above-mentioned method for displaying video pictures.
  • the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, and the like.
  • An embodiment of the present disclosure further provides a method for displaying a video picture, including: in response to a region designation operation performed by a user on an original picture of a target video, determining the target area in the original picture, where the video picture in the target area is rendered from the picture layer and the bullet screen layer located above the picture layer; adjusting the bullet screen layer in the target area to be below the picture layer; and rendering and displaying the target picture corresponding to the target area according to the adjusted picture layer.
  • In some embodiments, determining the target area in the original picture includes at least one of the following: detecting a target area drawn by the user in the original picture; determining a target contour template selected by the user from candidate contour templates and, after detecting that the user has placed the target contour template in the display area of the original picture, determining the area corresponding to the target contour template in the display area as the target area; determining a target object template selected by the user from candidate object templates and, upon detecting a target object in the original picture that matches the target object template, determining the area corresponding to the target object in the original picture as the target area; or determining a target object template selected by the user from candidate object templates, providing the target object template and the picture image corresponding to the original picture to the server, and receiving the area coordinates of the target area returned by the server, where the target area corresponds to a target object in the picture image that matches the target object template.
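For the contour-template variant above, one practical sub-step is mapping the on-screen rectangle of the placed template into the video's intrinsic pixel coordinates before using it as the target area. The following TypeScript sketch shows that mapping under the assumption that the template element is absolutely positioned over the <video> element; all names are illustrative.

```ts
interface Rect { left: number; top: number; width: number; height: number; }

// Convert the placed template's displayed rectangle into the video's pixel coordinate system.
function templateRectToVideoRegion(video: HTMLVideoElement, template: HTMLElement): Rect {
  const v = video.getBoundingClientRect();
  const t = template.getBoundingClientRect();
  const scaleX = video.videoWidth / v.width;   // displayed size -> intrinsic frame size
  const scaleY = video.videoHeight / v.height;
  return {
    left: (t.left - v.left) * scaleX,
    top: (t.top - v.top) * scaleY,
    width: t.width * scaleX,
    height: t.height * scaleY,
  };
}
```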

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

本公开关于视频画面的展示方法及电子设备,所述方法包括:响应于对目标视频的原始画面的区域指定操作,确定原始画面中的目标区域,该原始画面包括画面图层和位于画面图层上方的弹幕图层;将目标区域中的弹幕图层调整至画面图层下方;基于目标区域中调整后的画面图层,渲染对应于目标区域的目标画面,展示该目标画面。

Description

视频画面的展示方法及电子设备
本公开基于申请日为2020年10月10日、申请号为202011080616.6的中国专利申请,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此引入本公开作为参考。
技术领域
本公开涉及视频展示领域,尤其涉及视频画面的展示方法及电子设备。
背景技术
当前的视频播放平台通常会为观众提供弹幕展示功能,即在视频播放过程中同时展示与视频相关的弹幕。
为避免弹幕对视频画面的遮挡,相关技术中,由服务端提前确定视频中目标对象的对象位置,然后将视频和该对象位置提供至客户端,客户端按照该对象位置渲染视频画面,从而在该对象位置处仅展示目标对象而不展示相应弹幕。
发明内容
本公开提供了视频画面的展示方法及电子设备。本公开的技术方案如下:
根据本公开实施例的一方面,提出一种视频画面的展示方法,包括:响应于对目标视频的原始画面的区域指定操作,确定所述原始画面中的目标区域,所述原始画面包括所述目标视频的画面图层和位于所述画面图层上方的弹幕图层;将所述目标区域中的所述弹幕图层调整至所述画面图层下方;基于所述目标区域中调整后的所述画面图层,渲染对应于所述目标区域的目标画面,展示所述目标画面。
根据本公开实施例的另一方面,提出一种视频画面的展示方法,包括:接收客户端发送的目标对象模板和原始画面对应的画面图像,所述目标对象模板是从备选对象模板中选取的;确定所述画面图像中匹配于所述目标对象模板的目标对象;向所述客户端返回目标区域的区域坐标,所述目标区域为所述目标对象在所述原始画面中所对应的区域,所述客户端用于将所述目标区域中的弹幕图层调整至所述画面图层下方,基于所述目标区域中调整后的所述画面图层,渲染对应于所述目标区域的目标画面,展示所述目标画面。
根据本公开实施例的另一方面,提出一种视频画面的展示装置,包括:区域确定模块,被配置为响应于对目标视频的原始画面实施的区域指定操作,确定所述原始画面中的目标区域,所述原始画面包括画面图层和位于所述画面图层上方的弹幕图层;图层调整模块,被配置为将所述目标区域中的所述弹幕图层调整至所述画面图层下方;绘制及展示模块,被配置为基于所述目标区域中调整后的所述画面图层,渲染对应于所述目标区域的目标画面,展示所述目标画面。
根据本公开实施例的另一方面,提出一种视频画面的展示装置,包括:模板接收模块,被配置为接收客户端发送的目标对象模板和原始画面对应的画面图像,所述目标对象模板是从备选对象模板中选取的;对象确定模块,被配置为确定所述画面图像中匹配于所述目标对象模板的目标对象;坐标返回模块,被配置为向所述客户端返回目标区域的区域坐标,所述目标区域为所述目标对象在所述原始画面中所对应的区域,所述客户端用于将所述目标区域中的弹幕图层调整至所述画面图层下方,基于所述目标区域中调整后的所述画面图层,渲染对应于所述目标区域的目标画面,展示所述目标画面。
根据本公开实施例的另一方面,提出一种电子设备,包括:处理器;用于存储所述处 理器可执行指令的存储器;其中,所述处理器被配置为执行所述指令,以实现如下步骤:响应于对目标视频的原始画面的区域指定操作,确定所述原始画面中的目标区域,所述原始画面包括所述目标视频的画面图层和位于所述画面图层上方的弹幕图层;将所述目标区域中的所述弹幕图层调整至所述画面图层下方;基于所述目标区域中调整后的所述画面图层,渲染对应于所述目标区域的目标画面,展示所述目标画面。
根据本公开实施例的另一方面,提出一种电子设备,包括:处理器;用于存储所述处理器可执行指令的存储器;其中,所述处理器被配置为执行所述指令,以实现如下步骤:接收客户端发送的目标对象模板和原始画面对应的画面图像,所述目标对象模板是从备选对象模板中选取的;确定所述画面图像中匹配于所述目标对象模板的目标对象;向所述客户端返回目标区域的区域坐标,所述目标区域为所述目标对象在所述原始画面中所对应的区域,所述客户端用于将所述目标区域中的弹幕图层调整至所述画面图层下方,基于所述目标区域中调整后的所述画面图层,渲染对应于所述目标区域的目标画面,展示所述目标画面。
根据本公开实施例的另一方面,提出一种存储介质,当所述存储介质中的指令由电子设备的处理器执行时,使得电子设备能够执行如下步骤:响应于对目标视频的原始画面的区域指定操作,确定所述原始画面中的目标区域,所述原始画面包括所述目标视频的画面图层和位于所述画面图层上方的弹幕图层;将所述目标区域中的所述弹幕图层调整至所述画面图层下方;基于所述目标区域中调整后的所述画面图层,渲染对应于所述目标区域的目标画面,展示所述目标画面。
根据本公开实施例的另一方面,提出一种存储介质,当所述存储介质中的指令由电子设备的处理器执行时,使得电子设备能够执行如下步骤:接收客户端发送的目标对象模板和原始画面对应的画面图像,所述目标对象模板是从备选对象模板中选取的;确定所述画面图像中匹配于所述目标对象模板的目标对象;向所述客户端返回目标区域的区域坐标,所述目标区域为所述目标对象在所述原始画面中所对应的区域,所述客户端用于将所述目标区域中的弹幕图层调整至所述画面图层下方,基于所述目标区域中调整后的所述画面图层,渲染对应于所述目标区域的目标画面,展示所述目标画面。
根据本公开的实施例,由客户端响应于用户针对原始画面实施的区域指定操作,确定需要防挡的目标对象和相应的防挡区域,从而改进客户端所展示视频画面及其相应弹幕的展示效果。
附图说明
图1是根据本公开的实施例示出的一种视频服务平台的架构示意图;
图2是根据本公开的实施例示出的一种视频画面的绘制原理的示意图;
图3是根据本公开的实施例示出的一种视频画面的展示方法的示意图;
图4是根据本公开的实施例示出的另一种视频画面的展示方法的示意图;
图5是根据本公开的实施例示出的又一种视频画面的展示方法的示意图;
图6是根据本公开的实施例示出的一种绘制目标区域的示意图;
图7是根据本公开的实施例示出的再一种视频画面的展示方法的示意图;
图8是根据本公开的实施例示出的一种利用区域模板确定目标区域的示意图;
图9是根据本公开的实施例示出的一种利用对象模板确定目标区域的示意图;
图10是根据本公开的实施例示出的一种视频画面的展示方法的交互流程图;
图11是根据本公开的实施例示出的一种视频画面的展示装置的示意框图;
图12是根据本公开的实施例示出的另一种视频画面的展示装置的示意框图;
图13是根据本公开的实施例示出的一种电子设备的结构图。
具体实施方式
图1是根据本公开的实施例示出的一种视频服务平台的架构示意图。如图1所示,该系统包括网络10、服务器11、若干电子设备,比如手机12、手机13和手机14等。
服务器11为包含一独立主机的物理服务器,或者该服务器11为主机集群承载的虚拟服务器。在运行过程中,服务器11运行某一应用的服务器侧的程序,以实现该应用的相关业务功能,比如在该服务器11运行视频服务平台的程序的情况下,实现为该视频服务平台的服务端。而在本公开的一个或多个实施例的技术方案中,由服务器11通过与手机12-14上运行的客户端相互配合,实现包含弹幕的视频画面的展示方案。
在本公开的实施例中,视频服务平台不仅能够实现视频服务功能,还能够作为诸多其他功能的集成化功能平台,比如对于区域绘制操作的检测、备选轮廓模板的展示与选取、备选对象模板的展示与选取、目标区域的确定、目标画面的渲染等,本公开的一个或多个实施例并不对此进行限制。
手机12-14只是用户使用的一种类型的电子设备。实际上,用户还能够使用诸如下述类型的电子设备:平板设备、笔记本电脑、掌上电脑(PDAs,Personal Digital Assistants)、可穿戴设备(如智能眼镜、智能手表等)等,本公开的一个或多个实施例并不对此进行限制。在运行过程中,该电子设备运行某一应用的客户端侧的程序,以实现该应用的相关业务功能,比如当该电子设备运行视频服务平台的程序时,实现为该视频服务平台的客户端,例如手机12实现为视频提供客户端,手机13和手机14实现为视频播放客户端。
需要指出的是:视频服务平台的客户端的应用程序被安装在电子设备上,在该电子设备上启动并运行该客户端;当然,在采用诸如HTML5技术的在线客户端的情况下,无需在电子设备上安装相应的应用程序,即可获得并运行该客户端。
而对于手机12-14与服务器11之间进行交互的网络10,包括多种类型的有线或无线网络。
当前的视频播放平台能够为观众提供弹幕展示功能,即在视频播放过程中同时展示与视频相关的弹幕。目标视频的原始画面包括画面图层和弹幕图层,即所展示的目标视频的原始画面是基于画面图层和位于画面图层上方的弹幕图层渲染得到的。
如图2所示,客户端所播放目标视频对应的视频画面201中展示有弹幕202。用户观看到的视频画面201相当于位于上方的弹幕图层203和位于下方的画面图层204的叠加展示。因为弹幕图层203位于画面图层204上方,所以在根据弹幕图层203和画面图层204渲染而成的视频画面201中,弹幕展示在视频画面上方,导致弹幕对视频画面的遮挡。
为避免弹幕对视频画面的遮挡,相关技术中由服务端提前确定视频中目标对象的对象位置,然后将视频和该对象位置提供至客户端,客户端在渲染该位置处的视频画面时将弹幕渲染在对象下方,从而在该对象位置处仅展示目标对象而不展示相应弹幕,实现对该目标对象的防挡展示效果。然而,因为上述方式中的目标对象和相应的防挡位置由服务端提取前确定,而与用户行为无关,因此最终实现的防挡展示效果难以符合观众用户的观看需求。
图3是根据本公开的实施例示出的一种视频画面的展示方法的流程图。如图3所示,该方法应用于客户端,包括以下步骤:
步骤302,响应于用户针对目标视频的原始画面实施的区域指定操作,确定原始画面中的目标区域,该目标区域中的视频画面被根据画面图层和位于画面图层上方的弹幕图层渲染得到。
其中,目标区域中的视频画面为目标视频在目标区域中的原始画面。也即是客户端响应于对目标视频的原始画面的区域指定操作,确定原始画面中的目标区域。其中,该原始画面包括目标视频的画面图层和位于画面图层上方的弹幕图层,该画面图层和弹幕图层为 目标视频对应的完整的图层,区域指定操作是指确定原始画面中的目标区域的操作。
在本公开的实施例中,用户设置完成的目标区域,即为最终展示时仅展示画面不展示弹幕的区域,亦即在目标区域中实现弹幕的防挡展示功能的区域。为便于描述本方案,下文中将目标区域也称为“防挡区域”,特此说明。
客户端通过多种方式确定目标区域。在一些实施例中,用户在观看展示的原始画面过程中,在原始画面中实施绘制目标区域的操作,相应的,客户端检测用户在原始画面中绘制的目标区域,即客户端响应于在原始画面中的区域绘制操作,确定在原始画面中绘制的目标区域,从而,通过绘制的方式由用户自定义相应的目标区域,实现更好的防挡展示效果。
在一些实施例中,客户端响应于在原始画面中的区域绘制操作,确定该区域绘制操作对应的移动轨迹,将该移动轨迹框选出的区域确定为目标区域。例如,客户端显示多个备选图案,用户选取备选图案,基于该备选图案在原始画面中绘制目标区域,其中备选图案包括线条、矩形、圆形、三角形等图案。例如,用户需要绘制矩形的情况下,选取矩形,采用鼠标或触控方式在原始画面中移动,基于移动轨迹框选出矩形区域,将框选出的矩形区域作为目标区域;或者,用户需要绘制人形图案的情况下,由于该人形图案是不规则的图案,需要选取线条,采用鼠标或触控方式在原始画面中移动,基于移动轨迹框选出需要的人形区域,将框选出来的人形区域作为目标区域。在一些实施例中,用户在采用线条进行绘制时,绘制的移动轨迹是封闭的,或者不是封闭的,在移动轨迹不是封闭的情况下,客户端将移动轨迹的起始点和终止点采用直线连接在一起,得到封闭区域。
在一些实施例中,用户直接在原始画面中绘制目标区域。或者,客户端在原始画面上方展示空白蒙版,用户在该空白蒙版上绘制目标区域,相应的,服务端检测用户在空白蒙版中绘制的目标蒙版区域,即客户端响应于在空白蒙版中的区域绘制操作,确定在空白蒙版中绘制的目标蒙版区域,将目标蒙版区域在原始画面中对应的区域确定为目标区域。其中,空白蒙版的大小与原始画面尺寸大小相同且位置重合,用户在原始画面上方随意绘制符合自已意愿的目标蒙版区域。在一些实施例中,空白蒙版中还包含设置的非绘制区域,用户在除非绘制区域之外的绘制区域中绘制目标蒙版区域,非绘制区域用于设置用户无法指定为目标区域的区域,非绘制区域由用户在客户端中预先设置或者由服务端统一设置后下发至客户端。用户在空白蒙版中绘制的目标蒙版区域为任意形状,例如矩形、圆形、椭圆形、梯形或者不规则图形等,而且目标蒙版区域的大小由用户随意设置(如通过鼠标拖拉设置区域边界)。当然,用户能够通过触发防挡设置开关触发客户端展示空白蒙版。
在一些实施例中,客户端提供备选轮廓模板供用户选择。例如,客户端先确定用户从备选轮廓模板中选取的目标轮廓模板,并在检测到用户将目标轮廓模板放置在原始画面的展示区域中之后,将目标轮廓模板在展示区域中的对应区域确定为目标区域。也即是,客户端响应于对任一备选轮廓模板的选取操作,将选取的轮廓模板确定为目标轮廓模板,将目标轮廓模板移动至原始画面中,将原始画面中目标轮廓模板对应的区域确定为目标区域。
在一些实施例中,用户通过触发防挡设置开关控制客户端展示备选轮廓模板,然后在所展示的备选轮廓模板中选取备选轮廓模板作为目标轮廓模板,其中用户能够选取更符合自己观看意愿或目标对象的边缘轮廓形象的轮廓模板,目标对象是指用户想要实现防挡展示效果的视频画面中的展示对象。进而将目标轮廓模板直接拖动并放置在原始画面种的合适位置,例如目标对象所在位置等,从而实现对目标区域的指定。其中,目标轮廓模板为人物、食物、动物、建筑、书籍、屏幕等常见视频对象对应的对象轮廓区域,用户还能够对目标轮廓模板实施缩放操作控制其大小,本公开对此并不进行限制。通过选取轮廓模板的方式,用户只需要在所展示出的备选轮廓模板中选择更符合目标对象外形轮廓的目标轮 廓模板,然后将其通过简单地拖拉操作放置在原始画面中的相应位置并调整即可,简化了用户指定目标区域的操作步骤,提升了用户指定目标区域的效率。
在一些实施例中,客户端提供备选对象模板供用户选择。例如,客户端先确定用户从所展示的备选对象模板中选取的目标对象模板,然后检测原始画面中匹配于目标对象模板的目标对象,再将检测到的目标对象在原始画面中的对应区域确定为目标区域。也即是客户端响应于对任一备选对象模板的选取操作,将选取的备选对象模板确定为目标对象模板,响应于原始画面中具有匹配于目标对象模板的目标对象,将原始画面中目标对象对应的区域确定为目标区域。
在一些实施例中,用户通过触发防挡设置开关控制客户端展示备选对象模板,然后在客户端展示的备选对象模板中选取目标对象模板。其中,根据目标对象模板确定出的目标对象即为用户感兴趣的想要实现防挡展示效果的视频对象。通过选取对象模板的方式,客户端能够根据用户指定的目标对象模板确定相应的目标对象。或者,客户端为用户展示目标对象选取控件,从而由用户在当前视频画面中自定义选取相应的目标对象,以避免备选对象模板中不存在目标对象时用户无法选取的问题。通过选取目标对象的方式,客户端能够通过对目标视频的本地实时检测实现目标对象的高效准确识别,从而实现对目标视频中目标对象的动态跟踪,进而实现针对目标对象的动态防挡展示效果。
在一些实施例中,为减轻客户端识别目标对象时的运算压力,客户端向服务端请求获取对应于用户选取的目标对象模板的目标对象,即由服务端根据目标对象模板确定目标对象。例如,客户端先确定用户从备选对象模板中选取的目标对象模板,然后将目标对象模板和原始画面对应的画面图像提供至服务端,最后接收服务端返回的目标区域的区域坐标。也即是,客户端响应于对任一备选对象模板的选取操作,将选取的备选对象模板确定为目标对象模板,向服务器发送目标对象模板和原始画面对应的画面图像,接收服务端返回的区域坐标,基于该区域坐标确定目标区域。其中,目标区域对应于画面图像中的目标对象,该目标对象匹配于目标对象模板。
类似的,用户通过触发防挡设置开关控制客户端展示备选对象模板,然后在客户端展示的备选对象模板中选取对应于目标对象的目标对象模板。其中,原始画面对应的画面图像为当前视频帧图像;或者画面图像为当前视频帧图像的图像快照;或者为减轻与服务端之间的数据传输压力,画面图像为当前视频帧图像的视频帧标识(如帧图像序号等),相应的,服务端根据该视频帧标识在本地保存的目标视频中确定相应的视频帧图像,本公开对此并不进行限制。另外,目标区域的区域坐标为目标区域轮廓线中各个像素点的像素点坐标。通过上述方式,将运算量较大的对象识别与匹配任务交由服务端完成,从而不仅减小了客户端的运算压力,而且能够充分发挥服务端的运算优势,有助于减少客户端的播放卡顿。
步骤304,将目标区域中的弹幕图层调整至画面图层下方。
原始画面根据画面图层和位于画面图层上方的弹幕图层渲染得到,因此,在确定目标区域后,客户端将位于目标区域中的弹幕图层调整至画面图层下方,而目标区域之外的其他区域则不进行上述调整。
在一些实施例中,弹幕图层和画面图层具有对应的层级,层级高的图层位于层级低的图层的上方。客户端确定目标区域之后,将目标区域中的弹幕图层的层级调整至低于画面图层的层级,基于画面图层的层级和目标区域中调整后的弹幕图层的层级,将目标区域中的弹幕图层调整至画面图层下方。
步骤306,根据调整后的画面图层绘制并展示对应于目标区域的目标画面。也即是基于目标区域中调整后的画面图层,渲染对应于目标区域的目标画面,展示该目标画面。
在本公开的实施例中,客户端根据目标区域中调整后的画面图层(即位于最上方的图 层),渲染对应于目标区域的目标画面,渲染过程参见相关技术中渲染画面元素以及页面渲染的相关内容,此处不再赘述。
在一些实施例中,客户端确定原始画面中区别于目标区域的其他区域,然后将其他区域的原始画面与目标画面拼接为视频画面,并展示该视频画面。其中,其他区域可称为第一区域。此时,经过上述过程展示出的视频画面中,目标画面仅展示画面内容,而不展示相应的弹幕;而其他区域对应的原始画面中同时展示画面内容和相应的弹幕,弹幕仍然会对画面内容产生遮挡,从而实现对于目标区域中画面内容的针对性防挡展示效果。
在一些实施例中,客户端是基于HTML5(Hyper Text Markup Language,第五代超文本标记语言)技术搭建的,相应的,原始画面和目标画面能够在HTML5页面中展示,此时,客户端基于目标区域中调整后的画面图层,使用HTML5页面的原生画布功能(如canvas画布等)渲染目标画面,在该HTML5页面中展示该目标画面。从而充分发挥HTML5页面的原生画布技与HTML5页面的兼容性及适配程度较高的技术优势,渲染过程参见相关技术中的记载,此处不再赘述。
根据本公开的实施例,由客户端响应于用户针对原始画面实施的区域指定操作,进而确定需要防挡的目标对象和相应的防挡区域,从而改进所展示视频画面及其相应弹幕的展示效果。
相应的,本公开还提出了服务端侧的视频画面的展示方法。图4是根据本公开的实施例示出的另一种视频画面的展示方法的流程图。如图4所示,该方法应用于服务端,包括以下步骤:
步骤402,接收客户端提供的目标对象模板和原始画面对应的画面图像,目标对象模板是从备选对象模板中选取的。
步骤404,确定画面图像中匹配于目标对象模板的目标对象。
步骤406,向客户端返回目标区域的区域坐标,该目标区域为目标对象在原始画面中所对应的区域,客户端用于将目标区域中的弹幕图层调整至画面图层下方,基于目标区域中调整后的画面图层渲染并展示对应于目标区域的目标画面。
在一些实施例中,服务端先识别画面图像中的所有画面对象,然后依次确定各个画面对象与目标对象模板之间的匹配度,再根据各个画面对象对应的匹配度,确定目标对象。即服务端将匹配度最高的画面对象,确定为目标对象;或者将匹配度高于匹配度阈值的画面对象,确定为目标对象。其中,画面对象是指画面图像中的对象,原始画面对应的画面图像为当前视频帧图像;或者画面图像为当前视频帧图像的图像快照;或者画面图像为当前视频帧图像的视频帧标识(如帧图像序号等),相应的,服务端根据该视频帧标识在本地保存的目标视频中,确定相应的视频帧图像,本公开对此并不进行限制。通过上述识别及匹配过程,服务端能够确定出视频画面中的至少一个目标对象,并将各个目标对象所对应目标区域的区域坐标返回至客户端,从而充分发挥自身的运算优势,以减轻客户端的运算压力。
另外,在识别出的所有画面对象对应的匹配度均不高于匹配度阈值的情况下,服务端向客户端返回匹配失败消息,该匹配失败消息指示画面图像中不存在目标对象。相应的,客户端在接收到匹配失败消息后不进行图层调整,而直接展示原始画面即可。此时所展示的视频画面并不存在防挡区域,即此时并未产生防挡展示效果。
因为目标视频由多张视频帧图像组成,所以展示目标视频的过程即为依次展示各张视频帧图像的过程。实际上,上述图3和图4所示实施例的处理过程,为客户端或服务端针对当前时刻正在展示的视频帧图像的处理过程,当然,为实现接近实时的防挡展示效果,客户端或服务端识别目标对象时,识别视频画面中目标时刻对应的视频帧图像,该目标时刻为位于当前时刻之后、且与当前时刻间隔第一时长的时刻,第一时长为任一时长,从而到 达目标时刻后即可实时展示具有防挡展示效果的目标画面。
下面结合图5-图10所示实施例,对上述视频画面的展示方法进行详细说明。图5是根据本公开的实施例示出的又一种视频画面的展示方法的流程图。如图5所示,该方法应用于客户端,该方法对应的视频画面的渲染及展示过程包括以下步骤:
步骤502,检测到用户打开防挡设置开关。
在一些实施例中,客户端在目标视频的视频展示界面中展示防挡设置开关,因此用户能够在视频展示界面中触发该防挡设置开关,以控制开始在原始画面中指定防挡区域。
如图6所示,对于目标视频(如视频V)对应的视频展示界面611,在其对应的弹幕开关614处于打开状态的情况下,视频展示界面611中展示有画面内容612和弹幕613,视频展示界面611下方还展示有防挡设置开关615。当然,图6所示的“防挡设置”仅是示例性的,该开关在实际展示时能够展示为其他名称,本公开对此并不进行限制。
步骤504,在原始画面上方展示空白蒙版。
在一些实施例中,在目标视频处于播放状态或暂定状态的情况下,客户端对应的用户(目标视频的观众)通过触发防挡设置开关615打开该防挡设置开关。如图6所示,在防挡设置开关623处于打开状态的情况下,检测到防挡设置开关被触发时,客户端在视频展示界面611上方展示空白蒙版621,该空白蒙版621为半透明形式,透明度由用户在客户端中设置或者采用客户端或服务端的系统设置。
步骤506,检测用户实施的区域绘制操作。
透过半透明形式的空白蒙版621,用户观察到空白蒙版621下方展示的原始画面的画面内容,进而通过鼠标或触摸控制在该空白蒙版621中的响应位置绘制目标蒙版区域622。
如图6所示,用户在原始画面中某人像上方的空白蒙版621中绘制矩形的目标蒙版区域622。实际上,用户能够随意调整目标蒙版区域622的位置、大小、角度等参数,例如,目标蒙版区域622的形状为矩形、圆形、椭圆形、梯形或者不规则图形等,目标蒙版区域的大小由用户通过鼠标拖拉区域边界进行任意调整,本公开对于目标蒙版区域的样式及绘制方式并不进行限制。
在一些实施例中,在用户绘制目标蒙版区域622的过程中,客户端展示实时的防挡预览效果,以便用户根据该预览效果适当调整目标蒙版区域622的起点、大小、角度等位置参数,从而实现更佳的防挡展示效果。用户在空白蒙版621中绘制若干个目标蒙版区域622,当绘制完毕后,用户触发(如单击)确认控件624,客户端响应于对确定控件624的触发操作,确定目标蒙版区域622绘制完毕。
步骤508,确定用户绘制的目标蒙版区域对应的目标区域。
客户端在检测到用户针对确认控件实施的触发操作之后,确定用户所绘制的目标蒙版区域在原始画面中对应的目标区域,确定目标区域的过程,即为确定目标区域在原始画面中的区域坐标的过程。例如,在用户直接在原始画面中绘制目标区域的情况下,目标区域的实际位置即为其在原始画面中的位置。
又例如,在空白蒙版与原始画面尺寸相同且覆盖在原始画面上方(如图6所示)的情况下,因为空白蒙版与原始画面完全对应,所以可将目标蒙版区域在空白蒙版中的区域坐标作为目标区域在原始画面中的区域坐标;而在空白蒙版与原始画面并不完全对应的情况下,先确定目标蒙版区域在空白蒙版中的区域坐标,然后根据空白蒙版相对于原始画面的坐标偏移量和/或缩放量相应计算目标区域在原始画面中的区域坐标,具体过程不再赘述。
步骤510,将目标区域中的弹幕图层调整至画面图层下方。
因为原始画面根据画面图层和位于画面图层上方的弹幕图层渲染得到,所以要实现对画面内容的防挡展示效果,在确定目标区域后,客户端将位于目标区域中的弹幕图层调整至画面图层下方,而目标区域之外的其他区域则不进行上述调整。
步骤512,渲染对应于目标区域的目标画面。
在本公开的实施例中,客户端根据目标区域中调整后的画面图层(即位于最上方的图层),渲染对应于目标区域的目标画面,渲染目标画面的过程参见相关技术中记载的渲染画面元素以及页面渲染的相关内容,此处不再赘述。另外,客户端确定原始画面中区别于目标区域的第一区域,然后将第一区域的原始画面与目标画面拼接为视频画面。实际上,上述渲染目标区域所对应目标画面的过程,与渲染第一区域的原始画面的过程是同时进行的,即二者并非独立进行,而是构成一个完整的视频画面渲染过程。
在一些实施例中,客户端渲染目标区域中调整后的画面图层和弹幕图层,得到目标区域的目标画面;或者,由于目标区域中的弹幕图层位于画面图层的下方,即使渲染目标区域中的弹幕图层,渲染得到的弹幕在目标区域中也会被遮挡,因此,客户端能够仅渲染目标区域中的画面图层,得到目标区域的目标画面。
在一些实施例中,客户端是基于HTML5技术搭建的,相应的,原始画面和目标画面在HTML5页面中展示,此时,使用HTML5技术提供的原生的canvas画布渲染上述目标画面,渲染过程参加相关技术中的记载,此处不再赘述。
步骤514,展示目标画面。
渲染完成后,在目标视频对应的视频展示界面中展示渲染得到的目标画面。可以理解的是,和渲染目标画面的过程类似,展示目标视频的过程与展示第一区域对应的视频画面也是同时进行的,即客户端对二者构成的视频画面进行展示,而并非单独展示目标画面与第一区域对应的原始画面。
如图6所示,视频展示界面632中展示有目标区域对应的目标画面631,该区域中仅展示画面内容(如画面中的人像)而不展示相应的弹幕,而在区别于目标区域的第一区域中同时展示画面内容和弹幕633。在视频画面播放过程中,移动播放的弹幕进入目标区域后自动被隐藏,类似于被遮挡在目标画面下方而无法观察到,从而实现对目标区域中画面内容的防挡展示效果。
经过上述过程展示出的视频画面中,目标画面仅展示画面内容,而不展示相应的弹幕;而第一区域对应的视频画面中同时展示画面内容和相应的弹幕,弹幕仍然会对画面内容产生遮挡,从而实现对于目标区域中画面内容的针对性防挡展示效果。
图7是根据本公开的实施例示出的再一种视频画面的展示方法的示意图,如图7所示,该方法应用于客户端,该方法对应的视频画面的渲染及展示过程包括以下步骤:
步骤702,检测到用户打开防挡设置开关。
在一些实施例中,客户端在目标视频的视频展示界面中展示防挡设置开关,因此用户在视频展示界面中触发该防挡设置开关,以开始在原始画面中指定防挡区域。
在检测到防挡设置开关被触发开启的情况下,客户端展示相应的设置方式选择控件,以由用户选择进行防挡设置的方式。例如,用户选择通过区域设置的方式设置防挡区域,然后执行步骤7041;或者用户选择通过对象设置的方式设置防挡区域,然后执行步骤7042。
步骤7041,展示备选轮廓模板。
在检测到用户选择通过区域设置的方式设置防挡区域的情况下,客户端向用户展示备选轮廓模板,供用户选择对应于目标对象的目标轮廓模板,该目标对象是指用户想要实现防挡展示效果的画面对象。
如图8所示,在防挡设定开关803处于开启状态的情况下,客户端在目标视频(视频V)对应的视频展示界面上方展示轮廓模板选取界面802,该轮廓模板选取界面802中包含至少一个备选轮廓模板。展示出的备选轮廓模板对应于多种对象,例如对应于女士正脸轮廓的备选轮廓模板A、对应于男士侧脸轮廓的备选轮廓模板B、对应于餐具(盘子)轮 廓的备选轮廓模板C、对应于打开的书籍轮廓的备选轮廓模板D等。上述备选轮廓模板集成在客户端的安装程序中,从而在客户端安装完毕后,即可展示各个备选轮廓模板;或者,为保证模板的及时更新,在展示目标视频之前或者展示目标视频的过程中从服务端获取各个备选轮廓模板;上述备选轮廓模板由服务端根据视频库中的海量视频通过模型算法提取得到。在一些实施例中,对备选轮廓模板进行分类,并根据目标视频所属类别提供相应的备选轮廓模板,不再赘述。
步骤7061,确定用户选取的目标轮廓模板。
步骤7081,检测用户的放置操作。
用户在展示出的多个备选轮廓模板中选取目标轮廓模板,例如通过鼠标点击、触控操作等方式进行选取。如图8所示,在用户想要实现防挡展示效果的目标对象为原始画面中所示的女士正脸801的情况下,用户选择备选轮廓模板中与该女士正脸对应的人脸轮廓形状相似的备选轮廓模板A,即将备选轮廓模板A作为目标轮廓模板,并直接拖动该目标轮廓模板A至原始画面的相应位置,然后通过拖拉轮廓模型边界的方式将该目标轮廓模板A的大小和位置调整至覆盖女士正脸801,以达到较佳的防挡展示效果。其中,在用户拖动备选轮廓模板A之后,轮廓模板选取界面802可直接隐藏。当然,用户也能够选择其他备选轮廓模板作为目标轮廓模板,调整方式以用户实际操作为准,本公开对此并不进行限制。
步骤7101,确定放置后的目标轮廓模板在原始画面中的放置区域。
进一步的,客户端通过检测放置操作,确定放置后的目标轮廓模板,进而确定该目标轮廓模板在原始画面中的放置区域。该放置区域通过目标轮廓模板的区域边界在原始画面中对应的各个像素点的坐标值来表示,当然,在目标轮廓模板的轮廓边界为标准形状的情况下,也能够通过该目标轮廓模板的中心点坐标和模板尺寸等特征来表示,本公开对此并不进行限制。
步骤7042,展示备选对象模板。
在检测到用户选择通过对象设置的方式设置防挡区域的情况下,客户端向用户展示备选对象模板,以供用户选择对应于目标对象的目标对象模板,该目标对象是指用户想要实现防挡展示效果的画面对象。
如图9所示,在防挡设定开关903处于开启状态的情况下,客户端在目标视频(视频V)对应的视频展示界面上方展示对象模板选取界面902,该对象模板选取界面902中包含至少一个备选对象模板,展示出的备选对象模板对应于多种对象,例如对应于人物的备选对象模板A、对应于食物的备选对象模板B、对应于动物的备选对象模板C、对应于打开的书籍的备选对象模板D等。上述备选对象模板集成在客户端的安装程序中,从而在客户端安装完毕后即可展示各个备选对象模板;或者,为保证模板的及时更新,在展示目标视频之前或者展示目标视频的过程中从服务端获取备选对象模板;上述备选对象模板由服务端根据视频库中的海量视频中提取得到。在一些实施例中,对备选对象模板进行分类,并根据目标视频所属类别提供相应的备选对象模板,不再赘述。
步骤7062,确定用户选取的目标对象模板。
用户在展示出的若干个备选对象模板中选取目标对象模板,例如通过鼠标点击、触控操作等方式进行选取。如图9所示,用户想要实现防挡展示效果的目标对象为原始画面中所示的女士正脸901的情况下,用户选取备选对象模板中对应于人物的备选对象模板A。用户选取至少一个备选对象模板作为目标对象模板,并在选取完成后触发相应的确定控件904,从而将选取的备选对象模板确定为目标对象模板。
步骤7082,检测画面图像中的画面对象。
客户端先确定当前时刻的画面图像,例如,将当前时刻的视频帧图像作为画面图像; 或者对当前时刻的视频帧图像实施画面快照,并将得到的快照图像作为画面图像;或者,为实现接近实时的防挡展示效果,将目标时刻对应的视频帧图像或快照图像确定为画面图像。其中,目标时刻为位于当前时刻之后、且与当前时刻间隔第一时长的时刻,第一时长为任一时长。
进一步的,通过对象识别算法对画面图像进行处理,从而识别出画面图像中的画面对象。例如,采用相关技术中的聚类算法、深度学习算法实现,当然也能够采用自定义的图像识别模型识别上述画面对象,不再赘述。
步骤7102,确定画面对象中匹配于目标对象模板的目标对象。
在识别出画面图像中的画面对象后,客户端依次计算各个画面对象与目标对象模板之间的匹配度,例如通过颜色、轮廓、运动轨迹等多种特征参数计算上述匹配度。可以理解的是,任一画面对象的特征参数与目标对象模板的特征参数越接近,则该画面对象与目标对象模板之间的匹配度越高,即画面对象与目标对象模板之间的匹配度与二者所对应特征参数之间的接近程度呈正相关。进而,将匹配度最高的画面对象或者匹配度高于匹配度阈值的画面对象确定为目标对象;当然,在所有画面对象的匹配度均低于匹配度阈值的情况下,则表明当前画面图像中并不存在目标对象,此时客户端直接结束本次处理过程,不再针对当前视频帧图像进行后续处理,开始进行针对下一视频帧图像的处理过程。
步骤712,确定目标对象在原始图像中对应的目标区域。
确定目标区域的过程,即为确定目标区域对应的区域坐标的过程。对应于上述步骤7041-7101,客户端直接将放置区域的区域坐标确定为目标区域的区域坐标。对应于上述步骤7042-7102,客户端将目标对象在原始画面中的对象轮廓对应的轮廓坐标确定为目标区域的区域坐标。
步骤714,将目标区域中的弹幕图层调整至画面图层下方。
步骤716,渲染对应于目标区域的目标画面。
步骤718,展示目标画面。
上述步骤714-718与前述图5所示实施例中步骤510-514并不存在本质区别,因此步骤714-718的实施过程参见前述记载,此处不再赘述。
实际上,对应于上述步骤510-514的实施例,识别画面对象并确定目标对象的步骤,也能够由服务端执行,下面结合图10进行说明。图10是根据本公开的实施例示出的一种视频画面的展示方法的交互流程图,如图10所示,上述渲染目标区域对应的目标画面并进行展示的过程包括下述步骤:
步骤1002,客户端检测到用户打开防挡设置开关。
步骤1004,客户端展示备选对象模板。
步骤1006,客户端确定用户选取的目标对象模板。
步骤1008,客户端确定画面图像。
上述步骤1002-1008与前述图7所示实施例中步骤702-7062并不存在本质区别,因此步骤1002-1008的实施过程参见前述记载,此处不再赘述。
步骤1010,客户端将目标对象模板和画面图像关联提供至服务端。
步骤1012,服务端检测画面图像中的画面对象。
步骤1014,服务端确定画面对象中的目标对象。
步骤1016,服务端确定原始画面中的目标区域。
步骤1018,服务端将目标区域的区域坐标返回至客户端。
确定目标对象模板和画面图像后,客户端将目标对象模板和画面图像提供至服务端,相应的,服务端通过对象识别算法检测画面图像中的画面对象,并通过匹配度计算确定画面对象中的目标对象,最后确定目标对象在原始画面中的目标区域,并将目标区域的区域 坐标返回至客户端。上述识别、匹配的过程参见前述步骤7062-712的记载,此处不再赘述。
步骤1020,客户端将目标区域中的弹幕图层调整至画面图层下方。
步骤1022,客户端渲染对应于目标区域的目标画面。
步骤1024,客户端展示目标画面。
上述步骤1020-1024与前述图5所示实施例中步骤510-514并不存在本质区别,因此步骤1020-1024的实施过程参见前述记载,此处不再赘述。
通过上述过程,由服务端进行运算量较大的对象识别和匹配计算,不仅有助于充分发挥自身的运算优势,而且能够减轻客户端的运算压力,一定程度上避免客户端卡顿。
在一些实施例中,上述视频画面的展示方法能够应用于直播场景下。例如,用户在基于直播应用客户端观看直播时,为了避免直播间中的其他用户发送的弹幕遮挡主播,能够采用上述视频画面的展示方法,确定直播视频的直播画面中主播对应的目标区域,将该目标区域中的弹幕图层调整至画面图层下方,基于目标区域调整后的画面图层,渲染对应于目标区域的目标画面,展示目标画面,使目标画面中显示主播而不显示弹幕。
图11是根据本公开的实施例示出的一种视频画面的展示装置的示意框图。本公开的实施例所示的视频画面的展示装置适用于视频播放应用的客户端,该视频播放应用适用于终端,该终端包括但不限于手机、平板电脑、可穿戴设备、个人计算机等电子设备。该视频播放应用是安装在终端中的应用程序,或者是集成在浏览器中的网页版应用,用户通过视频播放应用播放视频,其中播放的视频是长视频,例如电影、电视剧,或者是短视频,例如视频剪辑、情景短剧等。
如图11所示,视频画面的展示装置包括:
区域确定模块1101,被配置为响应于对目标视频的原始画面的区域指定操作,确定原始画面中的目标区域,原始画面包括目标视频的画面图层和位于画面图层上方的弹幕图层;
图层调整模块1102,被配置为将目标区域中的弹幕图层调整至画面图层下方;
绘制及展示模块1103,被配置为基于目标区域中调整后的画面图层,渲染对应于目标区域的目标画面,展示目标画面。
在一些实施例中,区域确定模块1101被配置为响应于在原始画面中的区域绘制操作,确定在原始画面中绘制的目标区域。
在一些实施例中,区域确定模块1101被配置为响应于区域绘制操作,确定区域绘制操作对应的移动轨迹,将移动轨迹框选出的区域确定为目标区域。
在一些实施例中,区域确定模块1101被配置为:
响应于对任一备选轮廓模板的选取操作,将选取的备选轮廓模板确定为目标轮廓模板;
将目标轮廓模板移动至原始画面中,将原始画面中目标轮廓模板对应的区域确定为目标区域。
在一些实施例中,区域确定模块1101被配置为:
响应于对任一备选对象模板的选取操作,将选取的备选对象模板确定为目标对象模板;
响应于原始画面中具有匹配于目标对象模板的目标对象,将原始画面中目标对象对应的区域确定为目标区域。
在一些实施例中,区域确定模块1101被配置为:
响应于对任一备选对象模板的选取操作,将选取的备选对象模板确定为目标对象模板;
向服务端发送目标对象模板和原始画面对应的画面图像;
接收服务端返回的区域坐标,基于区域坐标确定目标区域,目标区域对应于画面图像中的目标对象,目标对象匹配于目标对象模板。
在一些实施例中,该装置还包括:
其他区域确定模块1104,被配置为确定原始画面中区别于目标区域的第一区域;
画面拼接模块1105,被配置为将第一区域的原始画面与目标画面拼接为视频画面,并展示视频画面。
在一些实施例中,原始画面在HTML5页面中展示,绘制及展示模块1103还被配置为:
基于目标区域中调整后的画面图层,使用HTML5页面的原生画布功能渲染目标画面。
图12是根据本公开的实施例示出的一种视频画面的展示装置的示意框图。本公开的实施例所示的视频画面的展示装置适用于视频播放应用的服务端,视频播放应用适用于服务器,服务器包括但不限于包含独立主机的物理服务器、主机集群承载的虚拟服务器、云服务器等。其中,播放的视频是长视频,例如电影、电视剧,或者是短视频,例如视频剪辑、情景短剧等。
如图12所示,视频画面的展示装置包括:
模板接收模块1201,被配置为接收客户端发送的目标对象模板和原始画面对应的画面图像,目标对象模板是从备选对象模板中选取的;
对象确定模块1202,被配置为确定画面图像中匹配于目标对象模板的目标对象;
坐标返回模块1203,被配置为向客户端返回目标区域的区域坐标,目标区域为目标对象在原始画面中所对应的区域,客户端用于将目标区域中的弹幕图层调整至画面图层下方,基于目标区域中调整后的画面图层,渲染对应于目标区域的目标画面,展示目标画面。
在一些实施例中,对象确定模块1202还被配置为:
识别画面图像中的所有画面对象;
依次确定各个画面对象与目标对象模板之间的匹配度;
根据各个画面对象对应的匹配度,确定目标对象。
在一些实施例中,对象确定模块1202还被配置为:
将匹配度最高的画面对象,确定为目标对象;或者,
将匹配度高于匹配度阈值的画面对象,确定为目标对象。
在一些实施例中,该装置还包括:
失败消息返回模块1204,被配置为在各个画面对象对应的匹配度均不高于匹配度阈值的情况下,向客户端返回匹配失败消息,匹配失败消息知识画面图像中不存在目标对象。
本公开的实施例还提出一种电子设备,包括:处理器;用于存储处理器可执行指令的存储器;其中,处理器被配置为执行指令,以实现如下步骤:
响应于对目标视频的原始画面的区域指定操作,确定原始画面中的目标区域,原始画面包括目标视频的画面图层和位于画面图层上方的弹幕图层;
将目标区域中的弹幕图层调整至画面图层下方;
基于目标区域中调整后的画面图层,渲染对应于目标区域的目标画面,展示目标画面。
在一些实施例中,处理器被配置为执行指令,以实现上述方法实施例中的其他实施例提供的视频画面的展示方法。
本公开的实施例还提出一种电子设备,包括:处理器;用于存储处理器可执行指令的存储器;其中,处理器被配置为执行指令,以实现如下步骤:接收客户端发送的目标对象模板和原始画面对应的画面图像,目标对象模板是从备选对象模板中选取的;确定画面图 像中匹配于目标对象模板的目标对象;向客户端返回目标区域的区域坐标,目标区域为目标对象在原始画面中所对应的区域,客户端用于将目标区域中的弹幕图层调整至画面图层下方,基于目标区域中调整后的画面图层,渲染对应于目标区域的目标画面,展示目标画面。
在一些实施例中,处理器被配置为执行指令,以实现上述方法实施例中的其他实施例提供的视频画面的展示方法。
本公开的实施例还提出一种计算机可读存储介质,当存储介质中的指令由电子设备的处理器执行时,使得电子设备能够执行如下步骤:响应于对目标视频的原始画面的区域指定操作,确定原始画面中的目标区域,原始画面包括目标视频的画面图层和位于画面图层上方的弹幕图层;将目标区域中的弹幕图层调整至画面图层下方;基于目标区域中调整后的画面图层,渲染对应于目标区域的目标画面,展示目标画面。
在一些实施例中,当存储介质中的指令由电子设备的处理器执行时,使得电子设备能够执行上述方法实施例中的其他实施例提供的视频画面的展示方法。
本公开的实施例还提出一种计算机可读存储介质,当存储介质中的指令由电子设备的处理器执行时,使得电子设备能够执行如下步骤:接收客户端发送的目标对象模板和原始画面对应的画面图像,目标对象模板是从备选对象模板中选取的;确定画面图像中匹配于目标对象模板的目标对象;向客户端返回目标区域的区域坐标,目标区域为目标对象在原始画面中所对应的区域,客户端用于将目标区域中的弹幕图层调整至画面图层下方,基于目标区域中调整后的画面图层,渲染对应于目标区域的目标画面,展示目标画面。
在一些实施例中,当存储介质中的指令由电子设备的处理器执行时,使得电子设备能够执行上述方法实施例中的其他实施例提供的视频画面的展示方法。
本公开的实施例还提出一种计算机程序产品,计算机程序产品被配置为执行如下步骤:响应于对目标视频的原始画面的区域指定操作,确定原始画面中的目标区域,原始画面包括目标视频的画面图层和位于画面图层上方的弹幕图层;将目标区域中的弹幕图层调整至画面图层下方;基于目标区域中调整后的画面图层,渲染对应于目标区域的目标画面,展示目标画面。
在一些实施例中,计算机程序产品还被配置为执行上述方法实施例中的其他实施例提供的视频画面的展示方法。
本公开的实施例还提出一种计算机程序产品,计算机程序产品被配置为执行如下步骤:接收客户端发送的目标对象模板和原始画面对应的画面图像,目标对象模板是从备选对象模板中选取的;确定画面图像中匹配于目标对象模板的目标对象;向客户端返回目标区域的区域坐标,目标区域为目标对象在原始画面中所对应的区域,客户端用于将目标区域中的弹幕图层调整至画面图层下方,基于目标区域中调整后的画面图层,渲染对应于目标区域的目标画面,展示目标画面。
在一些实施例中,计算机程序产品还被配置为执行上述方法实施例中的其他实施例提供的视频画面的展示方法。
图13是根据本公开的实施例示出的一种电子设备的示意框图。例如,电子设备1300可以是移动电话,计算机,数字广播终端,消息收发设备,游戏控制台,平板设备,医疗设备,健身设备,个人数字助理等。
参照图13,电子设备1300可以包括以下一个或多个组件:处理组件1302,存储器1304,电源组件1306,多媒体组件1308,音频组件1310,输入/输出(I/O)的接口1312,传感器组件1314,以及通信组件1318。
处理组件1302通常控制电子设备1300的整体操作,诸如与显示,电话呼叫,数据通信,相机操作和记录操作相关联的操作。处理组件1302可以包括一个或多个处理器1320 来执行指令,以完成上述视频画面的展示方法的全部或部分步骤。此外,处理组件1302可以包括一个或多个模块,便于处理组件1302和其他组件之间的交互。例如,处理组件1302可以包括多媒体模块,以方便多媒体组件1308和处理组件1302之间的交互。
存储器1304被配置为存储各种类型的数据以支持在电子设备1300的操作。这些数据的示例包括用于在电子设备1300上操作的任何应用程序或方法的指令,联系人数据,电话簿数据,消息,图片,视频等。存储器1304可以由任何类型的易失性或非易失性存储设备或者它们的组合实现,如静态随机存取存储器(SRAM),电可擦除可编程只读存储器(EEPROM),可擦除可编程只读存储器(EPROM),可编程只读存储器(PROM),只读存储器(ROM),磁存储器,快闪存储器,磁盘或光盘。
电源组件1306为电子设备1300的各种组件提供电力。电源组件1306可以包括电源管理系统,一个或多个电源,及其他与为电子设备1300生成、管理和分配电力相关联的组件。
多媒体组件1308包括在电子设备1300和用户之间的提供一个输出接口的屏幕。在一些实施例中,屏幕可以包括液晶显示器(LCD)和触摸面板(TP)。如果屏幕包括触摸面板,屏幕可以被实现为触摸屏,以接收来自用户的输入信号。触摸面板包括一个或多个触摸传感器以感测触摸、滑动和触摸面板上的手势。所述触摸传感器可以不仅感测触摸或滑动动作的边界,而且还检测与所述触摸或滑动操作相关的持续时间和压力。在一些实施例中,多媒体组件1308包括一个前置摄像头和/或后置摄像头。当电子设备1300处于操作模式,如拍摄模式或视频模式时,前置摄像头和/或后置摄像头可以接收外部的多媒体数据。每个前置摄像头和后置摄像头可以是一个固定的光学透镜系统或具有焦距和光学变焦能力。
音频组件1310被配置为输出和/或输入音频信号。例如,音频组件1310包括一个麦克风(MIC),当电子设备1300处于操作模式,如呼叫模式、记录模式和语音识别模式时,麦克风被配置为接收外部音频信号。所接收的音频信号可以被进一步存储在存储器1304或经由通信组件1318发送。在一些实施例中,音频组件1310还包括一个扬声器,用于输出音频信号。
I/O接口1312为处理组件1302和外围接口模块之间提供接口,上述外围接口模块可以是键盘,点击轮,按钮等。这些按钮可包括但不限于:主页按钮、音量按钮、启动按钮和锁定按钮。
传感器组件1314包括一个或多个传感器,用于为电子设备1300提供各个方面的状态评估。例如,传感器组件1314可以检测到电子设备1300的打开/关闭状态,组件的相对定位,例如所述组件为电子设备1300的显示器和小键盘,传感器组件1314还可以检测电子设备1300或电子设备1300一个组件的位置改变,用户与电子设备1300接触的存在或不存在,电子设备1300方位或加速/减速和电子设备1300的温度变化。传感器组件1314可以包括接近传感器,被配置用来在没有任何的物理接触时检测附近物体的存在。传感器组件1314还可以包括光传感器,如CMOS或CCD图像传感器,用于在成像应用中使用。在一些实施例中,该传感器组件1314还可以包括加速度传感器,陀螺仪传感器,磁传感器,压力传感器或温度传感器。
通信组件1318被配置为便于电子设备1300和其他设备之间有线或无线方式的通信。电子设备1300可以接入基于通信标准的无线网络,如WiFi,运营商网络(如2G、3G、4G或5G),或它们的组合。在一个示例性实施例中,通信组件1318经由广播信道接收来自外部广播管理系统的广播信号或广播相关信息。在一个示例性实施例中,所述通信组件1318还包括近场通信(NFC)模块,以促进短程通信。例如,在NFC模块可基于射频识别(RFID)技术,红外数据协会(IrDA)技术,超宽带(UWB)技术,蓝牙(BT)技术和其他技术来实现。
在本公开一实施例中,电子设备1300可以被一个或多个应用专用集成电路(ASIC)、数字信号处理器(DSP)、数字信号处理设备(DSPD)、可编程逻辑器件(PLD)、现场可编程门阵列(FPGA)、控制器、微控制器、微处理器或其他电子元件实现,用于执行上述视频画面的展示方法。
在本公开一实施例中,还提供了一种包括指令的非临时性计算机可读存储介质,例如包括指令的存储器1304,上述指令可由电子设备1300的处理器1320执行以完成上述视频画面的展示方法。例如,所述非临时性计算机可读存储介质可以是ROM、随机存取存储器(RAM)、CD-ROM、磁带、软盘和光数据存储设备等。
本公开实施例还提供了一种视频画面的展示方法,包括:响应于用户针对目标视频的原始画面实施的区域指定操作,确定原始画面中的目标区域,目标区域中的视频画面被根据画面图层和位于画面图层上方的弹幕图层渲染得到;将目标区域中的弹幕图层调整至位于画面图层下方;根据调整后的画面图层渲染并展示对应于目标区域的目标画面。
在一些实施例中,响应于用户针对目标视频的原始画面实施的区域指定操作,确定原始画面中的目标区域,包括下述至少之一:检测用户在原始画面中渲染的目标区域;确定用户从备选轮廓模板中选取的目标轮廓模板,并在检测到用户将目标轮廓模板放置在原始画面的展示区域中之后,将目标轮廓模板在展示区域中的对应区域确定为目标区域;确定用户从备选对象模板中选取的目标对象模板,并在检测到原始画面中匹配于目标对象模板的目标对象的情况下,将目标对象在原始画面中的对应区域确定为目标区域;确定用户从备选对象模板中选取的目标对象模板,将目标对象模板和原始画面对应的画面图像提供至服务端,并接收服务端返回的目标区域的区域坐标,目标区域对应于画面图像中匹配于目标对象模板的目标对象。
本公开所有实施例均可以单独被执行,也可以与其他实施例相结合被执行,均视为本公开要求的保护范围。

Claims (28)

  1. 一种视频画面的展示方法,包括:
    响应于对目标视频的原始画面的区域指定操作,确定所述原始画面中的目标区域,所述原始画面包括所述目标视频的画面图层和位于所述画面图层上方的弹幕图层;
    将所述目标区域中的所述弹幕图层调整至所述画面图层下方;
    基于所述目标区域中调整后的所述画面图层,渲染对应于所述目标区域的目标画面,展示所述目标画面。
  2. 根据权利要求1所述的方法,其中,所述响应于对目标视频的原始画面的区域指定操作,确定所述原始画面中的目标区域,包括:
    响应于在所述原始画面中的区域绘制操作,确定在所述原始画面中绘制的所述目标区域。
  3. 根据权利要求2所述的方法,其特征在于,所述响应于在所述原始画面中的区域绘制操作,确定在所述原始画面中绘制的所述目标区域,包括:
    响应于所述区域绘制操作,确定所述区域绘制操作对应的移动轨迹,将所述移动轨迹框选出的区域确定为所述目标区域。
  4. 根据权利要求1所述的方法,其中,所述响应于对目标视频的原始画面的区域指定操作,确定所述原始画面中的目标区域,包括:
    响应于对任一备选轮廓模板的选取操作,将选取的备选轮廓模板确定为目标轮廓模板;
    将所述目标轮廓模板移动至所述原始画面中,将所述原始画面中所述目标轮廓模板对应的区域确定为所述目标区域。
  5. 根据权利要求1所述的方法,其中,所述响应于对目标视频的原始画面的区域指定操作,确定所述原始画面中的目标区域,包括:
    响应于对任一备选对象模板的选取操作,将选取的备选对象模板确定为目标对象模板;
    响应于所述原始画面中具有匹配于所述目标对象模板的目标对象,将所述原始画面中所述目标对象对应的区域确定为所述目标区域。
  6. 根据权利要求1所述的方法,其中,所述响应于对目标视频的原始画面的区域指定操作,确定所述原始画面中的目标区域,包括:
    响应于对任一备选对象模板的选取操作,将选取的备选对象模板确定为目标对象模板;
    向服务端发送所述目标对象模板和所述原始画面对应的画面图像;
    接收所述服务端返回的区域坐标,基于所述区域坐标确定所述目标区域,所述目标区域对应于所述画面图像中的目标对象,所述目标对象匹配于所述目标对象模板。
  7. 根据权利要求1-6中任一项所述的方法,其中,还包括:
    确定所述原始画面中区别于所述目标区域的第一区域;
    将所述第一区域的原始画面与所述目标画面拼接为视频画面,展示所述视频画面。
  8. 根据权利要求1-6中任一项所述的方法,其中,所述原始画面在HTML5页面中展示,所述基于所述目标区域中调整后的所述画面图层,渲染对应于所述目标区域的目标画面,包括:
    基于所述目标区域中调整后的所述画面图层,使用所述HTML5页面的原生画布功能渲染所述目标画面。
  9. 一种视频画面的展示方法,包括:
    接收客户端发送的目标对象模板和原始画面对应的画面图像,所述目标对象模板是从备选对象模板中选取的;
    确定所述画面图像中匹配于所述目标对象模板的目标对象;
    向所述客户端返回目标区域的区域坐标,所述目标区域为所述目标对象在所述原始画面中所对应的区域,所述客户端用于将所述目标区域中的弹幕图层调整至所述画面图层下方,基于所述目标区域中调整后的所述画面图层,渲染对应于所述目标区域的目标画面,展示所述目标画面。
  10. 根据权利要求9所述的方法,其中,所述确定所述画面图像中匹配于所述目标对象模板的目标对象,包括:
    识别所述画面图像中的所有画面对象;
    依次确定各个所述画面对象与所述目标对象模板之间的匹配度;
    根据各个所述画面对象对应的匹配度,确定所述目标对象。
  11. 根据权利要求10所述的方法,其特征在于,所述根据各个所述画面对象对应的匹配度,确定所述目标对象,包括:
    将所述匹配度最高的所述画面对象,确定为所述目标对象;或者,
    将所述匹配度高于匹配度阈值的所述画面对象,确定为所述目标对象。
  12. 根据权利要求10所述的方法,其中,还包括:
    在各个所述画面对象对应的匹配度均不高于匹配度阈值的情况下,向所述客户端返回匹配失败消息,所述匹配失败消息指示所述画面图像中不存在所述目标对象。
  13. 一种视频画面的展示装置,其特征在于,包括:
    区域确定模块,被配置为响应于对目标视频的原始画面实施的区域指定操作,确定所述原始画面中的目标区域,所述原始画面包括画面图层和位于所述画面图层上方的弹幕图层;
    图层调整模块,被配置为将所述目标区域中的所述弹幕图层调整至所述画面图层下方;
    绘制及展示模块,被配置为基于所述目标区域中调整后的所述画面图层,渲染对应于所述目标区域的目标画面,展示所述目标画面。
  14. 一种视频画面的展示装置,其特征在于,包括:
    模板接收模块,被配置为接收客户端发送的目标对象模板和原始画面对应的画面图像,所述目标对象模板是从备选对象模板中选取的;
    对象确定模块,被配置为确定所述画面图像中匹配于所述目标对象模板的目标对象;
    坐标返回模块,被配置为向所述客户端返回目标区域的区域坐标,所述目标区域为所述目标对象在所述原始画面中所对应的区域,所述客户端用于将所述目标区域中的弹幕图层调整至所述画面图层下方,基于所述目标区域中调整后的所述画面图层,渲染对应于所述目标区域的目标画面,展示所述目标画面。
  15. 一种电子设备,包括:
    处理器;
    用于存储所述处理器可执行指令的存储器;
    其中,所述处理器被配置为执行所述指令,以实现如下步骤:
    响应于对目标视频的原始画面的区域指定操作,确定所述原始画面中的目标区域,所述原始画面包括所述目标视频的画面图层和位于所述画面图层上方的弹幕图层;
    将所述目标区域中的所述弹幕图层调整至所述画面图层下方;
    基于所述目标区域中调整后的所述画面图层,渲染对应于所述目标区域的目标画面,展示所述目标画面。
  16. 根据权利要求14所述的电子设备,其中,所述处理器被配置为执行所述指令,以实现如下步骤:
    响应于在所述原始画面中的区域绘制操作,确定在所述原始画面中绘制的所述目标区域。
  17. 根据权利要求16所述的电子设备,其中,所述处理器被配置为执行所述指令,以实现如下步骤:
    响应于所述区域绘制操作,确定所述区域绘制操作对应的移动轨迹,将所述移动轨迹框选出的区域确定为所述目标区域。
  18. 根据权利要求15所述的电子设备,其中,所述处理器被配置为执行所述指令,以实现如下步骤:
    响应于对任一备选轮廓模板的选取操作,将选取的备选轮廓模板确定为目标轮廓模板;
    将所述目标轮廓模板移动至所述原始画面中,将所述原始画面中所述目标轮廓模板对应的区域确定为所述目标区域。
  19. 根据权利要求15所述的电子设备,其中,所述处理器被配置为执行所述指令,以实现如下步骤:
    响应于对任一备选对象模板的选取操作,将选取的备选对象模板确定为目标对象模板;
    响应于所述原始画面中具有匹配于所述目标对象模板的目标对象,将所述原始画面中所述目标对象对应的区域确定为所述目标区域。
  20. 根据权利要求15所述的电子设备,其中,所述处理器被配置为执行所述指令,以实现如下步骤:
    响应于对任一备选对象模板的选取操作,将选取的备选对象模板确定为目标对象模板;
    向服务端发送所述目标对象模板和所述原始画面对应的画面图像;
    接收所述服务端返回的区域坐标,基于所述区域坐标确定所述目标区域,所述目标区域对应于所述画面图像中的目标对象,所述目标对象匹配于所述目标对象模板。
  21. 根据权利要求15-20中任一项所述的电子设备,其中,所述处理器被配置为执行所述指令,以实现如下步骤:
    确定所述原始画面中区别于所述目标区域的第一区域;
    将所述第一区域的原始画面与所述目标画面拼接为视频画面,展示所述视频画面。
  22. 根据权利要求15-20中任一项所述的电子设备,其中,所述原始画面在HTML5页面中展示,所述处理器被配置为执行所述指令,以实现如下步骤:
    基于所述目标区域中调整后的所述画面图层,使用所述HTML5页面的原生画布功能渲染所述目标画面。
  23. 一种电子设备,其特征在于,包括:
    处理器;
    用于存储所述处理器可执行指令的存储器;
    其中,所述处理器被配置为执行所述指令,以实现如下步骤:
    接收客户端发送的目标对象模板和原始画面对应的画面图像,所述目标对象模板是从备选对象模板中选取的;
    确定所述画面图像中匹配于所述目标对象模板的目标对象;
    向所述客户端返回目标区域的区域坐标,所述目标区域为所述目标对象在所述原始画面中所对应的区域,所述客户端用于将所述目标区域中的弹幕图层调整至所述画面图层下 方,基于所述目标区域中调整后的所述画面图层,渲染对应于所述目标区域的目标画面,展示所述目标画面。
  24. 根据权利要求23所述的电子设备,其中,所述处理器被配置为执行所述指令,以实现如下步骤:
    识别所述画面图像中的所有画面对象;
    依次确定各个所述画面对象与所述目标对象模板之间的匹配度;
    根据各个所述画面对象对应的匹配度,确定所述目标对象。
  25. 根据权利要求24所述的电子设备,其中,所述处理器被配置为执行所述指令,以实现如下步骤:
    将所述匹配度最高的所述画面对象,确定为所述目标对象;或者,
    将所述匹配度高于匹配度阈值的所述画面对象,确定为所述目标对象。
  26. 根据权利要求24所述的电子设备,其中,所述处理器被配置为执行所述指令,以实现如下步骤:
    在所有所述画面对象对应的匹配度均不高于匹配度阈值的情况下,向所述客户端返回匹配失败消息,所述匹配失败消息指示所述画面图像中不存在所述目标对象。
  27. 一种计算机可读存储介质,当所述存储介质中的指令由电子设备的处理器执行时,使得所述电子设备能够执行如下步骤:
    响应于对目标视频的原始画面的区域指定操作,确定所述原始画面中的目标区域,所述原始画面包括所述目标视频的画面图层和位于所述画面图层上方的弹幕图层;
    将所述目标区域中的所述弹幕图层调整至所述画面图层下方;
    基于所述目标区域中调整后的所述画面图层,渲染对应于所述目标区域的目标画面,展示所述目标画面。
  28. 一种计算机可读存储介质,当所述存储介质中的指令由电子设备的处理器执行时,使得所述电子设备能够执行如下步骤:
    接收客户端发送的目标对象模板和原始画面对应的画面图像,所述目标对象模板是从备选对象模板中选取的;
    确定所述画面图像中匹配于所述目标对象模板的目标对象;
    向所述客户端返回目标区域的区域坐标,所述目标区域为所述目标对象在所述原始画面中所对应的区域,所述客户端用于将所述目标区域中的弹幕图层调整至所述画面图层下方,基于所述目标区域中调整后的所述画面图层,渲染对应于所述目标区域的目标画面,展示所述目标画面。
PCT/CN2021/113055 2020-10-10 2021-08-17 视频画面的展示方法及电子设备 WO2022073389A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011080616.6A CN112312190A (zh) 2020-10-10 2020-10-10 视频画面的展示方法、装置、电子设备和存储介质
CN202011080616.6 2020-10-10

Publications (1)

Publication Number Publication Date
WO2022073389A1 true WO2022073389A1 (zh) 2022-04-14

Family

ID=74488325

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/113055 WO2022073389A1 (zh) 2020-10-10 2021-08-17 视频画面的展示方法及电子设备

Country Status (2)

Country Link
CN (1) CN112312190A (zh)
WO (1) WO2022073389A1 (zh)

Cited By (1)

Publication number Priority date Publication date Assignee Title
WO2023231661A1 (zh) * 2022-05-31 2023-12-07 北京字跳网络技术有限公司 信息交互方法、装置、电子设备和存储介质

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
CN112312190A (zh) * 2020-10-10 2021-02-02 游艺星际(北京)科技有限公司 视频画面的展示方法、装置、电子设备和存储介质
CN113766339B (zh) * 2021-09-07 2023-03-14 网易(杭州)网络有限公司 一种弹幕显示方法及装置

Citations (7)

Publication number Priority date Publication date Assignee Title
EP3096529A1 (en) * 2015-05-19 2016-11-23 Vipeline, Inc. System and methods for video comment threading
CN107135415A (zh) * 2017-04-11 2017-09-05 青岛海信电器股份有限公司 视频字幕处理方法及装置
CN109309861A (zh) * 2018-10-30 2019-02-05 广州虎牙科技有限公司 一种媒体处理方法、装置、终端设备和存储介质
CN111277910A (zh) * 2020-03-07 2020-06-12 咪咕互动娱乐有限公司 弹幕显示方法、装置、电子设备及存储介质
CN111580729A (zh) * 2020-04-22 2020-08-25 江西博微新技术有限公司 一种重叠图元选中的处理方法、系统、可读存储介质及电子设备
CN111698533A (zh) * 2020-06-12 2020-09-22 上海极链网络科技有限公司 一种视频处理方法、装置、设备和存储介质
CN112312190A (zh) * 2020-10-10 2021-02-02 游艺星际(北京)科技有限公司 视频画面的展示方法、装置、电子设备和存储介质

Family Cites Families (13)

Publication number Priority date Publication date Assignee Title
CN106462774B (zh) * 2014-02-14 2020-01-24 河谷控股Ip有限责任公司 通过规范形状的对象摄取、系统和方法
CN104484868B (zh) * 2014-10-08 2017-06-30 浙江工业大学 一种结合模板匹配和图像轮廓的运动目标航拍跟踪方法
CN107705240B (zh) * 2016-08-08 2021-05-04 阿里巴巴集团控股有限公司 虚拟试妆方法、装置和电子设备
US10284806B2 (en) * 2017-01-04 2019-05-07 International Business Machines Corporation Barrage message processing
CN107181976B (zh) * 2017-04-28 2021-01-29 华为技术有限公司 一种弹幕显示方法及电子设备
CN107147941A (zh) * 2017-05-27 2017-09-08 努比亚技术有限公司 视频播放的弹幕显示方法、装置及计算机可读存储介质
CN108989870A (zh) * 2017-06-02 2018-12-11 中国电信股份有限公司 控制弹幕区域的方法和系统
CN107330447B (zh) * 2017-06-05 2020-04-24 三峡大学 一种反馈式icm神经网络和fpf相结合的剪影识别系统
CN107809658A (zh) * 2017-10-18 2018-03-16 维沃移动通信有限公司 一种弹幕内容显示方法和终端
CN109089170A (zh) * 2018-09-11 2018-12-25 传线网络科技(上海)有限公司 弹幕显示方法及装置
CN109862380B (zh) * 2019-01-10 2022-06-03 北京达佳互联信息技术有限公司 视频数据处理方法、装置及服务器、电子设备和存储介质
CN110392293B (zh) * 2019-06-21 2023-04-07 平安普惠企业管理有限公司 基于canvas的弹幕控制方法、装置、设备及存储介质
CN110784755A (zh) * 2019-11-18 2020-02-11 上海极链网络科技有限公司 一种弹幕信息的显示方法、装置、终端和存储介质


Also Published As

Publication number Publication date
CN112312190A (zh) 2021-02-02

Similar Documents

Publication Publication Date Title
US11114130B2 (en) Method and device for processing video
WO2022073389A1 (zh) 视频画面的展示方法及电子设备
US20150179147A1 (en) Trimming content for projection onto a target
CN107977083B (zh) 基于vr系统的操作执行方法及装置
WO2016192325A1 (zh) 视频文件的标识处理方法及装置
CN111970456B (zh) 拍摄控制方法、装置、设备及存储介质
US11641493B2 (en) Method and electronic device for displaying bullet screens
US11880999B2 (en) Personalized scene image processing method, apparatus and storage medium
CN107526591B (zh) 切换直播间类型的方法和装置
CN106774849B (zh) 虚拟现实设备控制方法及装置
US20200312022A1 (en) Method and device for processing image, and storage medium
CN111770381A (zh) 视频编辑的提示方法、装置以及电子设备
WO2022198934A1 (zh) 卡点视频的生成方法及装置
CN108122195B (zh) 图片处理方法及装置
US11310443B2 (en) Video processing method, apparatus and storage medium
WO2022089284A1 (zh) 拍摄处理方法、装置、电子设备和可读存储介质
WO2020233201A1 (zh) 图标位置确定方法和装置
CN107566878B (zh) 直播中显示图片的方法及装置
CN110209445B (zh) 信息提醒方法、装置、终端及存储介质
CN112511779B (zh) 视频数据的处理方法、装置、计算机存储介质和电子设备
CN107437269B (zh) 一种处理图片的方法及装置
CN111343329B (zh) 锁屏显示控制方法、装置及存储介质
CN112882784A (zh) 一种应用界面显示方法、装置、智能设备及介质
CN108829473B (zh) 事件响应方法、装置及存储介质
CN108986803B (zh) 场景控制方法及装置、电子设备、可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21876909

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21876909

Country of ref document: EP

Kind code of ref document: A1