CN113691866A - Video processing method, video processing device, electronic equipment and medium - Google Patents

Video processing method, video processing device, electronic equipment and medium

Info

Publication number
CN113691866A
Authority
CN
China
Prior art keywords
data
video
current
current video
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110978074.2A
Other languages
Chinese (zh)
Inventor
李宇航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110978074.2A
Publication of CN113691866A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/02 Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312 Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/4318 Generation of visual interfaces by altering the content in the rendering process, e.g. blanking, blurring or masking an image region
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N 21/47 End-user applications
    • H04N 21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4782 Web browsing, e.g. WebTV
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/81 Monomedia components thereof
    • H04N 21/8166 Monomedia components thereof involving executable data, e.g. software
    • H04N 21/8173 End-user applications, e.g. Web browser, game

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • User Interface Of Digital Computer (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The present disclosure provides a video processing method, apparatus, device, medium and product, relating to the fields of computer vision, image processing and the like. The video processing method comprises the following steps: for a current video having gray data and color data, processing the gray data of the current video to obtain transparency data; constructing at least one image to be displayed associated with the current video based on the transparency data and the color data; and displaying the at least one image to be displayed.

Description

Video processing method, video processing device, electronic equipment and medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to the fields of computer vision, image processing, and the like, and more particularly, to a video processing method, apparatus, electronic device, medium, and program product.
Background
With the rapid development of the internet, users' requirements for video playback are growing. For example, when a user browses related content on a browser page, it is desirable that a video played on the page interfere as little as possible with the browsing, so that the user can browse the related content and watch the video at the same time. However, video playback techniques in the related art struggle to meet these requirements, which degrades the user's experience of browsing related content while watching videos.
Disclosure of Invention
The present disclosure provides a video processing method, apparatus, electronic device, storage medium, and program product.
According to an aspect of the present disclosure, there is provided a video processing method including: for a current video having gray data and color data, processing the gray data of the current video to obtain transparency data; constructing at least one image to be displayed associated with the current video based on the transparency data and the color data; and displaying the at least one image to be displayed.
According to another aspect of the present disclosure, there is provided a video processing apparatus including a first processing module, a construction module and a first display module. The first processing module is configured to, for a current video having gray data and color data, process the gray data of the current video to obtain transparency data; the construction module is configured to construct at least one image to be displayed associated with the current video based on the transparency data and the color data; and the first display module is configured to display the at least one image to be displayed.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor and a memory communicatively coupled to the at least one processor. Wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the video processing method described above.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the above-described video processing method.
According to another aspect of the present disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the video processing method described above.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 schematically illustrates an application scenario of a video processing method and apparatus according to an embodiment of the present disclosure;
FIG. 2 schematically shows a flow diagram of a video processing method according to an embodiment of the present disclosure;
fig. 3 schematically shows a flow chart of a video processing method according to another embodiment of the present disclosure;
fig. 4 schematically shows a schematic diagram of a video processing method according to an embodiment of the present disclosure;
fig. 5 schematically shows a block diagram of a video processing apparatus according to an embodiment of the present disclosure; and
FIG. 6 is a block diagram of an electronic device used to implement the video processing of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, such a construction is in general intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.).
The embodiments of the present disclosure provide a video processing method. The video processing method comprises the following steps: for a current video having gray data and color data, the gray data of the current video is processed to obtain transparency data. Then, at least one image to be displayed associated with the current video is constructed based on the transparency data and the color data. Next, the at least one image to be displayed is displayed.
Fig. 1 schematically illustrates an application scenario of a video processing method and apparatus according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of an application scenario in which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, but does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, an application scenario 100 of an embodiment of the present disclosure includes a browser page 110, and related content may be displayed on the browser page 110.
For example, the relevant content displayed on the browser page 110 is denoted by an "X", and the displayed relevant content includes, but is not limited to, text, pictures, icons, and the like. For example, when a user searches through a browser, the relevant content may include search results for the user's search terms.
Illustratively, a video may be played on the browser page 110. In the related art, when a video 111 is played, the video 111 typically obscures the relevant content displayed on the browser page 110.
To address the problem of the video 111 obscuring the relevant content displayed on the browser page 110, a video 112 having transparency data may be played on the browser page 110, so that the video 112 obscures the relevant content as little as possible. The background portion of the video 112 having transparency data is, for example, transparent.
However, browsers in the related art generally do not support playing the video 112 with transparency data. Through the video processing method of the embodiments of the present disclosure, displaying the video 112 with transparency data on the browser page 110 can be achieved.
The embodiments of the present disclosure provide a video processing method, which is described below according to exemplary embodiments with reference to fig. 2 to 4, in conjunction with the application scenario of fig. 1.
Fig. 2 schematically shows a flow chart of a video processing method according to an embodiment of the present disclosure.
As shown in fig. 2, the video processing method 200 of the embodiment of the present disclosure may include, for example, operations S210 to S230.
In operation S210, for a current video having gray data and color data, the gray data of the current video is processed to obtain transparency data.
In operation S220, at least one image to be displayed associated with the current video is constructed based on the transparency data and the color data.
In operation S230, at least one image to be displayed is displayed.
Illustratively, the current video includes, for example, a gray channel indicating gray data and color channels indicating color data. The color channels are, for example, RGB channels, where R represents red, G represents green and B represents blue; that is, the RGB channels comprise three channels, each channel may carry a different pixel value, and each channel's pixel value ranges, for example, from 0 to 255. The gray channel may be a single channel with pixel values ranging, for example, from 0 to 255. Alternatively, the gray channel may comprise three channels that all carry the same value, each ranging, for example, from 0 to 255.
The gray data of the current video corresponds, for example, to the background portion of each image frame in the current video, and the color data corresponds to the portion of each image frame where the main object is located. For example, if the current video shows a person, the color data corresponds to the portion where the person is located, and the gray data corresponds to the remaining portions.
Next, the gray data is processed to obtain transparency data characterizing the transparency of the background portion of each image frame in the current video. At least one image to be displayed is then constructed based on the transparency data and the color data; the at least one image to be displayed corresponds to the image frames in the current video. The at least one image to be displayed can form a video to be displayed, which is a transparent video whose background portion is transparent. In an example, the number of images to be displayed may equal the number of image frames in the current video. After the at least one image to be displayed is obtained, the images can be displayed, thereby playing the transparent video.
According to the embodiments of the present disclosure, in some scenarios a player or browser does not support playing a video having transparency data, so such a video cannot be imported into the player or browser. In the technical solution of the embodiments of the present disclosure, after the current video with gray data is imported into the player or browser, the gray data is processed to obtain transparency data, and a plurality of images to be displayed corresponding to the current video are rendered based on the transparency data and the color data, with the background portion of each image transparent. By displaying the images to be displayed, a video with transparency data is indirectly played through the player or browser, meeting users' video playback requirements.
Fig. 3 schematically shows a flow chart of a video processing method according to another embodiment of the present disclosure.
As shown in fig. 3, the video processing method of this embodiment of the present disclosure may include, for example, operations S310 to S360. Operation S340 includes, for example, operations S341 to S343.
In operation S310, an initial video including transparency data and color data is acquired.
Taking an image frame in the initial video as an example, a pixel in the image frame may be represented as RGBA (r, g, b, a), where r, g and b represent, for example, the color channels and each ranges from 0 to 255. a represents, for example, the transparency channel and ranges from 0 to 1: a = 0 represents completely transparent, a = 1 represents completely opaque, and a = 0.5 represents semi-transparent.
In operation S320, the transparency data is processed to obtain gray data.
In operation S330, a current video is obtained based on the gray data and the color data.
For example, the transparency data is weighted by a first weight to obtain the gray data. In one example, the first weight may be 255, and the transparency data is multiplied by the first weight to obtain the corresponding gray data. For example, for a pixel (r, g, b, a), the transparency data a is processed to obtain gray data a × 255, and the processed pixel may be represented as (r, g, b, a × 255). The current video is obtained based on the processed pixels.
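As a concrete illustration of this weighting step, the sketch below (with illustrative names not taken from the patent) multiplies each pixel's transparency value a in [0, 1] by a first weight of 255 to obtain gray data in [0, 255]:

```javascript
// Encoding step sketch: scale alpha (0-1) by a first weight of 255
// to produce a gray value (0-255). Names here are illustrative.
const FIRST_WEIGHT = 255;

// Encode one RGBA pixel (r, g, b in 0-255, a in 0-1) into an RGB + gray pixel.
function encodePixel({ r, g, b, a }) {
  return { r, g, b, gray: Math.round(a * FIRST_WEIGHT) };
}

// Example: a half-transparent red pixel gets gray value round(0.5 * 255) = 128.
const encoded = encodePixel({ r: 255, g: 0, b: 0, a: 0.5 });
```

Applying this to every pixel of every frame yields the current video, whose channels a standard video container can carry.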
In operation S340, for a current video having gray data and color data, gray data of the current video is processed to obtain transparency data. For example, the operation S340 includes the following operations S341 to S343.
In operation S341, the current video is played on the browser page.
In operation S342, a current image frame is determined from the played current video.
In operation S343, the gray data of the current image frame is processed to obtain transparency data for the current image frame.
For example, when the current video is played on the browser page, it may be played in a hidden manner; that is, the user cannot see the hidden current video on the browser page, which prevents the current video from blocking the related content on the page. Then, the current image frame being displayed at this moment is determined from the current video being played in hidden form.
Then, the gray data of the current image frame is weighted by a second weight to obtain the transparency data for the current image frame. For example, each pixel of the current image frame includes color data and gray data, and the gray data of each pixel is multiplied by the second weight to obtain the transparency data.
For example, the second weight and the first weight are reciprocals of each other; when the first weight is 255, the second weight is 1/255. For example, for a pixel (r, g, b, a × 255), a × 255 is the gray data, and processing the gray data with the second weight gives the transparency data a × 255 × 1/255 = a. The processed pixel may be represented as (r, g, b, a), where a represents the transparency data.
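The reciprocal weighting can be sketched the same way; this hypothetical helper (not named in the patent) recovers the transparency value from the gray data by multiplying by the second weight 1/255:

```javascript
// Decoding step sketch: the second weight is the reciprocal of the first
// (1/255), so gray data in 0-255 maps back to alpha in [0, 1].
const SECOND_WEIGHT = 1 / 255;

// Decode one RGB + gray pixel back into an RGBA pixel with alpha in [0, 1].
function decodePixel({ r, g, b, gray }) {
  return { r, g, b, a: gray * SECOND_WEIGHT };
}

// Example: gray 255 decodes to a fully opaque pixel (alpha = 1).
const decoded = decodePixel({ r: 255, g: 0, b: 0, gray: 255 });
```

Because the two weights are reciprocals, encoding followed by decoding returns the original transparency value up to rounding.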
In operation S350, at least one image to be displayed associated with the current video is constructed based on the transparency data and the color data.
For example, one image to be displayed corresponding to the current image frame is constructed based on the transparency data for the current image frame and the color data for the current image frame. In other words, while the current video is played in hidden form, the current image frame being displayed can be detected in real time, and a corresponding image to be displayed, which carries transparency data, is rendered based on the current image frame.
In operation S360, the at least one image to be displayed is displayed. For example, the at least one image to be displayed is displayed on a canvas of the browser; that is, the images to be displayed are rendered frame by frame by detecting the playing progress of the current video, and are displayed on the canvas.
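A hedged sketch of this frame-by-frame loop (operations S341-S343 and S360) is shown below. One detail the patent does not specify is how the gray data travels inside a standard video frame; the sketch assumes, purely for illustration, a common "stacked" layout in which the color image occupies the top half of each frame and the gray matte the bottom half:

```javascript
// Per-frame conversion sketch (layout assumption: color on top, gray
// matte below). workCtx is a working 2D context the hidden video is
// drawn into; outCtx is the visible canvas showing the transparent video.
function renderFrame(video, workCtx, outCtx, w, h) {
  workCtx.drawImage(video, 0, 0);                 // grab the current image frame
  const color = workCtx.getImageData(0, 0, w, h); // top half: color data
  const matte = workCtx.getImageData(0, h, w, h); // bottom half: gray data
  for (let i = 0; i < color.data.length; i += 4) {
    color.data[i + 3] = matte.data[i];            // gray value -> alpha channel
  }
  outCtx.putImageData(color, 0, 0);               // one image to be displayed
}

// Hidden playback: the <video> element is never shown; only the canvas is.
function playHidden(video, workCtx, outCtx, w, h) {
  video.style.display = 'none';
  video.play();
  (function tick() {
    renderFrame(video, workCtx, outCtx, w, h);
    requestAnimationFrame(tick);                  // follow the playing progress
  })();
}
```

The loop redraws the canvas on every animation frame, so the visible canvas tracks the hidden video's progress.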
According to the embodiments of the present disclosure, in some scenarios the browser does not support playing the initial video with transparency data, so the initial video with transparency data cannot be imported into the browser. In the technical solution of the embodiments of the present disclosure, the transparency data in the initial video is converted into gray data to obtain the current video, and the current video with gray data is then imported into the browser.
After the current video is imported into the browser, it is played in a hidden manner; the current image frame being displayed is detected in real time, and the gray data of the current image frame is processed to render an image to be displayed carrying transparency data. By displaying the images to be displayed, a video with transparency data is indirectly played through the browser, which solves the problem that browsers do not support playing videos with transparency data in some scenarios and meets users' video playback requirements.
According to the embodiments of the present disclosure, a control may further be displayed on the canvas of the browser, and the control is used to manage the playing state of the current video. For example, the playing state includes at least one of starting playback, pausing playback and playback progress, so the control can manage the playing, pausing, playback progress and so on of the current video. The displayed control may include a play button, a pause button, a progress bar and the like; for example, dragging the progress bar adjusts the playback progress of the current video.
The current video originally has a native control for managing its playing state, but when the current video is played in hidden form, the native control is hidden as well. Although the current video is hidden, its playing state can still be managed through a control rendered on the canvas. When an input operation is received through the control, the playing state of the current video is controlled based on the input operation, and the display mode of the at least one image to be displayed is changed based on the playing state of the current video. For example, the at least one image to be displayed can serve as a video to be displayed, which is a transparent video; by detecting the playing state of the current video, the display state of the video to be displayed is changed synchronously so that it stays consistent with the state of the current video. For example, when the input operation received through the control is a pause, the current video is switched to the paused state, and the display state of the video to be displayed changes from playing to paused.
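A minimal sketch of such a controller follows; `createController` and the `onStateChange` callback are hypothetical names (the canvas rendering loop would observe the reported state to keep the transparent video in sync):

```javascript
// Controller sketch: input on the canvas control drives the hidden
// <video> element, and onStateChange lets the canvas side mirror the
// hidden video's state. Names are illustrative, not from the patent.
function createController(video, onStateChange) {
  return {
    play()  { video.play();  onStateChange('playing'); }, // start playback
    pause() { video.pause(); onStateChange('paused');  }, // pause playback
    seek(t) { video.currentTime = t; onStateChange('seeking'); }, // progress bar drag
  };
}
```

Wiring the rendered play/pause buttons and progress bar to these three methods keeps the hidden video and the displayed transparent video in the same state.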
According to the embodiments of the present disclosure, the control is rendered on the canvas of the browser, the playing state of the current video is managed through the control, and the state of the transparent video is synchronized to remain consistent with the state of the current video. The transparent video is thereby played indirectly through the browser, solving the problem that browsers do not support playing videos with transparency data in some scenarios. Moreover, the state of the transparent video can be controlled through the control according to user needs, improving the user experience.
Fig. 4 schematically shows a schematic diagram of a video processing method according to an embodiment of the present disclosure.
As shown in FIG. 4, the current video 410 includes a grayscale channel 411 and a color channel 412, where the grayscale channel 411 and the color channel 412 combine to form the current video 410.
The current video 410 is imported into a browser; for example, the current video 410 is taken as the input of a WebGL interface in the browser, where WebGL is a JavaScript API that renders high-performance interactive 3D and 2D graphics in compatible web browsers. The gray channel 411 is then converted into a transparency channel, for example by parsing the gray pixel data corresponding to the gray channel 411 frame by frame and converting it into transparent pixel data. Next, the transparency channel and the color channel 412 are merged to obtain a transparent video 420.
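The patent names WebGL but gives no shader source, so the fragment shader below is an illustrative reconstruction under the assumption that the color data and the gray data are uploaded as two separate textures; the uniform and varying names are invented for the sketch:

```javascript
// Hypothetical WebGL fragment shader for the gray -> alpha conversion:
// sample the gray texture and use its value as the alpha of the color
// sample (here premultiplied, a common choice for canvas compositing).
const FRAGMENT_SHADER = `
  precision mediump float;
  uniform sampler2D u_color;   // color channel of the current video
  uniform sampler2D u_gray;    // gray channel of the current video
  varying vec2 v_texCoord;
  void main() {
    vec3 rgb = texture2D(u_color, v_texCoord).rgb;
    float a  = texture2D(u_gray, v_texCoord).r;  // gray value -> alpha (0-1)
    gl_FragColor = vec4(rgb * a, a);             // premultiplied output
  }
`;
```

Compiled into a WebGL program and run once per frame, this performs the same per-pixel weighting as operation S343, but on the GPU.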
When the current video 410 is played in hidden form in the browser, it is rendered frame by frame on the canvas of the browser, achieving the effect of displaying the transparent video 420 on the canvas, where the transparent video 420 comprises multiple frames of images to be displayed. In addition, the audio data corresponding to the current video 410 may be used as the audio data of the transparent video 420.
It can be understood that the transparency data of the initial video is converted into gray data to obtain the current video, so that the current video can be imported into the browser; the transparent video is then generated through WebGL parsing and rendering, achieving the effect of playing the transparent video on the browser page and meeting video playback requirements.
Fig. 5 schematically shows a block diagram of a video processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 5, the video processing apparatus 500 of the embodiment of the present disclosure includes, for example, a first processing module 510, a construction module 520, and a first display module 530.
The first processing module 510 may be configured to, for a current video having gray data and color data, process the gray data of the current video to obtain transparency data. According to the embodiment of the present disclosure, the first processing module 510 may perform, for example, the operation S210 described above with reference to fig. 2, which is not described herein again.
The construction module 520 may be configured to construct at least one image to be displayed associated with the current video based on the transparency data and the color data. According to the embodiment of the present disclosure, the building module 520 may perform, for example, the operation S220 described above with reference to fig. 2, which is not described herein again.
The first display module 530 may be used to display at least one image to be displayed. According to the embodiment of the disclosure, the first display module 530 may perform, for example, the operation S230 described above with reference to fig. 2, which is not described herein again.
According to an embodiment of the present disclosure, the apparatus 500 may further include: the device comprises a first acquisition module, a second processing module and a second acquisition module. The system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring an initial video, and the initial video comprises transparency data and color data; the second processing module is used for processing the transparency data to obtain gray data; and the second acquisition module is used for acquiring the current video based on the gray data and the color data.
According to an embodiment of the present disclosure, the second processing module is further configured to weight the transparency data by a first weight to obtain the gray data.
According to an embodiment of the present disclosure, the first processing module 510 includes a playing submodule, a determining submodule, and a processing submodule. The playing submodule is used for playing the current video on a browser page; the determining submodule is used for determining a current image frame from the played current video; and the processing submodule is used for processing the gray data of the current image frame to obtain transparency data for the current image frame, wherein the current video is played in a hidden format.
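One way the determining submodule could pick the current image frame is from the playback position of the hidden video element; the sketch below assumes a fixed frame rate parameter, and in a browser the playback position would typically come from the video element's `currentTime` property. Both are illustrative assumptions.

```javascript
// Derive the index of the current image frame from the playback
// position of the hidden video. `framesPerSecond` is an assumed,
// illustrative parameter; the disclosure does not specify this scheme.
function currentFrameIndex(currentTimeSeconds, framesPerSecond) {
  return Math.floor(currentTimeSeconds * framesPerSecond);
}
```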
According to an embodiment of the disclosure, the processing submodule is further configured to: weight the gray data of the current image frame with a second weight to obtain transparency data for the current image frame, wherein the second weight and the first weight are reciprocals of each other.
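The inverse weighting can be sketched the same way: because the second weight is the reciprocal of the first (e.g. 2 against an assumed first weight of 0.5, both values being illustrative), the round trip restores the original transparency values.

```javascript
// Recover transparency data from the gray data of the current image
// frame using a second weight that is the reciprocal of the first.
function grayToTransparency(gray, secondWeight) {
  return gray.map((g) => Math.round(g * secondWeight));
}
```

Gray values [100, 50] with a second weight of 2 recover transparency values [200, 100], matching the example above for a first weight of 0.5.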
According to an embodiment of the present disclosure, the construction module 520 is further configured to: construct an image to be displayed corresponding to the current image frame based on the transparency data and the color data of the current image frame.
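Constructing the image to be displayed can be sketched as interleaving the color data with the recovered transparency into one RGBA buffer, the shape used by the canvas `ImageData` type. The separate-plane input layout is an illustrative assumption.

```javascript
// Combine the RGB color data of the current image frame with the
// recovered transparency data into one RGBA buffer ready to draw
// (e.g. via new ImageData(rgba, width) on a browser canvas).
function buildDisplayImage(color, transparency) {
  const n = transparency.length;
  const rgba = new Uint8ClampedArray(n * 4);
  for (let i = 0; i < n; i++) {
    rgba[i * 4]     = color[i * 3];     // R
    rgba[i * 4 + 1] = color[i * 3 + 1]; // G
    rgba[i * 4 + 2] = color[i * 3 + 2]; // B
    rgba[i * 4 + 3] = transparency[i];  // A
  }
  return rgba;
}
```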
According to an embodiment of the present disclosure, the first display module 530 is further configured to: display the at least one image to be displayed on a canvas of the browser.
According to an embodiment of the present disclosure, the apparatus 500 may further include a second display module for displaying a control on the canvas of the browser, wherein the control is used for controlling the playing state of the current video, and the playing state includes at least one of starting playing, pausing playing, and playing progress.
According to an embodiment of the present disclosure, the apparatus 500 may further include a control module and a changing module. The control module is used for controlling, in response to an input operation received through the control, the playing state of the current video based on the input operation; and the changing module is used for changing the display mode of the at least one image to be displayed based on the playing state of the current video.
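The behavior of the control and changing modules can be sketched as a pure state update that maps an input operation received through the control onto a new playing state. The operation names ('play', 'pause', 'seek') and the state shape are assumptions for illustration, not terms of the disclosure.

```javascript
// Map an input operation from the on-canvas control onto the playing
// state of the current video. A renderer (the changing module) could
// then adjust how the images to be displayed are shown from this state.
function applyControl(state, op) {
  switch (op.type) {
    case 'play':  return { ...state, playing: true };
    case 'pause': return { ...state, playing: false };
    case 'seek':  return { ...state, progress: op.progress };
    default:      return state; // unknown operations leave state unchanged
  }
}
```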
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure, and other handling of the personal information of the users involved all comply with the provisions of the relevant laws and regulations and do not violate public order and good morals.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 6 is a block diagram of an electronic device used to implement the video processing method of an embodiment of the present disclosure.
FIG. 6 illustrates a schematic block diagram of an example electronic device 600 that can be used to implement embodiments of the present disclosure. The electronic device 600 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 602 or a computer program loaded from a storage unit 608 into a random access memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 can also be stored. The computing unit 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
A number of components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 601 performs the methods and processes described above, such as the video processing method. For example, in some embodiments, the video processing method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the video processing method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the video processing method in any other suitable way (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable video processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, which is not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (21)

1. A video processing method, comprising:
for a current video having gray data and color data, processing the gray data of the current video to obtain transparency data;
constructing at least one image to be displayed associated with the current video based on the transparency data and the color data; and
displaying the at least one image to be displayed.
2. The method of claim 1, further comprising:
acquiring an initial video, wherein the initial video comprises transparency data and color data;
processing the transparency data to obtain gray data; and
obtaining the current video based on the gray data and the color data.
3. The method of claim 2, wherein the processing the transparency data to obtain gray data comprises:
weighting the transparency data with a first weight to obtain the gray data.
4. The method according to any one of claims 1-3, wherein the processing the gray data of the current video to obtain transparency data comprises:
playing the current video on a browser page;
determining a current image frame from the played current video; and
processing the gray data of the current image frame to obtain transparency data for the current image frame,
wherein the current video is played in a hidden format.
5. The method of claim 4, wherein the processing the gray data of the current image frame to obtain transparency data for the current image frame comprises:
weighting the gray data of the current image frame with a second weight to obtain transparency data for the current image frame,
wherein the second weight and the first weight are reciprocals of each other.
6. The method of claim 4 or 5, wherein said constructing at least one image to be displayed associated with the current video based on the transparency data and the color data comprises:
constructing an image to be displayed corresponding to the current image frame based on the transparency data for the current image frame and the color data for the current image frame.
7. The method of claim 6, wherein the displaying the at least one image to be displayed comprises:
displaying the at least one image to be displayed on a canvas of the browser.
8. The method of any of claims 1-7, further comprising:
displaying a control on a canvas of the browser, wherein the control is used for controlling a play state of the current video,
wherein the playing state comprises at least one of starting playing, pausing playing and playing progress.
9. The method of claim 8, further comprising:
in response to receiving an input operation through the control, controlling a playing state of the current video based on the input operation; and
changing the display mode of the at least one image to be displayed based on the playing state of the current video.
10. A video processing apparatus comprising:
the system comprises a first processing module, a second processing module and a display module, wherein the first processing module is used for processing the gray data of a current video with gray data and color data to obtain transparency data;
a construction module for constructing at least one image to be displayed associated with the current video based on the transparency data and the color data; and
a first display module for displaying the at least one image to be displayed.
11. The apparatus of claim 10, further comprising:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring an initial video, and the initial video comprises transparency data and color data;
a second processing module for processing the transparency data to obtain gray data; and
a second acquisition module for obtaining the current video based on the gray data and the color data.
12. The apparatus of claim 11, wherein the second processing module is further configured to:
weight the transparency data with a first weight to obtain the gray data.
13. The apparatus of any of claims 10-12, wherein the first processing module comprises:
a playing submodule for playing the current video on a browser page;
a determining submodule for determining a current image frame from the played current video; and
a processing submodule for processing the gray data of the current image frame to obtain transparency data for the current image frame,
wherein the current video is played in a hidden format.
14. The apparatus of claim 13, wherein the processing sub-module is further to:
weight the gray data of the current image frame with a second weight to obtain transparency data for the current image frame,
wherein the second weight and the first weight are reciprocals of each other.
15. The apparatus of claim 13 or 14, wherein the build module is further to:
construct an image to be displayed corresponding to the current image frame based on the transparency data for the current image frame and the color data for the current image frame.
16. The apparatus of claim 15, wherein the first display module is further configured to:
display the at least one image to be displayed on a canvas of the browser.
17. The apparatus of any of claims 10-16, further comprising:
a second display module for displaying a control on the canvas of the browser, wherein the control is used for controlling the playing state of the current video,
wherein the playing state comprises at least one of starting playing, pausing playing and playing progress.
18. The apparatus of claim 17, further comprising:
a control module for controlling, in response to an input operation received through the control, the playing state of the current video based on the input operation; and
a changing module for changing the display mode of the at least one image to be displayed based on the playing state of the current video.
19. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-9.
20. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-9.
21. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-9.
CN202110978074.2A 2021-08-24 2021-08-24 Video processing method, video processing device, electronic equipment and medium Pending CN113691866A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110978074.2A CN113691866A (en) 2021-08-24 2021-08-24 Video processing method, video processing device, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110978074.2A CN113691866A (en) 2021-08-24 2021-08-24 Video processing method, video processing device, electronic equipment and medium

Publications (1)

Publication Number Publication Date
CN113691866A true CN113691866A (en) 2021-11-23

Family

ID=78582086

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110978074.2A Pending CN113691866A (en) 2021-08-24 2021-08-24 Video processing method, video processing device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN113691866A (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6981227B1 (en) * 2002-02-04 2005-12-27 Mircrosoft Corporation Systems and methods for a dimmable user interface
US20080111822A1 (en) * 2006-09-22 2008-05-15 Yahoo, Inc.! Method and system for presenting video
CN102221953A (en) * 2010-04-14 2011-10-19 上海中标软件有限公司 Realization method for transparent user interface video player and player thereof
CN105446585A (en) * 2014-08-29 2016-03-30 优视科技有限公司 Video display method and device of Android intelligent terminal browser
US20170105053A1 (en) * 2012-04-24 2017-04-13 Skreens Entertainment Technologies, Inc. Video display system
CN108235055A (en) * 2017-12-15 2018-06-29 苏宁云商集团股份有限公司 Transparent video implementation method and equipment in AR scenes
CN109462731A (en) * 2018-11-27 2019-03-12 北京潘达互娱科技有限公司 Playback method, device, terminal and the server of effect video are moved in a kind of live streaming
CN109729417A (en) * 2019-03-28 2019-05-07 深圳市酷开网络科技有限公司 A kind of video-see play handling method, smart television and storage medium
CN109874048A (en) * 2019-01-11 2019-06-11 平安科技(深圳)有限公司 The translucent display methods of video window component, device and computer equipment
CN111669646A (en) * 2019-03-07 2020-09-15 北京陌陌信息技术有限公司 Method, device, equipment and medium for playing transparent video
CN112399245A (en) * 2019-08-18 2021-02-23 海信视像科技股份有限公司 Playing method and display device
CN112884665A (en) * 2021-01-25 2021-06-01 腾讯科技(深圳)有限公司 Animation playing method and device, computer equipment and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ADAM L. ANDERSON; BINGXIONG LIN; YU SUN: "Virtually Transparent Epidermal Imagery (VTEI): On New Approaches to In Vivo Wireless High-Definition Video and Image Processing", IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS, 4 June 2013 (2013-06-04) *
YU SHUCHUN; ZHAO CHANGHUA; HE JIA: "An Extensible Framework for Rich-Video Web Application Systems", Journal of Huaihua University, no. 11, 28 November 2017 (2017-11-28) *

Similar Documents

Publication Publication Date Title
CN110989878B (en) Animation display method and device in applet, electronic equipment and storage medium
JP7270661B2 (en) Video processing method and apparatus, electronic equipment, storage medium and computer program
CN111654746B (en) Video frame insertion method and device, electronic equipment and storage medium
CN113453073B (en) Image rendering method and device, electronic equipment and storage medium
CN115022679B (en) Video processing method, device, electronic equipment and medium
CN109545333A (en) The method and device that Dicom image shows, handles
CN114092675A (en) Image display method, image display device, electronic apparatus, and storage medium
JP2021006982A (en) Method and device for determining character color
CN114187392A (en) Virtual even image generation method and device and electronic equipment
CN113870399A (en) Expression driving method and device, electronic equipment and storage medium
CN114071190B (en) Cloud application video stream processing method, related device and computer program product
JP2023070068A (en) Video stitching method, apparatus, electronic device, and storage medium
CN113691866A (en) Video processing method, video processing device, electronic equipment and medium
CN113628311B (en) Image rendering method, image rendering device, electronic device, and storage medium
CN115861510A (en) Object rendering method, device, electronic equipment, storage medium and program product
CN113873323B (en) Video playing method, device, electronic equipment and medium
CN115834930A (en) Video frame transmission method and device, electronic equipment and storage medium
CN114723855A (en) Image generation method and apparatus, device and medium
CN113836455A (en) Special effect rendering method, device, equipment, storage medium and computer program product
CN114760526A (en) Video rendering method and device, electronic equipment and storage medium
CN114125498A (en) Video data processing method, device, equipment and storage medium
CN113608809A (en) Component layout method, device, equipment, storage medium and program product
CN114125135B (en) Video content presentation method and device, electronic equipment and storage medium
CN113542620B (en) Special effect processing method and device and electronic equipment
EP4009284A1 (en) Method and apparatus for rendering three-dimensional objects in an extended reality environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination