CN110933418B - Video data processing method, device, medium and apparatus

Video data processing method, device, medium and apparatus

Info

Publication number
CN110933418B (application number CN201911175598.7A)
Authority
CN
China
Prior art keywords
video data, sampling mode, data, sampling, component
Prior art date
2019-11-26
Legal status
Active
Application number
CN201911175598.7A
Other languages
Chinese (zh)
Other versions
CN110933418A
Inventors
刘会淼, 邵黎明, 陈家大
Current Assignee / Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date / Filing date
2019-11-26
Application filed by Alipay Hangzhou Information Technology Co Ltd
Priority to CN201911175598.7A
Publication of CN110933418A: 2020-03-27
Application granted
Publication of CN110933418B: 2021-12-21

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/132 Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/182 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a pixel
    • H04N19/186 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component

Landscapes

  • Engineering & Computer Science
  • Multimedia
  • Signal Processing
  • Compression Or Coding Systems Of Tv Signals

Abstract

This specification discloses a video data processing method, apparatus, medium, and device, including: determining first video data and second video data displayed at a pixel point, wherein the second video data is video data to be displayed transparently; merging and encoding the first video data and the second video data according to a determined first sampling mode, wherein the second video data serves as the alpha data of the first video data; and, when the first video data and the second video data are obtained through decoding, rendering and displaying the first video data and the second video data respectively at the pixel point. Because the transparently displayed video data and the non-transparently displayed video data are merged and encoded under a single sampling mode, no special encoding or decoding equipment is needed, which makes transparent display more practical; the real-time requirements of video playback can be met, and the processing efficiency of the video data is effectively improved.

Description

Video data processing method, device, medium and apparatus
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, an apparatus, a medium, and a device for processing video data.
Background
The transparent video is video data played in a transparent display mode. The transparent display mode here can be understood as follows: video content 2 is played over video content 1, which is played normally (i.e., in a non-transparent display mode), and the user can see video content 1 through video content 2. Playing video content in a transparent display mode can present a striking audio-visual effect without affecting the playback of the background video content.
Therefore, embodiments of the present specification provide a video data processing method that aims to improve the processing efficiency of transparent video.
Disclosure of Invention
In view of this, embodiments of the present specification provide a video data processing method, device, medium, and apparatus, which are used to improve the processing efficiency of transparent video.
The embodiment of the specification adopts the following technical scheme:
an embodiment of the present specification provides a video data processing method, including:
determining first video data and second video data displayed on a pixel point, wherein the second video data is video data which is displayed in a transparent mode;
merging and encoding the first video data and the second video data according to the determined first sampling mode, wherein the second video data is used as alpha data of the first video data;
and respectively rendering and displaying the first video data and the second video data on the pixel points under the condition that the first video data and the second video data are obtained through decoding.
An embodiment of the present specification further provides a video data processing apparatus, including:
a determining unit, configured to determine first video data and second video data displayed at a pixel point, wherein the second video data is video data to be displayed transparently;
an encoding unit, configured to merge and encode the first video data and the second video data according to a determined first sampling mode, wherein the second video data serves as the alpha data of the first video data;
and a processing unit, configured to, when the first video data and the second video data are obtained through decoding, render and display the first video data and the second video data respectively at the pixel point.
Embodiments of the present specification also provide a computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the video data processing method described above.
An embodiment of the present specification further provides a data processing apparatus, including: at least one processor, at least one memory, and computer program instructions stored in the memory that, when executed by the processor, implement the video data processing method described above.
At least one of the technical solutions adopted in the embodiments of the present specification can achieve the following beneficial effects:
according to the technical solutions provided by the embodiments of the present specification, first video data and second video data displayed at a pixel point are determined, the second video data being video data to be displayed transparently; the first video data and the second video data are merged and encoded according to a determined first sampling mode, with the second video data serving as the alpha data of the first video data; and, when the first video data and the second video data are obtained through decoding, the first video data and the second video data are rendered and displayed respectively at the pixel point. Because the transparently displayed video data and the non-transparently displayed video data are merged and encoded under a single sampling mode, no special encoding or decoding equipment is needed, which makes transparent display more practical; the real-time requirements of video playback can be met, and the processing efficiency of transparent video data is effectively improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the specification and are incorporated in and constitute a part of this specification, illustrate embodiments of the specification and, together with the description, serve to explain the specification without limiting it. In the drawings:
fig. 1 is a schematic flowchart of a video data processing method according to an embodiment of the present disclosure;
fig. 2(a) is a schematic diagram of video data merging and encoding provided by an embodiment of the present disclosure;
fig. 2(b) is a schematic diagram of video data merging and encoding provided by an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a video data processing apparatus provided in an embodiment of the present specification;
fig. 4 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present disclosure.
Detailed Description
In practical applications, the following has been found: at present, the industry has not established a standard for transparent video. PNG can achieve a transparent display effect by defining 256 levels of transparency for an original image, so that the edges of a color image blend smoothly with any background, completely eliminating jagged edges and achieving the transparent effect.
In computer graphics, the term "alpha channel" refers to a particular channel, i.e., a "non-color" channel. For a target image, if a transparent effect needs to be achieved in a certain region, that region can be edited through the alpha channel, and the transparent effect is achieved after a series of operations. However, PNG encoding and decoding have poor real-time performance and cannot meet the requirements of video playback.
In addition, to achieve transparent display of video data, the following video data processing method has been proposed: the normally displayed video data and the transparently displayed video data are processed in a manner similar to 3D MVC (Multiview Video Coding), so that the transparent video data is displayed over the normally displayed video data. However, this processing method places high demands on video encoding and decoding equipment and has poor practicability.
In order to solve the problems described above and achieve the object of the present specification, the embodiments of the present specification provide a video data processing method, device, medium, and apparatus, which determine first video data and second video data displayed at a pixel point, the second video data being video data to be displayed transparently; merge and encode the first video data and the second video data according to a determined first sampling mode, with the second video data serving as the alpha data of the first video data; and, when the first video data and the second video data are obtained through decoding, render and display the first video data and the second video data respectively at the pixel point. Because the transparently displayed video data and the non-transparently displayed video data are merged and encoded under a single sampling mode, no special encoding or decoding equipment is needed, which makes transparent display more practical; the real-time requirements of video playback can be met, and the processing efficiency of transparent video data is effectively improved.
It should be noted that the video data processing method described in the embodiments of the present specification may be applied to the field of analog video data in television systems, to the field of digital video data on intelligent terminal devices, or to the field of UI (User Interface) display in application clients; the application field is not specifically limited in the embodiments of the present specification.
The "alpha data" described in the embodiments of the present specification is similar to the alpha channel in the PNG image format: the alpha data corresponds to the transparently displayed video data. That is to say, in the embodiments of the present specification, at the stage of compressing the original video data, the non-transparently displayed video data, which uses the second sampling form, is merged with the transparently displayed video data so that the merged video data satisfies the data format of the first sampling form; a standard encoding and decoding device can then be called to encode and decode the merged video data according to the first sampling form, thereby improving video data processing efficiency.
The sampling mode described in the embodiments of the present specification may be understood as an encoding format, for example, the YUV sampling mode, where "Y" represents luminance (Luminance or Luma), i.e., the gray-scale value, and "U" and "V" represent chrominance (Chrominance or Chroma), which describes the color and saturation of the image and specifies the color of a pixel. Specific sampling forms include, but are not limited to: YUV444, in which each Y sample has its own set of UV components; YUV422, in which every two Y samples share one set of UV components; YUV420, in which every four Y samples share one set of UV components; and so on.
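For a concrete sense of these sampling forms, the per-frame payload of an 8-bit planar frame can be computed as below. This is an illustrative sketch, not part of the patent text; the helper name and the ratio table are assumptions of the sketch.

```python
def yuv_frame_bytes(width: int, height: int, fmt: str) -> int:
    """Bytes per 8-bit planar YUV frame: one full-resolution Y plane plus
    two chroma (U, V) planes whose size depends on the sampling form."""
    chroma_ratio = {"YUV444": 1.0, "YUV422": 0.5, "YUV420": 0.25}[fmt]
    return int(width * height * (1 + 2 * chroma_ratio))

# For a 1920x1080 frame:
#   YUV444 -> 6220800 bytes, YUV422 -> 4147200 bytes, YUV420 -> 3110400 bytes
```

The gap between two of these payloads is exactly the headroom that the scheme described below uses to carry the alpha data.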
In the present embodiments, "first" and "second" in "first video data" and "second video data" do not refer to particular video data; they are generic labels, and "first" and "second" impose no limitation. Likewise, "first" and "second" in "first sampling mode" and "second sampling mode" do not denote particular sampling modes and impose no limitation.
The technical solutions in the present specification will be clearly and completely described below with reference to the specific embodiments of the present specification and the accompanying drawings. It is to be understood that the embodiments described are only a few embodiments of the present disclosure, and not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present specification without any creative effort belong to the protection scope of the present specification.
The technical solutions provided by the embodiments of the present description are described in detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart illustrating a video data processing method according to an embodiment of the present disclosure. The method may be as follows.
Step 101: determine first video data and second video data displayed at a pixel point, where the second video data is video data to be displayed transparently.
In this embodiment of the specification, the first video data recorded in step 101 may be video data captured by a video capture device (such as a camera or video camera), video data produced by video software for playback, data designed by UI design software for UI display (which may be video data or image data), and so on. The first video data is displayed in a normal (i.e., non-transparent) display mode.
Once the first video data is obtained, it may be processed in the existing manner (e.g., encoded, encapsulated, transmitted, decoded, and played). According to the video data processing method described in the embodiments of the present specification, other video data (for example, the second video data described herein) may also be added to the first video data, so that the other video data is displayed transparently on the interface where the first video data is played.
Specifically, the first video data, the second video data, and data such as the display position of the second video data relative to the first video data are input to the video data processing apparatus described in the embodiments of the present specification, and the video data processing apparatus starts the video data processing flow described herein after receiving these inputs.
In the embodiments of the present specification, the video data processing flow is described in units of a single pixel point. After the first video data and the second video data are received, the data block of the first video data and the data block of the second video data to be displayed at any given pixel point are determined. A given pixel point may display only a data block of the first video data, or may display both a data block of the first video data and a data block of the second video data.
Step 103: merge and encode the first video data and the second video data according to the determined first sampling mode, where the second video data serves as the alpha data of the first video data.
In this embodiment, the first sampling mode may be understood as the encoding format adopted at the encoding stage, which is different from the sampling mode used at the video data rendering and display stage.
How to determine the first sampling mode is described in detail below:
selecting a second sampling mode from a database according to the color rendered and displayed by the first video data; and
determining the first sampling mode from the database according to the second sampling mode.
In the embodiments of the present specification, the second sampling mode may be understood as the sampling mode adopted at the video data rendering and display stage. It may be selected from a database based on the color with which the video data is rendered and displayed, or it may be determined based on the original data format selected during video data acquisition; the manner of determining the second sampling mode is not specifically limited here.
After the second sampling mode is determined, a first sampling mode satisfying a condition may be selected from the database based on the second sampling mode.
The condition to be satisfied here may be determined according to actual needs or defined according to user needs; the specific content of the condition is not limited here.
Preferably, if both the first sampling mode and the second sampling mode adopt the YUV sampling scheme, then when the Y component of the first sampling mode is the same as the Y component of the second sampling mode, the UV component of the first sampling mode is larger than the UV component of the second sampling mode.
For example, if the second sampling mode is YUV420, the first sampling mode may be YUV422 or YUV444; if the second sampling mode is YUV422, the first sampling mode may be YUV444.
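A minimal sketch of this selection rule, assuming only the three YUV forms above; the candidate table and the preference for the smallest sufficient container are illustrative assumptions, not requirements stated in the text:

```python
def pick_first_sampling_mode(second_mode: str) -> str:
    """Return a first sampling mode whose UV component is larger than that
    of the second sampling mode (the Y component is identical in all forms)."""
    larger_uv = {"YUV420": ["YUV422", "YUV444"], "YUV422": ["YUV444"]}
    candidates = larger_uv.get(second_mode, [])
    if not candidates:
        raise ValueError(f"no YUV form has a larger UV component than {second_mode}")
    return candidates[0]  # smallest container that still leaves room for alpha data

print(pick_first_sampling_mode("YUV420"))  # -> YUV422
```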
Once the first sampling mode and the second sampling mode are determined, the second video data is determined to be the alpha data of the first video data according to the first sampling mode and the second sampling mode;
and the first video data and the second video data are merged according to the first sampling mode, and the merged video data is encoded.
Specifically, the second video data is determined to be the alpha data of the first video data according to the UV component of the first sampling mode and the UV component of the second sampling mode.
According to the relationship between the Y component and the UV component in the first sampling mode, an encoding matrix corresponding to the first sampling mode is created, with the first video data as the base data and the second video data as the alpha data; the encoding matrix is then input to an encoder for encoding according to the first sampling mode.
The merging and encoding of the first video data and the second video data are described below, taking YUV422 as the first sampling mode and YUV420 as the second sampling mode.
Since the first video data uses YUV420, the first video data includes width × height of Y data, width × height / 4 of U data, and width × height / 4 of V data. The corresponding matrix can be represented as shown in fig. 2(a).
By comparison, the YUV422 data format requires width × height of Y data, width × height / 2 of U data, and width × height / 2 of V data. To form data satisfying the YUV422 format, the second video data may be converted into a width × height / 2 data format during YUV422 encoding and appended to the tail of the first video data, so that the merged data satisfies the YUV422 data format. The corresponding matrix can be represented as shown in fig. 2(b).
In the same way, if the first sampling mode is YUV444 and the second sampling mode is YUV420, the second video data may be converted into a width × height × 3/4 data format during YUV444 encoding and appended to the tail of the first video data, so as to satisfy the YUV444 data format.
If the first sampling mode is YUV444 and the second sampling mode is YUV422, the second video data may be converted into a width × height / 2 data format during YUV444 encoding and appended to the tail of the first video data, so as to satisfy the YUV444 data format.
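A minimal numpy sketch of the YUV420-to-YUV422 case above (the layout of fig. 2(b)). The function name, the flat planar layout, and the choice to subsample the alpha plane by dropping every other row are illustrative assumptions, not the patent's prescribed implementation:

```python
import numpy as np

def merge_alpha_into_yuv422(y, u, v, alpha):
    """Pack a YUV420 frame (first video data) plus an alpha plane (second
    video data) into a flat buffer sized like a YUV422 frame.

    y: (H, W) luma; u, v: (H//2, W//2) chroma (YUV420); alpha: (H, W).
    """
    h, w = y.shape
    # YUV420 payload is 1.5*W*H bytes; YUV422 expects 2*W*H bytes,
    # leaving W*H/2 bytes of headroom at the tail for the alpha data.
    base = np.concatenate([y.ravel(), u.ravel(), v.ravel()])
    alpha_tail = alpha[::2, :].ravel()             # subsampled to W*H/2 samples
    merged = np.concatenate([base, alpha_tail]).astype(np.uint8)
    assert merged.size == 2 * w * h                # matches the YUV422 data format
    return merged                                  # ready for a standard YUV422 encoder
```

Size-wise, the merged buffer is indistinguishable from an ordinary YUV422 frame, which is why a standard codec can be called on it without any special equipment.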
That is, if transparent display is required on top of the first video data, then according to the solution described in the embodiments of the present specification, the component corresponding to the alpha data (i.e., the transparently displayed video data) is adjusted as needed on the basis of the YUV components of the first video data, so as to achieve the transparent display effect.
Step 105: when the first video data and the second video data are obtained through decoding, render and display the first video data and the second video data respectively at the pixel point.
In this embodiment of the present specification, the first video data and the second video data are encoded into compressed data by the encoding in step 103; the compressed data is then encapsulated in a certain container format and transmitted to a network or terminal device over a streaming-media protocol. Subsequently, the first video data and the second video data are recovered through protocol parsing, decapsulation, decoding, and similar operations, and are rendered and displayed respectively at the pixel point.
Specifically, the first video data is rendered and displayed at the pixel point according to the second sampling mode. For fig. 2(b), the leading data (i.e., the first video data) is extracted through protocol parsing, decapsulation, and decoding, and is rendered and displayed according to the format it supports (i.e., YUV420); the tail data (i.e., the second video data) is rendered and displayed according to its display requirements.
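Continuing the sketch above on the decode side, under the same assumptions (the offsets follow the fig. 2(b) layout, and the row-duplication upsampling of the alpha plane is likewise an illustrative choice):

```python
import numpy as np

def split_yuv422_container(merged, w, h):
    """Recover the YUV420 base frame and the alpha plane from a buffer
    produced by the hypothetical merge_alpha_into_yuv422 helper above."""
    y_end, u_end, v_end = w * h, w * h + w * h // 4, w * h * 3 // 2
    y = merged[:y_end].reshape(h, w)
    u = merged[y_end:u_end].reshape(h // 2, w // 2)
    v = merged[u_end:v_end].reshape(h // 2, w // 2)
    alpha_half = merged[v_end:].reshape(h // 2, w)   # tail data (second video data)
    alpha = np.repeat(alpha_half, 2, axis=0)         # upsample back to (H, W)
    return (y, u, v), alpha   # render the base as YUV420, blend with alpha
```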
According to the technical solutions provided by the embodiments of the present specification, first video data and second video data displayed at a pixel point are determined, the second video data being video data to be displayed transparently; the first video data and the second video data are merged and encoded according to a determined first sampling mode, with the second video data serving as the alpha data of the first video data; and, when the first video data and the second video data are obtained through decoding, the first video data and the second video data are rendered and displayed respectively at the pixel point. Because the transparently displayed video data and the non-transparently displayed video data are merged and encoded under a single sampling mode, no special encoding or decoding equipment is needed, which makes transparent display more practical; the real-time requirements of video playback can be met, and the processing efficiency of transparent video data is effectively improved.
Based on the same inventive concept, fig. 3 is a schematic structural diagram of a video data processing apparatus provided in an embodiment of this specification. The video data processing apparatus includes: a determining unit 301, an encoding unit 302 and a processing unit 303, wherein:
a determining unit 301, configured to determine first video data and second video data displayed at a pixel point, wherein the second video data is video data to be displayed transparently;
an encoding unit 302, configured to merge and encode the first video data and the second video data according to a determined first sampling mode, wherein the second video data serves as the alpha data of the first video data;
and a processing unit 303, configured to, when the first video data and the second video data are obtained through decoding, render and display the first video data and the second video data respectively at the pixel point according to a determined second sampling mode.
In another embodiment provided in this specification, the encoding unit 302 merging and encoding the first video data and the second video data based on the determined first sampling mode, with the second video data as the alpha data of the first video data, includes:
selecting a first sampling mode and a second sampling mode from a database;
determining the second video data as alpha data of the first video data according to the first sampling mode and the second sampling mode;
and combining the first video data and the second video data according to the first sampling mode, and encoding the combined video data.
In another embodiment provided in this specification, the encoding unit 302 selecting the first sampling mode and the second sampling mode from a database includes:
selecting a second sampling mode from a database according to the color rendered and displayed by the first video data;
and determining the first sampling mode from the database according to the second sampling mode.
In another embodiment provided in this specification, if both the first sampling mode and the second sampling mode adopt the YUV sampling scheme, then when the Y component of the first sampling mode is the same as the Y component of the second sampling mode, the UV component of the first sampling mode is larger than the UV component of the second sampling mode.
In another embodiment provided in this specification, the determining, by the encoding unit 302, that the second video data is alpha data of the first video data according to the first sampling mode and the second sampling mode includes:
and determining the second video data to be alpha data of the first video data according to the UV component in the first sampling mode and the UV component in the second sampling mode.
In another embodiment provided in this specification, the encoding unit 302 merging the first video data and the second video data according to the first sampling mode and encoding the merged video data includes:
creating an encoding matrix corresponding to the first sampling mode according to the relationship between the Y component and the UV component in the first sampling mode, with the first video data as the base data and the second video data as the alpha data;
and inputting the encoding matrix to an encoder for encoding according to the first sampling mode.
In another embodiment provided in this specification, the processing unit 303 rendering and displaying the first video data at the pixel point includes:
rendering and displaying the first video data at the pixel point according to the second sampling mode.
It should be noted that the video data processing apparatus provided in the embodiments of the present disclosure may be implemented in software or in hardware; this is not specifically limited here. The video data processing apparatus determines first video data and second video data displayed at a pixel point, the second video data being video data to be displayed transparently; merges and encodes the first video data and the second video data according to a determined first sampling mode, with the second video data serving as the alpha data of the first video data; and, when the first video data and the second video data are obtained through decoding, renders and displays the first video data and the second video data respectively at the pixel point. Because the transparently displayed video data and the non-transparently displayed video data are merged and encoded under a single sampling mode, no special encoding or decoding equipment is needed, which makes transparent display more practical; the real-time requirements of video playback can be met, and the processing efficiency of transparent video data is effectively improved.
In addition, in combination with the video data processing method in the foregoing embodiments, the embodiments of the present specification may be implemented as a computer-readable storage medium. The computer-readable storage medium has computer program instructions stored thereon; when executed by a processor, the computer program instructions implement any of the video data processing methods of the above embodiments.
Fig. 4 shows a hardware configuration diagram of a data processing apparatus provided in an embodiment of the present specification.
The data processing apparatus may comprise a processor 401 and a memory 402 in which computer program instructions are stored.
Specifically, the processor 401 may include a Central Processing Unit (CPU), or an Application Specific Integrated Circuit (ASIC), or may be configured to implement one or more Integrated circuits of the embodiments of the present specification.
Memory 402 may include mass storage for data or instructions. By way of example, and not limitation, memory 402 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, or a Universal Serial Bus (USB) drive, or a combination of two or more of these. Memory 402 may include removable or non-removable (or fixed) media, where appropriate. The memory 402 may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory 402 is a non-volatile solid-state memory. In a particular embodiment, the memory 402 includes read-only memory (ROM). Where appropriate, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory, or a combination of two or more of these.
The processor 401 may implement any of the video data processing methods in the above embodiments by reading and executing computer program instructions stored in the memory 402.
In one example, the data processing apparatus may also include a communication interface 403 and a bus 410. As shown in fig. 4, the processor 401, the memory 402, and the communication interface 403 are connected via a bus 410 to complete communication therebetween.
The communication interface 403 is mainly used for implementing communication between modules, apparatuses, units and/or devices in this specification.
Bus 410 comprises hardware, software, or both, coupling the components of the data processing apparatus to each other. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), another suitable bus, or a combination of two or more of these. Bus 410 may include one or more buses, where appropriate. Although this embodiment describes and illustrates a particular bus, any suitable bus or interconnect is contemplated.
By means of the video data processing method and apparatus provided by the embodiments of the present specification, first video data and second video data displayed at a pixel point are determined, the second video data being video data to be displayed transparently; the first video data and the second video data are merged and encoded according to a determined first sampling mode, with the second video data serving as the alpha data of the first video data; and, when the first video data and the second video data are obtained through decoding, the first video data and the second video data are rendered and displayed respectively at the pixel point. Because the transparently displayed video data and the non-transparently displayed video data are merged and encoded under a single sampling mode, no special encoding or decoding equipment is needed, which makes transparent display more practical; the real-time requirements of video playback can be met, and the processing efficiency of transparent video data is effectively improved.
In the 1990s, an improvement in a technology could be clearly distinguished as an improvement in hardware (e.g., an improvement to a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement to a method flow). However, as technology develops, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized with hardware entity modules. For example, a programmable logic device (PLD), such as a field-programmable gate array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate a dedicated integrated-circuit chip. Moreover, instead of hand-crafting integrated-circuit chips, this programming is now mostly implemented with "logic compiler" software, which is similar to the software compilers used in program development; the original code to be compiled must be written in a particular programming language called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used at present. It will also be apparent to those skilled in the art that a hardware circuit implementing a logical method flow can easily be obtained merely by slightly logic-programming the method flow into an integrated circuit using one of the above hardware description languages.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, in addition to implementing the controller purely as computer-readable program code, the method steps can be logic-programmed so that the controller realizes the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included within it for realizing various functions may also be regarded as structures within the hardware component. Indeed, means for realizing various functions may even be regarded both as software modules implementing the method and as structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functions of the various elements may be implemented in the same one or more software and/or hardware implementations of the present description.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowcharts and/or block diagrams of the methods, apparatus (systems), and computer program products according to the embodiments of the specification. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory on a computer-readable medium, random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can store information by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprise," "include," or any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes the element.
This description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present specification, and is not intended to limit the present specification. Various modifications and alterations to this description will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present specification should be included in the scope of the claims of the present specification.

Claims (8)

1. A video data processing method, comprising:
determining first video data and second video data displayed on a pixel point, wherein the second video data is alpha data of the first video data;
determining a first sampling mode according to a second sampling mode adopted by the first video data and a constraint relation between the first sampling mode and the second sampling mode; the constraint relationship is that if the first sampling mode and the second sampling mode adopt a YUV sampling mode, when the Y component in the first sampling mode and the Y component in the second sampling mode are the same, the UV component in the first sampling mode is larger than the UV component in the second sampling mode;
adding the second video data at the tail of the first video data according to the determined first sampling mode and merging to obtain video data whose sampling mode is the first sampling mode, so as to satisfy the data format of the first sampling mode, and encoding the merged video data;
and decoding the merged and encoded video data, and, when the first video data and the second video data are obtained through decoding, rendering and displaying the first video data and the second video data at the pixel point according to the second sampling mode.
2. The video data processing method according to claim 1, wherein adding the second video data as UV data to the tail of the first video data according to the determined first sampling mode, merging to obtain video data whose sampling mode is the first sampling mode, and encoding the merged video data comprises:
selecting a first sampling mode and a second sampling mode from a database;
determining the second video data as alpha data of the first video data according to the first sampling mode and the second sampling mode;
and combining the first video data and the second video data according to the first sampling mode, and encoding the combined video data.
3. The video data processing method of claim 2, wherein selecting the first sampling mode and the second sampling mode from the database comprises:
selecting a second sampling mode from a database according to the color rendered and displayed by the first video data;
and determining the first sampling mode from the database according to the second sampling mode.
4. The video data processing method according to claim 3, wherein determining that the second video data is alpha data of the first video data according to the first sampling mode and the second sampling mode comprises:
and determining the second video data to be alpha data of the first video data according to the UV component in the first sampling mode and the UV component in the second sampling mode.
5. The video data processing method according to claim 4, wherein the merging the first video data and the second video data according to the first sampling mode, and encoding the merged video data comprises:
according to the relation between the Y component and the UV component in the first sampling mode, taking the first video data as basic data and the second video data as alpha data, and creating a coding matrix corresponding to the first sampling mode;
and inputting the coding matrix into an encoder to encode data according to the first sampling mode.
6. A video data processing apparatus, the video data processing apparatus comprising:
a determining unit configured to determine first video data and second video data displayed on one pixel point, the second video data being alpha data of the first video data;
a sampling mode determining unit, which determines a first sampling mode according to a second sampling mode adopted by the first video data and a constraint relation between the first sampling mode and the second sampling mode; the constraint relationship is that if the first sampling mode and the second sampling mode adopt a YUV sampling mode, when the Y component in the first sampling mode and the Y component in the second sampling mode are the same, the UV component in the first sampling mode is larger than the UV component in the second sampling mode;
an encoding unit, configured to add the second video data at the tail of the first video data according to the determined first sampling mode and merge it to obtain video data whose sampling mode is the first sampling mode, so as to satisfy the data format of the first sampling mode, and to encode the merged video data;
and a processing unit, configured to decode the merged and encoded video data and, when the first video data and the second video data are obtained through decoding, render and display the first video data and the second video data at the pixel point according to the second sampling mode.
7. A computer-readable storage medium having stored thereon computer program instructions, which, when executed by a processor, implement the video data processing method of any of claims 1 to 5.
8. A data processing apparatus, comprising: at least one processor, at least one memory, and computer program instructions stored in the memory that, when executed by the processor, implement the video data processing method of any of claims 1 to 5.
CN201911175598.7A 2019-11-26 2019-11-26 Video data processing method, device, medium and apparatus Active CN110933418B

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911175598.7A CN110933418B (en) 2019-11-26 2019-11-26 Video data processing method, device, medium and apparatus


Publications (2)

Publication Number Publication Date
CN110933418A 2020-03-27
CN110933418B 2021-12-21

Family

Family ID: 69851220

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911175598.7A Active CN110933418B (en) 2019-11-26 2019-11-26 Video data processing method, device, medium and apparatus

Country Status (1)

Country Link
CN (1) CN110933418B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112351283A * 2020-12-24 2021-02-09 杭州米络星科技(集团)有限公司 (Hangzhou Miluoxing Technology (Group) Co Ltd) Transparent video processing method


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI248073B (en) * 2002-01-17 2006-01-21 Media Tek Inc Device and method for displaying static pictures

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101945273A * 2009-07-01 2011-01-12 雅马哈株式会社 (Yamaha Corporation) Compression-encoding device and visual display control device
CN108475330A * 2015-11-09 2018-08-31 港大科桥有限公司 (Versitech Ltd) Auxiliary data for artifact-aware view synthesis
CN107071514A * 2017-04-08 2017-08-18 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co Ltd) Photo file processing method and intelligent terminal
CN108769694A * 2018-05-31 2018-11-06 郑州云海信息技术有限公司 (Zhengzhou Yunhai Information Technology Co Ltd) Method and device for FPGA-based alpha channel coding

Also Published As

Publication number Publication date
CN110933418A 2020-03-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant