WO2021057464A1 - Video processing method and apparatus, storage medium, and electronic apparatus - Google Patents

Video processing method and apparatus, storage medium, and electronic apparatus

Info

Publication number
WO2021057464A1
WO2021057464A1 · PCT/CN2020/113981 · CN2020113981W
Authority
WO
WIPO (PCT)
Prior art keywords
resolution
pixel point
point set
edge pixel
edge
Prior art date
Application number
PCT/CN2020/113981
Other languages
English (en)
French (fr)
Inventor
高欣玮
李蔚然
毛煦楠
谷沉沉
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Priority to EP20868851.5A (EP4037324A4)
Publication of WO2021057464A1
Priority to US17/449,109 (US11838503B2)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/117 Filters, e.g. for pre-processing or post-processing
    • H04N19/119 Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H04N19/132 Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N19/14 Coding unit complexity, e.g. amount of activity or edge presence estimation
    • H04N19/176 Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/182 Adaptive coding characterised by the coding unit, the unit being a pixel
    • H04N19/59 Predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
    • H04N19/80 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N19/86 Pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/23418 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics
    • H04N21/234309 Reformatting operations on video signals by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4 or from Quicktime to Realvideo
    • H04N21/234363 Reformatting operations on video signals by altering the spatial resolution, e.g. for clients with a lower screen resolution
    • H04N21/2662 Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
    • H04N21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/440218 Reformatting operations of video signals for household redistribution, storage or real-time display by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
    • H04N21/440263 Reformatting operations of video signals by altering the spatial resolution, e.g. for displaying on a connected PDA
    • H04N21/4621 Controlling the complexity of the content stream or additional data, e.g. lowering the resolution or bit-rate of the video stream for a mobile client with a small screen

Definitions

  • This application relates to the field of audio and video coding and decoding, and in particular to a video processing method and device, a storage medium, and an electronic device.
  • a video processing method, including: determining at least one pair of decoded blocks to be reconstructed from a decoded video frame currently to be processed, where each pair of decoded blocks includes a first decoded block with a first resolution and a second decoded block with a second resolution, the first decoded block and the second decoded block being adjacently located decoded blocks; adjusting the first resolution of the first decoded block to a target resolution, and adjusting the second resolution of the second decoded block to the target resolution; determining a first edge pixel point set from the first decoded block and a second edge pixel point set from the second decoded block, where the position of the first edge pixel point set is adjacent to the position of the second edge pixel point set; and filtering the first edge pixel point set to obtain a filtered first edge pixel point set, and filtering the second edge pixel point set to obtain a filtered second edge pixel point set, where the filtered first edge pixel point set matches the filtered second edge pixel point set.
  • a video processing method, including: determining at least one pair of encoding blocks to be reconstructed from an encoded video frame currently to be processed, where each pair of encoding blocks includes a first encoding block with a first resolution and a second encoding block with a second resolution, the first encoding block and the second encoding block being adjacently located encoding blocks; adjusting the first resolution of the first encoding block to a target resolution, and adjusting the second resolution of the second encoding block to the target resolution; determining a first edge pixel point set from the first encoding block and a second edge pixel point set from the second encoding block, where the position of the first edge pixel point set is adjacent to the position of the second edge pixel point set; and filtering the first edge pixel point set to obtain a filtered first edge pixel point set, and filtering the second edge pixel point set to obtain a filtered second edge pixel point set, where the filtered first edge pixel point set matches the filtered second edge pixel point set.
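The decoding-side and encoding-side methods above share the same three steps: resample both blocks to a common target resolution, take the edge pixel sets at the shared boundary, and filter them toward each other. The following is a minimal Python sketch under assumptions not stated in the application: blocks are row-major lists, the first block sits to the left of the second, nearest-neighbour resampling is used, and a simple averaging filter stands in for whatever filter an implementation would actually apply.

```python
def process_block_pair(first_block, second_block, target_w, target_h):
    # Step 1: adjust both blocks to the target resolution
    # (nearest-neighbour; index scaling handles both up- and down-sampling).
    def resample(block):
        h, w = len(block), len(block[0])
        return [[block[y * h // target_h][x * w // target_w]
                 for x in range(target_w)] for y in range(target_h)]
    first_block, second_block = resample(first_block), resample(second_block)

    # Step 2: edge pixel sets -- the columns meeting at the shared boundary
    # (assuming the first block is on the left).
    first_edge = [row[-1] for row in first_block]
    second_edge = [row[0] for row in second_block]

    # Step 3: filter both edge sets so they match: each pair of facing edge
    # pixels is pulled toward their common average, shrinking the seam.
    for y in range(target_h):
        avg = (first_edge[y] + second_edge[y]) / 2
        first_block[y][-1] = (first_edge[y] + avg) / 2
        second_block[y][0] = (second_edge[y] + avg) / 2
    return first_block, second_block
```

With a flat left block of value 10 and a flat right block of value 30, the facing edge columns move to 15 and 25, halving the boundary jump without touching interior pixels.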
  • a video processing device, including: a first determining unit, configured to determine at least one pair of decoded blocks to be reconstructed from a decoded video frame currently to be processed, where each pair of decoded blocks includes a first decoded block with a first resolution and a second decoded block with a second resolution, the first decoded block and the second decoded block being adjacently located decoded blocks;
  • an adjustment unit, configured to adjust the first resolution of the first decoded block to a target resolution, and to adjust the second resolution of the second decoded block to the target resolution;
  • a second determining unit, configured to determine a first edge pixel point set from the first decoded block and a second edge pixel point set from the second decoded block, where the position of the first edge pixel point set is adjacent to the position of the second edge pixel point set;
  • a filter processing unit, configured to filter the first edge pixel point set to obtain a filtered first edge pixel point set, and to filter the second edge pixel point set to obtain a filtered second edge pixel point set, where the filtered first edge pixel point set matches the filtered second edge pixel point set.
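The second determining unit's job can be illustrated with a small sketch. Assuming the two blocks are horizontally adjacent with the first block on the left, and are stored as row-major lists (both assumptions of this example, not stated in the application), the adjacently positioned edge pixel point sets are simply the columns that meet at the shared boundary:

```python
def edge_pixel_sets(first_block, second_block):
    """Return (first_edge_set, second_edge_set) along the shared vertical edge.

    first_block is assumed to sit to the left of second_block, so its edge
    pixels are its rightmost column and second_block's are its leftmost column.
    """
    first_edge = [row[-1] for row in first_block]   # rightmost column
    second_edge = [row[0] for row in second_block]  # leftmost column
    return first_edge, second_edge
```

For vertically adjacent blocks the same idea applies with the bottom row of the upper block and the top row of the lower block.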
  • the adjustment unit includes:
  • the first sampling module is configured to down-sample the first resolution to obtain the target resolution when the first resolution is greater than the target resolution, or to up-sample the first resolution to obtain the target resolution when the first resolution is less than the target resolution; and
  • the second sampling module is configured to down-sample the second resolution to obtain the target resolution when the second resolution is greater than the target resolution, or to up-sample the second resolution to obtain the target resolution when the second resolution is less than the target resolution.
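The two sampling modules can be sketched as follows. This is a hedged illustration: nearest-neighbour resampling stands in for whichever interpolation filter an implementation would use, and `resample_block` is a name invented here. One routine covers both cases, since indexing by `y * src_h // target_h` discards pixels when the source is larger than the target (down-sampling) and repeats pixels when it is smaller (up-sampling):

```python
def resample_block(block, target_w, target_h):
    """Nearest-neighbour up- or down-sampling of a row-major 2-D pixel block."""
    src_h, src_w = len(block), len(block[0])
    return [
        [block[y * src_h // target_h][x * src_w // target_w]
         for x in range(target_w)]
        for y in range(target_h)
    ]

def adjust_to_target(first_block, second_block, target_w, target_h):
    # Brings both adjacent blocks to the common target resolution;
    # a no-op for a block already at the target size.
    return (resample_block(first_block, target_w, target_h),
            resample_block(second_block, target_w, target_h))
```

For a 2×2 block and a 4×4 target, each source pixel is repeated over a 2×2 neighbourhood; with a 4×4 source and a 2×2 target, every other pixel is kept.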
  • a video processing device, including: a first determining unit, configured to determine at least one pair of encoding blocks to be reconstructed from an encoded video frame currently to be processed, where each pair of encoding blocks includes a first encoding block with a first resolution and a second encoding block with a second resolution, the first encoding block and the second encoding block being adjacently located encoding blocks;
  • an adjustment unit, configured to adjust the first resolution of the first encoding block to a target resolution, and to adjust the second resolution of the second encoding block to the target resolution;
  • a second determining unit, configured to determine a first edge pixel point set from the first encoding block and a second edge pixel point set from the second encoding block, where the position of the first edge pixel point set is adjacent to the position of the second edge pixel point set;
  • a filter processing unit, configured to filter the first edge pixel point set to obtain a filtered first edge pixel point set, and to filter the second edge pixel point set to obtain a filtered second edge pixel point set, where the filtered first edge pixel point set matches the filtered second edge pixel point set.
  • the filter processing unit includes:
  • a first determining module, configured to determine, from the first encoding block, a first reference pixel point associated with the first edge pixel point set, and to determine, from the second encoding block, a second reference pixel point associated with the second edge pixel point set;
  • a filter processing module, configured to filter the first edge pixel point set and the second edge pixel point set according to the pixel value of the first reference pixel point and the pixel value of the second reference pixel point, where a first difference, between the pixel value of the i-th pixel in the filtered first edge pixel point set and the pixel value of the corresponding j-th pixel in the filtered second edge pixel point set, is smaller than a second difference between the pixel values of those same pixels before filtering;
  • i is a positive integer less than or equal to the total number of pixels in the first edge pixel point set, and j is a positive integer less than or equal to the total number of pixels in the second edge pixel point set.
  • the filter processing module includes a processing sub-module configured to execute the following steps in sequence until the first edge pixel point set and the second edge pixel point set have been traversed: determine the current edge pixel point from the first edge pixel point set and the second edge pixel point set; perform a weighted summation of the pixel value of the first reference pixel point and the pixel value of the second reference pixel point to obtain a target pixel value; and update the pixel value of the current edge pixel point to the target pixel value, obtaining the filtered current edge pixel point.
  • the processing sub-module is configured to determine the position of the current edge pixel point; obtain, in turn, the distance between the position of each reference pixel point (the first reference pixel point and the second reference pixel point) and the position of the current edge pixel point; determine the weight matching each reference pixel point according to that distance; and use the weights to perform a weighted summation of the pixel value of the first reference pixel point and the pixel value of the second reference pixel point to obtain the target pixel value.
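A sketch of the distance-weighted step described above, using inverse-distance weights as one plausible choice (the application only says the weights are determined from the distances, not how). Positions are 1-D coordinates along the line crossing the block boundary, and all names here are invented for illustration. Because every filtered edge pixel is rebuilt from the same two reference values, the difference between corresponding pixels in the two filtered sets shrinks, which is the "matching" property claimed:

```python
def filter_edge_sets(first_edge, second_edge, ref1, ref2):
    """Filter two edge pixel sets against shared reference pixels.

    first_edge / second_edge: lists of (position, value) edge pixels.
    ref1 / ref2: (position, value) reference pixels from each block.
    """
    def filt(pixels):
        out = []
        for pos, _val in pixels:  # the original value is replaced, per the claim
            d1 = abs(pos - ref1[0]) or 1  # guard against zero distance
            d2 = abs(pos - ref2[0]) or 1
            w1, w2 = 1.0 / d1, 1.0 / d2   # closer reference -> larger weight
            out.append((pos, (w1 * ref1[1] + w2 * ref2[1]) / (w1 + w2)))
        return out
    return filt(first_edge), filt(second_edge)
```

With references at positions 0 (value 100) and 3 (value 40), an edge pixel at position 1 becomes 80 and one at position 2 becomes 60; their gap of 20 is far below the pre-filter gap, smoothing the seam.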
  • One or more non-volatile computer-readable storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps:
  • each pair of decoded blocks in the at least one pair of decoded blocks includes a first decoded block with a first resolution and a second decoded block with a second resolution, where the first decoded block and the second decoded block are adjacently located decoded blocks;
  • a first edge pixel point set is determined from the first decoded block, and a second edge pixel point set is determined from the second decoded block, where the position of the first edge pixel point set is adjacent to the position of the second edge pixel point set;
  • An electronic device including a memory and one or more processors, the memory storing computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to perform the following steps:
  • each pair of decoded blocks in the at least one pair of decoded blocks includes a first decoded block with a first resolution and a second decoded block with a second resolution, where the first decoded block and the second decoded block are adjacently located decoded blocks;
  • a first edge pixel point set is determined from the first decoded block, and a second edge pixel point set is determined from the second decoded block, where the position of the first edge pixel point set is adjacent to the position of the second edge pixel point set;
  • One or more non-volatile computer-readable storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps:
  • each pair of encoding blocks in the at least one pair of encoding blocks includes a first encoding block with a first resolution and a second encoding block with a second resolution, where the first encoding block and the second encoding block are adjacently located encoding blocks;
  • a first edge pixel point set is determined from the first encoding block, and a second edge pixel point set is determined from the second encoding block, where the position of the first edge pixel point set is adjacent to the position of the second edge pixel point set;
  • An electronic device including a memory and one or more processors, the memory storing computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to perform the following steps:
  • each pair of encoding blocks in the at least one pair of encoding blocks includes a first encoding block with a first resolution and a second encoding block with a second resolution, where the first encoding block and the second encoding block are adjacently located encoding blocks;
  • a first edge pixel point set is determined from the first encoding block, and a second edge pixel point set is determined from the second encoding block, where the position of the first edge pixel point set is adjacent to the position of the second edge pixel point set;
  • Fig. 1 is a schematic diagram of an application environment of an optional video processing method according to an embodiment of the present application
  • Fig. 2 is a flowchart of an optional video processing method according to an embodiment of the present application
  • Fig. 3 is a schematic diagram of an optional video processing method according to an embodiment of the present application.
  • Fig. 4 is a schematic diagram of another optional video processing method according to an embodiment of the present application.
  • Fig. 5 is a schematic diagram of yet another optional video processing method according to an embodiment of the present application.
  • Fig. 6 is a flowchart of another optional video processing method according to an embodiment of the present application.
  • Fig. 7 is a schematic structural diagram of an optional video processing device according to an embodiment of the present application.
  • Fig. 8 is a schematic structural diagram of another optional video processing device according to an embodiment of the present application.
  • Fig. 9 is a schematic structural diagram of yet another optional video processing device according to an embodiment of the present application.
  • Fig. 10 is a schematic structural diagram of an optional electronic device according to an embodiment of the present application.
  • a video processing method is provided.
  • the above video processing method can be applied, but is not limited, to the video processing system in the application environment shown in Fig. 1.
  • the video processing system includes a terminal 102 and a server 104, and the above-mentioned terminal 102 and server 104 communicate through a network.
  • the aforementioned terminal 102 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, or the like.
  • the aforementioned server 104 may be, but is not limited to, a computer processing device with strong data processing capability and a certain storage space.
  • In the video encoding and decoding process, adjacently located edge pixel point sets are determined from encoding blocks or decoding blocks that have different resolutions and adjacent positions, and edge filtering is performed on those edge pixel point sets. This avoids obvious seams in the reconstructed video frame and overcomes the video distortion caused by differing resolutions in the related art, so that the filtered encoding blocks or decoding blocks can faithfully restore the content of the video frame, improving encoding and decoding efficiency.
  • the terminal 102 may include but is not limited to the following components: an image processing unit 1021, a processor 1022, a storage medium 1023, a memory 1024, a network interface 1025, a display screen 1026, and an input device 1027.
  • the above-mentioned components can be, but are not limited to, connected via the system bus 1028.
  • the above-mentioned image processing unit 1021 is used to provide at least the rendering capability of the display interface; the above-mentioned processor 1022 is used to provide calculation and control capabilities to support the operation of the terminal 102;
  • the storage medium 1023 stores an operating system 1023-2 and a video encoder and/or video decoder 1023-4.
  • the operating system 1023-2 is used to provide control operation instructions, and the video encoder and/or video decoder 1023-4 is used to perform encoding/decoding operations according to the control operation instructions.
  • the aforementioned memory 1024 provides an operating environment for the video encoder and/or video decoder 1023-4 in the storage medium 1023, and the network interface 1025 is used for network communication with the network interface 1043 in the server 104.
  • the above-mentioned display screen 1026 is used to display application interfaces, such as a decoded video; the input device 1027 is used to receive commands or data input by the user.
  • the display screen 1026 and the input device 1027 may be touch screens.
  • a specific terminal or server may include more or fewer components than shown in the figure, combine some components, or have a different arrangement of components.
  • the aforementioned server 104 may include but is not limited to the following components: a processor 1041, a memory 1042, a network interface 1043, and a storage medium 1044.
  • the above-mentioned components can be, but are not limited to being, connected via the system bus 1045.
  • the aforementioned storage medium 1044 includes an operating system 1044-1, a database 1044-2, a video encoder and/or a video decoder 1044-3.
  • the above-mentioned processor 1041 is used to provide computing and control capabilities to support the operation of the server 104.
  • the memory 1042 provides an environment for the operation of the video encoder and/or the video decoder 1044-3 in the storage medium 1044.
  • the network interface 1043 communicates with the network interface 1025 of the external terminal 102 through a network connection.
  • the operating system 1044-1 in the aforementioned storage medium is used to provide control operation instructions; the video encoder and/or video decoder 1044-3 is used to perform encoding/decoding operations according to the control operation instructions; and the database 1044-2 is used to store data.
  • the internal structure of the server shown in Figure 1 above is only a block diagram of part of the structure related to the solution of this application, and does not constitute a limitation on the computer equipment to which the solution of this application is applied; a specific computer device may have a different arrangement of components.
  • the aforementioned network may include, but is not limited to, a wired network.
  • the above-mentioned wired network may include, but is not limited to: a wide area network, a metropolitan area network, and a local area network.
  • the foregoing video processing method may be applied to the decoding side, and the method includes:
  • S202 Determine at least one pair of decoded blocks to be reconstructed from the decoded video frame currently to be processed, where each pair of decoded blocks in the at least one pair of decoded blocks includes a first decoded block with a first resolution and a second decoded block with a second resolution;
  • S204 Adjust the first resolution of the first decoded block to the target resolution, and adjust the second resolution of the second decoded block to the target resolution.
  • S206 Determine a first edge pixel point set from the first decoding block, and determine a second edge pixel point set from the second decoding block, where the position of the first edge pixel point set is adjacent to the position of the second edge pixel point set;
  • S208 Perform filtering processing on the first edge pixel point set to obtain a filtered first edge pixel point set, and perform filtering processing on the second edge pixel point set to obtain a filtered second edge pixel point set, where the filtered first edge pixel point set matches the filtered second edge pixel point set.
  • the video processing method shown in FIG. 2 above can be, but is not limited to being, used in the video decoder shown in FIG. 1; the above-mentioned video processing process is completed through the interaction of the video decoder with other components.
  • the above-mentioned video processing method may be, but not limited to, applied to application scenarios such as video playback applications, video sharing applications, or video session applications.
  • the video transmitted in the above application scenario may include, but is not limited to, a long video or a short video. For example, a long video may be an episode with a long playing time (for example, a playing time greater than 10 minutes) or a picture displayed in a long video session; a short video may be a voice message exchanged between two or more parties, or a video with a short playing time (for example, a playing time less than or equal to 30 seconds) displayed on a sharing platform.
  • the video processing method provided in this embodiment can be, but is not limited to being, applied to the playback device used to play video in the foregoing application scenarios. After the decoded video frame currently to be processed is obtained, at least one pair of decoded blocks to be reconstructed is determined, where each pair of decoded blocks in the at least one pair of decoded blocks includes a first decoded block with a first resolution and a second decoded block with a second resolution.
  • adjusting the first resolution of the first decoding block to the target resolution, and adjusting the second resolution of the second decoding block to the target resolution includes:
  • the first resolution is adjusted to the third resolution
  • the second resolution is adjusted to the third resolution, where the third resolution is different from the first resolution and different from the second resolution.
  • the above-mentioned third resolution may include but is not limited to one of the following: the original resolution of the decoded video frame, or the highest resolution obtained by up-sampling the decoded video frame. That is, in this embodiment, by performing resolution unification processing on the first decoded block and the second decoded block, the resolution can be unified to the resolution adopted by either one of the pair of decoded blocks. In addition, the resolution can also be unified to a third resolution, where the third resolution is a pre-agreed resolution.
  • the above-mentioned target resolution can be, but is not limited to, one of the following: the original resolution (for example, 8*8), the first resolution (for example, 4*4), or the second resolution (for example, 16*16).
  • the unification of resolution is achieved by up-sampling or down-sampling the decoded block.
  • adjusting the first resolution of the first decoding block to the target resolution, and adjusting the second resolution of the second decoding block to the target resolution includes:
  • if the first resolution and/or the second resolution is greater than the target resolution, the first resolution and/or the second resolution is down-sampled to obtain the target resolution;
  • if the first resolution and/or the second resolution is less than the target resolution, the first resolution and/or the second resolution is up-sampled to obtain the target resolution.
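  • The up-/down-sampling above can be sketched as follows. This is a minimal illustration assuming nearest-neighbour sampling (the text does not fix a particular resampling filter); the function name `resample_block` and the use of plain nested lists are illustrative choices, not part of the described method.

```python
# Nearest-neighbour resampling of a pixel block to the target resolution.
# The sampling filter is an assumption for illustration only.
def resample_block(block, target_h, target_w):
    src_h, src_w = len(block), len(block[0])
    return [[block[y * src_h // target_h][x * src_w // target_w]
             for x in range(target_w)]
            for y in range(target_h)]

# A 4*4 first block is up-sampled and a 16*16 second block is
# down-sampled, so both reach the 8*8 target resolution.
first_block = [[1] * 4 for _ in range(4)]
second_block = [[2] * 16 for _ in range(16)]
first_at_target = resample_block(first_block, 8, 8)
second_at_target = resample_block(second_block, 8, 8)
```

After this step both blocks share the target resolution, so their boundary pixels can be compared and filtered one-to-one.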
  • determining the first edge pixel point set from the first decoding block and determining the second edge pixel point set from the second decoding block may include, but is not limited to: obtaining pre-configured pixels Row position and/or pixel column position; the pixel points in the above-mentioned pixel row position and/or pixel column position are determined as the first edge pixel point set and the second edge pixel point set.
  • if the decoding blocks are row-adjacent, the first edge pixel point set and the second edge pixel point set are determined according to the pre-configured pixel row position; if the decoding blocks are column-adjacent, the first edge pixel point set and the second edge pixel point set are determined according to the pre-configured pixel column position.
  • the decoded video frame includes a pair of adjacent decoding blocks: decoding block A (including the circular pixels shown in part (b) of Figure 3) and decoding block B (including the square pixels shown in part (b) of Figure 3).
  • the first edge pixel set can be determined from the pixels on the side of decoding block A adjacent to decoding block B; as shown in part (b) of Figure 3, the n1 solid circles are the first edge pixel point set. The second edge pixel point set is determined from the pixels on the side of decoding block B adjacent to decoding block A; as shown in part (b) of Figure 3, the n2 solid boxes are the second edge pixel set.
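  • Selecting the edge pixel point sets for a pair of column-adjacent blocks can be sketched as follows. The helper `edge_pixels`, the sample blocks, and the choice of exactly one boundary column are hypothetical illustrations of the pre-configured column positions, not values fixed by the text.

```python
# Pick the pixels at pre-configured column positions of a block.
def edge_pixels(block, col_positions):
    return [[row[c] for c in col_positions] for row in block]

block_a = [[10, 11, 12, 13],
           [20, 21, 22, 23]]
block_b = [[14, 15, 16, 17],
           [24, 25, 26, 27]]
# Block A contributes its rightmost column (adjacent to B);
# block B contributes its leftmost column (adjacent to A).
first_edge_set = edge_pixels(block_a, [3])
second_edge_set = edge_pixels(block_b, [0])
```

For row-adjacent blocks the same idea applies with pre-configured row positions instead of column positions.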
  • the first edge pixel point set is filtered to obtain the filtered first edge pixel point set.
  • the second edge pixel point set is filtered to obtain the filtered second edge pixel point set, so that the pixel value of each pixel point in the first edge pixel point set is adapted to the pixel value of each pixel point in the second edge pixel point set, avoiding obvious seams and reducing subjective parallax.
  • the filtering process is performed on the first edge pixel point set, and the filtering process is performed on the second edge pixel point set, which may include but is not limited to: determining reference pixel points associated with the first edge pixel point set and the second edge pixel point set, and performing a weighted summation on the pixel values of these reference pixel points to update each edge pixel point.
  • the weight used in the above weighted summation is determined according to the distance between the reference pixel and the current edge pixel.
  • the above-mentioned first reference pixel may be, but is not limited to, a reference pixel located in the first decoding block and adjacent to the second decoding block.
  • the above-mentioned second reference pixel may be, but is not limited to, a reference pixel located in the second decoding block and adjacent to the first decoding block.
  • the above-mentioned reference pixels are used to update the pixel value of each pixel point in the first edge pixel point set and the pixel value of each pixel point in the second edge pixel point set. In this way, edge filtering processing of the first decoded block and the second decoded block unified into the target resolution is realized.
  • the filtered first edge pixel point set and the filtered second edge pixel point set may be, but are not limited to being, rendered and displayed.
  • the decoded video frame including the filtered first edge pixel point set and the filtered second edge pixel point set can also be used as a reference frame when subsequent video frames are decoded.
  • each pair of decoded blocks in the at least one pair of decoded blocks includes a first decoding block adopting the first resolution and a second decoding block adopting the second resolution.
  • filtering the first edge pixel point set to obtain the filtered first edge pixel point set, and filtering the second edge pixel point set to obtain the filtered second edge pixel point set, includes:
  • the first edge pixel point set and the second edge pixel point set are filtered according to the pixel value of a first reference pixel point determined from the first decoding block and the pixel value of a second reference pixel point determined from the second decoding block.
  • the first difference, between the pixel value of the i-th pixel in the filtered first edge pixel set and the pixel value of the j-th pixel corresponding to the i-th pixel in the filtered second edge pixel set, is smaller than the second difference, between the pixel value of the i-th pixel in the first edge pixel set and the pixel value of the j-th pixel in the second edge pixel set, where i is a positive integer less than or equal to the total number of pixels in the first edge pixel set, and j is a positive integer less than or equal to the total number of pixels in the second edge pixel set.
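  • The property above — the difference between corresponding edge pixels of the two blocks shrinks after filtering — can be sketched as follows. The symmetric blending filter `blend` and its weight `alpha` are assumptions for illustration; the text only requires that the first difference be smaller than the second.

```python
# Blend each edge pixel toward the corresponding pixel in the
# opposite block. alpha is an illustrative blending weight.
def blend(edge_a, edge_b, alpha=0.25):
    filtered_a = [(1 - alpha) * a + alpha * b for a, b in zip(edge_a, edge_b)]
    filtered_b = [(1 - alpha) * b + alpha * a for a, b in zip(edge_a, edge_b)]
    return filtered_a, filtered_b

edge_a = [100.0, 102.0, 98.0]  # i-th pixels of the first edge set
edge_b = [60.0, 64.0, 58.0]    # corresponding j-th pixels of the second set
fa, fb = blend(edge_a, edge_b)

# First difference (after filtering) vs. second difference (before).
f1 = abs(fa[0] - fb[0])
f2 = abs(edge_a[0] - edge_b[0])
```

Any filter with this shrinking property softens the visible seam between the two blocks.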
  • the first reference pixel point associated with the first edge pixel point set may include, but is not limited to, multiple pixel points located in the first decoding block and adjacent to the second decoding block, where the multiple pixel points include the first edge pixel point set;
  • the second reference pixel point associated with the second edge pixel point set may include, but is not limited to, multiple pixel points located in the second decoding block and adjacent to the first decoding block, where the plurality of pixel points includes the second edge pixel point set.
  • a pair of decoding blocks are included in the decoded video frame: decoding block A and decoding block B.
  • the first edge pixel set can be determined from the pixels on the side of decoded block A adjacent to decoded block B; as shown in part (a) of Fig. 4, the n1 solid circles are the first edge pixel point set.
  • the second edge pixel point set is determined from the pixels on the side of decoding block B adjacent to decoded block A; as shown in part (a) of Figure 4, the n2 solid boxes are the second edge pixel set.
  • the second reference pixel points associated with the second edge pixel point set are the m2 slashed boxes shown in part (b) of Fig. 4.
  • the solid circle in the dashed frame in decoded block A shown in part (a) of Figure 4 is the current i-th pixel;
  • the solid square in the dashed frame in decoded block B is the j-th pixel corresponding to the above-mentioned i-th pixel.
  • after the first edge pixel point set and the second edge pixel point set are filtered according to the pixel value of the first reference pixel point and the pixel value of the second reference pixel point, the filtered first edge pixel point set in decoded block A is determined.
  • the pixel value of the first reference pixel and the pixel value of the second reference pixel are used to filter the first edge pixel point set and the second edge pixel point set, so that the pixel values of the pixels adjacent to the edge pixels are fused into the filtering and the pixel values of the edge pixels match, avoiding obvious seams during video playback, thereby reducing the user's subjective parallax and improving the user's viewing experience.
  • performing filtering processing on the first edge pixel point set and the second edge pixel point set according to the pixel value of the first reference pixel point and the pixel value of the second reference pixel point includes:
  • the pixel values of adjacent edge pixels in the decoding block are further subjected to edge filtering processing, so that the edge pixels are smoothed.
  • the above steps of performing a weighted summation on the pixel value of the first reference pixel and the pixel value of the second reference pixel to obtain the target pixel value include:
  • the pixel value of the first reference pixel and the pixel value of the second reference pixel are weighted and summed by the weight to obtain the target pixel value.
  • the first edge pixel set in the first decoding block A consists of the solid circles shown in Figure 5;
  • the second edge pixel set in the second decoding block B consists of the solid boxes shown in Figure 5;
  • the first reference pixels in the first decoding block A are all the circles shown in Fig. 5;
  • the second reference pixels in the second decoding block B are all the boxes shown in Fig. 5, and the pixel values of the reference pixels are Q1 to Q7 from left to right.
  • the current edge pixel point is the current edge pixel point P shown in FIG. 5.
  • the distances between the positions of the reference pixels and the position of the current edge pixel P are obtained in sequence, for example d1 to d7 from left to right, and corresponding different weights are determined according to these different distances, for example α1 to α7 from left to right. That is, the weight corresponding to d1 is α1, and so on.
  • the first difference f1, between the pixel value of the i-th pixel in decoded block A after filtering and the pixel value of the j-th pixel in decoded block B after filtering, is determined, along with the second difference f2, between the pixel value of the i-th pixel in decoded block A and the pixel value of the j-th pixel in decoded block B before filtering.
  • f1 < f2, that is, the difference between the pixel values of the pixel points in the filtered edge pixel sets of the two decoded blocks is smaller.
  • the pixel value of each reference pixel is merged into the calculation of the pixel value of each edge pixel to achieve the purpose of edge filtering processing, thereby avoiding obvious seams after the video is restored and reducing the user's subjective parallax.
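  • The weighted summation above can be sketched as follows. Inverse-distance weights normalised to sum to 1 are an assumed choice (the text only states that the weights α1 to α7 are determined from the distances d1 to d7); the function `filter_edge_pixel` and the sample values are illustrative.

```python
# Weighted summation of reference pixel values Q1..Q7, with weights
# derived from the distances d1..d7 to the current edge pixel P.
# Inverse-distance weighting is an assumption for illustration.
def filter_edge_pixel(ref_values, distances):
    raw = [1.0 / d for d in distances]
    total = sum(raw)
    # Normalised weights play the role of alpha_1..alpha_7.
    return sum((w / total) * q for w, q in zip(raw, ref_values))

q_values = [100, 100, 100, 100, 50, 50, 50]  # Q1..Q7
dists = [4, 3, 2, 1, 1, 2, 3]                # d1..d7
p_filtered = filter_edge_pixel(q_values, dists)
```

Because the closest reference pixels receive the largest weights, the filtered value of P lands between the pixel values on the two sides of the block boundary.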
  • determining the first edge pixel point set from the first decoding block and determining the second edge pixel point set from the second decoding block includes:
  • the first edge pixel point set is determined according to the first row position and/or the first column position
  • the second edge pixel point set is determined according to the second row position and/or the second column position.
  • the pre-configured first row position/second row position can be, but is not limited to, one or more row positions in the decoding block
  • the pre-configured first column position/second column position can be, but is not limited to, one or more column positions in the decoding block.
  • the number of positions in the first row and the number of positions in the second row may be equal or different; the number of positions in the first column and the number of positions in the second column may be equal or different. The foregoing number can be determined in combination with specific application scenarios, which is not limited in this embodiment.
  • among the multiple rows and columns of pixels in the first decoding block A, the first edge pixel set can be determined to be the pixels indicated by the solid circles shown in Figure 5; among the multiple rows and columns of pixels in the second decoding block B, the second edge pixel set can be determined to be the pixels indicated by the solid boxes shown in FIG. 5.
  • the edge pixel point set is determined according to the pre-configured row position/column position, so as to implement unified edge filtering processing on the decoded block.
  • adjusting the first resolution of the first decoding block to the target resolution and adjusting the second resolution of the second decoding block to the target resolution includes:
  • the first resolution is adjusted to the third resolution
  • the second resolution is adjusted to the third resolution, where the third resolution is different from the first resolution
  • the third resolution is different from the second resolution
  • the resolutions may also be unified to a third resolution, where the third resolution may be, but is not limited to, the original resolution of the decoded block or the highest resolution supported by the decoded block.
  • adjusting the first resolution of the first decoding block to the target resolution, and adjusting the second resolution of the second decoding block to the target resolution includes:
  • if the first resolution is greater than the target resolution, the first resolution is down-sampled to obtain the target resolution; or, if the first resolution is less than the target resolution, the first resolution is up-sampled to obtain the target resolution;
  • if the second resolution is greater than the target resolution, the second resolution is down-sampled to obtain the target resolution; or, if the second resolution is less than the target resolution, the second resolution is up-sampled to obtain the target resolution.
  • the original resolution of the first decoding block and the second decoding block is 8*8
  • the first resolution used by the first decoding block is 4*4
  • the second resolution used by the second decoding block is 16*16
  • the above target resolution is the original resolution of 8*8.
  • since the first resolution is less than the target resolution, the first resolution 4*4 is up-sampled to obtain the target resolution 8*8; since the second resolution is greater than the target resolution, the second resolution 16*16 is down-sampled to obtain the target resolution 8*8.
  • edge filtering processing is performed on the first decoded block and the second decoded block whose resolution is unified to the target resolution of 8*8.
  • the first resolution and the second resolution are unified by up-sampling interpolation or down-sampling, so that the resolutions of the first decoding block and the second decoding block, which originally differ, can be unified to the target resolution, thereby facilitating subsequent edge filtering operations and overcoming the problem of video distortion caused by different resolutions in related technologies.
  • a video processing method is provided.
  • the above video processing method may be, but not limited to, be applied to the environment shown in FIG. 1.
  • the foregoing video processing method may be applied to the encoding side, and the method includes:
  • S602. Determine at least one pair of encoding blocks to be reconstructed from the currently to-be-processed encoded video frame, where each pair of encoding blocks in the at least one pair of encoding blocks includes a first encoding block with a first resolution and a second encoding block with a second resolution.
  • S604 Adjust the first resolution of the first encoding block to the target resolution, and adjust the second resolution of the second encoding block to the target resolution.
  • S606 Determine a first edge pixel point set from the first encoding block, and determine a second edge pixel point set from the second encoding block, where the position of the first edge pixel point set is adjacent to the position of the second edge pixel point set;
  • S608 Perform filtering processing on the first edge pixel point set to obtain a filtered first edge pixel point set, and perform filtering processing on the second edge pixel point set to obtain a filtered second edge pixel point set, where the filtered first edge pixel point set matches the filtered second edge pixel point set.
  • the video processing method shown in FIG. 6 above can be, but is not limited to being, used in the video encoder shown in FIG. 1; the above-mentioned video processing process is completed through the interaction of the video encoder with other components.
  • the above-mentioned video processing method may be, but not limited to, applied to application scenarios such as video playback applications, video sharing applications, or video session applications.
  • the video transmitted in the above application scenario may include, but is not limited to, a long video or a short video. For example, a long video may be an episode with a long playing time (for example, a playing time greater than 10 minutes) or a picture displayed in a long video session; a short video may be a voice message exchanged between two or more parties, or a video with a short playing time (for example, a playing time less than or equal to 30 seconds) displayed on a sharing platform.
  • the above is only an example.
  • the video processing method provided in this embodiment can be, but is not limited to being, applied to the playback device used to play video in the above application scenarios. After the currently to-be-processed encoded video frame is acquired, at least one pair of coding blocks to be reconstructed is determined, where each pair of coding blocks in the at least one pair includes a first coding block with a first resolution and a second coding block with a second resolution. By adjusting the resolutions of the coding blocks and performing edge filtering on the edge pixel point sets determined in the coding blocks, obvious seams in the video during the reconstruction process can be avoided, overcoming the video distortion problem in the related technology.
  • filtering the first edge pixel point set to obtain the filtered first edge pixel point set, and filtering the second edge pixel point set to obtain the filtered second edge pixel point set, includes:
  • the first edge pixel point set and the second edge pixel point set are filtered according to the pixel value of a first reference pixel point determined from the first encoding block and the pixel value of a second reference pixel point determined from the second encoding block.
  • the first difference, between the pixel value of the i-th pixel in the filtered first edge pixel set and the pixel value of the j-th pixel corresponding to the i-th pixel in the filtered second edge pixel set, is smaller than the second difference, between the pixel value of the i-th pixel in the first edge pixel set and the pixel value of the j-th pixel in the second edge pixel set, where i is a positive integer less than or equal to the total number of pixels in the first edge pixel set, and j is a positive integer less than or equal to the total number of pixels in the second edge pixel set.
  • performing filtering processing on the first edge pixel point set and the second edge pixel point set according to the pixel value of the first reference pixel point and the pixel value of the second reference pixel point includes:
  • performing a weighted summation on the pixel value of the first reference pixel and the pixel value of the second reference pixel to obtain the target pixel value includes:
  • the pixel value of the first reference pixel and the pixel value of the second reference pixel are weighted and summed by the weight to obtain the target pixel value.
  • the device includes:
  • the first determining unit 702 is configured to determine at least one pair of decoded blocks to be reconstructed from the decoded video frame currently to be processed, wherein each pair of decoded blocks in the at least one pair of decoded blocks includes a first decoded block with a first resolution and a second decoded block with a second resolution;
  • the adjusting unit 704 is configured to adjust the first resolution of the first decoded block to the target resolution, and adjust the second resolution of the second decoded block to the target resolution;
  • the second determining unit 706 is configured to determine the first edge pixel point set from the first decoded block, and determine the second edge pixel point set from the second decoded block, wherein the position of the first edge pixel point set is adjacent to the position of the second edge pixel point set;
  • the filtering processing unit 708 is configured to perform filtering processing on the first edge pixel point set to obtain a filtered first edge pixel point set, and perform filtering processing on the second edge pixel point set to obtain a filtered second edge pixel point set Where the filtered first edge pixel point set matches the filtered second edge pixel point set.
  • the filtering processing unit 708 includes:
  • the first determining module is used to determine the first reference pixel point associated with the first edge pixel point set from the first decoding block, and determine the second reference pixel point associated with the second edge pixel point set from the second decoding block pixel;
  • the filtering processing module is used to perform filtering processing on the first edge pixel point set and the second edge pixel point set according to the pixel value of the first reference pixel and the pixel value of the second reference pixel, wherein the first difference, between the pixel value of the i-th pixel in the filtered first edge pixel set and the pixel value of the j-th pixel corresponding to the i-th pixel in the filtered second edge pixel set, is smaller than the second difference, between the pixel value of the i-th pixel in the first edge pixel set and the pixel value of the j-th pixel in the second edge pixel set, where i is a positive integer less than or equal to the total number of pixels in the first edge pixel set, and j is a positive integer less than or equal to the total number of pixels in the second edge pixel set.
  • the filtering processing module includes:
  • the processing sub-module is used to execute the following steps in sequence until the first edge pixel point set and the second edge pixel point set are traversed:
  • the processing sub-module implements a weighted summation of the pixel value of the first reference pixel and the pixel value of the second reference pixel through the following steps to obtain the target pixel value:
  • the pixel value of the first reference pixel and the pixel value of the second reference pixel are weighted and summed by the weight to obtain the target pixel value.
  • the second determining unit 706 includes:
  • An acquiring module configured to acquire the first row position and/or the first column position preconfigured in the first decoding block, and the second row position and/or the second column position preconfigured in the second decoding block;
  • the third determining module is configured to determine the first edge pixel point set according to the first row position and/or the first column position, and determine the second edge pixel point set according to the second row position and/or the second column position.
  • the adjustment unit 704 includes:
  • the first adjustment module is configured to adjust the second resolution to the first resolution when the target resolution is equal to the first resolution
  • the second adjustment module is configured to adjust the first resolution to the second resolution when the target resolution is equal to the second resolution
  • the third adjustment module is configured to adjust the first resolution to the third resolution and the second resolution to the third resolution when the target resolution is equal to the third resolution, where the third resolution is different from the first resolution, and the third resolution is different from the second resolution.
  • the adjustment unit 704 includes:
  • the first sampling module is configured to down-sample the first resolution to obtain the target resolution when the first resolution is greater than the target resolution; or, to up-sample the first resolution to obtain the target resolution when the first resolution is less than the target resolution;
  • the second sampling module is configured to down-sample the second resolution to obtain the target resolution when the second resolution is greater than the target resolution; or, to up-sample the second resolution to obtain the target resolution when the second resolution is less than the target resolution.
  • the device includes:
  • the first determining unit 802 is configured to determine at least one pair of coded blocks to be reconstructed from the currently to-be-processed coded video frame, where each pair of coded blocks in the at least one pair of coded blocks includes the one with the first resolution A first coding block and a second coding block with a second resolution, where the first coding block and the second coding block are adjacent coding blocks;
  • the adjusting unit 804 is configured to adjust the first resolution of the first encoding block to the target resolution, and adjust the second resolution of the second encoding block to the target resolution;
  • the second determining unit 806 is configured to determine the first edge pixel point set from the first encoding block and determine the second edge pixel point set from the second encoding block, wherein the position of the first edge pixel point set is the same as The positions of the second edge pixel point set are adjacent;
  • the filtering processing unit 808 is configured to perform filtering processing on the first edge pixel point set to obtain a filtered first edge pixel point set, and perform filtering processing on the second edge pixel point set to obtain a filtered second edge pixel point set, where the filtered first edge pixel point set matches the filtered second edge pixel point set.
  • the filtering processing unit 808 includes:
  • the first determining module is configured to determine a first reference pixel point associated with the first edge pixel point set from the first encoding block, and determine a second reference pixel point associated with the second edge pixel point set from the second encoding block;
  • the filtering processing module is used to perform filtering processing on the first edge pixel point set and the second edge pixel point set according to the pixel value of the first reference pixel and the pixel value of the second reference pixel, wherein a first difference between the pixel value of the i-th pixel in the filtered first edge pixel set and the pixel value of the j-th pixel, corresponding to the i-th pixel, in the filtered second edge pixel set is smaller than a second difference between the pixel value of the i-th pixel in the first edge pixel set and the pixel value of the j-th pixel in the second edge pixel set, i being a positive integer less than or equal to the total number of pixels in the first edge pixel set, and j being a positive integer less than or equal to the total number of pixels in the second edge pixel set.
  • the filtering processing module includes:
  • the processing sub-module is used to execute the following steps in sequence until the first edge pixel point set and the second edge pixel point set are traversed:
  • the processing sub-module implements a weighted summation of the pixel value of the first reference pixel and the pixel value of the second reference pixel through the following steps to obtain the target pixel value:
  • the pixel value of the first reference pixel and the pixel value of the second reference pixel are weighted and summed by the weight to obtain the target pixel value.
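The weighted summation performed by the processing sub-module can be sketched as below. The embodiments only state that each reference pixel's weight is determined according to its distance to the current edge pixel; the inverse-distance mapping and the normalization step here are assumptions for illustration.

```python
def filter_edge_pixel(ref_positions, ref_values, edge_pos):
    """Weighted summation of reference pixel values; each weight is
    derived from the reference pixel's distance to the current edge
    pixel (inverse distance is assumed, then normalized to sum to 1)."""
    raw = [1.0 / (abs(p - edge_pos) + 1) for p in ref_positions]
    total = sum(raw)
    weights = [w / total for w in raw]
    # target pixel value = sum over all reference pixels of weight * value
    return sum(w * v for w, v in zip(weights, ref_values))

# Two reference pixels equidistant from the edge pixel contribute equally,
# so the target value lands halfway between them.
value = filter_edge_pixel([0, 2], [10, 30], 1)
```

Because the weights are normalized, the target pixel value always stays within the range spanned by the reference pixel values, which is what smooths the boundary rather than over- or undershooting it.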
  • an electronic device for implementing the foregoing video processing method is further provided.
  • the electronic device includes a memory 902 and a processor 904.
  • the memory 902 stores computer-readable instructions;
  • the processor 904 is configured to execute the steps in any one of the foregoing method embodiments through computer-readable instructions.
  • the above-mentioned electronic device may be located in at least one network device among a plurality of network devices in a computer network.
  • the foregoing processor may be configured to execute the following steps through computer-readable instructions:
  • each pair of decoded blocks in the at least one pair of decoded blocks includes a first decoded block with a first resolution and a second decoded block with a second resolution, the first decoded block and the second decoded block being adjacent decoded blocks;
  • the first edge pixel point set is determined from the first decoding block, and the second edge pixel point set is determined from the second decoding block, where the position of the first edge pixel point set is adjacent to the position of the second edge pixel point set;
  • the first edge pixel point set is filtered to obtain the filtered first edge pixel point set;
  • the second edge pixel point set is filtered to obtain the filtered second edge pixel point set, where the filtered first edge pixel point set matches the filtered second edge pixel point set.
  • the structure shown in FIG. 9 is only for illustration, and the electronic device may also be a terminal device such as a smart phone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), or a PAD.
  • FIG. 9 does not limit the structure of the above-mentioned electronic device.
  • the electronic device may also include more or fewer components (such as a network interface, etc.) than shown in FIG. 9 or have a different configuration from that shown in FIG. 9.
  • the memory 902 can be used to store computer-readable instructions and modules, such as program instructions/modules corresponding to the video processing method and device in the embodiments of the present application.
  • the processor 904 runs the computer-readable instructions and modules stored in the memory 902, so as to perform various functional applications and data processing, that is, to implement the above-mentioned video processing method.
  • the memory 902 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • the memory 902 may further include a memory remotely provided with respect to the processor 904, and these remote memories may be connected to the terminal through a network.
  • the memory 902 can be specifically, but not limited to, used to store information such as decoded video frames and related resolutions.
  • the foregoing memory 902 may, but is not limited to, include the first determining unit 702, the adjusting unit 704, the second determining unit 706, and the filtering processing unit 708 in the foregoing video processing apparatus.
  • it may also include, but is not limited to, other module units in the above-mentioned video processing device, which will not be repeated in this example.
  • the aforementioned transmission device 906 is used to receive or send data via a network.
  • the above-mentioned specific examples of networks may include wired networks and wireless networks.
  • the transmission device 906 includes a network adapter (Network Interface Controller, NIC), which can be connected to other network devices and routers via a network cable so as to communicate with the Internet or a local area network.
  • the transmission device 906 is a radio frequency (RF) module, which is used to communicate with the Internet in a wireless manner.
  • the above-mentioned electronic device further includes: a display 908 for displaying the above-mentioned decoded video frame; and a connection bus 910 for connecting each module component in the above-mentioned electronic device.
  • an electronic device for implementing the foregoing video processing method is further provided.
  • the electronic device includes a memory 1002 and a processor 1004.
  • the memory 1002 stores computer-readable instructions;
  • the processor 1004 is configured to execute the steps in any one of the foregoing method embodiments through computer-readable instructions.
  • the above-mentioned electronic device may be located in at least one network device among a plurality of network devices in a computer network.
  • the foregoing processor may be configured to execute the following steps through computer-readable instructions:
  • each pair of encoding blocks in the at least one pair of encoding blocks includes a first encoding block with a first resolution and a second encoding block with a second resolution, the first encoding block and the second encoding block being adjacent encoding blocks;
  • the first edge pixel point set is determined from the first code block
  • the second edge pixel point set is determined from the second code block, where the position of the first edge pixel point set is adjacent to the position of the second edge pixel point set;
  • the first edge pixel point set is filtered to obtain the filtered first edge pixel point set;
  • the second edge pixel point set is filtered to obtain the filtered second edge pixel point set, where the filtered first edge pixel point set matches the filtered second edge pixel point set.
  • the structure shown in FIG. 10 is only for illustration, and the electronic device may also be a terminal device such as a smart phone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), or a PAD.
  • FIG. 10 does not limit the structure of the above electronic device.
  • the electronic device may also include more or fewer components (such as a network interface, etc.) than shown in FIG. 10, or have a configuration different from that shown in FIG. 10.
  • the memory 1002 can be used to store computer-readable instructions and modules, such as computer-readable instructions/modules corresponding to the video processing method and device in the embodiments of the present application.
  • the processor 1004 runs the computer-readable instructions and modules stored in the memory 1002, so as to perform various functional applications and data processing, that is, to implement the above-mentioned video processing method.
  • the memory 1002 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • the memory 1002 may further include a memory remotely provided with respect to the processor 1004, and these remote memories may be connected to the terminal through a network.
  • the memory 1002 can be specifically, but not limited to, used to store information such as decoded video frames and related resolutions.
  • the memory 1002 may include, but is not limited to, the first determining unit 802, the adjusting unit 804, the second determining unit 806, and the filtering processing unit 808 in the video processing device.
  • it may also include, but is not limited to, other module units in the above-mentioned video processing device, which will not be repeated in this example.
  • the aforementioned transmission device 1006 is used to receive or send data via a network.
  • the foregoing specific examples of the network may include wired networks and wireless networks.
  • the transmission device 1006 includes a network adapter (Network Interface Controller, NIC), which can be connected to other network devices and routers via a network cable so as to communicate with the Internet or a local area network.
  • the transmission device 1006 is a radio frequency (RF) module, which is used to communicate with the Internet in a wireless manner.
  • the above-mentioned electronic device further includes: a display 1008 for displaying the above-mentioned decoded video frame; and a connection bus 1010 for connecting each module component in the above-mentioned electronic device.
  • a computer-readable storage medium stores computer-readable instructions, wherein the computer-readable instructions are configured, when run, to perform the steps in any of the above method embodiments.
  • the foregoing computer-readable storage medium may be configured to store computer-readable instructions for executing the following steps:
  • each pair of decoded blocks in the at least one pair of decoded blocks includes a first decoded block with a first resolution and a second decoded block with a second resolution, the first decoded block and the second decoded block being adjacent decoded blocks;
  • the first edge pixel point set is determined from the first decoding block, and the second edge pixel point set is determined from the second decoding block, where the position of the first edge pixel point set is adjacent to the position of the second edge pixel point set;
  • the first edge pixel point set is filtered to obtain the filtered first edge pixel point set;
  • the second edge pixel point set is filtered to obtain the filtered second edge pixel point set, where the filtered first edge pixel point set matches the filtered second edge pixel point set.
  • the foregoing computer-readable storage medium may also be configured to store computer-readable instructions for executing the following steps:
  • each pair of encoding blocks in the at least one pair of encoding blocks includes a first encoding block with a first resolution and a second encoding block with a second resolution;
  • the first edge pixel point set is determined from the first code block;
  • the second edge pixel point set is determined from the second code block, where the position of the first edge pixel point set is adjacent to the position of the second edge pixel point set;
  • the first edge pixel point set is filtered to obtain the filtered first edge pixel point set;
  • the second edge pixel point set is filtered to obtain the filtered second edge pixel point set, where the filtered first edge pixel point set matches the filtered second edge pixel point set.
  • the computer-readable instructions may be stored in a computer-readable storage medium, which may include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
  • if the integrated unit in the foregoing embodiments is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in the foregoing computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part contributing to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several computer-readable instructions to enable one or more computer devices (which may be personal computers, servers, or network devices, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present application.
  • the disclosed client can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division.
  • in actual implementation there may be other division methods; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, units or modules, and may be in electrical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A video processing method, including: determining, from a currently to-be-processed decoded video frame, at least one pair of decoded blocks to be reconstructed, where each pair of decoded blocks in the at least one pair of decoded blocks includes a first decoded block with a first resolution and a second decoded block with a second resolution; adjusting the first resolution of the first decoded block to a target resolution, and adjusting the second resolution of the second decoded block to the target resolution; determining a first edge pixel point set from the first decoded block, and determining a second edge pixel point set from the second decoded block; and performing filtering processing on the first edge pixel point set to obtain a filtered first edge pixel point set, and performing filtering processing on the second edge pixel point set to obtain a filtered second edge pixel point set.

Description

Video processing method and apparatus, storage medium, and electronic apparatus

This application claims priority to Chinese Patent Application No. 201910927038.6, entitled "Video processing method and apparatus, and storage medium", filed with the China National Intellectual Property Office on September 27, 2019, the entire contents of which are incorporated herein by reference.

Technical Field

This application relates to the field of audio/video encoding and decoding, and specifically to a video processing method and apparatus, a storage medium, and an electronic apparatus.

Background

With the development of digital media technology and computer technology, video is applied in various fields, such as mobile communication, network monitoring, and network television. With the improvement of hardware performance and screen resolution, users have an increasingly strong demand for high-definition video.

At present, in the related art, in order to save transmission bandwidth, different resolutions are used for encoding and decoding different audio/video frames, or different coding/decoding blocks within the same audio/video frame. However, when reconstructing the audio/video frames or coding/decoding blocks of different resolutions, because the resolutions are inconsistent, it cannot be guaranteed that the original video content is restored, which leads to the problem of video distortion.

For the above problem, no effective solution has been proposed so far.
Summary

A video processing method includes: determining, from a currently to-be-processed decoded video frame, at least one pair of decoded blocks to be reconstructed, where each pair of decoded blocks in the at least one pair of decoded blocks includes a first decoded block with a first resolution and a second decoded block with a second resolution, the first decoded block and the second decoded block being adjacent decoded blocks; adjusting the first resolution of the first decoded block to a target resolution, and adjusting the second resolution of the second decoded block to the target resolution; determining a first edge pixel point set from the first decoded block, and determining a second edge pixel point set from the second decoded block, where the position of the first edge pixel point set is adjacent to the position of the second edge pixel point set; and performing filtering processing on the first edge pixel point set to obtain a filtered first edge pixel point set, and performing filtering processing on the second edge pixel point set to obtain a filtered second edge pixel point set, where the filtered first edge pixel point set matches the filtered second edge pixel point set.

A video processing method includes: determining, from a currently to-be-processed encoded video frame, at least one pair of encoding blocks to be reconstructed, where each pair of encoding blocks in the at least one pair of encoding blocks includes a first encoding block with a first resolution and a second encoding block with a second resolution, the first encoding block and the second encoding block being adjacent encoding blocks; adjusting the first resolution of the first encoding block to a target resolution, and adjusting the second resolution of the second encoding block to the target resolution; determining a first edge pixel point set from the first encoding block, and determining a second edge pixel point set from the second encoding block, where the position of the first edge pixel point set is adjacent to the position of the second edge pixel point set; and performing filtering processing on the first edge pixel point set to obtain a filtered first edge pixel point set, and performing filtering processing on the second edge pixel point set to obtain a filtered second edge pixel point set, where the filtered first edge pixel point set matches the filtered second edge pixel point set.

A video processing apparatus includes: a first determining unit, configured to determine, from a currently to-be-processed decoded video frame, at least one pair of decoded blocks to be reconstructed, where each pair of decoded blocks in the at least one pair of decoded blocks includes a first decoded block with a first resolution and a second decoded block with a second resolution, the first decoded block and the second decoded block being adjacent decoded blocks; an adjustment unit, configured to adjust the first resolution of the first decoded block to a target resolution, and adjust the second resolution of the second decoded block to the target resolution; a second determining unit, configured to determine a first edge pixel point set from the first decoded block, and determine a second edge pixel point set from the second decoded block, where the position of the first edge pixel point set is adjacent to the position of the second edge pixel point set; and a filtering processing unit, configured to perform filtering processing on the first edge pixel point set to obtain a filtered first edge pixel point set, and perform filtering processing on the second edge pixel point set to obtain a filtered second edge pixel point set, where the filtered first edge pixel point set matches the filtered second edge pixel point set.

In one embodiment, the adjustment unit includes:

a first sampling module, configured to down-sample the first resolution to obtain the target resolution when the first resolution is greater than the target resolution, or to up-sample the first resolution to obtain the target resolution when the first resolution is less than the target resolution; and

a second sampling module, configured to down-sample the second resolution to obtain the target resolution when the second resolution is greater than the target resolution, or to up-sample the second resolution to obtain the target resolution when the second resolution is less than the target resolution.

A video processing apparatus includes: a first determining unit, configured to determine, from a currently to-be-processed encoded video frame, at least one pair of encoding blocks to be reconstructed, where each pair of encoding blocks in the at least one pair of encoding blocks includes a first encoding block with a first resolution and a second encoding block with a second resolution, the first encoding block and the second encoding block being adjacent encoding blocks; an adjustment unit, configured to adjust the first resolution of the first encoding block to a target resolution, and adjust the second resolution of the second encoding block to the target resolution; a second determining unit, configured to determine a first edge pixel point set from the first encoding block, and determine a second edge pixel point set from the second encoding block, where the position of the first edge pixel point set is adjacent to the position of the second edge pixel point set; and a filtering processing unit, configured to perform filtering processing on the first edge pixel point set to obtain a filtered first edge pixel point set, and perform filtering processing on the second edge pixel point set to obtain a filtered second edge pixel point set, where the filtered first edge pixel point set matches the filtered second edge pixel point set.
In one embodiment, the filtering processing unit includes:

a first determining module, configured to determine, from the first encoding block, a first reference pixel point associated with the first edge pixel point set, and determine, from the second encoding block, a second reference pixel point associated with the second edge pixel point set; and

a filtering processing module, configured to perform filtering processing on the first edge pixel point set and the second edge pixel point set according to the pixel value of the first reference pixel point and the pixel value of the second reference pixel point, where a first difference between the pixel value of the i-th pixel point in the filtered first edge pixel point set and the pixel value of the j-th pixel point, corresponding to the i-th pixel point, in the filtered second edge pixel point set is smaller than a second difference between the pixel value of the i-th pixel point in the first edge pixel point set and the pixel value of the j-th pixel point in the second edge pixel point set, i being a positive integer less than or equal to the total number of pixel points in the first edge pixel point set, and j being a positive integer less than or equal to the total number of pixel points in the second edge pixel point set.

In one embodiment, the filtering processing module includes: a processing sub-module, configured to perform the following steps in sequence until the first edge pixel point set and the second edge pixel point set are traversed: determining a current edge pixel point from the first edge pixel point set and the second edge pixel point set; performing weighted summation on the pixel value of the first reference pixel point and the pixel value of the second reference pixel point to obtain a target pixel value; and updating the pixel value of the current edge pixel point to the target pixel value, to obtain the filtered current edge pixel point.

In one embodiment, the processing sub-module is configured to determine the position of the current edge pixel point; sequentially obtain the distance between the position of each reference pixel point among the first reference pixel point and the second reference pixel point and the position of the current edge pixel point; determine, according to the distance, a weight matching each reference pixel point; and perform weighted summation on the pixel value of the first reference pixel point and the pixel value of the second reference pixel point by using the weights, to obtain the target pixel value.
One or more non-volatile computer-readable storage media store computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps:

determining, from a currently to-be-processed decoded video frame, at least one pair of decoded blocks to be reconstructed, where each pair of decoded blocks in the at least one pair of decoded blocks includes a first decoded block with a first resolution and a second decoded block with a second resolution, the first decoded block and the second decoded block being adjacent decoded blocks;

adjusting the first resolution of the first decoded block to a target resolution, and adjusting the second resolution of the second decoded block to the target resolution;

determining a first edge pixel point set from the first decoded block, and determining a second edge pixel point set from the second decoded block, where the position of the first edge pixel point set is adjacent to the position of the second edge pixel point set; and

performing filtering processing on the first edge pixel point set to obtain a filtered first edge pixel point set, and performing filtering processing on the second edge pixel point set to obtain a filtered second edge pixel point set, where the filtered first edge pixel point set matches the filtered second edge pixel point set.
An electronic apparatus includes a memory and one or more processors, the memory storing computer-readable instructions that, when executed by the processors, cause the one or more processors to perform the following steps:

determining, from a currently to-be-processed decoded video frame, at least one pair of decoded blocks to be reconstructed, where each pair of decoded blocks in the at least one pair of decoded blocks includes a first decoded block with a first resolution and a second decoded block with a second resolution, the first decoded block and the second decoded block being adjacent decoded blocks;

adjusting the first resolution of the first decoded block to a target resolution, and adjusting the second resolution of the second decoded block to the target resolution;

determining a first edge pixel point set from the first decoded block, and determining a second edge pixel point set from the second decoded block, where the position of the first edge pixel point set is adjacent to the position of the second edge pixel point set; and

performing filtering processing on the first edge pixel point set to obtain a filtered first edge pixel point set, and performing filtering processing on the second edge pixel point set to obtain a filtered second edge pixel point set, where the filtered first edge pixel point set matches the filtered second edge pixel point set.
One or more non-volatile computer-readable storage media store computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps:

determining, from a currently to-be-processed encoded video frame, at least one pair of encoding blocks to be reconstructed, where each pair of encoding blocks in the at least one pair of encoding blocks includes a first encoding block with a first resolution and a second encoding block with a second resolution, the first encoding block and the second encoding block being adjacent encoding blocks;

adjusting the first resolution of the first encoding block to a target resolution, and adjusting the second resolution of the second encoding block to the target resolution;

determining a first edge pixel point set from the first encoding block, and determining a second edge pixel point set from the second encoding block, where the position of the first edge pixel point set is adjacent to the position of the second edge pixel point set; and

performing filtering processing on the first edge pixel point set to obtain a filtered first edge pixel point set, and performing filtering processing on the second edge pixel point set to obtain a filtered second edge pixel point set, where the filtered first edge pixel point set matches the filtered second edge pixel point set.
An electronic apparatus includes a memory and one or more processors, the memory storing computer-readable instructions that, when executed by the processors, cause the one or more processors to perform the following steps:

determining, from a currently to-be-processed encoded video frame, at least one pair of encoding blocks to be reconstructed, where each pair of encoding blocks in the at least one pair of encoding blocks includes a first encoding block with a first resolution and a second encoding block with a second resolution, the first encoding block and the second encoding block being adjacent encoding blocks;

adjusting the first resolution of the first encoding block to a target resolution, and adjusting the second resolution of the second encoding block to the target resolution;

determining a first edge pixel point set from the first encoding block, and determining a second edge pixel point set from the second encoding block, where the position of the first edge pixel point set is adjacent to the position of the second edge pixel point set; and

performing filtering processing on the first edge pixel point set to obtain a filtered first edge pixel point set, and performing filtering processing on the second edge pixel point set to obtain a filtered second edge pixel point set, where the filtered first edge pixel point set matches the filtered second edge pixel point set.
Brief Description of the Drawings

To describe the technical solutions in the embodiments of this application more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments. Apparently, the accompanying drawings in the following description show merely some embodiments of this application, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative effort.

FIG. 1 is a schematic diagram of an application environment of an optional video processing method according to an embodiment of this application;

FIG. 2 is a flowchart of an optional video processing method according to an embodiment of this application;

FIG. 3 is a schematic diagram of an optional video processing method according to an embodiment of this application;

FIG. 4 is a schematic diagram of another optional video processing method according to an embodiment of this application;

FIG. 5 is a schematic diagram of yet another optional video processing method according to an embodiment of this application;

FIG. 6 is a flowchart of another optional video processing method according to an embodiment of this application;

FIG. 7 is a schematic structural diagram of an optional video processing apparatus according to an embodiment of this application;

FIG. 8 is a schematic structural diagram of another optional video processing apparatus according to an embodiment of this application;

FIG. 9 is a schematic structural diagram of yet another optional video processing apparatus according to an embodiment of this application;

FIG. 10 is a schematic structural diagram of an optional electronic apparatus according to an embodiment of this application.
Detailed Description

To make a person skilled in the art better understand the solutions of this application, the following clearly and completely describes the technical solutions in the embodiments of this application with reference to the accompanying drawings in the embodiments of this application. Apparently, the described embodiments are merely some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort shall fall within the protection scope of this application.

It should be noted that the terms "first", "second", and the like in the specification, claims, and accompanying drawings of this application are used to distinguish similar objects, and are not necessarily used to describe a specific order or sequence. It should be understood that data used in this way is interchangeable where appropriate, so that the embodiments of this application described herein can be implemented in an order other than those illustrated or described herein. Moreover, the terms "include", "have", and any variants thereof are intended to cover a non-exclusive inclusion. For example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to those steps or units expressly listed, but may include other steps or units not expressly listed or inherent to such a process, method, product, or device.

According to one aspect of the embodiments of this application, a video processing method is provided. Optionally, as an optional implementation, the video processing method may be, but is not limited to being, applied to the video processing system in the application environment shown in FIG. 1. The video processing system includes a terminal 102 and a server 104, and the terminal 102 and the server 104 communicate through a network. The terminal 102 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, or the like. The server 104 may be, but is not limited to, a computer processing device with strong data processing capability and a certain storage space.

It should be noted that, through the interaction between the terminal 102 and the server 104 shown in FIG. 1, position-adjacent edge pixel point sets are determined from position-adjacent encoding blocks or decoding blocks that use different resolutions in the video encoding/decoding process, so as to perform edge filtering processing on the edge pixel point sets. This avoids obvious seams in the reconstructed video frame, overcomes the problem of video distortion caused by different resolutions in the related art, enables the filtered encoding blocks or decoding blocks to truly restore the content in the video frame, and improves encoding/decoding efficiency.

In one embodiment, the terminal 102 may include, but is not limited to, the following components: an image processing unit 1021, a processor 1022, a storage medium 1023, a memory 1024, a network interface 1025, a display screen 1026, and an input device 1027. The above components may be, but are not limited to being, connected through a system bus 1028. The image processing unit 1021 is configured to provide at least a drawing capability for the display interface; the processor 1022 is configured to provide computing and control capabilities to support the running of the terminal 102; the storage medium 1023 stores an operating system 1023-2 and a video encoder and/or video decoder 1023-4. The operating system 1023-2 is configured to provide control operation instructions, and the video encoder and/or video decoder 1023-4 is configured to perform encoding/decoding operations according to the control operation instructions. In addition, the memory provides a running environment for the video encoder and/or video decoder 1023-4 in the storage medium 1023, and the network interface 1025 is configured to perform network communication with the network interface 1043 in the server 104. The display screen is configured to display an application interface and the like, such as a decoded video; the input device 1027 is configured to receive commands, data, or the like input by a user. For a terminal 102 with a touch screen, the display screen 1026 and the input device 1027 may be the touch screen. The internal structure of the terminal shown in FIG. 1 is merely a block diagram of part of the structure related to the solution of this application, and does not constitute a limitation on the terminal to which the solution of this application is applied. A specific terminal or server may include more or fewer components than shown in the figure, or combine certain components, or have a different component arrangement.

In one embodiment, the server 104 may include, but is not limited to, the following components: a processor 1041, a memory 1042, a network interface 1043, and a storage medium 1044. The above components may be, but are not limited to being, connected through a system bus 1045. The storage medium 1044 includes an operating system 1044-1, a database 1044-2, and a video encoder and/or video decoder 1044-3. The processor 1041 is configured to provide computing and control capabilities to support the running of the server 104. The memory 1042 provides an environment for the running of the video encoder and/or video decoder 1044-3 in the storage medium 1044. The network interface 1043 communicates with the network interface 1025 of the external terminal 102 through a network connection. The operating system 1044-1 in the storage medium is configured to provide control operation instructions; the video encoder and/or video decoder 1044-3 is configured to perform encoding/decoding operations according to the control operation instructions; and the database 1044-2 is configured to store data. The internal structure of the server shown in FIG. 1 is merely a block diagram of part of the structure related to the solution of this application, and does not constitute a limitation on the computer device to which the solution of this application is applied; a specific computer device may have a different component arrangement.

In one embodiment, the above network may include, but is not limited to, a wired network. The wired network may include, but is not limited to, a wide area network, a metropolitan area network, and a local area network. The above is merely an example, and is not limited in this embodiment.
Optionally, as an optional implementation, as shown in FIG. 2, the above video processing method may be applied to the decoding side, and the method includes:

S202: Determine, from a currently to-be-processed decoded video frame, at least one pair of decoded blocks to be reconstructed, where each pair of decoded blocks in the at least one pair of decoded blocks includes a first decoded block with a first resolution and a second decoded block with a second resolution, the first decoded block and the second decoded block being adjacent decoded blocks;

S204: Adjust the first resolution of the first decoded block to a target resolution, and adjust the second resolution of the second decoded block to the target resolution;

S206: Determine a first edge pixel point set from the first decoded block, and determine a second edge pixel point set from the second decoded block, where the position of the first edge pixel point set is adjacent to the position of the second edge pixel point set;

S208: Perform filtering processing on the first edge pixel point set to obtain a filtered first edge pixel point set, and perform filtering processing on the second edge pixel point set to obtain a filtered second edge pixel point set, where the filtered first edge pixel point set matches the filtered second edge pixel point set.

It should be noted that the video processing method shown in FIG. 2 may be, but is not limited to being, used in the video decoder shown in FIG. 1. The above video processing process is completed through the interaction between the video decoder and other components.

Optionally, in this embodiment, the above video processing method may be, but is not limited to being, applied to application scenarios such as video playback applications, video sharing applications, or video session applications. The video transmitted in the above application scenarios may include, but is not limited to, long videos and short videos. For example, a long video may be a played episode with a long playback time (for example, a playback duration greater than 10 minutes) or a picture displayed in a long video session; a short video may be a voice message exchanged between two or more parties, or a video with a short playback time (for example, a playback duration of 30 seconds or less) displayed on a sharing platform. The above is merely an example; the video processing method provided in this embodiment may be, but is not limited to being, applied to a playback device for playing videos in the above application scenarios. After the currently to-be-processed decoded video frame is obtained, at least one pair of decoded blocks to be reconstructed is determined, where each pair of decoded blocks in the at least one pair of decoded blocks includes a first decoded block with a first resolution and a second decoded block with a second resolution. By adjusting the resolutions of the decoded blocks and performing edge filtering processing on the edge pixel point sets determined from the decoded blocks, obvious seams in the video can be avoided during reconstruction, overcoming the problem of video distortion in the related art.
Optionally, in this embodiment, adjusting the first resolution of the first decoded block to the target resolution and adjusting the second resolution of the second decoded block to the target resolution includes:

when the target resolution is equal to the first resolution, adjusting the second resolution to the first resolution;

when the target resolution is equal to the second resolution, adjusting the first resolution to the second resolution; and

when the target resolution is equal to a third resolution, adjusting the first resolution to the third resolution and adjusting the second resolution to the third resolution, where the third resolution is different from the first resolution and different from the second resolution.

It should be noted that, in this embodiment, the third resolution may include, but is not limited to, one of the following: the original resolution of the decoded video frame, or the highest resolution obtained by up-sampling the decoded video frame. That is, in this embodiment, when unifying the resolutions of the first decoded block and the second decoded block, the resolutions may be unified to the resolution used by either decoded block of a pair of decoded blocks. In addition, the resolutions may also be unified to the third resolution, where the third resolution is a pre-agreed resolution.

For example, assuming that the original resolution of the first decoded block and the second decoded block is 8*8, the first resolution used by the first decoded block is 4*4, and the second resolution used by the second decoded block is 16*16, the target resolution may be, but is not limited to, one of the following: the original resolution 8*8, the first resolution 4*4, or the second resolution 16*16. Resolution unification is achieved by up-sampling or down-sampling the decoded blocks.

Optionally, in this embodiment, adjusting the first resolution of the first decoded block to the target resolution and adjusting the second resolution of the second decoded block to the target resolution includes:

when the first resolution and/or the second resolution is greater than the target resolution, down-sampling the first resolution and/or the second resolution to obtain the target resolution; and

when the first resolution and/or the second resolution is less than the target resolution, up-sampling the first resolution and/or the second resolution to obtain the target resolution.

Optionally, in this embodiment, determining the first edge pixel point set from the first decoded block and determining the second edge pixel point set from the second decoded block may include, but is not limited to: obtaining pre-configured pixel row positions and/or pixel column positions; and determining the pixel points at the pixel row positions and/or pixel column positions as the first edge pixel point set and the second edge pixel point set.

It should be noted that if the first decoded block and the second decoded block are row-adjacent, the first edge pixel point set and the second edge pixel point set are determined according to pre-configured pixel row positions; if the first decoded block and the second decoded block are column-adjacent, the first edge pixel point set and the second edge pixel point set are determined according to pre-configured pixel column positions. As shown in FIG. 3, assume that the decoded video frame includes a pair of row-adjacent decoded blocks: decoded block A (including circular pixel points, as shown in part (b) of FIG. 3) and decoded block B (including square pixel points, as shown in part (b) of FIG. 3). When it is determined that the two are row-adjacent, the first edge pixel point set may be determined from the pixel points on the side of decoded block A adjacent to decoded block B, such as the n1 solid circles shown in part (b) of FIG. 3; and the second edge pixel point set may be determined from the pixel points on the side of decoded block B adjacent to decoded block A, such as the n2 solid squares shown in part (b) of FIG. 3.

In addition, in this embodiment, after the resolutions of the first decoded block and the second decoded block are unified to the target resolution, filtering processing is performed on the first edge pixel point set to obtain the filtered first edge pixel point set, and filtering processing is performed on the second edge pixel point set to obtain the filtered second edge pixel point set, so that the pixel values of the pixel points in the first edge pixel point set are adapted to the pixel values of the pixel points in the second edge pixel point set, avoiding obvious seams and reducing subjective visual disparity.
Optionally, in this embodiment, performing filtering processing on the first edge pixel point set and performing filtering processing on the second edge pixel point set may include, but is not limited to: determining a current edge pixel point from the first edge pixel point set and the second edge pixel point set; performing weighted summation on the pixel value of a first reference pixel point associated with the first edge pixel point set and the pixel value of a second reference pixel point associated with the second edge pixel point set, to obtain a target pixel value; and updating the pixel value of the current edge pixel point with the target pixel value. The weights used in the weighted summation are determined according to the distance between a reference pixel point and the current edge pixel point.

It should be noted that, in this embodiment, the first reference pixel point may be, but is not limited to, a reference pixel point among the pixel points located in the first decoded block on the side adjacent to the second decoded block. The second reference pixel point may be, but is not limited to, a reference pixel point among the pixel points located in the second decoded block on the side adjacent to the first decoded block. The pixel values of the pixel points in the first edge pixel point set and the second edge pixel point set are updated by using the above reference pixel points, thereby implementing edge filtering processing on the first decoded block and the second decoded block unified to the target resolution.

Optionally, in this embodiment, after the filtered first edge pixel point set and the filtered second edge pixel point set are obtained, rendering and display may be, but is not limited to being, performed according to the filtered first edge pixel point set and the filtered second edge pixel point set; the decoded video frame containing the filtered first edge pixel point set and the filtered second edge pixel point set may also be, but is not limited to being, used as a reference frame for the decoding of subsequent video frames.

Through the embodiments provided in this application, after the currently to-be-processed decoded video frame is obtained, at least one pair of decoded blocks to be reconstructed is determined, where each pair of decoded blocks in the at least one pair of decoded blocks includes a first decoded block with a first resolution and a second decoded block with a second resolution. By adjusting the resolutions of the decoded blocks and performing edge filtering processing on the edge pixel point sets determined from the decoded blocks, obvious seams in the video can be avoided during reconstruction, overcoming the problem of video distortion in the related art.

As an optional solution, performing filtering processing on the first edge pixel point set to obtain the filtered first edge pixel point set, and performing filtering processing on the second edge pixel point set to obtain the filtered second edge pixel point set includes:

determining, from the first decoded block, a first reference pixel point associated with the first edge pixel point set, and determining, from the second decoded block, a second reference pixel point associated with the second edge pixel point set; and

performing filtering processing on the first edge pixel point set and the second edge pixel point set according to the pixel value of the first reference pixel point and the pixel value of the second reference pixel point, where a first difference between the pixel value of the i-th pixel point in the filtered first edge pixel point set and the pixel value of the j-th pixel point, corresponding to the i-th pixel point, in the filtered second edge pixel point set is smaller than a second difference between the pixel value of the i-th pixel point in the first edge pixel point set and the pixel value of the j-th pixel point in the second edge pixel point set, i being a positive integer less than or equal to the total number of pixel points in the first edge pixel point set, and j being a positive integer less than or equal to the total number of pixel points in the second edge pixel point set.

It should be noted that the first reference pixel point associated with the first edge pixel point set may include, but is not limited to, multiple pixel points located in the first decoded block on the side adjacent to the second decoded block, the multiple pixel points including the first edge pixel point set; the second reference pixel point associated with the second edge pixel point set may include, but is not limited to, multiple pixel points located in the second decoded block on the side adjacent to the first decoded block, the multiple pixel points including the second edge pixel point set. That is, the pixel values of the edge pixel points are determined with reference to the pixel values of multiple pixel points in the adjacent sides (such as adjacent rows/adjacent columns) of the two decoded blocks, thereby achieving the effect of edge filtering on the decoded blocks after resolution unification and avoiding obvious seams in the video restored after decoding.

A specific description is given with reference to FIG. 4: assume that the decoded video frame includes a pair of decoded blocks: decoded block A and decoded block B. When it is determined that the two are row-adjacent, the first edge pixel point set may be determined from the pixel points on the side of decoded block A adjacent to decoded block B, such as the n1 solid circles shown in part (a) of FIG. 4; and the second edge pixel point set may be determined from the pixel points on the side of decoded block B adjacent to decoded block A, such as the n2 solid squares shown in part (a) of FIG. 4.

The first reference pixel points associated with the first edge pixel point set are determined according to the n1 solid circles, such as the m1 hatched circles shown in part (b) of FIG. 4; and the second reference pixel points associated with the second edge pixel point set are determined according to the n2 solid squares, such as the m2 hatched squares shown in part (b) of FIG. 4.

Further assume that the solid circle in the dashed box in decoded block A, as shown in part (a) of FIG. 4, is the current i-th pixel point, and the solid square in the dashed box in decoded block B is the j-th pixel point corresponding to the i-th pixel point. After filtering processing is performed on the first edge pixel point set and the second edge pixel point set according to the pixel value of the first reference pixel point and the pixel value of the second reference pixel point, a first difference f1 between the pixel value of the i-th pixel point in decoded block A and the pixel value of the j-th pixel point in decoded block B after filtering is determined, and a second difference f2 between the pixel value of the i-th pixel point in decoded block A and the pixel value of the j-th pixel point in decoded block B before filtering is determined. By comparing the two differences, it can be determined that f1 < f2; that is, the difference between the pixel values of the pixel points in the filtered edge pixel sets of the two decoded blocks is smaller.
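The f1 < f2 relationship described above can be illustrated with a small check. The pixel values below are invented for illustration only; they merely show that moving both edge rows toward a shared value shrinks the per-pixel difference across the block boundary.

```python
def boundary_gap(edge_a, edge_b):
    """Per-pixel absolute difference across the boundary between
    the edge pixel sets of two adjacent blocks."""
    return [abs(x - y) for x, y in zip(edge_a, edge_b)]

# Edge rows of blocks A and B before filtering (illustrative seam).
a_before, b_before = [100, 100, 100], [160, 160, 160]
# After weighted filtering, both rows have moved toward a shared value.
a_after, b_after = [124, 124, 124], [136, 136, 136]

f2 = boundary_gap(a_before, b_before)  # second difference (before filtering)
f1 = boundary_gap(a_after, b_after)    # first difference (after filtering)
# Every filtered difference is smaller than its unfiltered counterpart.
```

In this toy example f2 is 60 at every position while f1 is 12, matching the claimed property that the filtered edge pixel sets differ less than the unfiltered ones.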
Through the embodiments provided in this application, the first edge pixel point set and the second edge pixel point set are filtered by using the pixel value of the first reference pixel point and the pixel value of the second reference pixel point, so that the filtering processing is completed by fusing the pixel values of the pixel points adjacent to the edge pixel points. This adapts the pixel values of the edge pixel points to each other, avoids obvious seams during video playback, reduces the user's subjective visual disparity, and improves the user's viewing experience.

As an optional solution, performing filtering processing on the first edge pixel point set and the second edge pixel point set according to the pixel value of the first reference pixel point and the pixel value of the second reference pixel point includes:

performing the following steps in sequence until the first edge pixel point set and the second edge pixel point set are traversed:

determining a current edge pixel point from the first edge pixel point set and the second edge pixel point set;

performing weighted summation on the pixel value of the first reference pixel point and the pixel value of the second reference pixel point to obtain a target pixel value; and

updating the pixel value of the current edge pixel point to the target pixel value, to obtain the filtered current edge pixel point.
It should be noted that, in this embodiment, for decoded blocks with different resolutions, after the resolutions are unified to the target resolution, edge filtering processing is further performed on the pixel values of the adjacent edge pixel points in the decoded blocks, so that the edge pixel points are smoothed.

Optionally, in this embodiment, the above step of performing weighted summation on the pixel value of the first reference pixel point and the pixel value of the second reference pixel point to obtain the target pixel value includes:

determining the position of the current edge pixel point;

sequentially obtaining the distance between the position of each reference pixel point among the first reference pixel point and the second reference pixel point and the position of the current edge pixel point;

determining, according to the distance, a weight matching each reference pixel point; and

performing weighted summation on the pixel value of the first reference pixel point and the pixel value of the second reference pixel point by using the weights, to obtain the target pixel value.

A specific description is given with reference to FIG. 5: assume that the first edge pixel point set in the first decoded block A is the solid circles shown in FIG. 5, and the second edge pixel point set in the second decoded block B is the solid squares shown in FIG. 5. In addition, the first reference pixel points in the first decoded block A are all the circles shown in FIG. 5, and the second reference pixel points in the second decoded block B are all the squares shown in FIG. 5; the pixel values of the reference pixel points from left to right are Q1 to Q7.

Further, assume that the current edge pixel point is determined to be the current edge pixel point P shown in FIG. 5 (the pixel point selected by the dashed box in the figure), located at the second column of the first row of the second decoded block B. In addition, the distances between the positions of the reference pixel points and the position of the current edge pixel point P are obtained in sequence, for example, from left to right: d1 to d7, and corresponding weights are determined according to these different distances, for example, from left to right: α1 to α7. That is, the weight corresponding to d1 is α1, and so on.

The target pixel value Q is calculated based on the above assumptions, with the following formula:

Q = Q1*α1 + Q2*α2 + Q3*α3 + Q4*α4 + Q5*α5 + Q6*α6 + Q7*α7

Each edge pixel point in the first edge pixel point set and the second edge pixel point set is taken as the current edge pixel point in turn, and the above steps are repeated to determine the pixel value of each edge pixel point, thereby achieving the purpose of performing filtering processing on the first edge pixel point set and the second edge pixel point set.
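The formula above can be evaluated directly. The pixel values Q1..Q7 and the weights α1..α7 below are invented for illustration; the weights are assumed to be normalized so that they sum to 1, which keeps the target pixel value inside the range of the reference pixel values.

```python
# Reference pixel values Q1..Q7 (illustrative numbers only).
q = [100, 104, 110, 130, 150, 156, 160]
# Weights alpha1..alpha7, derived from the distances d1..d7 in the text;
# assumed here to be normalized so that they sum to 1.
alpha = [0.05, 0.10, 0.20, 0.30, 0.20, 0.10, 0.05]

# Q = Q1*a1 + Q2*a2 + Q3*a3 + Q4*a4 + Q5*a5 + Q6*a6 + Q7*a7
target_value = sum(qi * ai for qi, ai in zip(q, alpha))
```

With normalized weights the result is a convex combination of the reference values, so the filtered edge pixel can never fall outside the range spanned by Q1..Q7.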
Further, after the filtering processing is performed on the first edge pixel point set and the second edge pixel point set, a first difference f1 between the pixel value of the i-th pixel point in decoded block A and the pixel value of the j-th pixel point in decoded block B after filtering is determined, and a second difference f2 between the pixel value of the i-th pixel point in decoded block A and the pixel value of the j-th pixel point in decoded block B before filtering is determined. By comparing the two differences, it can be determined that f1 < f2; that is, the difference between the pixel values of the pixel points in the filtered edge pixel sets of the two decoded blocks is smaller, where i denotes the i-th pixel point in the first edge pixel point set, i being a positive integer, and j denotes the j-th pixel point in the second edge pixel point set, j being a positive integer.

Through the embodiments provided in this application, the pixel value of each edge pixel point is calculated by fusing the pixel values of the reference pixel points, achieving the purpose of edge filtering processing, thereby avoiding obvious seams after the video is restored and reducing the user's subjective visual disparity.

As an optional solution, determining the first edge pixel point set from the first decoded block and determining the second edge pixel point set from the second decoded block includes:

obtaining a first row position and/or a first column position pre-configured in the first decoded block, and a second row position and/or a second column position pre-configured in the second decoded block; and

determining the first edge pixel point set according to the first row position and/or the first column position, and determining the second edge pixel point set according to the second row position and/or the second column position.

It should be noted that when the first decoded block and the second decoded block are row-adjacent, the pre-configured first column position and second column position are obtained; when the first decoded block and the second decoded block are column-adjacent, the pre-configured first row position and second row position are obtained. The pre-configured first row position/second row position may be, but is not limited to, one or more row positions in a decoded block, and the pre-configured first column position/second column position may be, but is not limited to, one or more column positions in a decoded block. In addition, the number of first row positions and the number of second row positions may be equal or unequal; the number of first column positions and the number of second column positions may be equal or unequal. The above numbers may be determined according to the specific application scenario, and are not limited in this embodiment.

For example, as shown in FIG. 5, among the pixel points in the multiple rows and columns of the first decoded block A, the first edge pixel point set may be determined as the pixel points indicated by the solid circles shown in FIG. 5; among the pixel points in the multiple rows and columns of the second decoded block B, the second edge pixel point set may be determined as the pixel points indicated by the solid squares shown in FIG. 5.

Through the embodiments provided in this application, the edge pixel point sets are determined according to pre-configured row positions/column positions, thereby implementing unified edge filtering processing on the decoded blocks.
As an optional solution, adjusting the first resolution of the first decoded block to the target resolution and adjusting the second resolution of the second decoded block to the target resolution includes:

when the target resolution is equal to the first resolution, adjusting the second resolution to the first resolution;

when the target resolution is equal to the second resolution, adjusting the first resolution to the second resolution; and

when the target resolution is equal to a third resolution, adjusting the first resolution to the third resolution and adjusting the second resolution to the third resolution, where the third resolution is different from the first resolution and the third resolution is different from the second resolution.

It should be noted that, in this embodiment, when unifying the resolutions, the resolutions may be unified to either resolution of a pair of decoded blocks, such as the first resolution or the second resolution; in addition, they may also be unified to a third resolution, where the third resolution may be, but is not limited to, the original resolution of the decoded blocks or the highest resolution supported by the decoded blocks. The above is merely an example, and is not limited in this embodiment.

Optionally, in this embodiment, adjusting the first resolution of the first decoded block to the target resolution and adjusting the second resolution of the second decoded block to the target resolution includes:

when the first resolution is greater than the target resolution, down-sampling the first resolution to obtain the target resolution; or, when the first resolution is less than the target resolution, up-sampling the first resolution to obtain the target resolution; and

when the second resolution is greater than the target resolution, down-sampling the second resolution to obtain the target resolution; or, when the second resolution is less than the target resolution, up-sampling the second resolution to obtain the target resolution.

A specific description is given with the following example: assume that the original resolution of the first decoded block and the second decoded block is 8*8, the first resolution used by the first decoded block is 4*4, the second resolution used by the second decoded block is 16*16, and the target resolution is the original resolution 8*8.

After comparison, it is determined that the first resolution is less than the target resolution, so the first resolution 4*4 is up-sampled to obtain the target resolution 8*8; the second resolution is greater than the target resolution, so the second resolution 16*16 is down-sampled to obtain the target resolution 8*8. Further, edge filtering processing is performed on the first decoded block and the second decoded block whose resolutions are unified to the target resolution 8*8. For the processing process, reference may be made to the above embodiments, and details are not repeated here in this embodiment.
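The comparison step in this example can be sketched as a small helper that decides, per block, which sampling operation unifies its resolution with the target. The function name and return format are assumptions for illustration only.

```python
def sampling_plan(side, target_side):
    """Decide how to unify a block's resolution (given as a side
    length in pixels) with the target resolution: down-sample if
    larger, up-sample if smaller, leave unchanged if equal."""
    if side > target_side:
        return ("down-sample", side // target_side)
    if side < target_side:
        return ("up-sample", target_side // side)
    return ("none", 1)

# First block: 4*4 against an 8*8 target; second block: 16*16 against 8*8.
plan_first = sampling_plan(4, 8)    # up-sample by a factor of 2
plan_second = sampling_plan(16, 8)  # down-sample by a factor of 2
```

Integer factors are assumed here for simplicity; non-integer scale ratios would require interpolation rather than a plain factor.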
Through the embodiments provided in this application, the first resolution and the second resolution are unified through up-sampling interpolation or down-sampling decimation, so that the first decoded block and the second decoded block with different resolutions can have their resolutions unified to the target resolution. This facilitates the subsequent edge filtering processing operations and thereby overcomes the problem of video distortion caused by different resolutions in the related art.
According to another aspect of the embodiments of this application, a video processing method is provided. Optionally, as an optional implementation, the video processing method may be, but is not limited to being, applied to the environment shown in FIG. 1.

Optionally, as an optional implementation, as shown in FIG. 6, the above video processing method may be applied to the encoding side, and the method includes:

S602: Determine, from a currently to-be-processed encoded video frame, at least one pair of encoding blocks to be reconstructed, where each pair of encoding blocks in the at least one pair of encoding blocks includes a first encoding block with a first resolution and a second encoding block with a second resolution, the first encoding block and the second encoding block being adjacent encoding blocks;

S604: Adjust the first resolution of the first encoding block to a target resolution, and adjust the second resolution of the second encoding block to the target resolution;

S606: Determine a first edge pixel point set from the first encoding block, and determine a second edge pixel point set from the second encoding block, where the position of the first edge pixel point set is adjacent to the position of the second edge pixel point set;

S608: Perform filtering processing on the first edge pixel point set to obtain a filtered first edge pixel point set, and perform filtering processing on the second edge pixel point set to obtain a filtered second edge pixel point set, where the filtered first edge pixel point set matches the filtered second edge pixel point set.

It should be noted that the video processing method shown in FIG. 6 may be, but is not limited to being, used in the video encoder shown in FIG. 1. The above video processing process is completed through the interaction between the video encoder and other components.

Optionally, in this embodiment, the above video processing method may be, but is not limited to being, applied to application scenarios such as video playback applications, video sharing applications, or video session applications. The video transmitted in the above application scenarios may include, but is not limited to, long videos and short videos. For example, a long video may be a played episode with a long playback time (for example, a playback duration greater than 10 minutes) or a picture displayed in a long video session; a short video may be a voice message exchanged between two or more parties, or a video with a short playback time (for example, a playback duration of 30 seconds or less) displayed on a sharing platform. The above is merely an example; the video processing method provided in this embodiment may be, but is not limited to being, applied to a playback device for playing videos in the above application scenarios. After the currently to-be-processed encoded video frame is obtained, at least one pair of encoding blocks to be reconstructed is determined, where each pair of encoding blocks in the at least one pair of encoding blocks includes a first encoding block with a first resolution and a second encoding block with a second resolution. By adjusting the resolutions of the encoding blocks and performing edge filtering processing on the edge pixel point sets determined from the encoding blocks, obvious seams in the video can be avoided during reconstruction, overcoming the problem of video distortion in the related art.
As an optional solution, performing filtering processing on the first edge pixel point set to obtain the filtered first edge pixel point set, and performing filtering processing on the second edge pixel point set to obtain the filtered second edge pixel point set includes:

determining, from the first encoding block, a first reference pixel point associated with the first edge pixel point set, and determining, from the second encoding block, a second reference pixel point associated with the second edge pixel point set; and

performing filtering processing on the first edge pixel point set and the second edge pixel point set according to the pixel value of the first reference pixel point and the pixel value of the second reference pixel point, where a first difference between the pixel value of the i-th pixel point in the filtered first edge pixel point set and the pixel value of the j-th pixel point, corresponding to the i-th pixel point, in the filtered second edge pixel point set is smaller than a second difference between the pixel value of the i-th pixel point in the first edge pixel point set and the pixel value of the j-th pixel point in the second edge pixel point set, i being a positive integer less than or equal to the total number of pixel points in the first edge pixel point set, and j being a positive integer less than or equal to the total number of pixel points in the second edge pixel point set.

As an optional solution, performing filtering processing on the first edge pixel point set and the second edge pixel point set according to the pixel value of the first reference pixel point and the pixel value of the second reference pixel point includes:

performing the following steps in sequence until the first edge pixel point set and the second edge pixel point set are traversed:

determining a current edge pixel point from the first edge pixel point set and the second edge pixel point set;

performing weighted summation on the pixel value of the first reference pixel point and the pixel value of the second reference pixel point to obtain a target pixel value; and

updating the pixel value of the current edge pixel point to the target pixel value, to obtain the filtered current edge pixel point.

As an optional solution, performing weighted summation on the pixel value of the first reference pixel point and the pixel value of the second reference pixel point to obtain the target pixel value includes:

determining the position of the current edge pixel point;

sequentially obtaining the distance between the position of each reference pixel point among the first reference pixel point and the second reference pixel point and the position of the current edge pixel point;

determining, according to the distance, a weight matching each reference pixel point; and

performing weighted summation on the pixel value of the first reference pixel point and the pixel value of the second reference pixel point by using the weights, to obtain the target pixel value.

In addition, in this embodiment, for the related operations of unifying resolutions and the edge filtering processing operations, reference may be made to the above embodiments on the decoding side, and details are not repeated here in this embodiment.
It should be noted that, for brevity of description, the foregoing method embodiments are all expressed as a series of action combinations. However, a person skilled in the art should know that this application is not limited by the described action sequence, because according to this application, some steps may be performed in other orders or simultaneously. Secondly, a person skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by this application.

According to another aspect of the embodiments of this application, a video processing apparatus for implementing the above video processing method is further provided. As shown in FIG. 7, applied to the decoding side, the apparatus includes:

a first determining unit 702, configured to determine, from a currently to-be-processed decoded video frame, at least one pair of decoded blocks to be reconstructed, where each pair of decoded blocks in the at least one pair of decoded blocks includes a first decoded block with a first resolution and a second decoded block with a second resolution, the first decoded block and the second decoded block being adjacent decoded blocks;

an adjustment unit 704, configured to adjust the first resolution of the first decoded block to a target resolution, and adjust the second resolution of the second decoded block to the target resolution;

a second determining unit 706, configured to determine a first edge pixel point set from the first decoded block, and determine a second edge pixel point set from the second decoded block, where the position of the first edge pixel point set is adjacent to the position of the second edge pixel point set; and

a filtering processing unit 708, configured to perform filtering processing on the first edge pixel point set to obtain a filtered first edge pixel point set, and perform filtering processing on the second edge pixel point set to obtain a filtered second edge pixel point set, where the filtered first edge pixel point set matches the filtered second edge pixel point set.

For specific embodiments, reference may be made to the examples shown in the above video processing method on the decoding side, and details are not repeated here in this example.

As an optional solution, the filtering processing unit 708 includes:

a first determining module, configured to determine, from the first decoded block, a first reference pixel point associated with the first edge pixel point set, and determine, from the second decoded block, a second reference pixel point associated with the second edge pixel point set; and

a filtering processing module, configured to perform filtering processing on the first edge pixel point set and the second edge pixel point set according to the pixel value of the first reference pixel point and the pixel value of the second reference pixel point, where a first difference between the pixel value of the i-th pixel point in the filtered first edge pixel point set and the pixel value of the j-th pixel point, corresponding to the i-th pixel point, in the filtered second edge pixel point set is smaller than a second difference between the pixel value of the i-th pixel point in the first edge pixel point set and the pixel value of the j-th pixel point in the second edge pixel point set, i being a positive integer less than or equal to the total number of pixel points in the first edge pixel point set, and j being a positive integer less than or equal to the total number of pixel points in the second edge pixel point set.

For specific embodiments, reference may be made to the examples shown in the above video processing method on the decoding side, and details are not repeated here in this example.
作为一种可选的方案,滤波处理模块包括:
处理子模块,用于依次执行以下步骤,直至遍历第一边缘像素点集和第二边缘像素点集:
从第一边缘像素点集和第二边缘像素点集中确定当前边缘像素点;
对第一参考像素点的像素值与第二参考像素点的像素值进行加权求和,得到目标像素值;
将当前边缘像素点的像素值更新为目标像素值,以得到滤波后的当前边缘像素点。
具体实施例可以参考上述解码侧的视频处理方法中所示示例,本示例中在此不再赘述。
作为一种可选的方案,处理子模块通过以下步骤实现对第一参考像素点的像素值与第二参考像素点的像素值进行加权求和,得到目标像素值:
确定当前边缘像素点的位置;
依次获取第一参考像素点和第二参考像素点中每个参考像素点的位置与当前边缘像素点的位置之间的距离;
根据距离确定与每个参考像素点匹配的权重;
利用权重对第一参考像素点的像素值与第二参考像素点的像素值进行加权求和,以得到目标像素值。
具体实施例可以参考上述解码侧的视频处理方法中所示示例,本示例中在此不再赘述。
作为一种可选的方案,第二确定单元706包括:
获取模块,用于获取在第一解码块中预先配置的第一行位置和/或第一列位置,以及在第二解码块中预先配置的第二行位置和/或第二列位置;
第三确定模块,用于根据第一行位置和/或第一列位置确定出第一边缘像素点集,并根据第二行位置和/或第二列位置确定出第二边缘像素点集。
具体实施例可以参考上述解码侧的视频处理方法中所示示例,本示例中在此不再赘述。
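根据预先配置的行位置和/或列位置确定边缘像素点集的过程,可以用如下 Python 草图示意。块用二维列表表示、行列号从 0 开始、返回 {(行, 列): 像素值} 字典,这些均为本示例的假设:

```python
def edge_pixel_set(block, rows=None, cols=None):
    """按预先配置的行位置和/或列位置,从块中确定边缘像素点集。

    rows/cols 为行号、列号列表,任一为 None 时表示该方向未配置。
    """
    h, w = len(block), len(block[0])
    selected = set()
    for r in rows or []:                      # 预先配置的行位置:整行像素点入集
        selected.update((r, c) for c in range(w))
    for c in cols or []:                      # 预先配置的列位置:整列像素点入集
        selected.update((r, c) for r in range(h))
    return {(r, c): block[r][c] for (r, c) in selected}
```

例如,对左右邻接的两个块,可分别配置第一块的最后一列与第二块的第一列,得到位置邻接的两个边缘像素点集。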
作为一种可选的方案,调整单元704包括:
第一调整模块,用于在目标分辨率等于第一分辨率的情况下,将第二分辨率调整为第一分辨率;
第二调整模块,用于在目标分辨率等于第二分辨率的情况下,将第一分辨率调整为第二分辨率;
第三调整模块,用于在目标分辨率等于第三分辨率的情况下,将第一分辨率调整为第三分辨率,并将第二分辨率调整为第三分辨率,其中,第三分辨率与第一分辨率不同,且第三分辨率与第二分辨率不同。
具体实施例可以参考上述解码侧的视频处理方法中所示示例,本示例中在此不再赘述。
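上述三个调整模块的分支逻辑可以用如下 Python 草图示意。分辨率用 (宽, 高) 元组表示、函数返回需要执行的调整操作列表,均为本示例的假设:

```python
def plan_adjustments(res1, res2, target):
    """按目标分辨率规划分辨率调整操作。

    目标分辨率等于第一分辨率时只调整第二块;等于第二分辨率时只调整第一块;
    等于第三分辨率(与两者都不同)时两块都调整。
    返回 [(块名, 原分辨率, 目标分辨率), ...]。
    """
    ops = []
    if res1 != target:
        ops.append(("block1", res1, target))
    if res2 != target:
        ops.append(("block2", res2, target))
    return ops
```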
作为一种可选的方案,调整单元704包括:
第一采样模块,用于在第一分辨率大于目标分辨率的情况下,对第一分辨率进行下采样,得到目标分辨率;或者,在第一分辨率小于目标分辨率的情况下,对第一分辨率进行上采样,得到目标分辨率;
第二采样模块,用于在第二分辨率大于目标分辨率的情况下,对第二分辨率进行下采样,得到目标分辨率;或者,在第二分辨率小于目标分辨率的情况下,对第二分辨率进行上采样,得到目标分辨率。
具体实施例可以参考上述解码侧的视频处理方法中所示示例,本示例中在此不再赘述。
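通过上采样或下采样把块调整到目标分辨率的过程,可以用如下最近邻重采样的 Python 草图示意。最近邻仅为示例,专利并未限定具体的采样算法:

```python
def resample(block, dst_h, dst_w):
    """最近邻重采样:源分辨率大于目标分辨率时即为下采样,小于时即为上采样。"""
    src_h, src_w = len(block), len(block[0])
    return [[block[r * src_h // dst_h][c * src_w // dst_w]
             for c in range(dst_w)]
            for r in range(dst_h)]

def to_target(block, target_hw):
    """按块的分辨率与目标分辨率的大小关系选择下采样或上采样;分辨率相同则原样返回。"""
    dst_h, dst_w = target_hw
    if (len(block), len(block[0])) == (dst_h, dst_w):
        return block
    return resample(block, dst_h, dst_w)
```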
根据本申请实施例的另一个方面,还提供了一种用于实施上述视频处理方法的视频处理装置。如图8所示,应用于编码侧,该装置包括:
第一确定单元802,用于从当前待处理的编码视频帧中,确定所要重构的至少一对编码块,其中,在至少一对编码块中的每对编码块包括采用第一分辨率的第一编码块和采用第二分辨率的第二编码块,第一编码块与第二编码块为位置邻接的编码块;
调整单元804,用于将第一编码块的第一分辨率调整为目标分辨率,并将第二编码块的第二分辨率调整为目标分辨率;
第二确定单元806,用于从第一编码块中确定出第一边缘像素点集,并从第二编码块中确定出第二边缘像素点集,其中,第一边缘像素点集的位置与第二边缘像素点集的位置邻接;
滤波处理单元808,用于对第一边缘像素点集进行滤波处理,得到滤波后的第一边缘像素点集,并对第二边缘像素点集进行滤波处理,得到滤波后的第二边缘像素点集,其中,滤波后的第一边缘像素点集与滤波后的第二边缘像素点集相匹配。
具体实施例可以参考上述编码侧的视频处理方法中的解释说明,关于统一分辨率的相关操作,及边缘滤波处理操作,可以参考上述解码侧的实施例,本实施例中在此不再赘述。
作为一种可选的方案,滤波处理单元808包括:
第一确定模块,用于从第一编码块中确定出与第一边缘像素点集关联的第一参考像素点,从第二编码块中确定出与第二边缘像素点集关联的第二参考像素点;
滤波处理模块,用于根据第一参考像素点的像素值与第二参考像素点的像素值,对第一边缘像素点集和第二边缘像素点集进行滤波处理,其中,滤波后的第一边缘像素点集中第i个像素点的像素值与滤波后的第二边缘像素点集中与第i个像素点对应的第j个像素点的像素值之间的第一差值,小于第一边缘像素点集中第i个像素点的像素值与第二边缘像素点集中第j个像素点的像素值之间的第二差值,i为正整数,且小于等于第一边缘像素点集中像素点的总数,j为正整数,且小于等于第二边缘像素点集中像素点的总数。
具体实施例可以参考上述解码侧的视频处理方法中所示示例,本示例中在此不再赘述。
作为一种可选的方案,滤波处理模块包括:
处理子模块,用于依次执行以下步骤,直至遍历第一边缘像素点集和第二边缘像素点集:
从第一边缘像素点集和第二边缘像素点集中确定当前边缘像素点;
对第一参考像素点的像素值与第二参考像素点的像素值进行加权求和,得到目标像素值;
将当前边缘像素点的像素值更新为目标像素值,以得到滤波后的当前边缘像素点。
具体实施例可以参考上述解码侧的视频处理方法中所示示例,本示例中在此不再赘述。
作为一种可选的方案,处理子模块通过以下步骤实现对第一参考像素点的像素值与第二参考像素点的像素值进行加权求和,得到目标像素值:
确定当前边缘像素点的位置;
依次获取第一参考像素点和第二参考像素点中每个参考像素点的位置与当前边缘像素点的位置之间的距离;
根据距离确定与每个参考像素点匹配的权重;
利用权重对第一参考像素点的像素值与第二参考像素点的像素值进行加权求和,以得到目标像素值。
根据本申请实施例的又一个方面,还提供了一种用于实施上述视频处理方法的电子装置,如图9所示,该电子装置包括存储器902和处理器904,该存储器902中存储有计算机可读指令,该处理器904被设置为通过计算机可读指令执行上述任一项方法实施例中的步骤。
可选地,在本实施例中,上述电子装置可以位于计算机网络的多个网络设备中的至少一个网络设备。
可选地,在本实施例中,上述处理器可以被设置为通过计算机可读指令执行以下步骤:
从当前待处理的解码视频帧中,确定所要重构的至少一对解码块,其中,在至少一对解码块中的每对解码块包括采用第一分辨率的第一解码块和采用第二分辨率的第二解码块,第一解码块与第二解码块为位置邻接的解码块;
将第一解码块的第一分辨率调整为目标分辨率,并将第二解码块的第二分辨率调整为目标分辨率;
从第一解码块中确定出第一边缘像素点集,并从第二解码块中确定出第二边缘像素点集,其中,第一边缘像素点集的位置与第二边缘像素点集的位置邻接;
对第一边缘像素点集进行滤波处理,得到滤波后的第一边缘像素点集,并对第二边缘像素点集进行滤波处理,得到滤波后的第二边缘像素点集,其中,滤波后的第一边缘像素点集与滤波后的第二边缘像素点集相匹配。
可选地,本领域普通技术人员可以理解,图9所示的结构仅为示意,电子装置也可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices,MID)、PAD等终端设备。图9并不对上述电子装置的结构造成限定。例如,电子装置还可包括比图9中所示更多或者更少的组件(如网络接口等),或者具有与图9所示不同的配置。
其中,存储器902可用于存储计算机可读指令以及模块,如本申请实施例中的视频处理方法和装置对应的程序指令/模块,处理器904通过运行存储在存储器902内的计算机可读指令以及模块,从而执行各种功能应用以及数据处理,即实现上述的视频处理方法。存储器902可包括高速随机存储器,还可以包括非易失性存储器,如一个或者多个磁性存储装置、闪存、或者其他非易失性固态存储器。在一些实例中,存储器902可进一步包括相对于处理器904远程设置的存储器,这些远程存储器可以通过网络连接至终端。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。其中,存储器902具体可以但不限于用于存储解码视频帧及相关分辨率等信息。作为一种示例,如图9所示,上述存储器902中可以但不限于包括上述视频处理装置中的第一确定单元702、调整单元704、第二确定单元706及滤波处理单元708。此外,还可以包括但不限于上述视频处理装置中的其他模块单元,本示例中不再赘述。
可选地,上述的传输装置906用于经由一个网络接收或者发送数据。上述的网络具体实例可包括有线网络及无线网络。在一个实例中,传输装置906包括一个网络适配器(Network Interface Controller,NIC),其可通过网线与其他网络设备和路由器相连,从而可与互联网或局域网进行通讯。在一个实例中,传输装置906为射频(Radio Frequency,RF)模块,其用于通过无线方式与互联网进行通讯。
此外,上述电子装置还包括:显示器908,用于显示上述解码视频帧;和连接总线910,用于连接上述电子装置中的各个模块部件。
根据本申请实施例的又一个方面,还提供了一种用于实施上述视频处理方法的电子装置,如图10所示,该电子装置包括存储器1002和处理器1004,该存储器1002中存储有计算机可读指令,该处理器1004被设置为通过计算机可读指令执行上述任一项方法实施例中的步骤。
可选地,在本实施例中,上述电子装置可以位于计算机网络的多个网络设备中的至少一个网络设备。
可选地,在本实施例中,上述处理器可以被设置为通过计算机可读指令执行以下步骤:
从当前待处理的编码视频帧中,确定所要重构的至少一对编码块,其中,在至少一对编码块中的每对编码块包括采用第一分辨率的第一编码块和采用第二分辨率的第二编码块,第一编码块与第二编码块为位置邻接的编码块;
将第一编码块的第一分辨率调整为目标分辨率,并将第二编码块的第二分辨率调整为目标分辨率;
从第一编码块中确定出第一边缘像素点集,并从第二编码块中确定出第二边缘像素点集,其中,第一边缘像素点集的位置与第二边缘像素点集的位置邻接;
对第一边缘像素点集进行滤波处理,得到滤波后的第一边缘像素点集,并对第二边缘像素点集进行滤波处理,得到滤波后的第二边缘像素点集,其中,滤波后的第一边缘像素点集与滤波后的第二边缘像素点集相匹配。
可选地,本领域普通技术人员可以理解,图10所示的结构仅为示意,电子装置也可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices,MID)、PAD等终端设备。图10并不对上述电子装置的结构造成限定。例如,电子装置还可包括比图10中所示更多或者更少的组件(如网络接口等),或者具有与图10所示不同的配置。
其中,存储器1002可用于存储计算机可读指令以及模块,如本申请实施例中的视频处理方法和装置对应的计算机可读指令/模块,处理器1004通过运行存储在存储器1002内的计算机可读指令以及模块,从而执行各种功能应用以及数据处理,即实现上述的视频处理方法。存储器1002可包括高速随机存储器,还可以包括非易失性存储器,如一个或者多个磁性存储装置、闪存、或者其他非易失性固态存储器。在一些实例中,存储器1002可进一步包括相对于处理器1004远程设置的存储器,这些远程存储器可以通过网络连接至终端。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。其中,存储器1002具体可以但不限于用于存储解码视频帧及相关分辨率等信息。作为一种示例,如图10所示,上述存储器1002中可以但不限于包括上述视频处理装置中的第一确定单元802、调整单元804、第二确定单元806及滤波处理单元808。此外,还可以包括但不限于上述视频处理装置中的其他模块单元,本示例中不再赘述。
可选地,上述的传输装置1006用于经由一个网络接收或者发送数据。上述的网络具体实例可包括有线网络及无线网络。在一个实例中,传输装置1006包括一个网络适配器(Network Interface Controller,NIC),其可通过网线与其他网络设备和路由器相连,从而可与互联网或局域网进行通讯。在一个实例中,传输装置1006为射频(Radio Frequency,RF)模块,其用于通过无线方式与互联网进行通讯。
此外,上述电子装置还包括:显示器1008,用于显示上述解码视频帧;和连接总线1010,用于连接上述电子装置中的各个模块部件。
根据本申请的实施例的又一方面,还提供了一种计算机可读的存储介质,该计算机可读的存储介质中存储有计算机可读指令,其中,该计算机可读指令被设置为运行时执行上述任一项方法实施例中的步骤。
可选地,在本实施例中,上述计算机可读的存储介质可以被设置为存储用于执行以下步骤的计算机可读指令:
从当前待处理的解码视频帧中,确定所要重构的至少一对解码块,其中,在至少一对解码块中的每对解码块包括采用第一分辨率的第一解码块和采用第二分辨率的第二解码块,第一解码块与第二解码块为位置邻接的解码块;
将第一解码块的第一分辨率调整为目标分辨率,并将第二解码块的第二分辨率调整为目标分辨率;
从第一解码块中确定出第一边缘像素点集,并从第二解码块中确定出第二边缘像素点集,其中,第一边缘像素点集的位置与第二边缘像素点集的位置邻接;
对第一边缘像素点集进行滤波处理,得到滤波后的第一边缘像素点集,并对第二边缘像素点集进行滤波处理,得到滤波后的第二边缘像素点集,其中,滤波后的第一边缘像素点集与滤波后的第二边缘像素点集相匹配。
可选地,在本实施例中,上述计算机可读的存储介质还可以被设置为存储用于执行以下步骤的计算机可读指令:
从当前待处理的编码视频帧中,确定所要重构的至少一对编码块,其中,在至少一对编码块中的每对编码块包括采用第一分辨率的第一编码块和采用第二分辨率的第二编码块,第一编码块与第二编码块为位置邻接的编码块;
将第一编码块的第一分辨率调整为目标分辨率,并将第二编码块的第二分辨率调整为目标分辨率;
从第一编码块中确定出第一边缘像素点集,并从第二编码块中确定出第二边缘像素点集,其中,第一边缘像素点集的位置与第二边缘像素点集的位置邻接;
对第一边缘像素点集进行滤波处理,得到滤波后的第一边缘像素点集,并对第二边缘像素点集进行滤波处理,得到滤波后的第二边缘像素点集,其中,滤波后的第一边缘像素点集与滤波后的第二边缘像素点集相匹配。
可选地,在本实施例中,本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过计算机可读指令来指令终端设备相关的硬件来完成,该计算机可读指令可以存储于一计算机可读存储介质中,存储介质可以包括:闪存盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁盘或光盘等。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
上述实施例中的集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在上述计算机可读取的存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在存储介质中,包括若干计算机可读指令用以使得一台或多台计算机设备(可为个人计算机、服务器或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。
在本申请的上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
在本申请所提供的几个实施例中,应该理解到,所揭露的客户端,可通过其它的方式实现。其中,以上所描述的装置实施例仅仅是示意性的,例如所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,单元或模块的间接耦合或通信连接,可以是电性或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
以上所述仅是本申请的优选实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本申请原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也应视为本申请的保护范围。

Claims (20)

  1. 一种视频处理方法,由电子装置执行,所述方法包括:
    从当前待处理的解码视频帧中,确定所要重构的至少一对解码块,其中,在所述至少一对解码块中的每对解码块包括采用第一分辨率的第一解码块和采用第二分辨率的第二解码块,所述第一解码块与所述第二解码块为位置邻接的解码块;
    将所述第一解码块的所述第一分辨率调整为目标分辨率,并将所述第二解码块的所述第二分辨率调整为所述目标分辨率;
    从所述第一解码块中确定出第一边缘像素点集,并从所述第二解码块中确定出第二边缘像素点集,其中,所述第一边缘像素点集的位置与所述第二边缘像素点集的位置邻接;及
    对所述第一边缘像素点集进行滤波处理,得到滤波后的所述第一边缘像素点集,并对所述第二边缘像素点集进行滤波处理,得到滤波后的所述第二边缘像素点集,其中,滤波后的所述第一边缘像素点集与滤波后的所述第二边缘像素点集相匹配。
  2. 根据权利要求1所述的方法,其特征在于,所述对所述第一边缘像素点集进行滤波处理,得到滤波后的所述第一边缘像素点集,并对所述第二边缘像素点集进行滤波处理,得到滤波后的所述第二边缘像素点集包括:
    从所述第一解码块中确定出与所述第一边缘像素点集关联的第一参考像素点,从所述第二解码块中确定出与所述第二边缘像素点集关联的第二参考像素点;及
    根据所述第一参考像素点的像素值与所述第二参考像素点的像素值,对所述第一边缘像素点集和所述第二边缘像素点集进行滤波处理,其中,滤波后的所述第一边缘像素点集中第i个像素点的像素值与滤波后的所述第二边缘像素点集中与所述第i个像素点对应的第j个像素点的像素值之间的第一差值,小于所述第一边缘像素点集中所述第i个像素点的像素值与所述第二边缘像素点集中所述第j个像素点的像素值之间的第二差值,所述i为正整数,且小于等于所述第一边缘像素点集中像素点的总数,所述j为正整数,且小于等于所述第二边缘像素点集中像素点的总数。
  3. 根据权利要求2所述的方法,其特征在于,所述根据所述第一参考像素点的像素值与所述第二参考像素点的像素值,对所述第一边缘像素点集和所述第二边缘像素点集进行滤波处理包括:
    依次执行以下步骤,直至遍历所述第一边缘像素点集和所述第二边缘像素点集:
    从所述第一边缘像素点集和所述第二边缘像素点集中确定当前边缘像素点;
    对所述第一参考像素点的像素值与所述第二参考像素点的像素值进行加权求和,得到目标像素值;及
    将所述当前边缘像素点的像素值更新为所述目标像素值,以得到滤波后的所述当前边缘像素点。
  4. 根据权利要求3所述的方法,其特征在于,所述对所述第一参考像素点的像素值与所述第二参考像素点的像素值进行加权求和,得到目标像素值包括:
    确定所述当前边缘像素点的位置;
    依次获取所述第一参考像素点和所述第二参考像素点中每个参考像素点的位置与所述当前边缘像素点的位置之间的距离;
    根据所述距离确定与所述每个参考像素点匹配的权重;及
    利用所述权重对所述第一参考像素点的像素值与所述第二参考像素点的像素值进行加权求和,以得到所述目标像素值。
  5. 根据权利要求1所述的方法,其特征在于,所述从所述第一解码块中确定出第一边缘像素点集,并从所述第二解码块中确定出第二边缘像素点集包括:
    获取在所述第一解码块中预先配置的第一行位置和/或第一列位置,以及在所述第二解码块中预先配置的第二行位置和/或第二列位置;及
    根据所述第一行位置和/或第一列位置确定出所述第一边缘像素点集,并根据所述第二行位置和/或第二列位置确定出所述第二边缘像素点集。
  6. 根据权利要求1至5中任一项所述的方法,其特征在于,所述将所述第一解码块的所述第一分辨率调整为目标分辨率,并将所述第二解码块的所述第二分辨率调整为所述目标分辨率包括:
    在所述目标分辨率等于所述第一分辨率的情况下,将所述第二分辨率调整为所述第一分辨率;
    在所述目标分辨率等于所述第二分辨率的情况下,将所述第一分辨率调整为所述第二分辨率;及
    在所述目标分辨率等于第三分辨率的情况下,将所述第一分辨率调整为所述第三分辨率,并将所述第二分辨率调整为所述第三分辨率,其中,所述第三分辨率与所述第一分辨率不同,且所述第三分辨率与所述第二分辨率不同。
  7. 根据权利要求1至5中任一项所述的方法,其特征在于,所述将所述第一解码块的所述第一分辨率调整为目标分辨率,并将所述第二解码块的所述第二分辨率调整为所述目标分辨率包括:
    在所述第一分辨率大于所述目标分辨率的情况下,对所述第一分辨率进行下采样,得到所述目标分辨率;或者,在所述第一分辨率小于所述目标分辨率的情况下,对所述第一分辨率进行上采样,得到所述目标分辨率;及
    在所述第二分辨率大于所述目标分辨率的情况下,对所述第二分辨率进行下采样,得到所述目标分辨率;或者,在所述第二分辨率小于所述目标分辨率的情况下,对所述第二分辨率进行上采样,得到所述目标分辨率。
  8. 一种视频处理方法,由电子装置执行,所述方法包括:
    从当前待处理的编码视频帧中,确定所要重构的至少一对编码块,其中,在所述至少一对编码块中的每对编码块包括采用第一分辨率的第一编码块和采用第二分辨率的第二编码块,所述第一编码块与所述第二编码块为位置邻接的编码块;
    将所述第一编码块的所述第一分辨率调整为目标分辨率,并将所述第二编码块的所述第二分辨率调整为所述目标分辨率;
    从所述第一编码块中确定出第一边缘像素点集,并从所述第二编码块中确定出第二边缘像素点集,其中,所述第一边缘像素点集的位置与所述第二边缘像素点集的位置邻接;及
    对所述第一边缘像素点集进行滤波处理,得到滤波后的所述第一边缘像素点集,并对所述第二边缘像素点集进行滤波处理,得到滤波后的所述第二边缘像素点集,其中,滤波后的所述第一边缘像素点集与滤波后的所述第二边缘像素点集相匹配。
  9. 根据权利要求8所述的方法,其特征在于,所述对所述第一边缘像素点集进行滤波处理,得到滤波后的所述第一边缘像素点集,并对所述第二边缘像素点集进行滤波处理,得到滤波后的所述第二边缘像素点集包括:
    从所述第一编码块中确定出与所述第一边缘像素点集关联的第一参考像素点,从所述第二编码块中确定出与所述第二边缘像素点集关联的第二参考像素点;及
    根据所述第一参考像素点的像素值与所述第二参考像素点的像素值,对所述第一边缘像素点集和所述第二边缘像素点集进行滤波处理,其中,滤波后的所述第一边缘像素点集中第i个像素点的像素值与滤波后的所述第二边缘像素点集中与所述第i个像素点对应的第j个像素点的像素值之间的第一差值,小于所述第一边缘像素点集中所述第i个像素点的像素值与所述第二边缘像素点集中所述第j个像素点的像素值之间的第二差值,所述i为正整数,且小于等于所述第一边缘像素点集中像素点的总数,所述j为正整数,且小于等于所述第二边缘像素点集中像素点的总数。
  10. 根据权利要求9所述的方法,其特征在于,所述根据所述第一参考像素点的像素值与所述第二参考像素点的像素值,对所述第一边缘像素点集和所述第二边缘像素点集进行滤波处理包括:
    依次执行以下步骤,直至遍历所述第一边缘像素点集和所述第二边缘像素点集:
    从所述第一边缘像素点集和所述第二边缘像素点集中确定当前边缘像素点;
    对所述第一参考像素点的像素值与所述第二参考像素点的像素值进行加权求和,得到目标像素值;及
    将所述当前边缘像素点的像素值更新为所述目标像素值,以得到滤波后的所述当前边缘像素点。
  11. 根据权利要求10所述的方法,其特征在于,所述对所述第一参考像素点的像素值与所述第二参考像素点的像素值进行加权求和,得到目标像素值包括:
    确定所述当前边缘像素点的位置;
    依次获取所述第一参考像素点和所述第二参考像素点中每个参考像素点的位置与所述当前边缘像素点的位置之间的距离;
    根据所述距离确定与所述每个参考像素点匹配的权重;及
    利用所述权重对所述第一参考像素点的像素值与所述第二参考像素点的像素值进行加权求和,以得到所述目标像素值。
  12. 一种视频处理装置,包括:
    第一确定单元,用于从当前待处理的解码视频帧中,确定所要重构的至少一对解码块,其中,在所述至少一对解码块中的每对解码块包括采用第一分辨率的第一解码块和采用第二分辨率的第二解码块,所述第一解码块与所述第二解码块为位置邻接的解码块;
    调整单元,用于将所述第一解码块的所述第一分辨率调整为目标分辨率,并将所述第二解码块的所述第二分辨率调整为所述目标分辨率;
    第二确定单元,用于从所述第一解码块中确定出第一边缘像素点集,并从所述第二解码块中确定出第二边缘像素点集,其中,所述第一边缘像素点集的位置与所述第二边缘像素点集的位置邻接;
    滤波处理单元,用于对所述第一边缘像素点集进行滤波处理,得到滤波后的所述第一边缘像素点集,并对所述第二边缘像素点集进行滤波处理,得到滤波后的所述第二边缘像素点集,其中,滤波后的所述第一边缘像素点集与滤波后的所述第二边缘像素点集相匹配。
  13. 根据权利要求12所述的装置,其特征在于,所述滤波处理单元包括:
    第一确定模块,用于从所述第一解码块中确定出与所述第一边缘像素点集关联的第一参考像素点,从所述第二解码块中确定出与所述第二边缘像素点集关联的第二参考像素点;及
    滤波处理模块,用于根据所述第一参考像素点的像素值与所述第二参考像素点的像素值,对所述第一边缘像素点集和所述第二边缘像素点集进行滤波处理,其中,滤波后的所述第一边缘像素点集中第i个像素点的像素值与滤波后的所述第二边缘像素点集中与所述第i个像素点对应的第j个像素点的像素值之间的第一差值,小于所述第一边缘像素点集中所述第i个像素点的像素值与所述第二边缘像素点集中所述第j个像素点的像素值之间的第二差值,所述i为正整数,且小于等于所述第一边缘像素点集中像素点的总数,所述j为正整数,且小于等于所述第二边缘像素点集中像素点的总数。
  14. 根据权利要求13所述的装置,其特征在于,所述滤波处理模块包括:
    处理子模块,用于依次执行以下步骤,直至遍历所述第一边缘像素点集和所述第二边缘像素点集:
    从所述第一边缘像素点集和所述第二边缘像素点集中确定当前边缘像素点;
    对所述第一参考像素点的像素值与所述第二参考像素点的像素值进行加权求和,得到目标像素值;及
    将所述当前边缘像素点的像素值更新为所述目标像素值,以得到滤波后的所述当前边缘像素点。
  15. 根据权利要求14所述的装置,其特征在于,所述处理子模块还用于:
    确定所述当前边缘像素点的位置;
    依次获取所述第一参考像素点和所述第二参考像素点中每个参考像素点的位置与所述当前边缘像素点的位置之间的距离;
    根据所述距离确定与所述每个参考像素点匹配的权重;及
    利用所述权重对所述第一参考像素点的像素值与所述第二参考像素点的像素值进行加权求和,以得到所述目标像素值。
  16. 根据权利要求12所述的装置,其特征在于,所述第二确定单元包括:
    获取模块,用于获取在所述第一解码块中预先配置的第一行位置和/或第一列位置,以及在所述第二解码块中预先配置的第二行位置和/或第二列位置;及
    第三确定模块,用于根据所述第一行位置和/或第一列位置确定出所述第一边缘像素点集,并根据所述第二行位置和/或第二列位置确定出所述第二边缘像素点集。
  17. 根据权利要求12至16中任一项所述的装置,其特征在于,所述调整单元包括:
    第一调整模块,用于在所述目标分辨率等于所述第一分辨率的情况下,将所述第二分辨率调整为所述第一分辨率;
    第二调整模块,用于在所述目标分辨率等于所述第二分辨率的情况下,将所述第一分辨率调整为所述第二分辨率;及
    第三调整模块,用于在所述目标分辨率等于第三分辨率的情况下,将所述第一分辨率调整为所述第三分辨率,并将所述第二分辨率调整为所述第三分辨率,其中,所述第三分辨率与所述第一分辨率不同,且所述第三分辨率与所述第二分辨率不同。
  18. 一种视频处理装置,包括:
    第一确定单元,用于从当前待处理的编码视频帧中,确定所要重构的至少一对编码块,其中,在所述至少一对编码块中的每对编码块包括采用第一分辨率的第一编码块和采用第二分辨率的第二编码块,所述第一编码块与所述第二编码块为位置邻接的编码块;
    调整单元,用于将所述第一编码块的所述第一分辨率调整为目标分辨率,并将所述第二编码块的所述第二分辨率调整为所述目标分辨率;
    第二确定单元,用于从所述第一编码块中确定出第一边缘像素点集,并从所述第二编码块中确定出第二边缘像素点集,其中,所述第一边缘像素点集的位置与所述第二边缘像素点集的位置邻接;
    滤波处理单元,用于对所述第一边缘像素点集进行滤波处理,得到滤波后的所述第一边缘像素点集,并对所述第二边缘像素点集进行滤波处理,得到滤波后的所述第二边缘像素点集,其中,滤波后的所述第一边缘像素点集与滤波后的所述第二边缘像素点集相匹配。
  19. 一个或多个存储有计算机可读指令的非易失性计算机可读存储介质,计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器执行如权利要求1至11中任一项所述的方法。
  20. 一种电子装置,包括存储器和一个或多个处理器,存储器中储存有计算机可读指令,计算机可读指令被处理器执行时,使得一个或多个处理器执行如权利要求1至11中任一项所述的方法。
PCT/CN2020/113981 2019-09-27 2020-09-08 视频处理方法和装置、存储介质和电子装置 WO2021057464A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20868851.5A EP4037324A4 (en) 2019-09-27 2020-09-08 VIDEO PROCESSING METHOD AND APPARATUS, STORAGE MEDIA AND ELECTRONIC DEVICE
US17/449,109 US11838503B2 (en) 2019-09-27 2021-09-28 Video processing method and apparatus, storage medium, and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910927038.6 2019-09-27
CN201910927038.6A CN110677690B (zh) 2019-09-27 2019-09-27 视频处理方法和装置、存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/449,109 Continuation US11838503B2 (en) 2019-09-27 2021-09-28 Video processing method and apparatus, storage medium, and electronic device

Publications (1)

Publication Number Publication Date
WO2021057464A1 true WO2021057464A1 (zh) 2021-04-01



