CN111698503B - Video high-power compression method based on preprocessing - Google Patents


Info

Publication number
CN111698503B
CN111698503B (application CN202010578517.4A)
Authority
CN
China
Prior art keywords
video
texture
region
filter
sampling
Prior art date
Legal status
Active
Application number
CN202010578517.4A
Other languages
Chinese (zh)
Other versions
CN111698503A (en
Inventor
李焕青
周彩章
李�根
Current Assignee
Shenzhen Divimath Semiconductor Co ltd
Original Assignee
Shenzhen Divimath Semiconductor Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Divimath Semiconductor Co ltd filed Critical Shenzhen Divimath Semiconductor Co ltd
Priority to CN202010578517.4A priority Critical patent/CN111698503B/en
Publication of CN111698503A publication Critical patent/CN111698503A/en
Application granted granted Critical
Publication of CN111698503B publication Critical patent/CN111698503B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/102 Adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/132 Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N19/107 Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
    • H04N19/117 Filters, e.g. for pre-processing or post-processing
    • H04N19/124 Quantisation
    • H04N19/13 Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • H04N19/625 Transform coding using discrete cosine transform [DCT]
    • H04N19/80 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation

Abstract

The invention provides a video high-power compression method based on preprocessing, addressing the poor subjective experience and objective evaluation of video compressed at low bit rates and high compression ratios. Before the video enters the encoding/decoding module, the algorithm performs preprocessing that removes redundant information and noise, so as to improve the subjective and objective quality of the video. The specific steps are: dividing the video into processing units; detecting video complexity and dividing regions; video denoising preprocessing; adaptive video downsampling; downsampling-mode coding; and video coding. For low-bit-rate, high-compression-ratio video in particular, embodiments of the invention improve the subjective visual experience and objective evaluation of the compressed video without adding any delay and while using few resources.

Description

Video high-power compression method based on preprocessing
Technical Field
The invention relates to the technical field of video compression coding and decoding, in particular to a video high-power compression method based on preprocessing.
Background
With the rapid development of computers and the Internet, multimedia data communication, with images and video as its most important forms of expression, has grown quickly, and plain text and voice communication no longer meet people's daily needs. Multimedia communication is popular across industries, is widely applied in distance education, teleconferencing, video telephony, security monitoring, and other fields, and has changed the way people live, learn, and work. With the arrival of multimedia, however, the amount of data to be communicated becomes very large. For example, a 1280 × 720 24-bit true-color image has an original data size of about 18.9 Mb, and transmitting one such image over a 4 Mbps link takes about 4.7 seconds. Multimedia technology thus puts tremendous pressure on storage-medium capacity, channel bandwidth, and computer processing speed. Although the problem can be alleviated by producing larger-capacity storage media, increasing communication bandwidth, and developing higher-performance computers, this comes at substantial cost. It is therefore imperative to compression-encode video before storage to reduce the amount of data communicated.
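The size and transmission-time arithmetic above can be checked in a few lines. Note that 1280 × 720 × 24 bits actually comes to about 22.1 Mb; the quoted 18.9 Mb and 4.7 s figures instead match a 1024 × 768 frame:

```python
def raw_size_megabits(width, height, bits_per_pixel=24):
    """Uncompressed frame size in megabits (decimal, 10**6 bits)."""
    return width * height * bits_per_pixel / 1_000_000

def transmit_seconds(size_megabits, link_mbps):
    """Seconds needed to push a raw frame through a link of the given rate."""
    return size_megabits / link_mbps

# 1024 x 768 x 24 bit comes to ~18.9 Mb, i.e. ~4.7 s at 4 Mbps;
# 1280 x 720 x 24 bit would be ~22.1 Mb.
```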
Many application scenarios place fixed requirements on channel bandwidth and transmission delay. Especially at low target bit rates, the traditional approach is to compress the video directly, but direct compression produces obvious blocking and ringing artifacts in the decoded, reconstructed images, which are visually very unpleasant. This phenomenon is mainly due to the coarse quantization needed to reach a high compression ratio at a low bit rate. However, low-bit-rate transmission is very widespread in multimedia communication, for example video transmission over the PSTN, IP networks, and wireless networks. For transmitting high-resolution video, especially high-definition video, at a fixed low bit rate, preprocessing the video before compression is essential to improve image quality and visual effect. Filtering and interpolation-based resampling are the most commonly used algorithms: typical filtering methods are Gaussian filtering and bilateral filtering, and typical interpolation algorithms are nearest-neighbor, linear, and bicubic interpolation. The conventional approach, however, processes the whole frame uniformly: the entire frame is filtered and then downsampled to 1/2 or 1/4 resolution. This leads to a familiar trade-off: for filtering, detail should be preserved, or the image blurs, while flat regions should have as much redundancy removed as possible to provide conditions for improving compression efficiency; it is difficult for a single filter applied to all regions to achieve both. A method that adaptively selects a suitable filter according to the texture and complexity of the image is therefore urgently needed.
To improve video compression efficiency and reconstructed-video quality, downsampling the video at low bit rates is indispensable. However, conventional whole-frame scaling causes problems: when the decoder upsamples back to the original resolution, horizontally sampled vertical textures, and strong edge details sampled in either direction, show a very obvious saw-tooth (jagging) effect, and interpolation spreads ringing artifacts.
Disclosure of Invention
The main object of the present invention is to provide a preprocessing-based video high-power compression method that divides the image into non-overlapping processing units, reducing delay and image storage to a certain extent, provides those processing units and region divisions to adaptive filtering and adaptive downsampling, and thereby improves the quality of the reconstructed video.
In order to achieve the above object, the present invention provides a video high-magnification compression method based on preprocessing, which includes the following steps:
S1, dividing one frame of image into non-overlapping N × M processing units according to the resolution of the original video;
S2, detecting edge information with a Sobel operator template, and performing complexity detection and texture-region division with the processing unit as the unit;
S3, adaptively filtering and denoising each processing unit according to the complexity marked in S2;
S4, adaptively downsampling each processing unit according to the complexity marked in S2;
S5, performing downsampling-mode encoding on each processing unit;
S6, performing video coding on the video content of each processing unit;
wherein N and M are customized according to the image resolution and the compression ratio.
Further, N and M are each multiples of 2.
Further, the Value calculated by the Sobel operator is Value = abs(SumX) + abs(SumY), where SumX indicates the edge intensity in the horizontal direction and SumY the edge intensity in the vertical direction. Two thresholds T1 and T2 are set, T1 < T2, and the counts satisfying the two threshold conditions are Counter1 and Counter2 respectively: when Value > T2, Counter2 is incremented by 1 as the strong-edge pixel count; when T1 < Value < T2, Counter1 is incremented by 1 as the texture pixel count; when Value < T1, the pixel counts as flat.
Two thresholds T3 and T4 are set to judge the type of the current region, namely whether it is a flat region, a texture region, or an edge region: when Counter2 > T3, it is classified as an edge region; when Counter1 > T4, as a texture region; otherwise the region is marked flat. For the texture orientation of a texture region, the directions are counted; the direction angle is calculated as:
θ=arctan(abs(SumY)/abs(SumX))
The texture orientation determination counts include CounterH and CounterV; the counting condition for CounterH is as follows:
[formula image: CounterH counting condition]
The counting condition for CounterV is as follows:
[formula image: CounterV counting condition]
if CounterH > CounterV, the texture region orientation is marked as horizontal orientation, otherwise marked as vertical orientation.
Further, the templates of the Sobel operator are the standard 3 × 3 templates:
Sx = [[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]], Sy = [[-1, -2, -1], [0, 0, 0], [+1, +2, +1]]
where Sx represents the vertical detection template and Sy the horizontal detection template; SumX = Sx * A and SumY = Sy * A, where A is the 3 × 3 pixel matrix and * denotes convolution.
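With the standard Sobel templates, the per-pixel quantities SumX, SumY, and Value described above can be computed directly from a 3 × 3 pixel window:

```python
# Standard 3x3 Sobel templates: SX responds to horizontal gradients
# (vertical edges), SY to vertical gradients (horizontal edges).
SX = ((-1, 0, 1),
      (-2, 0, 2),
      (-1, 0, 1))
SY = ((-1, -2, -1),
      ( 0,  0,  0),
      ( 1,  2,  1))

def sobel_value(a):
    """Return (abs(SumX), abs(SumY), Value) for a 3x3 pixel matrix A,
    where Value = abs(SumX) + abs(SumY)."""
    sum_x = sum(SX[i][j] * a[i][j] for i in range(3) for j in range(3))
    sum_y = sum(SY[i][j] * a[i][j] for i in range(3) for j in range(3))
    return abs(sum_x), abs(sum_y), abs(sum_x) + abs(sum_y)
```

A vertical step edge yields a large SumX and zero SumY, and vice versa for a horizontal step.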
Further, a 9-tap weighted mean filter is adopted for adaptive filtering and denoising of the flat region, and a bilateral filter is adopted for adaptive filtering and denoising of the texture region and the edge region.
Further, the parameters of the 9-tap weighted mean filter are:
[formula image: coefficients of the 9-tap weighted mean filter]
further, down-sampling both the horizontal and vertical of the flat area by a sampling multiple of 4; the down-sampling filter is a 12-tap filter for MPEG 4; adopting a double-cubic linear filter with higher complexity to perform down-sampling on the texture area, and selecting horizontal down-sampling or vertical down-sampling according to the trend of the texture; the sampling multiple is 2.
Further, the 12-tap Filter is Filter _ down ═ {2, -4, -3,5,19,26,19,5, -3, -4,2,64 }. The filter can determine the number of tap coefficients and the coefficient value according to the division size of the processing unit; wherein the number of tap coefficients is less than half of the minimum of the number of rows or columns of the processing unit.
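A sketch of applying this filter for a single factor-2 downsampling pass follows. Reading the final value 64 of Filter_down as the normalization divisor (the remaining eleven coefficients sum to 64), clamping samples at the borders, and reaching the flat region's factor of 4 by applying the pass twice per direction are all assumptions not spelled out in the text:

```python
def downsample_1d(line, taps=(2, -4, -3, 5, 19, 26, 19, 5, -3, -4, 2), norm=64):
    """Factor-2 downsampling of one row or column with the listed taps.
    Border samples are clamped (a boundary-handling assumption)."""
    half = len(taps) // 2
    out = []
    for i in range(0, len(line), 2):          # keep every second phase
        acc = 0
        for k, c in enumerate(taps):
            j = min(max(i + k - half, 0), len(line) - 1)  # clamp at borders
            acc += c * line[j]
        out.append((acc + norm // 2) // norm)  # rounded normalization by 64
    return out
```

On a constant signal the filter is transparent, since the coefficients sum exactly to the divisor.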
Compared with the prior art, the beneficial effects of the invention are: the video is adaptively processed before encoding according to its differing texture characteristics, which protects image texture detail well while removing image redundancy; in high-power video compression applications, perceived visual quality can be improved to a great extent.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. The drawings show only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of the steps of the present invention;
FIG. 2 is a flow chart of the video processing unit partitioning according to the present invention;
FIG. 3 is a schematic view of a video complexity detection and region division process according to the present invention;
FIG. 4 is a schematic flow chart of video denoising preprocessing in the present invention;
FIG. 5 is a flow chart illustrating a sampling mode according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of video encoding according to the present invention;
FIG. 7 is a schematic flow chart of video complexity detection according to the present invention;
the implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The present invention provides a video high-power compression method based on preprocessing which, as shown in FIG. 1, includes the following steps:
S1, video processing-unit division: one frame of image is divided into non-overlapping N × M processing units according to the resolution of the original video;
S2, video complexity detection and region division: edge information is detected with the Sobel operator template, and complexity detection and texture-region division are performed with the processing unit as the unit;
S3, video denoising preprocessing: each processing unit is adaptively filtered and denoised according to the complexity marked in S2. To reduce complexity, the flat region uses a 3 × 3 smoothing filter, while the edge and texture regions use bilateral filtering with adjustable parameters; in one embodiment, the filter's pixel-difference parameter delta is 20 and its distance-difference parameter delta_c is 20;
S4, video adaptive downsampling: each processing unit is adaptively downsampled according to the complexity marked in S2;
S5, downsampling-mode coding: the downsampling mode of each processing unit is encoded in a bypass coding mode, so that the decoding end can perform the corresponding upsampling according to the sampling mode and finally restore the original-resolution image;
S6, video coding: the video content of each processing unit is coded; in one embodiment, H.265 intra coding is used with a compression ratio of 30;
where N and M are customized according to the image resolution and the compression ratio.
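The bilateral filtering used in S3 can be sketched as below. The Gaussian weighting form is an assumption (the text gives only the two parameters delta = 20 and delta_c = 20), and `bilateral_3x3` is a hypothetical helper name:

```python
import math

def bilateral_3x3(img, r, c, delta=20.0, delta_c=20.0):
    """Bilateral filtering of pixel (r, c) over its 3x3 neighborhood.
    delta weights pixel-value differences, delta_c spatial distance
    (both 20 in the embodiment). Gaussian kernels are an assumption;
    the patent does not give the exact weighting function."""
    center = img[r][c]
    num = den = 0.0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            p = img[r + dr][c + dc]
            # range weight (pixel difference) times spatial weight (distance)
            w = math.exp(-((p - center) ** 2) / (2 * delta ** 2)) \
              * math.exp(-(dr * dr + dc * dc) / (2 * delta_c ** 2))
            num += w * p
            den += w
    return num / den
```

By construction the filter leaves a constant region unchanged, and the range weight shrinks near edges, which is what preserves detail in texture and edge regions.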
Further, both N and M are multiples of 2. As shown in FIG. 2, PUx (x = 0, 1, 2, …, n−1) denotes a divided processing unit, H denotes the number of pixels in the horizontal direction of the video resolution, and V the number of pixels in the vertical direction.
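The unit division of S1 can be sketched as follows, treating N × M as the pixel size of each unit (the text leaves this ambiguous) and assuming the frame dimensions divide evenly, as the unit sizes are chosen from the resolution:

```python
def split_into_units(h_pixels, v_pixels, n, m):
    """Divide an H x V frame into non-overlapping processing units of
    n rows by m columns (both multiples of 2). Returns a list of
    (top, left, height, width) tuples in raster order."""
    assert n % 2 == 0 and m % 2 == 0, "N and M must be multiples of 2"
    assert v_pixels % n == 0 and h_pixels % m == 0, "assumed exact tiling"
    units = []
    for top in range(0, v_pixels, n):
        for left in range(0, h_pixels, m):
            units.append((top, left, n, m))
    return units
```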
Further, as shown in FIG. 3, the Value calculated by the Sobel operator is Value = abs(SumX) + abs(SumY); two thresholds T1 and T2 are set, T1 < T2, and the counts satisfying the two threshold conditions are Counter1 and Counter2 respectively: when Value > T2, Counter2 is incremented by 1 as the strong-edge pixel count; when T1 < Value < T2, Counter1 is incremented by 1 as the texture pixel count; when Value < T1, the pixel counts as flat. Two thresholds T3 and T4 are then set, T3 < T4, to judge the type of the current region, namely whether it is a flat region, a texture region, or an edge region: when Counter2 > T3, it is classified as an edge region; when Counter1 > T4, as a texture region; otherwise the region is marked flat. For the texture orientation of a texture region, the directions are counted; the direction angle is calculated as:
θ=arctan(abs(SumY)/abs(SumX))
The texture orientation determination counts include CounterH and CounterV; the counting condition for CounterH is as follows:
[formula image: CounterH counting condition]
The counting condition for CounterV is as follows:
[formula image: CounterV counting condition]
if CounterH > CounterV, the texture region orientation is marked as horizontal orientation, otherwise marked as vertical orientation.
As shown in FIG. 4, video denoising preprocessing adaptively selects the filter type according to the category of the processing unit, i.e., texture region, edge region, or flat region: the flat region uses a simpler 3 × 3 smoothing template, while the texture and edge regions use higher-complexity Gaussian or bilateral filtering, as follows:
[formula image: denoising filter templates]
the video adaptive down-sampling selects a down-sampling mode according to the type of a processing unit, namely texture area, edge area or flat area. That is, edge regions are not downsampled, flat regions are downsampled by 1/4, and texture regions are downsampled vertically or horizontally by 1/2. The texture area adopts a bicubic interpolation algorithm, the flat area adopts a modified linear filter of MPEG4, the edge area does not sample, and the edge information is reserved. The following were used:
[formula image: downsampling filters]
the sampling pattern is shown in fig. 5, and fig. 5 illustrates 8 × 8 blocks as an example. The flat area is downsampled by using an interpolation filter commonly used by MPEG4, the texture area adopts double-cube interpolation, the image details are effectively protected, and the edge area is used for preventing a sawtooth effect and is not downsampled.
The input unit for video coding is the processing unit after sampling; the encoder may use H.264 or H.265. FIG. 6 is a schematic diagram of the video encoding.
The adaptive filtering and adaptive downsampling adopted by the invention greatly improve the reconstructed-video quality of low-bit-rate, high-power compressed video, particularly in application scenarios with compression ratios above 25, such as fixed low-bandwidth wireless or wired video transmission.
First, the Sobel operator window is convolved with the pixel window to obtain the edge value and direction; according to the Value, the point is then determined to be an edge-region, texture-region, or flat-region point, and if it is a texture-region point, whether its texture is horizontal or vertical is further judged; this continues until all pixels within the processing unit have been traversed.
Finally, whether the current region is an edge region, vertical-texture region, horizontal-texture region, or flat region is determined from the statistics, i.e., Counter2, Counter1, CounterH, and CounterV. In the examples, T1 = 150, T2 = 300, T3 = 2000, and T4 = 6000. FIG. 7 shows an example of flat, vertical-texture, horizontal-texture, and edge regions, where P denotes a flat region, CH a horizontal-texture region, CV a vertical-texture region, and E an edge region.
Further, the templates of the Sobel operator are the standard 3 × 3 templates:
Sx = [[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]], Sy = [[-1, -2, -1], [0, 0, 0], [+1, +2, +1]]
where Sx represents the vertical detection template and Sy the horizontal detection template; SumX = Sx * A and SumY = Sy * A, where A is the 3 × 3 pixel matrix and * denotes convolution.
Furthermore, a 9-tap weighted mean filter is adopted for adaptive filtering and denoising in the flat region, and a bilateral filter is adopted for adaptive filtering and denoising in the texture region and the edge region.
Further, the parameters of the 9-tap weighted mean filter are:
[formula image: coefficients of the 9-tap weighted mean filter]
further, down-sampling both the horizontal and vertical of the flat area by a sampling multiple of 4; the down-sampling filter is a 12-tap filter for MPEG 4; adopting a double-cubic linear filter with higher complexity to perform down-sampling on the texture area, and selecting horizontal down-sampling or vertical down-sampling according to the trend of the texture; the sampling multiple is 2.
Further, the 12-tap Filter is Filter _ down ═ {2, -4, -3,5,19,26,19,5, -3, -4,2,64 }. The filter can determine the number of tap coefficients and the coefficient value according to the division size of the processing unit; wherein the number of tap coefficients is less than half of the minimum of the number of rows or columns of the processing unit.
The beneficial effects of the invention are: the video is adaptively processed before encoding according to its differing texture characteristics, which protects image texture detail well while removing image redundancy; in high-power video compression applications, perceived visual quality can be improved to a great extent.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Embodiments of the invention can be applied to wireless real-time transmission of high-definition video, in fields such as unmanned aerial vehicles, FPV, VR, medical image processing, remote-sensing image processing, transportation systems, high-definition television, image compression, and image restoration, and are especially suited to environments requiring high compression ratios and low bit rates.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (7)

1. A video high-power compression method based on preprocessing is characterized by comprising the following steps:
S1, dividing one frame of image into non-overlapping N × M processing units according to the resolution of the original video;
S2, detecting edge information with a Sobel operator template, and performing complexity detection and texture-region division with the processing unit as the unit;
S3, adaptively filtering and denoising each processing unit according to the complexity marked in S2;
S4, adaptively downsampling each processing unit according to the complexity marked in S2;
S5, performing downsampling-mode encoding on each processing unit;
S6, performing video coding on the video content of each processing unit;
wherein N and M are customized according to the image resolution and the compression ratio; S2 specifically comprises: computing Value = abs(SumX) + abs(SumY) with the Sobel operator, SumX representing the edge strength in the horizontal direction and SumY the edge strength in the vertical direction; setting two thresholds T1 and T2, T1 < T2, the counts satisfying the two threshold conditions being Counter1 and Counter2 respectively; when Value > T2, incrementing Counter2 by 1 as the strong-edge pixel count; when T1 < Value < T2, incrementing Counter1 by 1 as the texture pixel count; when Value < T1, counting the pixel as flat; setting two thresholds T3 and T4 to judge the type of the current region, namely whether it is a flat region, a texture region, or an edge region, wherein when Counter2 > T3 the region is classified as an edge region; when Counter1 > T4, as a texture region; otherwise the region is marked as a flat region; and, for the texture orientation of a texture region, counting the directions, the direction angle being calculated as:
θ=arctan(abs(SumY)/abs(SumX))
the texture orientation determination counts include CounterH and CounterV, and the counting condition for CounterH is as follows:
[formula image: CounterH counting condition]
the counting condition for CounterV is as follows:
[formula image: CounterV counting condition]
if CounterH > CounterV, the texture region orientation is marked as horizontal orientation, otherwise marked as vertical orientation.
2. The pre-processing based video high-magnification compression method as recited in claim 1, wherein N and M are each multiples of 2.
3. The pre-processing based video high-power compression method of claim 1,
the Sobel operator template is as follows:
Sx = [[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]], Sy = [[-1, -2, -1], [0, 0, 0], [+1, +2, +1]]
wherein Sx represents the vertical detection template and Sy the horizontal detection template; SumX = Sx * A and SumY = Sy * A, where A is a 3 × 3 matrix of pixels.
4. The pre-processing based video high-power compression method of claim 3, wherein the flat region is adaptively filtered and denoised by a 9-tap weighted mean filter, and the texture region and the edge region are adaptively filtered and denoised by a bilateral filter.
5. The pre-processing based video high-power compression method of claim 4, wherein the parameters of the 9-tap weighted mean filter are:
[formula image: coefficients of the 9-tap weighted mean filter]
6. The preprocessing-based video high-power compression method of claim 4, wherein the flat region is downsampled in both the horizontal and vertical directions by a factor of 4; the downsampling filter is the 12-tap MPEG-4 filter; and the texture region is downsampled with a bicubic linear filter, horizontal or vertical downsampling being selected according to the texture orientation, with a sampling factor of 2.
7. The preprocessing-based video high-power compression method of claim 6, wherein the 12-tap filter is Filter_down = {2, -4, -3, 5, 19, 26, 19, 5, -3, -4, 2, 64}; the filter can determine the number of tap coefficients and the coefficient values according to the division size of the processing unit; and the number of tap coefficients is less than half of the smaller of the number of rows or columns of the processing unit.
CN202010578517.4A 2020-06-22 2020-06-22 Video high-power compression method based on preprocessing Active CN111698503B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010578517.4A CN111698503B (en) 2020-06-22 2020-06-22 Video high-power compression method based on preprocessing

Publications (2)

Publication Number Publication Date
CN111698503A CN111698503A (en) 2020-09-22
CN111698503B true CN111698503B (en) 2022-09-09

Family

ID=72483191

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010578517.4A Active CN111698503B (en) 2020-06-22 2020-06-22 Video high-power compression method based on preprocessing

Country Status (1)

Country Link
CN (1) CN111698503B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112312132B (en) * 2020-10-23 2022-08-12 深圳市迪威码半导体有限公司 HEVC intra-frame simplified algorithm based on histogram statistics

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999059343A1 (en) * 1998-05-12 1999-11-18 Hitachi, Ltd. Method and apparatus for video decoding at reduced cost
EP1917813A2 (en) * 2005-08-26 2008-05-07 Electrosonic Limited Image data processing
CN101710993A (en) * 2009-11-30 2010-05-19 北京大学 Block-based self-adaptive super-resolution video processing method and system
CN102281439A (en) * 2011-06-16 2011-12-14 杭州米加科技有限公司 Streaming media video image preprocessing method
CN106960416A (en) * 2017-03-20 2017-07-18 武汉大学 A kind of video satellite compression image super-resolution method of content complexity self adaptation
CN111314711A (en) * 2020-03-31 2020-06-19 电子科技大学 Loop filtering method based on self-adaptive self-guided filtering

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9118932B2 (en) * 2013-06-14 2015-08-25 Nvidia Corporation Adaptive filtering mechanism to remove encoding artifacts in video data

Also Published As

Publication number Publication date
CN111698503A (en) 2020-09-22

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant