CN116647685A - Video encoding method, video encoding device, electronic equipment and readable storage medium - Google Patents

Video encoding method, video encoding device, electronic equipment and readable storage medium

Info

Publication number
CN116647685A
CN116647685A
Authority
CN
China
Prior art keywords
image
frame
frame image
ith
encoding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310663910.7A
Other languages
Chinese (zh)
Inventor
王莉
武晓阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202310663910.7A priority Critical patent/CN116647685A/en
Publication of CN116647685A publication Critical patent/CN116647685A/en
Pending legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/167Position within a video image, e.g. region of interest [ROI]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

An embodiment of the present application provides a video encoding method, a video encoding apparatus, an electronic device, and a readable storage medium. The method includes: acquiring video data to be encoded and characteristic information of the video data, where the video data includes N frames of images, N is an integer greater than or equal to 2, and the characteristic information of the video data is extracted based on image signals; and encoding each frame image according to the characteristic information of each frame image in the video data, obtaining encoded compressed data. The method greatly improves the compression rate of video encoding and thereby ensures high quality of the video images.

Description

Video encoding method, video encoding device, electronic equipment and readable storage medium
The present application is a divisional application of application No. 202011158502.9, filed on October 26, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present application relate to computer technologies, and in particular, to a video encoding method, apparatus, electronic device, and readable storage medium.
Background
Video is a continuous sequence of images consisting of successive frames, each frame being one image. Due to the persistence-of-vision effect of the human eye, when the frame sequence is played at a certain rate, the human eye perceives continuous motion. To facilitate storage and transmission, the original video is encoded and compressed before being stored or transmitted, and is decoded when it needs to be played. Video processing includes image signal processing (ISP) and video encoding and decoding. The ISP processes the signal output by the front-end image sensor, performing black level correction, gamma correction, color correction, demosaicing, noise reduction, sharpening, white balance, automatic exposure control, and so on, and outputs the processed image for subsequent encoding or transmission and display. Video encoding specifically refers to encoding video data before storage or transmission and decoding the video when it needs to be played. Video encoding and decoding follow specific codec standards and compress video data by removing spatial, temporal, and statistical redundancy, greatly reducing the required transmission bandwidth and storage capacity.
In the prior art, after an image sensor obtains an image signal, the image signal is processed by an ISP to obtain processed video data. The processed video data is input to an encoder for encoding compression, and a compressed code stream is output.
However, the prior-art method may result in a low video encoding compression rate, which in turn reduces the quality of the video images.
Disclosure of Invention
The embodiment of the application provides a video coding method, a video coding device, electronic equipment and a readable storage medium, which are used for solving the problem of video image quality reduction caused by low video coding compression rate in the prior art.
In a first aspect, an embodiment of the present application provides a video encoding method, including:
acquiring video data to be encoded and characteristic information of the video data, wherein the video data comprises N frames of images, N is an integer greater than or equal to 2, and the characteristic information of the video data is extracted based on image signals;
and respectively encoding the images of each frame according to the characteristic information of the images of each frame in the video data to obtain encoded compressed data.
In a second aspect, an embodiment of the present application provides a video encoding apparatus, including:
The device comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring video data to be encoded and characteristic information of the video data, the video data comprises N frames of images, N is an integer greater than or equal to 2, and the characteristic information of the video data is extracted based on image signals;
and the processing module is used for respectively encoding the images of each frame according to the characteristic information of the images of each frame in the video data to obtain encoded compressed data.
In a third aspect, an embodiment of the present application provides an electronic device, including:
a memory for storing program instructions;
and the processor is used for calling and executing the program instructions in the memory and executing the method steps in the first aspect.
In a fourth aspect, an embodiment of the present application provides a readable storage medium, where a computer program is stored, where the computer program is configured to perform the method described in the first aspect.
According to the video encoding method, video encoding apparatus, electronic device, and readable storage medium provided by the embodiments of the application, the characteristic information of the video data is obtained before encoding, and each frame image in the video data is encoded according to its own characteristic information. Encoding is thus based on the actual characteristics of each frame image, which greatly improves the video encoding compression rate and in turn ensures high quality of the video images.
Drawings
To illustrate the technical solutions of the present application or the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the application, and a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is an exemplary application scenario diagram of an embodiment of the present application;
FIG. 2 is a schematic diagram of a prior art video encoding process;
FIG. 3 is a diagram of an exemplary system for video encoding processing according to an embodiment of the present application;
fig. 4 is a schematic flow chart of a video encoding method according to an embodiment of the present application;
FIG. 5 is a diagram of motion information provided by an embodiment of the present application;
fig. 6 is a codec framework of a HEVC protocol-based codec;
fig. 7 is a schematic flow chart of a video encoding method according to an embodiment of the present application;
FIG. 8 is an example diagram of selecting a reference frame from a reference frame list;
fig. 9 is a schematic flow chart of a video encoding method according to an embodiment of the present application;
fig. 10 is a block diagram of a video encoding apparatus according to an embodiment of the present application;
Fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the technical solutions in the embodiments of the application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are some, but not all, of the embodiments of the application. All other embodiments obtained by a person skilled in the art based on these embodiments without inventive effort fall within the scope of the application.
The embodiments of the application can be applied to any scenario in which video must be compressed for storage or transmission. Fig. 1 is an exemplary application scenario diagram of an embodiment of the present application. As shown in fig. 1, in a video surveillance scenario, video is collected by image acquisition devices deployed at various locations; each device performs ISP processing and encoding to obtain encoded compressed data. The data is then transmitted over the network to the control center, decoded by the video processing device of the control center, and the decoded data is stored and/or displayed, for example on a large monitoring screen.
Wherein the image acquisition device may be a camera.
The embodiments of the application concern the processing at the encoding end before video storage or transmission; the corresponding decoding end uses processing that matches the encoding end, and it is not described again.
Fig. 2 is a schematic diagram of video encoding processing in the prior art. As shown in fig. 2, after an image sensor (for example, the camera shown in fig. 1) acquires a video signal, ISP processing is performed; the ISP-processed video data enters a codec for encoding, and after encoding is completed a compressed code stream is obtained. As can be seen from fig. 2, in the prior art the codec encodes directly according to a specific video codec protocol based on the processed data output by the ISP. The inventors have found that in some applications this approach may result in a low video encoding compression rate, as the following two examples illustrate.
In one example, when the consistency between adjacent frames of the video data is poor, this approach may result in a low video encoding compression rate; encoding quality then has to be sacrificed to keep the code rate smooth, reducing the quality of the video image. The consistency of preceding and following frames may also be called the similarity of those frames. The problem is illustrated by a specific example. A scene shot by a camera contains both bright and dark areas, so the acquired image may show overexposed bright areas and underexposed dark areas. To improve the accuracy of intelligent analysis, the camera outputs image signals with different exposure times within one frame period and selects the optimal frame for intelligent analysis and other processing. For example, a smart camera may analyze whether a driver drives safely at night: because the vehicle interior is dark, behavior analysis uses video image frames with a longer exposure time, while when an illegal behavior is detected and the license plate information must be tracked, frames with a shorter exposure time from the same video are used. When encoding such a video, the differing ISP processing means that even though the scene is almost the same within one frame period, the similarity between adjacent frames is not high; if the codec directly encodes the video data output by the ISP, the encoding compression rate is therefore low.
In another example, when the codec performs rate control, it uses the same code rate for video data collected in different scenes, which may result in a low video encoding compression rate. The problem is illustrated by a specific example. In security applications, video must be encoded and recorded at all times, yet daytime and nighttime videos differ greatly: nighttime video has poorer content definition and more noise. If the codec still uses the same code rate for nighttime video as for daytime video when performing rate control, a large part of the rate is spent on invalid information, resulting in a low video encoding compression rate.
Considering that encoding directly on the ISP-processed video data may result in a low video encoding compression rate, the embodiments of the application encode based on the characteristic information of each frame image in the video data. The video encoding compression rate is thereby greatly improved when the similarity between adjacent frames is low and/or in night scenes, which in turn ensures high quality of the video images.
Fig. 3 is a system example diagram of a video encoding process according to an embodiment of the present application, and as shown in fig. 3, an embodiment of the present application may relate to an ISP and a codec. Illustratively, the ISP and codec may both be located in the video processing device shown in FIG. 1 described above. After receiving a video signal from a video sensor such as a video camera, the ISP performs processing such as black level correction, gamma correction, color correction, demosaicing, noise reduction, sharpening, white balance, automatic exposure control, and the like on the video data, and transmits the processed video data to a codec. At the same time, the ISP also sends an ISP control signal (ISP-related Ctrl_info) to the codec, wherein the control signal comprises the characteristic information of the video data related to the embodiment of the application. After receiving the video data and ISP control signals, the codec encodes each frame of the video data according to the characteristic information carried by the control signals, and obtains and outputs a compressed code stream.
Fig. 4 is a schematic flow chart of a video encoding method according to an embodiment of the present application, where an execution body of the method may be the aforementioned video processing device, as shown in fig. 4, and the method includes:
s401, obtaining video data to be encoded and characteristic information of the video data, wherein the video data comprises N frames of images, and the characteristic information of the video data is extracted based on image signals.
Wherein N is an integer greater than or equal to 2.
The characteristic information of the video data is extracted based on the image signal. The feature information may be extracted directly from the image signal or based on information after processing the image signal, which is not limited by the embodiment of the present application.
Alternatively, the characteristic information of the video data may be obtained by processing the video data by the ISP processing module, so that the characteristic information of the video data can represent the real characteristics of the image frames in the video data.
And S402, respectively encoding each frame image according to the characteristic information of each frame image in the video data to obtain encoded compressed data.
Optionally, the feature information may include at least one of motion information of an image, an exposure parameter, and a photosensitivity. Motion information can distinguish different areas within an image frame; an exposure parameter can characterize the similarity between adjacent frames; and photosensitivity can characterize the scene corresponding to each frame image, such as scenes with different ambient light intensities. Encoding according to at least one of these features allows the encoding to be based on the actual characteristics of each frame image, so the video encoding compression rate can be improved.
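As a sketch, the per-frame characteristic information carried in the ISP control signal could be modeled as below. The field names and types are illustrative assumptions, not the patent's actual data layout:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FrameFeatures:
    """Per-frame characteristic information from the ISP control signal.

    Any of the three fields may be absent, matching "at least one of
    motion information, exposure parameter, and photosensitivity".
    """
    exposure_time: Optional[float] = None          # exposure parameter (seconds)
    iso: Optional[float] = None                    # photosensitivity (sensor gain)
    motion_map: Optional[List[List[int]]] = None   # 1 = moving block, 0 = still
```

A codec front end would read such a record for each frame and dispatch to the corresponding encoding mode described below.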
In this embodiment, the feature information of the video data is obtained before encoding, and each frame image of the video data is encoded according to the feature information of each frame image in the video data, so that the encoding can be performed based on the actual feature of each frame image, thereby greatly improving the compression rate of video encoding and further ensuring the high quality of the video image.
As described above, the characteristic information of the video data may include at least one of motion information of an image, an exposure parameter, and a photosensitivity.
Alternatively, the exposure parameter may refer to exposure time.
The above-described exposure parameters and/or photosensitivity may be obtained from an ISP processing module, for example, the ISP processing module extracts at least one of the exposure parameters and photosensitivity directly from the image signal. Taking the system architecture shown in fig. 3 as an example, after the ISP processing module processes the video signal sent by the video sensor, the ISP processing module sends an ISP control signal to the codec in addition to the processed video data, and the ISP processing module may carry at least one of an exposure parameter and a photosensitivity of each frame of image in the video data in the ISP control signal.
The processed video data may include: motion information of the image.
Given that the characteristic information of the video data comprises at least one of motion information of the image, an exposure parameter, and a photosensitivity, the codec may encode in the following possible ways:
mode (1): coding based solely on exposure parameters
It should be understood that encoding based only on exposure parameters may mean that the ISP control signal carries only the exposure parameter and the codec encodes based on it, or that the ISP control signal carries both the exposure parameter and the photosensitivity but the codec uses only the exposure parameter for encoding.
In encoding based on exposure parameters only, the codec may use either of the following two alternatives.
In a first alternative, the codec may manage reference frames based on the exposure parameter, so that the similarity between each frame and its reference frame is greatly improved; encoding based on highly similar adjacent frames then greatly improves the encoding compression rate.
In a second alternative, after receiving the video data and before starting encoding, the codec may divide the N frame images in the video data into at least two groups according to the exposure parameters, so that images within the same group have high similarity, and then encode each group separately, greatly improving the encoding compression rate.
The specific implementation of the two alternatives will be described in detail in the following embodiments.
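The second alternative — partitioning the N frames into groups that share an exposure parameter — can be sketched as follows. The `(frame_index, exposure_time)` pair representation is an assumption for illustration:

```python
def group_by_exposure(frames):
    """Partition frames into groups with identical exposure parameters.

    frames: list of (frame_index, exposure_time) pairs.
    Returns a dict mapping each exposure time to the frame indices that
    use it; each group would then be encoded separately, so that every
    frame's neighbors within its group are highly similar.
    """
    groups = {}
    for idx, exposure in frames:
        groups.setdefault(exposure, []).append(idx)
    return groups
```

For the alternating-exposure stream of fig. 8, `group_by_exposure([(0, 10), (1, 40), (2, 10), (3, 40)])` yields one group per exposure time, `{10: [0, 2], 40: [1, 3]}`.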
Mode (2): encoding based on light sensitivity only
It should be understood that encoding based only on photosensitivity may mean that the ISP control signal carries only the photosensitivity and the codec encodes based on it, or that the ISP control signal carries both the exposure parameter and the photosensitivity but the codec uses only the photosensitivity for encoding.
When encoding is performed based on only the photosensitivity, the codec can recognize the scene in which each frame of image is located according to the photosensitivity, and perform rate control on each frame of image according to the scene. The specific procedure will be described in detail in the following examples.
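The photosensitivity-driven rate control could look like the sketch below. The ISO thresholds and bitrate divisors are invented for illustration; the patent only states that high-gain (night) frames carry noise rather than detail and should therefore receive a lower rate:

```python
def target_bitrate(iso, day_bitrate_kbps=4096):
    """Map sensor gain (photosensitivity) to a target bitrate.

    Low ISO implies a bright daytime scene worth full quality; high ISO
    implies a dark, noisy scene where a high rate would be wasted on
    invalid information. Thresholds are illustrative assumptions.
    """
    if iso <= 400:         # bright scene: keep the full daytime rate
        return day_bitrate_kbps
    elif iso <= 1600:      # dim scene: moderate reduction
        return day_bitrate_kbps // 2
    else:                  # night scene: spend far fewer bits on noise
        return day_bitrate_kbps // 4
```

The codec would evaluate this per frame, so a day/night transition automatically shifts the rate budget without any explicit scene flag.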
Mode (3): simultaneously encoding based on exposure parameters and photosensitivity
And encoding the ith frame image according to the exposure parameters of the ith frame image and the photosensitivity of the ith frame image to obtain encoded compressed data corresponding to the ith frame image.
As can be seen from the descriptions of encoding based only on the exposure parameter and encoding based only on the photosensitivity, the two are applied at different stages of the encoding process: the exposure parameter is used at the reference frame management stage, or in the grouping performed before encoding starts, while the photosensitivity is used at the code rate control stage. Therefore, when encoding is based on both the exposure parameter and the photosensitivity, the two methods may be directly combined. Specifically, the following two combinations are possible:
In the first combination, encoding is performed according to the reference frame and the photosensitivity. Specifically, reference frame management is performed using the first alternative of mode (1), i.e. the reference frame of each frame image is determined; rate control is performed using the photosensitivity, i.e. the code rate of each frame image is determined according to the photosensitivity; and each frame image is then encoded with its code rate according to its reference frame, obtaining the encoded compressed data.
In the second combination, the N frame images are divided into at least two groups using the second alternative of mode (1), and each group of images is encoded separately. When encoding each group, the reference frame of each frame image is determined, the code rate of each frame image is determined according to the photosensitivity, and each frame image is then encoded with its code rate according to its reference frame, obtaining the encoded compressed data.
Mode (4): coding based on motion information of an image
The motion information of an image can be represented by a motion information map, such as the one shown in fig. 5, obtained after processing the image signal. Motion information can distinguish different types of regions in an image frame, such as still regions and motion regions; a still region is typically the background of the image, and a motion region its foreground. In a surveillance scene, for example, the foreground receives more attention and the background region less, so a lower code rate can be allocated to the still regions of the image and a higher code rate to the motion regions. In other words, the code rate used when encoding each frame image can be controlled according to the motion information of the image.
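Region-based rate allocation from such a motion map can be sketched as a per-block quantization parameter (QP) assignment: moving blocks get a lower QP (more bits), still blocks a higher QP. The base QP and offset values are illustrative assumptions:

```python
def region_qp(motion_map, base_qp=30, delta=6):
    """Assign a QP to each block of a frame from a binary motion map.

    motion_map: 2-D list where 1 marks a moving (foreground) block and
    0 a still (background) block. Lowering QP spends more bits, so
    foreground detail is preserved while background is compressed harder.
    """
    return [
        [base_qp - delta if moving else base_qp + delta for moving in row]
        for row in motion_map
    ]
```

For a 2×2 frame with one moving block per row, `region_qp([[1, 0], [0, 1]])` returns `[[24, 36], [36, 24]]` with the default parameters.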
Mode (5): encoding based on exposure parameters and motion information of an image
And encoding the ith frame image according to the exposure parameters of the ith frame image and the motion information of the ith frame image to obtain encoded compressed data corresponding to the ith frame image.
Illustratively: determine the reference frame of the ith frame image according to the exposure parameter of the ith frame image; determine the code rate of the ith frame image according to the motion information of the ith frame image; and, according to the reference frame of the ith frame image, encode the ith frame image using its code rate to obtain the encoded compressed data corresponding to the ith frame image.
In an embodiment, code rates corresponding to at least two types of image areas in the ith frame image are determined according to the motion information of the ith frame image, and the ith frame image is then encoded according to the reference frame using the code rates corresponding to the at least two types of image areas, obtaining the compressed data after encoding of the ith frame image.
The coding in this mode (5) is similar to that in mode (3) based on both the exposure parameter and the photosensitivity.
As described above, the codec encodes based on a specific codec standard, for example h.264, h.265, or VVC. H.264 is Advanced Video Coding (AVC), h.265 is High Efficiency Video Coding (HEVC), and VVC (Versatile Video Coding) is the next-generation codec standard. Fig. 6 shows the codec framework of an HEVC-based codec, including intra/inter prediction, transformation, quantization, entropy coding, and so on. Loop filtering, comprising deblocking filtering and sample adaptive offset (SAO), is used to improve video quality and reduce blocking artifacts.
In the embodiments of the present application, if the first alternative of mode (1) is used, reference frame management may be performed in the inter-prediction stage shown in fig. 6. If the second alternative of mode (1) is used, the N frame images may be divided into at least two groups after the video data is received and before inter prediction is performed. In addition, encoding based on photosensitivity or motion information may be performed in the quantization stage shown in fig. 6.
The following describes the processes of encoding based on the exposure parameter and encoding based on the photosensitivity, respectively. The process of encoding based on both the exposure parameter and the photosensitivity is the combination (or superposition) of the two processes and is not described again.
It should be noted that, in the following, the frame referred to in the embodiments of the present application may refer to a frame image.
Fig. 7 is a schematic flow chart of a video encoding method according to an embodiment of the present application, as shown in fig. 7, a processing procedure of a first alternative mode in the above mode (1) includes:
s601, determining a reference frame of the ith frame image according to the exposure parameters of the ith frame image.
The i-th frame may refer to a frame to be encoded that is currently processed. The value of i can be any integer of more than 1 and less than or equal to N.
In determining the reference frame of the i-th frame image from the exposure parameter of the i-th frame, either one of the following two modes may be used.
In the first mode, when i is greater than or equal to 3, a reference frame of the ith frame image is determined from the reference frame list according to the exposure parameter of the ith frame image.
In this way, when i is equal to 1, i.e. for the first frame in the video data, I-frame encoding is employed directly. When i is equal to 2, i.e. for the second frame in the video data, the encoded first frame is selected as the reference frame, since there are no other referenceable frames. When i is greater than or equal to 3, the most suitable frame may be selected from the reference frame list as the reference frame of the ith frame according to the exposure parameter of the ith frame image.
In an alternative embodiment, the determining the reference frame of the i-th frame image according to the exposure parameter of the i-th frame image may be implemented as follows:
dividing the video data into at least two groups of images according to exposure parameters; wherein the same group of images have the same exposure parameters;
and determining a reference frame of the ith frame image according to the exposure parameters of the ith frame image for any group of images.
For any one of the sets of images, the reference frame of the i-th frame image is determined according to the exposure parameters of the i-th frame image, and reference may be made to the schemes described in the embodiments below.
As an alternative embodiment, the reference frame selected from the reference frame list may be a reference frame satisfying the following condition:
and a reference frame located before the ith frame image, whose frame number differs least from the frame number of the ith frame image and whose exposure parameter differs least from the exposure parameter of the ith frame image.
Wherein being located before the i-th frame image means that encoding is performed before the i-th frame image.
Fig. 8 is an exemplary diagram of selecting a reference frame from a reference frame list. As shown in fig. 8, the first frame is f0, the second frame is f1, the third frame is f2, and so on. Meanwhile, the exposure time of f0 is t1, the exposure time of f1 is t2, the exposure time of f2 is t1, and so on. That is, from f2 onwards, each frame of image has the same exposure time as the frame located before it and spaced one frame from it, while adjacent frames have different exposure times. If the prior-art approach were used, the video coding compression rate would be low, because adjacent frames with different exposure times have low similarity. In the embodiment shown in fig. 8, the source of each arrow is the reference frame referenced by that frame: f0 is the first frame during encoding and adopts I-frame encoding; f1 selects the encoded f0 as the reference frame because there are no other referenceable frames; and each subsequent P frame references, for P-frame encoding, the forward frame in the reference frame buffer that is nearest and has the closest exposure parameter. When a reference frame is selected for the ith frame image according to the above condition, the image before the ith frame that is nearest to it and closest in exposure time is selected as the reference frame. On this basis, the reference frame of the third frame f2 is the first frame f0, the reference frame of the fourth frame f3 is f1, and so on.
In this way, the frame that is nearest to the ith frame and has the smallest exposure parameter difference is used as the reference frame of the ith frame image, so the similarity between the ith frame and its reference frame is high, and the video coding compression rate is greatly improved. Meanwhile, because the nearest such frame is selected, reference frame selection is fast and efficient. In addition, this approach changes only the reference frame selection and has no influence on the subsequent processing of the codec, and thus has high compatibility.
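The selection rule above can be sketched as follows. This is a minimal illustration assuming exposure similarity takes priority over frame distance (consistent with the fig. 8 example, where f2 references f0 rather than the nearer f1); the frame records and field names are hypothetical, not part of the patent.

```python
def select_reference_frame(encoded_frames, i, exposure_i):
    """Pick the reference frame for frame i: among already-encoded frames
    before i, prefer the smallest exposure-parameter difference, then the
    smallest frame-number distance."""
    candidates = [f for f in encoded_frames if f["index"] < i]
    if not candidates:
        return None  # frame 0 is I-frame encoded and has no reference
    return min(candidates,
               key=lambda f: (abs(f["exposure"] - exposure_i), i - f["index"]))

# Alternating exposures t1=1.0 / t2=2.0 as in fig. 8: f0, f1, f2 encoded so far
frames = [{"index": 0, "exposure": 1.0},
          {"index": 1, "exposure": 2.0},
          {"index": 2, "exposure": 1.0}]
ref = select_reference_frame(frames, 3, 2.0)  # f3 has exposure t2
print(ref["index"])  # -> 1, the nearest forward frame with matching exposure
```

Note that with this key, a same-exposure frame two positions back beats an adjacent frame with a different exposure, which reproduces the f2-references-f0 pattern of fig. 8.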
In the second mode, when i is greater than or equal to 2, the (i-1)th frame image is subjected to brightness adjustment according to the difference between the exposure parameter of the ith frame image and the exposure parameter of the (i-1)th frame image, so as to obtain a processed (i-1)th frame image, and the processed (i-1)th frame image is used as the reference frame of the ith frame image.
In this mode, I-frame encoding is directly employed when i is equal to 1, i.e. for the first frame in the video data. When i is greater than or equal to 2, i.e. from the second frame onwards, the brightness of the (i-1)th frame image before the ith frame image is adjusted so that the brightness difference between the processed (i-1)th frame image and the ith frame image is smaller than a threshold, i.e. their brightness is substantially close, and the processed (i-1)th frame image is used as the reference frame of the ith frame image. After this processing, the similarity between the ith frame and its reference frame is high, so the video coding compression rate can be greatly improved.
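A minimal sketch of this brightness adjustment, under the simplifying assumption that pixel brightness scales linearly with exposure time; the function, the flat pixel list, and the 8-bit clipping are illustrative only, not the patent's exact processing.

```python
def adjust_brightness(prev_frame, exp_prev, exp_cur):
    """Scale the (i-1)-th frame's luma toward the i-th frame's exposure,
    assuming brightness is roughly proportional to exposure time.
    prev_frame is a flat list of 8-bit luma samples."""
    gain = exp_cur / exp_prev
    return [min(255, max(0, round(p * gain))) for p in prev_frame]

# previous frame captured with exposure 2.0, current with 1.0 -> halve the luma
processed = adjust_brightness([100, 200, 40], 2.0, 1.0)
print(processed)  # -> [50, 100, 20]
```

The processed frame then serves as the reference for inter prediction of frame i, since its brightness now roughly matches that of frame i.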
S602, encoding the ith frame image according to the reference frame to obtain encoded compressed data corresponding to the ith frame image.
Taking the codec framework shown in fig. 6 as an example, step S601 may be performed in the inter-frame prediction stage. After the reference frame of the i-th frame is obtained in step S601 by either of the above modes, processes such as transformation and quantization may then be performed to complete encoding compression. It should be understood that rate control in the quantization stage may use the method of the embodiments of the present application, or the prior-art method.
In this embodiment, the reference frame of the ith frame is determined by using the exposure parameter, so that the similarity between the ith frame and the reference frame is greatly improved, and a high video coding compression rate is further realized.
The second alternative processing procedure in the above-described mode (1) is described below.
In this process, the codec divides the N frame images of the video data into at least two groups according to exposure parameters after the video data is received and before encoding starts. For example, where there are two exposure times t1 and t2 in the video data, the codec may divide the images with exposure time t1 into one group to form one frame set, and the images with exposure time t2 into another group to form another frame set. Then, the frame set corresponding to t1 is encoded first, followed by the frame set corresponding to t2. Since the exposure time of the images in each set is the same, the similarity of the images is high, and thus a high encoding compression rate can be obtained in the subsequent encoding.
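The grouping step can be sketched as follows; this is a minimal illustration in which each frame is represented only by its exposure time, with each set then encoded as its own sequence.

```python
from collections import OrderedDict

def group_by_exposure(exposures):
    """Split the frame sequence into sets sharing the same exposure time,
    preserving capture order inside each set. Returns {exposure: [frame
    indices]} in first-seen order, so the t1 set comes before the t2 set."""
    groups = OrderedDict()
    for idx, exp in enumerate(exposures):
        groups.setdefault(exp, []).append(idx)
    return groups

# exposures alternate t1=1.0 / t2=2.0 as in the example above
sets = group_by_exposure([1.0, 2.0, 1.0, 2.0, 1.0])
print(dict(sets))  # -> {1.0: [0, 2, 4], 2.0: [1, 3]}
```

Each resulting set contains only same-exposure (hence highly similar) frames, which is what makes the subsequent inter-frame compression effective.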
The following describes a process of encoding based on the photosensitivity in the above-described mode (2).
Fig. 9 is a schematic flow chart of a video encoding method according to an embodiment of the present application, as shown in fig. 9, a process of encoding according to photosensitivity may include:
s801, determining a scene where the ith frame image is located according to the photosensitivity of the ith frame image.
The ith frame may refer to any one of N frames.
For example, when the image sensor collects images, its photosensitivity differs across scenes with different ambient light intensities, so the codec can read the photosensitivity of the i-th frame image from the ISP control signal and thereby determine in which scene the i-th frame image was shot.
For example, in outdoor scenes of day and night, the difference in ambient light brightness is large, and the photosensitivity is also different.
S802, determining the code rate of the ith frame image according to the scene of the ith frame image.
As described above, video shot at night has poor definition and more noise, so a large code rate is not required for an image shot in a night scene. Therefore, when the codec determines that the i-th frame image is a night image, a smaller code rate can be selected for it; and when the codec determines that the i-th frame image is a daytime image, a larger code rate can be selected.
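A minimal sketch of this scene-based rate selection; the ISO threshold and the two bitrate values are hypothetical placeholders, not values from the patent.

```python
def pick_bitrate(iso, day_bps=4_000_000, night_bps=1_500_000, iso_night=1600):
    """Classify the scene from the sensor's photosensitivity (ISO) read out
    of the ISP control signal, then pick a target bitrate: night images are
    noisy and unclear, so they get the smaller budget."""
    scene = "night" if iso >= iso_night else "day"
    return scene, day_bps if scene == "day" else night_bps

print(pick_bitrate(100))   # -> ('day', 4000000)
print(pick_bitrate(3200))  # -> ('night', 1500000)
```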
S803, coding by using the code rate of the ith frame image to obtain compressed data after the coding of the ith frame image.
Alternatively, encoding using the code rate of the ith frame image may refer to rate control according to that code rate, where rate control may be achieved through quantization. Taking the codec framework shown in fig. 6 as an example, the processing of this embodiment may be performed in the quantization stage of fig. 6. In one approach, rate control may be achieved by adjusting the quantization parameter (QP) in macroblock coding so that the code rate of each macroblock approaches or reaches a target code rate. In this embodiment, the target code rate may refer to the code rate of the i-th frame image obtained in step S802. Each QP corresponds to a quantization step size, so the step size is determined by the QP. In another approach, rate control may also adjust the quantization step size of each transformed component through a scaling matrix. In this embodiment, for an image belonging to a night scene, the quantization of high-frequency components can be coarsened to improve the compression rate.
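The QP-based rate control above can be sketched as a simple feedback loop; this is an illustrative simplification, not the patent's controller. The `quant_step` relation (step size roughly doubling every 6 QP) follows the H.264/AVC convention.

```python
def adjust_qp(qp, actual_bits, target_bits, qp_min=0, qp_max=51):
    """One step of a naive feedback rate control: raise QP (coarser
    quantization, fewer bits) when over budget, lower it when under.
    The 10% dead band avoids oscillating around the target."""
    if actual_bits > target_bits * 1.1:
        qp += 1
    elif actual_bits < target_bits * 0.9:
        qp -= 1
    return max(qp_min, min(qp_max, qp))

def quant_step(qp):
    """H.264-style QP-to-step mapping: the step doubles every 6 QP."""
    return 0.625 * 2 ** (qp / 6)

print(adjust_qp(30, 1200, 1000))  # -> 31: over budget, quantize coarser
print(round(quant_step(30) / quant_step(24), 2))  # -> 2.0
```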
In this embodiment, the scene of the i-th frame can be obtained from the photosensitivity, and a code rate matched to that scene can then be selected as the code rate of the i-th frame for rate control. For example, a smaller code rate can be selected for an image shot at night, which avoids spending a large code rate on invalid information and further improves the video coding compression rate.
In one embodiment, the encoding process according to the motion information of the image may include:
dividing the ith frame image into regions according to the motion information of the ith frame image to obtain at least two types of image regions, wherein i is an integer greater than or equal to 1 and less than or equal to N;
determining code rates corresponding to the at least two types of image areas according to the at least two types of image areas;
and coding the ith frame of image according to the code rates corresponding to the at least two types of image areas to obtain compressed data after coding the ith frame of image.
Specifically, the motion information of an image can distinguish different types of image areas within an image frame, for example a still area and a motion area, where the still area may be the background area of the image and the motion area the foreground area. Fig. 5 shows a motion information map obtained after processing an image signal. Generally, the motion area is the area of interest to the user; for example, in a monitoring scene, more attention is paid to the motion information of the foreground and less to the background area. Therefore, the image is divided into areas during encoding, and different types of areas correspond to different code rates.
In this embodiment, firstly, according to the motion information of the ith frame image, the ith frame image is divided into regions to obtain at least two types of image regions, such as a motion region and a still region;
then, according to the at least two types of regions, their corresponding code rates are determined. For example, a still region in the image can be allocated a smaller code rate, and a motion region a larger code rate; that is, the code rate in the encoding of each frame image can be controlled according to the motion information of the image.
For example, in a monitoring scene more attention is paid to the motion information of the foreground and less to the background area. The light area in fig. 5 corresponds to the background area; a smaller code rate can be used for it in encoding, quantizing with a larger step size. The dark region corresponds to the motion region; a larger code rate can be used for it, quantizing with a small step size, so that better quality is ensured.
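The per-region allocation can be sketched as a per-block QP assignment driven by a motion map, where motion blocks get a lower QP (finer quantization, more bits) and background blocks a higher one; the threshold and QP offset here are hypothetical.

```python
def block_qps(motion_map, base_qp=32, motion_qp_offset=-4, threshold=0.5):
    """Assign a QP per macroblock from a motion map: blocks whose motion
    score reaches the threshold are treated as foreground and quantized
    more finely (lower QP), static background keeps the base QP."""
    return [base_qp + motion_qp_offset if m >= threshold else base_qp
            for m in motion_map]

# one score per macroblock: 0.0 = static background, 1.0 = strong motion
print(block_qps([0.0, 0.9, 0.2, 1.0]))  # -> [32, 28, 32, 28]
```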
Where i is equal to 1, i.e. for the first frame in the video data, I-frame coding can be directly employed.
In this embodiment, the encoding according to the code rate of the ith frame image may refer to the code rate control according to the code rate, which is similar to that in the embodiment of fig. 9 and will not be described herein again.
In this embodiment, different regions in the image may be distinguished according to the motion information, and further, different code rates may be selected for different types of regions, for example, for a motion region in the image, a larger code rate may be selected, and for a still region in the image, a smaller code rate may be selected, so that a larger code rate may be prevented from being used for invalid information, and further, the video coding compression rate may be improved.
Fig. 10 is a block diagram of a video encoding apparatus according to an embodiment of the present application, as shown in fig. 10, where the apparatus includes:
the obtaining module 901 is configured to obtain video data to be encoded and feature information of the video data, where the video data includes N frames of images, N is an integer greater than or equal to 2, and the feature information of the video data is extracted based on the image information.
And a processing module 902, configured to encode the images of each frame according to the feature information of the images of each frame in the video data, so as to obtain encoded compressed data.
As an alternative embodiment, the processing module 902 is specifically configured to:
dividing the region of the ith frame image according to the motion information of the ith frame image to obtain at least two types of image regions, wherein i is an integer which is more than or equal to 1 and less than or equal to N;
Determining code rates corresponding to the at least two types of regions according to the at least two types of regions;
and coding the ith frame image according to the code rates corresponding to the at least two types of areas to obtain compressed data after coding the ith frame image.
As an alternative embodiment, the processing module 902 is specifically configured to:
determining a reference frame of the ith frame image according to the exposure parameters of the ith frame image; and according to the reference frame, encoding the ith frame image to obtain encoded compressed data corresponding to the ith frame image.
As an alternative embodiment, the processing module 902 is specifically configured to:
dividing the video data into at least two groups of images according to exposure parameters; wherein the same group of images have the same exposure parameters;
and determining a reference frame of the ith frame image according to the exposure parameters of the ith frame image for any group of images.
As an alternative embodiment, the processing module 902 is specifically configured to:
and when i is more than or equal to 3, determining a reference frame of the ith frame image from a reference frame list according to the exposure parameter of the ith frame image.
As an optional implementation manner, the reference frame of the ith frame image is a reference frame satisfying the following conditions:
And the reference frame is located before the ith frame image, its frame sequence number differs least from the frame sequence number of the ith frame image, and its exposure parameter differs least from the exposure parameter of the ith frame image.
As an alternative embodiment, the processing module 902 is specifically configured to:
determining the code rate of an ith frame image according to the light sensitivity of the ith frame image;
and according to the reference frame, encoding the ith frame image by using the code rate of the ith frame image to obtain compressed data after the encoding of the ith frame image.
As another alternative embodiment, the processing module 902 is specifically configured to:
when i is greater than or equal to 2, performing brightness adjustment on the (i-1)th frame image according to the difference between the exposure parameter of the ith frame image and the exposure parameter of the (i-1)th frame image, to obtain a processed (i-1)th frame image;
and taking the processed (i-1)th frame image as the reference frame of the ith frame image.
As an alternative embodiment, the processing module 902 is specifically configured to:
and according to the light sensitivity of each frame of image, encoding each frame of image to obtain encoded compressed data.
As an alternative embodiment, the processing module 902 is specifically configured to:
Determining the scene where an ith frame image is located according to the photosensitivity of the ith frame image, wherein the scene includes day and night; determining the code rate of the ith frame image according to the scene of the ith frame image; and encoding using the code rate of the ith frame image to obtain compressed data after encoding of the ith frame image.
As an alternative embodiment, the processing module 902 is specifically configured to:
determining code rates corresponding to at least two types of image areas in the ith frame image according to the motion information of the ith frame image;
and according to the reference frame, coding the ith frame image by using code rates corresponding to the at least two types of areas to obtain compressed data after coding the ith frame image.
The video coding device provided by the embodiment of the present application may execute the method steps in the above method embodiment, and its implementation principle and technical effects are similar, and will not be described herein.
It should be noted that the division of the modules of the above apparatus is merely a division by logical function; in actual implementation, they may be fully or partially integrated into one physical entity or may be physically separated. These modules may all be implemented in the form of software called by a processing element, or all in hardware; alternatively, some modules may be implemented in the form of software called by a processing element and others in hardware. For example, the determining module may be a separately established processing element, may be integrated in a chip of the above apparatus, or may be stored in the memory of the above apparatus in the form of program code that a processing element of the above apparatus calls to execute the functions of the determining module. The implementation of the other modules is similar. In addition, all or part of these modules can be integrated together or implemented independently. The processing element described here may be an integrated circuit with signal processing capability. In implementation, each step of the above method, or each module above, may be completed by an integrated logic circuit of hardware in a processor element or by instructions in the form of software.
For example, the above modules may be one or more integrated circuits configured to implement the above methods, such as one or more application-specific integrated circuits (ASIC), one or more digital signal processors (DSP), or one or more field-programmable gate arrays (FPGA). For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a central processing unit (CPU) or another processor that can invoke program code. For another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SOC).
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center containing an integration of one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid-state disk (SSD)).
Fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device may be the above-described codec, or a device including a codec, to which the present application is not particularly limited. As shown in fig. 11, the electronic device may include: a processor 101, a memory 102, a communication interface 103, and a system bus 104, where the memory 102 and the communication interface 103 are connected to the processor 101 through the system bus 104 and communicate with each other through it; the memory 102 is used for storing computer-executable instructions, the communication interface 103 is used for communicating with other devices, and the processor 101 implements the schemes of the embodiments shown in figures 3 to 8 above when executing the computer program.
The system bus referred to in fig. 11 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The system bus may be classified into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one bold line is shown in the figure, but this does not mean that there is only one bus or one type of bus. The communication interface is used to enable communication between the database access apparatus and other devices (e.g., clients, read-write libraries, and read-only libraries). The memory may comprise random access memory (RAM) and may also include non-volatile memory, such as at least one disk memory.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a network processor (network processor, NP), etc.; but may also be a digital signal processor DSP, an application specific integrated circuit ASIC, a field programmable gate array FPGA or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component.
Optionally, an embodiment of the present application further provides a storage medium, where instructions are stored, when the storage medium runs on a computer, to cause the computer to perform the method of the embodiment shown in fig. 3 to 9.
Optionally, an embodiment of the present application further provides a chip for executing instructions, where the chip is configured to perform the method of the embodiment shown in fig. 3 to fig. 9.
Embodiments of the present application also provide a program product comprising a computer program stored in a storage medium, from which at least one processor can read, the at least one processor executing the computer program implementing the method of the embodiments shown in fig. 1, 3 to 9.
The embodiment of the application also provides a coding and decoding system which comprises an encoder and a decoder, wherein the encoder is used for realizing the method of the embodiment shown in the figures 1, 3 to 9.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship of associated objects and indicates that three relationships may exist; for example, A and/or B may represent: A alone, both A and B, and B alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship; in a formula, the character "/" indicates that the objects before and after it are in a "division" relationship. "At least one of the following" or similar expressions refers to any combination of these items, including any combination of single items or plural items. For example, at least one of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may be single or plural.
It will be appreciated that the various numbers referred to in the embodiments of the present application are merely for ease of description and are not intended to limit the scope of the embodiments of the present application.
It should be understood that, in the embodiments of the present application, the sequence number of each process does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the application.

Claims (15)

1. A video encoding method based on image signal control information, comprising:
acquiring an image processing control signal, wherein information carried in the image processing control signal is used for encoding each frame of video data, the information comprises exposure parameters used for representing similarity between adjacent frames, the video data comprises N frames of images, and N is an integer greater than or equal to 2;
and determining a reference frame of the ith frame image in the images before the ith frame image according to the exposure parameters of the ith frame image, wherein i is greater than 1, and the exposure parameters at least comprise exposure time.
2. The method according to claim 1, wherein said operation of determining a reference frame of an i-th frame image is performed in an inter-prediction phase;
Prior to the inter prediction stage, the method further comprises:
dividing N frames of images in the video data into at least two groups according to the difference of the exposure parameters;
for each frame of image in any of the sets of images, a reference frame for each frame of image is determined.
3. The method according to claim 1 or 2, wherein the determining the reference frame of the i-th frame image in the image preceding the i-th frame image according to the exposure parameter of the i-th frame image comprises:
among the images before the ith frame image, the frame closest to the ith frame and having the smallest difference in exposure parameters is taken as the reference frame of the ith frame image.
4. The method according to claim 1 or 2, wherein the determining the reference frame of the i-th frame image in the image preceding the i-th frame image according to the exposure parameter of the i-th frame image comprises:
performing brightness adjustment on the (i-1)th frame image according to the difference between the exposure parameters of the ith frame image and the exposure parameters of the (i-1)th frame image, to obtain a processed (i-1)th frame image;
and taking the processed (i-1)th frame image as the reference frame of the ith frame image.
5. The method according to any one of claims 1 to 4, wherein after determining a reference frame of an i-th frame image in an image preceding the i-th frame image according to an exposure parameter of the i-th frame image, the method further comprises:
And encoding the ith frame image according to the reference frame to obtain encoded compressed data corresponding to the ith frame image.
6. The method of any of claims 1-4, wherein the information carried in the image processing control signal further comprises: a photosensitivity representing the ambient light brightness information of the scene corresponding to the image, wherein the photosensitivity differs in scenes with different ambient light brightness;
after the reference frame of the ith frame image is determined in the image before the ith frame image according to the exposure parameters of the ith frame image, the method further comprises:
identifying a scene where the ith frame image is located according to the photosensitivity, and determining the code rate of the ith frame image according to the scene;
and coding by using the code rate of the ith frame image to obtain coded compressed data corresponding to the ith frame image.
7. The method of claim 6, wherein the encoding using the code rate of the i-th frame image to obtain the encoded compressed data of the i-th frame image comprises:
and according to the reference frame of the ith frame image, coding the ith frame image by utilizing the code rate of the ith frame image to obtain coded compressed data.
8. The method of claim 2, wherein after determining the reference frame for each frame of image for each set of images, the method further comprises:
identifying the scene of each frame of image according to the photosensitivity, and determining the code rate of each frame of image according to the scene of each frame of image;
and respectively encoding each frame of image by utilizing the code rate of each frame of image according to the reference frame of each frame of image to obtain encoded compressed data.
9. The method of any of claims 6-8, wherein a code rate of the images of the daytime scene is greater than a code rate of the images of the night time scene.
10. The method according to any one of claims 1-4, wherein the information carried in the image processing control signal further comprises motion information for distinguishing between different regions in an image frame;
after the reference frame of the ith frame image is determined in the image before the ith frame image according to the exposure parameters of the ith frame image, the method further comprises:
determining the code rate of the ith frame image according to the motion information of the ith frame image;
according to the reference frame of the ith frame image, coding the ith frame image by using the code rate of the ith frame image to obtain coded compressed data corresponding to the ith frame image;
Wherein the determined code rate comprises: and code rates corresponding to at least two types of image areas in the ith frame of image.
11. The method according to any one of claims 1 to 10, wherein the information carried in the image processing control signal is directly extracted from the image signal output from the image sensor, or the information carried in the image processing control signal is extracted based on the information after processing the image signal.
12. A video encoding apparatus based on image signal control information, comprising:
the acquisition module is used for acquiring an image processing control signal, wherein information carried in the image processing control signal is used for encoding each frame of video data, the information comprises exposure parameters used for representing similarity between adjacent frames, the video data comprises N frames of images, and N is an integer greater than or equal to 2;
the processing module is used for determining a reference frame of the ith frame image in the images before the ith frame image according to the exposure parameters of the ith frame image, wherein i is greater than 1, and the exposure parameters at least comprise exposure time.
13. An electronic device, comprising:
A memory for storing program instructions;
a processor for invoking and executing program instructions in said memory to perform the method steps of any of claims 1-11.
14. A readable storage medium, characterized in that the readable storage medium has stored therein a computer program for executing the method of any one of claims 1-11.
15. A codec system, comprising an encoder and a decoder, wherein the encoder is configured to perform the method of any one of claims 1-11.
CN202310663910.7A 2020-10-26 2020-10-26 Video encoding method, video encoding device, electronic equipment and readable storage medium Pending CN116647685A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310663910.7A CN116647685A (en) 2020-10-26 2020-10-26 Video encoding method, video encoding device, electronic equipment and readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011158502.9A CN112351280B (en) 2020-10-26 2020-10-26 Video encoding method, video encoding device, electronic equipment and readable storage medium
CN202310663910.7A CN116647685A (en) 2020-10-26 2020-10-26 Video encoding method, video encoding device, electronic equipment and readable storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202011158502.9A Division CN112351280B (en) 2020-10-26 2020-10-26 Video encoding method, video encoding device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN116647685A true CN116647685A (en) 2023-08-25

Family

ID=74358560

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202310663910.7A Pending CN116647685A (en) 2020-10-26 2020-10-26 Video encoding method, video encoding device, electronic equipment and readable storage medium
CN202011158502.9A Active CN112351280B (en) 2020-10-26 2020-10-26 Video encoding method, video encoding device, electronic equipment and readable storage medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202011158502.9A Active CN112351280B (en) 2020-10-26 2020-10-26 Video encoding method, video encoding device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (2) CN116647685A (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113453070B (en) * 2021-06-18 2023-01-03 北京灵汐科技有限公司 Video key frame compression method and device, storage medium and electronic equipment
CN113923476B (en) * 2021-09-30 2024-03-26 支付宝(杭州)信息技术有限公司 Video compression method and device based on privacy protection
CN114422792B (en) * 2021-12-28 2023-06-09 北京华夏电通科技股份有限公司 Video image compression method, device, equipment and storage medium
CN114401405A (en) * 2022-01-14 2022-04-26 安谋科技(中国)有限公司 Video coding method, medium and electronic equipment
CN115082357B (en) * 2022-07-20 2022-11-25 深圳思谋信息科技有限公司 Video denoising data set generation method and device, computer equipment and storage medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100366091C (en) * 2004-06-24 2008-01-30 华为技术有限公司 Video frequency compression
CN101534444B (en) * 2009-04-20 2011-05-11 杭州华三通信技术有限公司 Image processing method, system and device
US9723315B2 (en) * 2011-07-01 2017-08-01 Apple Inc. Frame encoding selection based on frame similarities and visual quality and interests
EP2608529B1 (en) * 2011-12-22 2015-06-03 Axis AB Camera and method for optimizing the exposure of an image frame in a sequence of image frames capturing a scene based on level of motion in the scene
US20150350641A1 (en) * 2014-05-29 2015-12-03 Apple Inc. Dynamic range adaptive video coding system
CN105898306A (en) * 2015-12-11 2016-08-24 乐视云计算有限公司 Code rate control method and device for sport video
CN107306340A (en) * 2016-04-14 2017-10-31 上海富瀚微电子股份有限公司 A kind of automatic exposure and reference frame compensating parameter computing device and method
CN111200734B (en) * 2018-11-19 2022-03-11 浙江宇视科技有限公司 Video coding method and device
CN111385571B (en) * 2018-12-29 2022-07-19 浙江宇视科技有限公司 Method and device for controlling code rate of ultra-long image group
CN110060213B (en) * 2019-04-09 2021-06-15 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN110418142A (en) * 2019-08-06 2019-11-05 杭州微帧信息科技有限公司 A kind of coding method based on video interested region, device, storage medium

Also Published As

Publication number Publication date
CN112351280A (en) 2021-02-09
CN112351280B (en) 2023-08-22

Similar Documents

Publication Publication Date Title
CN112351280B (en) Video encoding method, video encoding device, electronic equipment and readable storage medium
KR100918480B1 (en) Stereo vision system and its processing method
CN1212018C (en) Camera integrated video recording and reproducing apparatus, and record control method thereof
US7561736B2 (en) Image processing apparatus and method of the same
US8218082B2 (en) Content adaptive noise reduction filtering for image signals
CN100586180C (en) Be used to carry out the method and system of de-blocking filter
US20110299604A1 (en) Method and apparatus for adaptive video sharpening
US20240357138A1 (en) Human visual system adaptive video coding
US20110249133A1 (en) Compression-quality driven image acquisition and processing system
CN109587480A (en) Image processing equipment, image processing method and recording medium
CN113259594A (en) Image processing method and device, computer readable storage medium and terminal
CN117441333A (en) Configurable location for inputting auxiliary information of image data processing neural network
US11861814B2 (en) Apparatus and method for sensing image based on event
US8488892B2 (en) Image encoder and camera system
WO2019237753A1 (en) A surveillance camera system and a method for reducing power consumption thereof
Froehlich et al. Content aware quantization: Requantization of high dynamic range baseband signals based on visual masking by noise and texture
US20200084467A1 (en) Low light compression
US20090021645A1 (en) Video signal processing device, video signal processing method and video signal processing program
CN115811616A (en) Video coding and decoding method and device
CN113676729A (en) Video coding method and device, computer equipment and storage medium
KR20230145096A (en) Independent localization of auxiliary information in neural network-based picture processing.
EP3503548B1 (en) Video encoding method and system
CN111050175A (en) Method and apparatus for video encoding
JP5179433B2 (en) Noise reduction device, noise reduction method, and moving image playback device
US11716475B2 (en) Image processing device and method of pre-processing images of a video stream before encoding

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination