CN113891090A - Video encoding method, video encoding device, storage medium and electronic equipment - Google Patents

Video encoding method, video encoding device, storage medium and electronic equipment

Info

Publication number
CN113891090A
Authority
CN
China
Prior art keywords
frame
reference frame
block
inter
current frame
Prior art date
Legal status
Pending
Application number
CN202111253683.8A
Other languages
Chinese (zh)
Inventor
谷嘉文
闻兴
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202111253683.8A
Publication of CN113891090A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/119 Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136 Incoming video signal characteristics or properties
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/56 Motion estimation with initialisation of the vector search, e.g. estimating a good candidate to initiate a search

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present disclosure provides a video encoding method, a video encoding apparatus, a storage medium, and an electronic device. The method comprises the following steps: acquiring reference frame data of a current frame; determining the inter-frame loss between each reference frame in the reference frame data and the current frame; selecting at least one reference frame from the reference frames based on the inter-frame losses; and encoding the current frame using the at least one reference frame. The method and apparatus achieve a better balance between encoding speed and encoding quality, thereby greatly improving encoding efficiency.

Description

Video encoding method, video encoding device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of video encoding and decoding, and in particular, to a video encoding method, a video encoding apparatus, an electronic device, and a computer-readable storage medium for dynamically selecting a reference frame based on video characteristics.
Background
Image data of a video is encoded by a video encoder based on a specific data compression standard, such as a Moving Picture Experts Group (MPEG) standard, High Efficiency Video Coding (HEVC), or Versatile Video Coding (VVC), and is then stored in a recording medium or transmitted through a communication channel in the form of a bitstream.
In video coding standards, frames are generally classified into I-frames (Intra frames), P-frames (Predicted frames), and B-frames (Bidirectional frames) according to their functions and compression efficiency. Recent video coding standards further introduce a special B frame, the GPB frame (Generalized P and B picture), to replace P frames and improve their compression efficiency. Except for I-frames, which are reconstructed independently, all other frame types are encoded with reference to other frames. To increase compression efficiency, various video coding standards and encoders select multiple reference frames so as to obtain more candidate reference regions. When a block is actually encoded, the encoder usually has to traverse every reference frame and find among them the block with the smallest difference from the current block as the best matching block. Too few reference frames therefore cost compression efficiency, while too many reference frames cost encoding speed.
For example, standards such as HEVC and VVC define a fixed number of reference frames for a frame at a particular position, while open-source encoders such as x265 and x264 typically use a parameter to control the maximum number of reference frames. In both cases, the number of reference frames is fixed by configuration or parameters before encoding begins, so the encoder cannot balance encoding efficiency against encoding speed.
Disclosure of Invention
The present disclosure provides a reference frame selection method, apparatus, storage medium, and electronic device for video coding to solve at least the above-mentioned problems.
According to a first aspect of the present disclosure, there is provided a video encoding method, which may include: acquiring reference frame data of a current frame; determining respective inter-frame losses between respective reference frames in the reference frame data and the current frame; selecting at least one reference frame from the respective reference frames based on the respective inter-frame losses; and encoding the current frame using the at least one reference frame.
Optionally, selecting at least one reference frame from the respective reference frames based on the respective inter-frame losses may include: screening out, from the inter-frame losses, those inter-frame losses that are smaller than the minimum inter-frame loss multiplied by a threshold, wherein the minimum inter-frame loss is determined from the inter-frame losses; and selecting, from the reference frames, the reference frames corresponding to the screened-out inter-frame losses as the at least one reference frame.
Alternatively, the threshold may be determined based on a distance between the current frame and a nearest reference frame of the current frame.
Optionally, in a case that the current frame is a P frame or a GPB frame, the reference frame data may include forward reference frame data, wherein determining each inter-frame loss between each reference frame in the reference frame data and the current frame may include: dividing the current frame and the respective reference frames into blocks of a predetermined size; for each block of the current frame, performing a motion search in each reference frame of the forward reference frame data to determine the reference block with the minimum inter-block loss in each reference frame; and for each reference frame, summing the minimum inter-block losses of the reference blocks determined in that reference frame for the blocks of the current frame, to obtain the inter-frame loss between the current frame and that reference frame.
Optionally, in a case that the current frame is a B frame, the reference frame data may include forward reference frame data and backward reference frame data, wherein determining each inter-frame loss between each reference frame in the reference frame data and the current frame includes: dividing the current frame and the respective reference frames into blocks of a predetermined size; for each block of the current frame, performing a motion search in each forward reference frame of the forward reference frame data to determine the forward reference block with the minimum inter-block loss in each forward reference frame; performing a motion search in each backward reference frame of the backward reference frame data based on the pixel values of each block of the current frame and its corresponding forward reference block, to determine the backward reference block with the minimum inter-block loss in each backward reference frame; and for each backward reference frame, summing the minimum inter-block losses of the blocks of the current frame in that backward reference frame, to obtain the inter-frame loss between the current frame and that backward reference frame.
Optionally, in case the current frame is a P frame or a GPB frame, the reference frame corresponding to an inter-frame loss may comprise a corresponding forward reference frame; in the case where the current frame is a B frame, the reference frames corresponding to an inter-frame loss may include a corresponding forward reference frame and a corresponding backward reference frame.
Alternatively, the inter-frame loss may be one of the sum of absolute transformed differences (SATD, computed in the frequency domain), the sum of squared errors (SSE), and the sum of absolute differences (SAD, computed in the time domain).
According to a second aspect of the present disclosure, there is provided a video encoding apparatus, which may include: an acquisition module configured to acquire reference frame data of a current frame; a selection module configured to determine respective inter-frame losses between respective reference frames in the reference frame data and the current frame, and to select at least one reference frame from the respective reference frames based on the respective inter-frame losses; and an encoding module configured to encode the current frame using the at least one reference frame.
Optionally, the selection module may be configured to: screen out, from the inter-frame losses, those inter-frame losses that are smaller than the minimum inter-frame loss multiplied by a threshold, wherein the minimum inter-frame loss is determined from the inter-frame losses; and select, from the reference frames, the reference frames corresponding to the screened-out inter-frame losses as the at least one reference frame.
Alternatively, the threshold may be determined based on a distance between the current frame and a nearest reference frame of the current frame.
Optionally, in a case that the current frame is a P frame or a GPB frame, the reference frame data may include forward reference frame data, wherein the selection module may be configured to: divide the current frame and the respective reference frames into blocks of a predetermined size; for each block of the current frame, perform a motion search in each reference frame of the forward reference frame data to determine the reference block with the minimum inter-block loss in each reference frame; and for each reference frame, sum the minimum inter-block losses of the reference blocks determined in that reference frame for the blocks of the current frame, to obtain the inter-frame loss between the current frame and that reference frame.
Optionally, in a case that the current frame is a B frame, the reference frame data may include forward reference frame data and backward reference frame data, wherein the selection module is configured to: divide the current frame and the respective reference frames into blocks of a predetermined size; for each block of the current frame, perform a motion search in each forward reference frame of the forward reference frame data to determine the forward reference block with the minimum inter-block loss in each forward reference frame; perform a motion search in each backward reference frame of the backward reference frame data based on the pixel values of each block of the current frame and its corresponding forward reference block, to determine the backward reference block with the minimum inter-block loss in each backward reference frame; and for each backward reference frame, sum the minimum inter-block losses of the blocks of the current frame in that backward reference frame, to obtain the inter-frame loss between the current frame and that backward reference frame.
Optionally, in case the current frame is a P frame or a GPB frame, the reference frame corresponding to an inter-frame loss may comprise a corresponding forward reference frame; in the case where the current frame is a B frame, the reference frames corresponding to an inter-frame loss may include a corresponding forward reference frame and a corresponding backward reference frame.
Alternatively, the inter-frame loss may be one of the sum of absolute transformed differences (SATD, computed in the frequency domain), the sum of squared errors (SSE), and the sum of absolute differences (SAD, computed in the time domain).
According to a third aspect of embodiments of the present disclosure, there is provided an electronic apparatus including: at least one processor; at least one memory storing computer-executable instructions, wherein the computer-executable instructions, when executed by the at least one processor, cause the at least one processor to perform a video encoding method as described above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having instructions which, when executed by a processor of a video encoding apparatus/electronic device/server, enable the video encoding apparatus/electronic device/server to perform the video encoding method as described above.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising instructions that, when executed by at least one processor of an electronic device, cause the at least one processor to perform the video encoding method described above.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects: the number of reference frames is dynamically selected based on the video characteristics between frames, so that when different video sequences are coded, the balance between the coding speed and the coding quality can be obtained, and the coding efficiency is greatly improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a flowchart illustrating a video encoding method for dynamically selecting a reference frame according to an exemplary embodiment of the present disclosure;
fig. 2 is a block diagram illustrating a video encoding apparatus for dynamically selecting a reference frame according to an exemplary embodiment of the present disclosure;
fig. 3 is a block diagram illustrating a structure of an electronic device for video encoding according to an exemplary embodiment of the present disclosure.
Fig. 4 is a schematic diagram illustrating an electronic device shown in accordance with another example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The embodiments described in the following examples do not represent all embodiments consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Herein, the expression "at least one of the items" covers three parallel cases: "any one of the items", "a combination of any plurality of the items", and "all of the items". For example, "including at least one of A and B" covers three parallel cases: (1) including A; (2) including B; (3) including A and B. Likewise, "performing at least one of step one and step two" covers three parallel cases: (1) performing step one; (2) performing step two; (3) performing step one and step two.
Before describing embodiments of the present disclosure in detail, some terms or abbreviations that may be involved with the embodiments of the present disclosure are described.
In current video coding standards, the number of reference frames is fixed by configuration or parameters before encoding starts and is not dynamically selected according to the characteristics of the video itself, so the encoder cannot achieve a good balance between coding efficiency and coding speed. On this basis, the present disclosure provides a dynamic reference frame selection method based on video features, which dynamically selects reference frames based on inter-frame loss results obtained through pre-analysis, achieving a better balance between coding speed and coding quality for different video sequences. The dynamic reference frame selection method of the present disclosure is described in detail below with reference to fig. 1.
Fig. 1 is a flowchart illustrating a video encoding method for dynamically selecting a reference frame according to an exemplary embodiment of the present disclosure.
Referring to fig. 1, in step S101, reference frame data of a current frame is acquired. The reference frame data may be stored in a list, array, or the like. For example, the reference frame data may be a reference frame list composed of a plurality of video frames.
For each video frame, if the currently encoded frame is a B frame or a GPB frame, a bidirectional reference frame list for that frame may be obtained. If the currently encoded frame is a P frame, a list of forward reference frames for the frame may be obtained. Since the forward reference frame list in the bi-directional reference frame list of the GPB frame is identical to the backward reference frame list, the present disclosure may take the same approach when calculating the inter-frame loss of the GPB frame and the P frame.
In step S102, respective inter-frame losses between respective reference frames in the reference frame data of the current frame and the current frame are determined. The inter-frame loss may be a metric such as the sum of absolute transformed differences (SATD), the sum of squared errors (SSE), or the sum of absolute differences (SAD).
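For illustration, the sketch below shows how these three block-level metrics may be computed. It is a minimal NumPy sketch under the assumption that blocks are square 2-D arrays with power-of-two side length; the function names are illustrative and not part of the disclosure, and production encoders instead use optimized integer SIMD kernels and typically apply a normalization factor to the Hadamard output.

```python
import numpy as np
from scipy.linalg import hadamard

def sad(a: np.ndarray, b: np.ndarray) -> int:
    # Sum of absolute differences, computed directly on pixels.
    return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

def sse(a: np.ndarray, b: np.ndarray) -> int:
    # Sum of squared errors.
    d = a.astype(np.int64) - b.astype(np.int64)
    return int((d * d).sum())

def satd(a: np.ndarray, b: np.ndarray) -> int:
    # Sum of absolute transformed differences: Hadamard-transform the
    # residual, then sum the absolute coefficients (unnormalized here).
    n = a.shape[0]          # requires a square, power-of-two block
    h = hadamard(n)
    residual = a.astype(np.int64) - b.astype(np.int64)
    return int(np.abs(h @ residual @ h.T).sum())
```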
In the case where the current frame is a P frame or a GPB frame, the reference frame corresponding to an inter-frame loss may include a corresponding forward reference frame; in the case where the current frame is a B frame, the reference frames corresponding to an inter-frame loss may include a corresponding forward reference frame and a corresponding backward reference frame.
In particular, where the current frame is a P frame or a GPB frame, the reference frame data may include forward reference frame data. The inter-frame loss for the current frame (i.e., a P frame or a GPB frame) can be calculated as follows: dividing the current frame and each reference frame in the reference frame data into blocks of a predetermined size; for each block of the current frame, performing a motion search in each reference frame of the forward reference frame data to determine the reference block with the minimum inter-block loss in each reference frame; and for each reference frame, summing the minimum inter-block losses of the reference blocks determined in that reference frame, to obtain the inter-frame loss between the current frame and that reference frame.
As an example, when the current frame is a P frame or a GPB frame, the current frame and the forward reference frames in the reference frame data may be down-sampled and divided into a plurality of blocks. For each block of the current frame, the inter-block loss between that block and the blocks of a forward reference frame is computed block by block, and the reference block with the minimum inter-block loss is found. Then, for each forward reference frame, the minimum inter-block losses of all blocks of the current frame are summed to obtain the inter-frame loss of the current frame with respect to that forward reference frame.
That is, each block of the current frame determines a corresponding reference block in each reference frame, and for each reference frame the minimum inter-block losses of the determined reference blocks are summed to obtain the inter-frame loss of the current frame with respect to that reference frame.
In the case where the current frame is a B frame, the reference frame data may include forward reference frame data and backward reference frame data. The inter-frame loss for the current frame (i.e., a B frame) may be calculated as follows: dividing the current frame and each reference frame in the forward and backward reference frame data into blocks of a predetermined size; for each block of the current frame, performing a motion search in each forward reference frame of the forward reference frame data to determine the forward reference block with the minimum inter-block loss in each forward reference frame; performing a motion search in each backward reference frame of the backward reference frame data based on the pixel values of each block of the current frame and its corresponding forward reference block, to determine the backward reference block with the minimum inter-block loss in each backward reference frame; and for each backward reference frame, summing the minimum inter-block losses of the backward reference blocks determined in that backward reference frame, to obtain the inter-frame loss between the current frame and that backward reference frame.
As an example, when the current frame is a B frame, the current frame and the forward and backward reference frames in the reference frame data may be down-sampled and divided into a plurality of blocks. For each block of the current frame, the inter-block loss between that block and the blocks of a forward reference frame is first computed block by block, and the forward reference block with the minimum inter-block loss is found. Based on the current block and that forward reference block, the inter-block loss is then computed block by block in each backward reference frame, and the backward reference block with the minimum inter-block loss is found. For each backward reference frame, the minimum inter-block losses of all blocks of the current frame are summed to obtain the inter-frame loss of the current frame with respect to that backward reference frame. Finally, the determined backward reference frame and the forward reference frame paired with it are taken as reference frames of the current frame.
The calculation of the inter-frame loss in the present disclosure is first described in detail below.
Define Distortion(p0, p1, b), where p0, b, and p1 denote frame numbers and p0 < b <= p1, as the inter-frame loss between the current frame b and the forward reference frame p0 and the backward reference frame p1. Here, the inter-frame loss may be a metric such as SATD, SSE, or SAD. When b is equal to p1, the current frame b is a GPB/P frame, i.e., Distortion(p0, p1, b) denotes the inter-frame loss between the current frame b and the forward reference frame p0 only.
Taking SATD as the inter-frame loss, Distortion(p0, p1, b) can be computed as follows:
1. The frames p0, p1, and b are down-sampled to 1/2 resolution, and each down-sampled frame is divided into 8 × 8 blocks.
2. For the current block I in frame b, a motion search is performed in frame p0 to find the block with the smallest SATD as the best matching block I'.
3. If p1 is equal to b, the SATD from step 2 is taken as the SATD of the current block I; otherwise (p1 != b), a motion search is performed in frame p1 using the value 2 × I - I' as the search target, the block with the smallest SATD is found as the best matching block, and that SATD value is taken as the SATD of the current block I. Here, a × I denotes multiplying each pixel of block I by the constant a, and I - I' denotes the pixel-wise subtraction of block I' from block I.
4. Each block of the current frame is processed with the motion search of steps 2 and 3 to find its best matching block, and the SATD of each block of the current frame is obtained from that best matching block.
5. The SATDs of the blocks in the current frame are summed to obtain the final frame-level SATD (the SATD of the current frame), i.e., Distortion(p0, p1, b).
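To make steps 1-5 concrete, the following is a minimal sketch of the frame-level loss computation, assuming grayscale frames stored as 2-D NumPy arrays. SAD stands in for SATD to keep it short, the exhaustive scan stands in for a real motion search, and the names downsample, best_match, and distortion are illustrative rather than taken from the disclosure.

```python
import numpy as np

BLOCK = 8  # 8x8 blocks on the half-resolution frames (step 1)

def downsample(frame):
    # 2x2 averaging to 1/2 resolution; the exact half-resolution filter
    # is an assumption here, any cheap pre-analysis filter would do.
    h, w = frame.shape[0] & ~1, frame.shape[1] & ~1
    return frame[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def best_match(target, ref):
    # Exhaustive search for the block of `ref` closest to `target`.
    # SAD stands in for SATD; a real encoder uses a fast SATD kernel
    # and a bounded window around a predicted motion vector.
    best_cost, best_blk = float("inf"), None
    for y in range(ref.shape[0] - BLOCK + 1):
        for x in range(ref.shape[1] - BLOCK + 1):
            cand = ref[y:y + BLOCK, x:x + BLOCK]
            cost = np.abs(target - cand).sum()
            if cost < best_cost:
                best_cost, best_blk = cost, cand
    return best_blk, best_cost

def distortion(p0, b, p1=None):
    # Frame-level loss Distortion(p0, p1, b); pass p1=None for a GPB/P
    # frame (the b == p1 case in the text above).
    p0, b = downsample(p0), downsample(b)
    p1 = downsample(p1) if p1 is not None else None
    total = 0.0
    for y in range(0, b.shape[0] - BLOCK + 1, BLOCK):
        for x in range(0, b.shape[1] - BLOCK + 1, BLOCK):
            blk = b[y:y + BLOCK, x:x + BLOCK]   # current block I
            fwd, cost = best_match(blk, p0)     # step 2: I' in p0
            if p1 is not None:                  # step 3: search 2*I - I' in p1
                _, cost = best_match(2.0 * blk - fwd, p1)
            total += cost                       # steps 4-5: accumulate
    return total
```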
When the current frame is a P frame or a GPB frame, a forward reference frame may be determined based on Distortion(p0, p1, b); when the current frame is a B frame, a forward reference frame and a backward reference frame may be determined based on Distortion(p0, p1, b).
For each frame of the video to be encoded, assume the current frame has coding order number A. According to the coding configuration, the current frame A obtains its forward reference frame list L0 = {P0,0, P0,1, ..., P0,M} and its backward reference frame list L1 = {P1,0, P1,1, ..., P1,N}. Here, each entry in a reference frame list denotes the frame number of a reference frame.
For the current frame A, Distortion(P0,i, P1,j, A) is computed for each pair, where 0 <= i <= M and 0 <= j <= N.
At step S103, at least one reference frame is selected from the respective reference frames of the reference frame list based on the calculated respective inter-frame losses.
As an example, the inter-frame losses smaller than the minimum inter-frame loss multiplied by a threshold may be screened out from the inter-frame losses obtained in step S102, where the minimum inter-frame loss is determined from those inter-frame losses; the reference frames corresponding to the screened-out inter-frame losses are then selected as the at least one reference frame used for encoding. Here, the threshold is determined based on the distance between the current frame and its nearest reference frame.
For example, for the current frame A, Distortion(P0,i, P1,j, A) is computed for all pairs with 0 <= i <= M and 0 <= j <= N, and the minimum value SATD_min among them is found. Then, among these Distortion(P0,i, P1,j, A) values, those smaller than SATD_min · Th_DT are screened out, where DT denotes the sum of the distances from the current frame to its nearest forward and backward reference frames, and Th_DT denotes a threshold associated with DT. That is, Th_DT may be set differently according to the sum of the distances from the current frame to its nearest forward reference frame and/or backward reference frame. Th_DT is a threshold greater than 1.
In the present disclosure, to distinguish GPB-type and B-type frames, DT may take any value from 2 up to the maximum number of consecutive B frames for B-type frames, while for a GPB/P frame DT may be taken as 0. Different types of frames thus obtain different thresholds Th_DT, so GPB-type frames can also obtain their own specific threshold. This example is merely illustrative; the present disclosure may alternatively set a dedicated Th_DT for GPB-type frames.
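The mapping from DT to Th_DT is not fixed by the disclosure; one possible lookup is sketched below, with every numeric value an assumption chosen purely for illustration:

```python
# Hypothetical DT -> Th_DT table; the values are illustrative
# assumptions, not taken from the disclosure. DT = 0 covers GPB/P
# frames; DT >= 2 covers B frames (sum of the distances to the
# nearest forward and backward reference frames).
TH_DT = {0: 1.15, 2: 1.25, 3: 1.35, 4: 1.45}

def threshold_for(dt: int) -> float:
    # Longer B-frame runs fall back to the largest configured entry.
    return TH_DT.get(dt, TH_DT[max(TH_DT)])
```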
After all Distortion(P0,i', P1,j', A) values satisfying the above condition have been screened out, a new forward reference frame list and a new backward reference frame list may be constructed from the corresponding P0,i' and P1,j'.
In step S104, the current frame is encoded using the selected at least one reference frame, for example, using the new forward reference frame list and/or new backward reference frame list constructed from the P0,i' and P1,j' satisfying the condition.
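Putting steps S102-S104 together, the screening itself reduces to a comparison against the scaled minimum. A minimal sketch follows, reusing the hypothetical distortion and threshold_for helpers above; the function name and the dict-based interface are assumptions for illustration:

```python
def select_reference_frames(dists: dict, th: float):
    # dists maps (i, j) -> Distortion(P0_i, P1_j, A) for every
    # candidate pair; th is Th_DT (> 1). Returns the index sets for
    # the new forward (L0) and backward (L1) reference frame lists.
    d_min = min(dists.values())
    kept = [(i, j) for (i, j), d in dists.items() if d < d_min * th]
    new_l0 = sorted({i for i, _ in kept})
    new_l1 = sorted({j for _, j in kept})
    return new_l0, new_l1
```

The current frame would then be encoded against the frames indexed by new_l0 and new_l1 rather than the full configured lists, which is where the speed saving comes from.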
According to the embodiments of the present disclosure, the reference frames that best match the current frame can be found from the video frames according to the video characteristics, so that the number of reference frames used for encoding is adjusted dynamically.
Fig. 2 is a block diagram illustrating a video encoding apparatus for dynamically selecting a reference frame according to an exemplary embodiment of the present disclosure. It should be understood that the apparatus shown in fig. 2 may be implemented in any one of software, hardware, and a combination of software and hardware.
The video encoding device 200 may include an acquisition module 201, a selection module 202, and an encoding module 203. Each module in the video encoding apparatus 200 may be implemented by one or more sub-modules, and the name of a module may vary with its type. In various embodiments, some modules of the video encoding device 200 may be omitted, or additional modules may be included. Furthermore, modules/elements according to various embodiments of the present disclosure may be combined into a single entity that equivalently performs the functions of the respective modules/elements prior to combination.
The obtaining module 201 may obtain reference frame data of the current frame. The reference frame data may be in the form of a list, array, or the like. For example, the reference frame data may be a reference frame list composed of a plurality of video frames.
The selection module 202 may determine respective inter-frame losses between respective reference frames in the reference frame data of the current frame and the current frame, and select at least one reference frame from the respective reference frames based on the respective inter-frame losses. The inter-frame loss may be one of the sum of absolute transformed differences (SATD), the sum of squared errors (SSE), and the sum of absolute differences (SAD).
The selection module 202 may screen out, from the determined inter-frame losses, those inter-frame losses that are smaller than the minimum inter-frame loss multiplied by a threshold, where the minimum inter-frame loss is determined from the inter-frame losses, and then select, from the respective reference frames, the reference frames corresponding to the screened-out inter-frame losses as the at least one reference frame for encoding.
Here, the threshold may be determined based on a distance between the current frame and a nearest reference frame of the current frame.
In the case where the current frame is a P frame or a GPB frame, the reference frame corresponding to an inter-frame loss may include a corresponding forward reference frame; in the case where the current frame is a B frame, the reference frames corresponding to an inter-frame loss may include a corresponding forward reference frame and a corresponding backward reference frame.
In the case where the current frame is a P frame or a GPB frame, the reference frame data of the current frame may include forward reference frame data based on the encoding configuration. The selection module 202 may divide the current frame and each reference frame into blocks of a predetermined size; for each block of the current frame, perform a motion search in each reference frame of the forward reference frame data to determine the reference block with the minimum inter-block loss in each reference frame; and, for each reference frame, sum the minimum inter-block losses of the reference blocks determined in that reference frame to obtain the inter-frame loss between the current frame and that reference frame.
As an example, when the current frame is a P frame or a GPB frame, the selection module 202 may down-sample the current frame and the forward reference frames in the reference frame data, divide the down-sampled frames into a plurality of blocks, compute, for each block of the current frame, the inter-block loss between that block and the blocks of a forward reference frame block by block, find the reference block with the minimum inter-block loss, and then sum the minimum inter-block losses of all blocks of the current frame to obtain the inter-frame loss of the current frame with respect to that forward reference frame.
In the case where the current frame is a B frame, the reference frame data of the current frame may include forward reference frame data and backward reference frame data based on the encoding configuration. The selection module 202 may divide the current frame and the respective reference frames into blocks of a predetermined size; for each block of the current frame, perform a motion search in each forward reference frame of the forward reference frame data to determine the forward reference block with the minimum inter-block loss in each forward reference frame; perform a motion search in each backward reference frame of the backward reference frame data based on the pixel values of each block of the current frame and its corresponding forward reference block, to determine the backward reference block with the minimum inter-block loss in each backward reference frame; and then, for each backward reference frame, sum the minimum inter-block losses of the backward reference blocks determined in that backward reference frame to obtain the inter-frame loss between the current frame and that backward reference frame.
As an example, when the current frame is a B frame, the selection module 202 may down-sample the current frame and the forward and backward reference frames in the reference frame data, divide the down-sampled frames into a plurality of blocks, compute, for each block of the current frame, the inter-block loss between that block and the blocks of a forward reference frame block by block and find the forward reference block with the minimum inter-block loss, then compute the inter-block loss block by block in each backward reference frame based on the forward reference block and the current block and find the backward reference block with the minimum inter-block loss, and finally, for each backward reference frame, sum the minimum inter-block losses of all blocks of the current frame to obtain the inter-frame loss of the current frame with respect to that backward reference frame.
The encoding module 203 may encode the current frame using the selected at least one reference frame.
The operation and function of each module of the video encoding apparatus 200 have been described in detail above with reference to fig. 1 and are not repeated here.
Fig. 3 is a block diagram illustrating a structure of an electronic device for video encoding according to an exemplary embodiment of the present disclosure. The electronic device 300 may be, for example: a smart phone, a tablet computer, an MP4(Moving Picture Experts Group Audio Layer IV) player, a notebook computer or a desktop computer. The electronic device 300 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and the like.
In general, the electronic device 300 includes: a processor 301 and a memory 302.
The processor 301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 301 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 301 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, the processor 301 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 302 may include one or more computer-readable storage media, which may be non-transitory. Memory 302 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 302 is used to store at least one instruction for execution by processor 301 to implement the video encoding method provided by the method embodiments of the present disclosure as shown in fig. 1.
In some embodiments, the electronic device 300 may further include: a peripheral interface 303 and at least one peripheral. The processor 301, memory 302 and peripheral interface 303 may be connected by a bus or signal lines. Each peripheral may be connected to the peripheral interface 303 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 304, touch display screen 305, camera 306, audio circuitry 307, positioning components 308, and power supply 309.
The peripheral interface 303 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 301 and the memory 302. In some embodiments, processor 301, memory 302, and peripheral interface 303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 301, the memory 302 and the peripheral interface 303 may be implemented on a separate chip or circuit board, which is not limited by the embodiment.
The Radio Frequency circuit 304 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 304 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 304 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 304 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 304 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 304 may also include NFC (Near Field Communication) related circuits, which are not limited by this disclosure.
The display screen 305 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 305 is a touch display screen, the display screen 305 also has the ability to capture touch signals on or over the surface of the display screen 305. The touch signal may be input to the processor 301 as a control signal for processing. At this point, the display screen 305 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display screen 305 may be one, disposed on the front panel of the electronic device 300; in other embodiments, the display screens 305 may be at least two, respectively disposed on different surfaces of the terminal 300 or in a folded design; in still other embodiments, the display 305 may be a flexible display disposed on a curved surface or on a folded surface of the terminal 300. Even further, the display screen 305 may be arranged in a non-rectangular irregular figure, i.e. a shaped screen. The Display screen 305 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and the like.
The camera assembly 306 is used to capture images or video. Optionally, camera assembly 306 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 306 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
Audio circuitry 307 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 301 for processing or inputting the electric signals to the radio frequency circuit 304 to realize voice communication. The microphones may be provided in plural numbers, respectively, at different portions of the terminal 300 for the purpose of stereo sound collection or noise reduction. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 301 or the radio frequency circuitry 304 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 307 may also include a headphone jack.
The positioning component 308 is used to locate the current geographic location of the electronic device 300 to implement navigation or LBS (Location Based Service). The positioning component 308 may be a positioning component based on the Global Positioning System (GPS) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 309 is used to supply power to various components in the electronic device 300. The power source 309 may be alternating current, direct current, disposable batteries, or rechargeable batteries. When the power source 309 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the electronic device 300 also includes one or more sensors 310. The one or more sensors 310 include, but are not limited to: acceleration sensor 311, gyro sensor 312, pressure sensor 313, fingerprint sensor 314, optical sensor 315, and proximity sensor 316.
The acceleration sensor 311 may detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the terminal 300. For example, the acceleration sensor 311 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 301 may control the touch display screen 305 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 311. The acceleration sensor 311 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 312 may detect a body direction and a rotation angle of the terminal 300, and the gyro sensor 312 may cooperate with the acceleration sensor 311 to acquire a 3D motion of the user on the terminal 300. The processor 301 may implement the following functions according to the data collected by the gyro sensor 312: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 313 may be disposed on a side bezel of the terminal 300 and/or an underlying layer of the touch display screen 305. When the pressure sensor 313 is disposed on the side frame of the terminal 300, the holding signal of the user to the terminal 300 can be detected, and the processor 301 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 313. When the pressure sensor 313 is disposed at the lower layer of the touch display screen 305, the processor 301 controls the operability control on the UI according to the pressure operation of the user on the touch display screen 305. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 314 is used for collecting a fingerprint of the user, and the processor 301 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 314, or the fingerprint sensor 314 identifies the identity of the user according to the collected fingerprint. Upon identifying that the user's identity is a trusted identity, processor 301 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 314 may be disposed on the front, back, or side of the electronic device 300. When a physical button or vendor Logo is provided on the electronic device 300, the fingerprint sensor 314 may be integrated with the physical button or vendor Logo.
The optical sensor 315 is used to collect the ambient light intensity. In one embodiment, the processor 301 may control the display brightness of the touch screen display 305 based on the ambient light intensity collected by the optical sensor 315. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 305 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 305 is turned down. In another embodiment, the processor 301 may also dynamically adjust the shooting parameters of the camera head assembly 306 according to the ambient light intensity collected by the optical sensor 315.
The proximity sensor 316, also referred to as a distance sensor, is typically disposed on the front panel of the electronic device 300. The proximity sensor 316 is used to capture the distance between the user and the front of the electronic device 300. In one embodiment, when the proximity sensor 316 detects that the distance between the user and the front surface of the electronic device 300 gradually decreases, the processor 301 controls the touch display screen 305 to switch from the bright-screen state to the dark-screen state; when the proximity sensor 316 detects that the distance between the user and the front surface of the electronic device 300 gradually increases, the processor 301 controls the touch display screen 305 to switch from the dark-screen state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 3 is not intended to be limiting of electronic device 300, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
Fig. 4 is a block diagram of another electronic device 400. For example, the electronic device 400 may be provided as a server. Referring to fig. 4, the electronic device 400 includes one or more processors 410 and a memory 420. The memory 420 may include one or more programs for performing the above video encoding methods. The electronic device 400 may also include a power component 430 configured to perform power management of the electronic device 400, a wired or wireless network interface 440 configured to connect the electronic device 400 to a network, and an input/output (I/O) interface 450. The electronic device 400 may operate based on an operating system stored in the memory 420, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
According to an embodiment of the present disclosure, there may also be provided a computer-readable storage medium storing instructions that, when executed by at least one processor, cause the at least one processor to perform the video encoding method according to the present disclosure. Examples of the computer-readable storage medium include: read-only memory (ROM), programmable read-only memory (PROM), electrically erasable programmable read-only memory (EEPROM), random-access memory (RAM), dynamic random-access memory (DRAM), static random-access memory (SRAM), flash memory, non-volatile memory, CD-ROM, CD-R, CD+R, CD-RW, CD+RW, DVD-ROM, DVD-R, DVD+R, DVD-RW, DVD+RW, DVD-RAM, BD-ROM, BD-R, BD-R LTH, BD-RE, Blu-ray or optical disc storage, hard disk drive (HDD), solid-state drive (SSD), card-type memory (such as a multimedia card, a Secure Digital (SD) card, or an eXtreme Digital (XD) card), magnetic tape, a floppy disk, a magneto-optical data storage device, an optical data storage device, and any other device configured to store a computer program and any associated data, data files, and data structures in a non-transitory manner and to provide them to a processor or computer so that the processor or computer can execute the computer program. The computer program in the computer-readable storage medium described above can run in an environment deployed on computer equipment such as a client, a host, a proxy device, or a server; further, in one example, the computer program and any associated data, data files, and data structures are distributed across a networked computer system so that they are stored, accessed, and executed in a distributed fashion by one or more processors or computers.
According to an embodiment of the present disclosure, there may also be provided a computer program product including instructions that are executable by a processor of a computer device to perform the video encoding method described above.
The video encoding method, the video encoding apparatus, the electronic device, and the computer-readable storage medium of the present disclosure can dynamically adjust the number of reference frames according to the characteristics of the video, thereby achieving a better trade-off between encoding speed and quality. In online tests on the high-quality preset, the average loss in BD-Rate (an objective measure of coding performance) with the disclosed method is only 0.1%, while about 20% of the encoding time is saved, substantially improving encoding efficiency.
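As a rough illustration (a minimal Python sketch, not the claimed implementation; the function name, parameter names, and the default threshold value are hypothetical), the reference-frame screening rule described above and claimed in claims 1 to 3 below amounts to the following:

def select_reference_frames(inter_frame_losses, threshold=1.2):
    # inter_frame_losses: dict mapping a reference-frame id to its
    # inter-frame loss against the current frame.
    # threshold: multiplier greater than 1; per claim 3 it may be derived
    # from the distance between the current frame and its nearest
    # reference frame (the default 1.2 is an illustrative assumption).
    min_loss = min(inter_frame_losses.values())
    cutoff = min_loss * threshold
    # Keep only the frames whose loss is below the cutoff; with a
    # threshold above 1 and positive losses, the minimum-loss frame
    # always survives, so at least one reference frame is selected.
    return [ref for ref, loss in inter_frame_losses.items() if loss < cutoff]

Encoding the current frame against only the surviving frames shortens the motion search, which is consistent with the reported encoding-time saving. Sketches of the per-frame loss computations themselves (claims 4, 5, and 7) are given after the claims section below.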
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow the general principles of the disclosure, including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A video encoding method, comprising:
acquiring reference frame data of a current frame;
determining respective inter-frame losses between respective ones of the reference frame data and the current frame;
selecting at least one reference frame from the respective reference frames based on the respective inter-frame losses;
encoding the current frame using the at least one reference frame.
2. The video encoding method of claim 1, wherein selecting at least one reference frame from the respective reference frames based on the respective inter-frame losses comprises:
screening out, from the inter-frame losses, the inter-frame losses that are smaller than the minimum inter-frame loss multiplied by a threshold, wherein the minimum inter-frame loss is determined from the inter-frame losses;
and selecting, from the respective reference frames, the reference frames corresponding to the screened-out inter-frame losses as the at least one reference frame.
3. The video encoding method of claim 2, wherein the threshold is determined based on a distance between the current frame and a nearest reference frame of the current frame.
4. The video encoding method of claim 1, wherein, in a case where the current frame is a P frame or a GPB frame, the reference frame data comprises forward reference frame data,
wherein determining each inter-frame loss between each reference frame in the reference frame data and the current frame comprises:
dividing the current frame and the respective reference frames into blocks of a first predetermined size;
for each block of the current frame, performing a motion search in each reference frame of the forward reference frame data, respectively, to determine a reference block with a minimum inter-block loss in each reference frame, respectively;
and for each reference frame, summing the minimum inter-block losses of the reference blocks determined in that reference frame for the respective blocks of the current frame, to obtain the inter-frame loss between the current frame and that reference frame.
5. The video encoding method of claim 1, wherein, in a case where the current frame is a B frame, the reference frame data comprises forward reference frame data and backward reference frame data,
wherein determining each inter-frame loss between each reference frame in the reference frame data and the current frame comprises:
dividing the current frame and the respective reference frames into blocks of a second predetermined size;
for each block of the current frame, performing a motion search in each forward reference frame of the forward reference frame data, respectively, to determine a forward reference block with a minimum inter-block loss in each forward reference frame, respectively;
performing a motion search in each of the backward reference frames of the backward reference frame data based on pixel values of each block of the current frame and a corresponding forward reference block to determine a backward reference block having a minimum inter-block loss in each of the backward reference frames, respectively;
and for each backward reference frame, summing the minimum inter-block losses determined in that backward reference frame for the respective blocks of the current frame, to obtain the inter-frame loss between the current frame and that backward reference frame.
6. The video encoding method of claim 1, wherein, in a case where the current frame is a P frame or a GPB frame, the reference frame corresponding to an inter-frame loss comprises a corresponding forward reference frame; and in a case where the current frame is a B frame, the reference frames corresponding to an inter-frame loss comprise a corresponding forward reference frame and a corresponding backward reference frame.
7. The video encoding method of claim 1, wherein the inter-frame loss comprises one of a frequency-domain sum of absolute transformed differences (SATD), a sum of squared errors (SSE), and a time-domain sum of absolute differences (SAD).
8. A video encoding device, comprising:
an acquisition module configured to acquire reference frame data of a current frame;
a selection module configured to: determining respective inter-frame losses between respective ones of the reference frame data and the current frame, and selecting at least one reference frame from the respective reference frames based on the respective inter-frame losses;
an encoding module configured to encode the current frame using the at least one reference frame.
9. An electronic device, comprising:
at least one processor;
at least one memory storing computer-executable instructions,
wherein the computer-executable instructions, when executed by the at least one processor, cause the at least one processor to perform the video encoding method of any of claims 1 to 7.
10. A computer-readable storage medium storing instructions that, when executed by a processor of a video encoding apparatus, an electronic device, or a server, enable the video encoding apparatus, electronic device, or server to perform the video encoding method of any one of claims 1 to 7.
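For concreteness, the block-based inter-frame loss of claims 4 and 7 can be sketched in Python as follows. This is a minimal sketch under stated assumptions, not the claimed implementation: a 16x16 block size, an exhaustive ±8 search window, and time-domain SAD as the loss measure; a real encoder would use a faster search pattern and could substitute SATD or SSE.

import numpy as np

BLOCK = 16  # the "first predetermined size"; 16x16 is an assumed value

def sad(a, b):
    # Time-domain sum of absolute differences, one of the loss measures
    # named in claim 7 (SATD or SSE could be substituted here).
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def min_block_loss(block, ref, y, x, search=8):
    # Exhaustive motion search in a window around (y, x), returning the
    # minimum inter-block loss found in the reference frame.
    h, w = ref.shape
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = y + dy, x + dx
            if 0 <= ry <= h - BLOCK and 0 <= rx <= w - BLOCK:
                loss = sad(block, ref[ry:ry + BLOCK, rx:rx + BLOCK])
                if best is None or loss < best:
                    best = loss
    return best

def inter_frame_loss(cur, ref):
    # Claim 4: divide the current frame into blocks, motion-search each
    # block in the reference frame, and sum the minimum inter-block losses.
    h, w = cur.shape
    total = 0
    for y in range(0, h - BLOCK + 1, BLOCK):
        for x in range(0, w - BLOCK + 1, BLOCK):
            total += min_block_loss(cur[y:y + BLOCK, x:x + BLOCK], ref, y, x)
    return total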
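The B-frame loss of claim 5 can be sketched the same way, reusing sad() and BLOCK from the sketch above. One assumption is flagged loudly: claim 5 only says the backward search is performed "based on" the pixel values of the block and its forward reference block, and averaging the two predictors (ordinary bi-prediction) is this sketch's reading of that step.

def best_forward_block(block, fwd_ref, y, x, search=8):
    # Return the forward reference block with the minimum SAD against
    # the current block (same exhaustive window as min_block_loss).
    h, w = fwd_ref.shape
    best, best_blk = None, None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = y + dy, x + dx
            if 0 <= ry <= h - BLOCK and 0 <= rx <= w - BLOCK:
                cand = fwd_ref[ry:ry + BLOCK, rx:rx + BLOCK]
                loss = sad(block, cand)
                if best is None or loss < best:
                    best, best_blk = loss, cand
    return best_blk

def b_frame_inter_loss(cur, fwd_ref, bwd_ref, search=8):
    # Claim 5 (sketch): per block of the current frame, pick the best
    # forward predictor, then search the backward reference frame,
    # scoring each candidate against the average of the forward and
    # backward blocks (the assumed bi-prediction), and sum the minima.
    h, w = cur.shape
    total = 0
    for y in range(0, h - BLOCK + 1, BLOCK):
        for x in range(0, w - BLOCK + 1, BLOCK):
            block = cur[y:y + BLOCK, x:x + BLOCK]
            fwd = best_forward_block(block, fwd_ref, y, x, search).astype(np.int32)
            best = None
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    ry, rx = y + dy, x + dx
                    if 0 <= ry <= h - BLOCK and 0 <= rx <= w - BLOCK:
                        bwd = bwd_ref[ry:ry + BLOCK, rx:rx + BLOCK].astype(np.int32)
                        loss = sad(block, (fwd + bwd) // 2)
                        if best is None or loss < best:
                            best = loss
            total += best
    return total

Either loss feeds directly into select_reference_frames() above: reference frames whose summed loss is far above the best one are dropped before the actual encode.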
CN202111253683.8A 2021-10-27 2021-10-27 Video encoding method, video encoding device, storage medium and electronic equipment Pending CN113891090A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111253683.8A CN113891090A (en) 2021-10-27 2021-10-27 Video encoding method, video encoding device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN113891090A (en) 2022-01-04

Family

ID=79013733

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111253683.8A Pending CN113891090A (en) 2021-10-27 2021-10-27 Video encoding method, video encoding device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113891090A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination