CN115460404A - Image encoding method, image decoding method, image encoding device, image decoding device, image encoding program, and image decoding program - Google Patents

Image encoding method, image decoding method, image encoding device, image decoding device, image encoding program, and image decoding program

Info

Publication number
CN115460404A
CN115460404A
Authority
CN
China
Prior art keywords
image
code stream
macro block
format
encoder
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211073763.XA
Other languages
Chinese (zh)
Inventor
张路
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Wanxiang Electronics Technology Co Ltd
Original Assignee
Xian Wanxiang Electronics Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Wanxiang Electronics Technology Co Ltd filed Critical Xian Wanxiang Electronics Technology Co Ltd
Priority to CN202211073763.XA priority Critical patent/CN115460404A/en
Publication of CN115460404A publication Critical patent/CN115460404A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136Incoming video signal characteristics or properties
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock

Abstract

The embodiments of the present application provide an image encoding method, an image decoding method, and corresponding apparatuses. The encoding method comprises: acquiring a target image, wherein the target image is an original image frame in a first format; downsampling the original image frame in the first format to obtain an original image frame in a second format; when the target image belongs to a screen scene image, performing macro block classification on the target image to obtain character macro blocks and image macro blocks; and encoding the character macro blocks with a first encoder based on the original image frame in the first format, and encoding the image macro blocks with a second encoder based on the original image frame in the second format, to obtain an encoded code stream, wherein the encoded code stream comprises a first code stream corresponding to the character macro blocks and a second code stream corresponding to the image macro blocks. The embodiments can effectively improve image encoding efficiency.

Description

Image encoding method, image decoding method, image encoding device, image decoding device, image encoding program, and image decoding program
Technical Field
The embodiments of the present application relate to the field of computer technology, and in particular to an image encoding method, an image decoding method, and corresponding encoding and decoding apparatuses.
Background
Image coding applies to coding systems for image sequences that contain screen images, for example a computer desktop being operated by a user or a browser displaying a web page.
In the related art, common encoding methods are a natural image encoder and a screen image encoder, where the natural image encoder may be used to encode an image collected by a shooting device (e.g., a camera), and the screen image encoder may be used to encode an image displayed on a screen of an apparatus.
However, the above coding scheme has low coding quality.
Disclosure of Invention
In view of the foregoing problems, embodiments of the present application provide an image encoding method, an image decoding method, and corresponding apparatuses, which overcome the above problem of poor encoding quality in the prior art.
In a first aspect, an image encoding method is provided, including:
acquiring a target image, wherein the target image is an original image frame in a first format;
downsampling the original image frame in the first format to obtain an original image frame in a second format;
when the target image belongs to a screen scene image, carrying out macro block classification on the target image to obtain a character macro block and an image macro block;
based on the original image frame in the first format, encoding the character macro block by using a first encoder, and based on the original image frame in the second format, encoding the image macro block by using a second encoder, to obtain an encoded code stream, where the encoded code stream includes: a first code stream and a second code stream, wherein the first code stream corresponds to the character macro block, and the second code stream corresponds to the image macro block;
the first encoder is an encoder for encoding a screen scene image, and the second encoder is an encoder for encoding a natural scene image.
In an optional manner, the method further comprises:
and when the target image belongs to a natural scene image, encoding the target image by adopting a second encoder based on the original image frame in the second format to obtain an encoded code stream.
In an optional manner, the encoding of the character macro block by using a first encoder based on the original image frame in the first format, and the encoding of the image macro block by using a second encoder based on the original image frame in the second format to obtain an encoded code stream includes:
based on the original image frame in the first format, a first encoder is adopted to encode the character macro block to obtain a first code stream;
based on the original image frame in the second format, a second encoder is adopted to encode the image macro block to obtain a second code stream;
and fusing the first code stream and the second code stream to obtain a coding code stream corresponding to one frame of screen scene image.
In an optional manner, the encoding the character macro block by using a first encoder based on the original image frame in the first format to obtain a first code stream includes:
when first identical macro blocks exist among the character macro blocks, based on the original image frame in the first format, a first encoder is adopted to encode the character macro blocks that are not first identical macro blocks together with one of the first identical macro blocks, so as to obtain a first code stream, wherein the remaining first identical macro blocks are marked as first predicted macro blocks;
the encoding, by using a second encoder, the image-class macroblock based on the original image frame in the second format to obtain a second code stream, includes:
when second identical macro blocks exist among the image macro blocks, based on the original image frame in the second format, a second encoder is adopted to encode the image macro blocks that are not second identical macro blocks together with one of the second identical macro blocks, so as to obtain a second code stream, wherein the remaining second identical macro blocks are marked as second predicted macro blocks.
In an optional manner, the method further comprises:
acquiring a previous frame image of a target image;
when the number of the same pixel regions in the target image and the previous frame image exceeds a preset number threshold, determining that the target image belongs to a screen scene image;
and when the number of the same pixel areas in the target image and the previous frame image does not exceed a preset number threshold, determining that the target image belongs to a natural scene image.
In a second aspect, a decoding method is provided, including:
acquiring an encoded code stream and a code stream type corresponding to a target image, wherein the code stream type comprises: a screen image class or a natural image class; the encoded code stream corresponding to the target image is a code stream obtained by encoding, with a first encoder and based on an original image frame in a first format, the character macro blocks divided from the target image, and encoding, with a second encoder and based on an original image frame in a second format, the image macro blocks divided from the target image;
when the code stream type is the screen image type, decoding a first code stream in the coding code stream based on a first decoder, and decoding a second code stream in the coding code stream based on a second decoder to obtain a decoding macro block, wherein the decoding macro block comprises a plurality of macro blocks;
and splicing a plurality of macro blocks to obtain a decoding image corresponding to the coding code stream.
In an optional manner, the decoding a first code stream of the encoded code streams based on a first decoder, and decoding a second code stream of the encoded code streams based on a second decoder to obtain a decoded macroblock, including:
decoding the first code stream based on a first decoder to obtain a character macro block;
decoding the second code stream based on a second decoder to obtain an image macro block;
the image macro blocks are subjected to up-sampling to obtain macro blocks corresponding to a first format;
and determining a decoding macro block according to the text macro block and the macro block corresponding to the first format.
In an optional manner, the method further comprises:
when the code stream type is the natural image type, decoding the coded code stream based on a second decoder to obtain a decoded image frame, wherein the decoded image frame corresponds to an image frame in a second format;
and upsampling the decoded image frame to determine an image frame in a first format so as to obtain a decoded image corresponding to the coded code stream.
In a third aspect, an image encoding apparatus is provided, including:
the device comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring a target image, and the target image is an original image frame in a first format;
the sampling module is used for carrying out downsampling on the original image frame in the first format to obtain an original image frame in a second format;
the classification module is used for carrying out macro block classification on the target image to obtain a character macro block and an image macro block when the target image belongs to a screen scene image;
the encoding module is configured to encode the character macro block by using a first encoder based on the original image frame in the first format, and encode the image macro block by using a second encoder based on the original image frame in the second format to obtain an encoded code stream, where the encoded code stream includes: a first code stream and a second code stream, wherein the first code stream corresponds to the character macro block, and the second code stream corresponds to the image macro block;
the first encoder is an encoder for encoding a screen scene image, and the second encoder is an encoder for encoding a natural scene image.
In a fourth aspect, there is provided an image decoding apparatus comprising:
the second acquisition module is used for acquiring an encoded code stream and a code stream type corresponding to the target image, wherein the code stream type comprises: a screen image class or a natural image class; the encoded code stream corresponding to the target image is a code stream obtained by encoding, with a first encoder and based on an original image frame in a first format, the character macro blocks divided from the target image, and encoding, with a second encoder and based on an original image frame in a second format, the image macro blocks divided from the target image;
the decoding module is used for decoding a first code stream in the coded code stream based on a first decoder when the code stream type is the screen image type, and decoding a second code stream in the coded code stream based on a second decoder to obtain a decoded macro block, wherein the decoded macro block comprises a plurality of macro blocks;
and the splicing module is used for splicing the macroblocks to obtain a decoding image corresponding to the coding code stream.
In a fifth aspect, there is provided a computer device comprising a memory and a processor, wherein the memory stores a computer program, and the processor executes the computer program to implement the steps of the image encoding method as in any one of the above embodiments, or to implement the steps of the image decoding method as in any one of the above embodiments.
A sixth aspect provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the image encoding method as in any one of the above embodiments, or which, when executed, implements the steps of the image decoding method as in any one of the above embodiments.
The image coding method provided in the embodiments of the present application performs format conversion on the acquired target image and obtains an original image frame in a second format from the original image frame in the first format. When the target image is determined to belong to a screen scene image, macro block classification is performed on the target image to obtain text macro blocks and image macro blocks; the text macro blocks are encoded with a first encoder based on the original image frame in the first format, and the image macro blocks are encoded with a second encoder based on the original image frame in the second format, to obtain an encoded code stream. Encoding different types of content differently in this way effectively improves image coding efficiency.
The foregoing description is only an overview of the technical solutions of the embodiments of the present application, and in order that the technical means of the embodiments of the present application can be clearly understood, the embodiments of the present application are specifically described below in order to make the foregoing and other objects, features, and advantages of the embodiments of the present application more clearly understandable.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings without creative efforts.
Fig. 1 is a schematic flowchart of an image encoding method provided in this embodiment;
fig. 2A is a schematic diagram of a screen scene image provided in this embodiment;
fig. 2B is a schematic diagram of a natural scene image provided in this embodiment;
fig. 3 is a schematic flowchart of an image decoding method according to the present embodiment;
FIG. 4 is a diagram of system components for image encoding and image decoding provided by the present embodiment;
FIG. 5 is a schematic structural diagram of an image encoding apparatus provided in this embodiment;
fig. 6 is a schematic structural diagram of an image decoding apparatus provided in this embodiment;
fig. 7 is a schematic structural diagram of a computer device provided in this embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "comprising" and "having," and any variations thereof, in the description and claims of this application and the description of the figures are intended to cover non-exclusive inclusions.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase "an embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: there are three cases of A, both A and B, and B. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
Furthermore, the terms "first," "second," and the like in the description and claims of the present application or in the above-described drawings are used for distinguishing between different objects and not necessarily for describing a particular sequential order, and may explicitly or implicitly include one or more of the features.
In the description of the present application, unless otherwise specified, "plurality" means two or more (including two), and similarly, "plural groups" means two or more (including two).
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of an image encoding method provided by an embodiment, where the image encoding method may be applied to an encoding end, and specifically, may be applied to an encoding end of a server, as exemplarily shown in fig. 1, the image encoding method includes:
and S110, acquiring a target image, wherein the target image is an original image frame in a first format.
Wherein the target image may be an image captured from a source, such as an image captured from a desktop of a remote server computer.
Specifically, the acquired target image may be a raw image frame in a first format, and the first format may correspond to the YUV444 format.
After the target image is acquired, the original image frame in the first format may be stored in a buffer for subsequent use.
And S120, downsampling the original image frame in the first format to obtain an original image frame in a second format.
The second format may correspond to YUV420 format, and after the original image frame in the second format is obtained, the original image frame in the second format may be stored in a buffer, and stored together with the original image frame in the first format, so as to facilitate storing image frames in different formats at the same time.
It should be noted that the first format in the present disclosure is not limited to the YUV444 format, and the second format is not limited to the YUV420 format; both may be adjusted according to actual requirements, which is not specifically limited by the present disclosure.
Here, downsampling refers to converting the acquired YUV444 image frame into a YUV420 image frame.
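As a concrete illustration of this step, the following is a minimal sketch of YUV444-to-YUV420 conversion by 2x2 chroma averaging, assuming planar numpy arrays; the function name and the averaging choice are illustrative assumptions and are not taken from the patent text:

```python
import numpy as np

def downsample_yuv444_to_yuv420(y: np.ndarray, u: np.ndarray, v: np.ndarray):
    """Convert a planar YUV444 frame to YUV420 by 2x2 chroma averaging.

    y, u, v: 2-D arrays of identical shape (H, W), with H and W assumed even.
    Returns (y, u420, v420), where the chroma planes are (H/2, W/2).
    """
    def subsample(plane: np.ndarray) -> np.ndarray:
        h, w = plane.shape
        # Average each non-overlapping 2x2 block into one chroma sample.
        return plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)).astype(plane.dtype)

    return y, subsample(u), subsample(v)
```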
S130, when the target image belongs to the screen scene image, carrying out macro block classification on the target image to obtain a character macro block and an image macro block.
The screen scene image may be, for example, a web search interface displayed on the device side that does not contain images uploaded by a shooting device, as exemplarily shown in fig. 2A (the blurred content in fig. 2A is irrelevant to the present disclosure).
The target image may be divided into two types of macroblock sets, which are a text-type macroblock and an image-type macroblock, respectively, the text-type macroblock may include a plurality of text macroblocks, each text macroblock may have a pixel size of 16 × 16, the image-type macroblock may include a plurality of image macroblocks, and each image macroblock may have a pixel size of 16 × 16.
A character macro block is a macro block with rich detail and a high proportion of high-gradient pixels, such as a region composed of text or line content: the number of distinct colors is relatively small, the brightness changes sharply near the strokes, and the pixel gradients are large. An image macro block, i.e. a JPEG (Joint Photographic Experts Group, still image compression) macro block, is a macro block forming background color and belongs to the smooth class, such as a region with little color variation or a gradual-change region.
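The following sketch shows one possible way to separate the two macro block classes along the lines described above (few distinct values and many high-gradient pixels for text, smooth content for image blocks). The thresholds and the function name are illustrative assumptions only; the patent does not specify the classification criteria numerically:

```python
import numpy as np

def classify_macroblock(block_y: np.ndarray,
                        grad_thresh: int = 32,
                        grad_ratio: float = 0.15,
                        max_colors: int = 24) -> str:
    """Label a 16x16 luma macroblock as 'text' or 'image'.

    Text-type blocks have few distinct values and a high share of
    high-gradient pixels; smooth blocks are treated as image-type.
    Thresholds are illustrative, not taken from the patent.
    """
    block = block_y.astype(np.int16)
    gx = np.abs(np.diff(block, axis=1))   # horizontal gradients
    gy = np.abs(np.diff(block, axis=0))   # vertical gradients
    high_grad = (gx > grad_thresh).sum() + (gy > grad_thresh).sum()
    ratio = high_grad / (gx.size + gy.size)
    n_colors = len(np.unique(block))
    if ratio > grad_ratio and n_colors <= max_colors:
        return "text"
    return "image"
```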
In this embodiment, optionally, before the target image belongs to the screen scene image, the method may further include:
the method comprises the steps of obtaining a previous frame image of a target image, determining that the target image belongs to a screen scene image when the number of the same pixel areas in the target image and the previous frame image exceeds a preset number threshold, and determining that the target image belongs to a natural scene image when the number of the same pixel areas in the target image and the previous frame image does not exceed the preset number threshold.
A screen scene image is synthesized by a computer, so a large number of completely identical pixel regions often exist between consecutive frames; a natural image is captured by a camera, so most pixels differ more or less between two frames and the picture changes continuously. Whether the current frame belongs to a natural scene image or a screen scene image can therefore be judged by detecting whether the changed region between consecutive frames reaches certain width and height thresholds and whether it keeps changing.
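A minimal sketch of this decision is given below, counting identical 16x16 regions between the current and previous luma frames; the block size and the threshold default stand in for the "preset number threshold" of the text and are illustrative assumptions:

```python
import numpy as np

def is_screen_scene(curr: np.ndarray, prev: np.ndarray,
                    block: int = 16, min_same_blocks: int = 2048) -> bool:
    """Decide screen vs. natural scene by counting identical 16x16 regions.

    curr, prev: luma frames of the same (H, W) shape, H and W multiples of 16.
    min_same_blocks stands in for the preset number threshold of the text.
    """
    h, w = curr.shape
    same = 0
    for y in range(0, h, block):
        for x in range(0, w, block):
            if np.array_equal(curr[y:y + block, x:x + block],
                              prev[y:y + block, x:x + block]):
                same += 1
    return same > min_same_blocks   # many unchanged regions -> screen scene
```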
Further, the target image can be considered as a natural scene image if the target image is displayed by the video playing window, and considered as a screen scene image if the target image is not displayed by the video playing window.
And S140, based on the original image frame in the first format, adopting a first encoder to encode the character macro block, and based on the original image frame in the second format, adopting a second encoder to encode the image macro block to obtain an encoded code stream.
Wherein, the code stream includes: the first code stream corresponds to a character macro block, and the second code stream corresponds to an image macro block.
In this embodiment, optionally, based on the original image frame in the first format, the text-type macro block is encoded by using a first encoder, and based on the original image frame in the second format, the image-type macro block is encoded by using a second encoder, so as to obtain an encoded code stream, where the method includes:
based on an original image frame in a first format, a first encoder is adopted to encode a character macro block to obtain a first code stream; based on the original image frame in the second format, a second encoder is adopted to encode the image macro block to obtain a second code stream; and fusing the first code stream and the second code stream to obtain a coding code stream corresponding to one frame of screen scene image.
When the text macro blocks are encoded, they are first quantized with a palette quantization method and then encoded by the first encoder (such as Huffman coding), and a code stream composed of the text macro blocks is output; the text macro blocks are encoded from the YUV444 frame.
When the image macro blocks are encoded, the second encoder (such as JPEG coding) is used; it performs well on smooth content, and the image macro blocks are encoded from the YUV420 frame.
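The sketch below illustrates the palette-quantization step mentioned for text macro blocks: each pixel is mapped to the nearest entry of a small palette of dominant values, and the resulting index map would then be entropy coded (e.g. Huffman) by the first encoder. The palette size and construction are assumptions for illustration; the actual quantizer is not specified in this text:

```python
import numpy as np

def palette_quantize(block: np.ndarray, max_palette: int = 8):
    """Quantize a text-type macroblock to a small palette of dominant values.

    Returns (palette, indices); the indices would then be entropy coded
    (e.g. Huffman) by the first encoder. Sketch only; the real palette
    construction in the codec may differ.
    """
    values, counts = np.unique(block, return_counts=True)
    palette = values[np.argsort(counts)[::-1][:max_palette]]  # most frequent values
    # Map every pixel to the nearest palette entry.
    indices = np.abs(block[..., None].astype(np.int16)
                     - palette[None, None, :].astype(np.int16)).argmin(axis=-1)
    return palette, indices.astype(np.uint8)
```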
The code streams of the macro blocks obtained above are merged to form the encoded code stream of one frame of screen image, which is transmitted to the decoding end for parsing so that the decoded image can be displayed.
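One hypothetical way to fuse and later split the two sub-streams is sketched below; the header layout is an assumption made purely for illustration, since the actual bitstream syntax is not defined in this text:

```python
import struct

def fuse_streams(text_stream: bytes, image_stream: bytes) -> bytes:
    """Fuse the text-type and image-type sub-streams into one frame stream.

    Layout (hypothetical): 1-byte stream type (1 = screen image class),
    two 4-byte big-endian lengths, then the two sub-streams.
    """
    header = struct.pack(">BII", 1, len(text_stream), len(image_stream))
    return header + text_stream + image_stream

def split_streams(frame: bytes):
    """Inverse of fuse_streams, as the decoding end might parse the stream."""
    stream_type, len_text, len_image = struct.unpack(">BII", frame[:9])
    text_stream = frame[9:9 + len_text]
    image_stream = frame[9 + len_text:9 + len_text + len_image]
    return stream_type, text_stream, image_stream
```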
In this embodiment, optionally, based on an original image frame in a first format, a first encoder is used to encode a text macroblock to obtain a first code stream, where the first code stream includes:
when first identical macro blocks exist among the text macro blocks, based on the original image frame in the first format, a first encoder is adopted to encode the text macro blocks that are not first identical macro blocks together with one of the first identical macro blocks, to obtain a first code stream, wherein the remaining first identical macro blocks are marked as first predicted macro blocks.
When the text macro blocks are encoded, intra-frame prediction coding can be adopted: if the content of a text macro block is the same as that of its left or upper adjacent macro block, the macro block is only marked as a first predicted macro block, and the content of the adjacent macro block is copied directly during decoding, thereby reducing the code stream and simplifying the amount of data transmitted in the first code stream.
Based on the original image frame in the second format, encoding the image-like macro block by using a second encoder to obtain a second code stream, including:
and when a second identical macro block exists in the image macro block, based on the original image frame in the second format, a second encoder is adopted to encode other macro blocks which are not the second identical macro block in the image macro block and one macro block in the second identical macro block to obtain a second code stream, wherein the second predicted macro block is marked on the residual macro blocks in the second identical macro block.
When the image macro blocks are encoded, intra-frame prediction coding can likewise be adopted: if the content of an image macro block is the same as that of its left or upper adjacent macro block, the macro block is only marked as a second predicted macro block, and the content of the adjacent macro block is copied directly during decoding, thereby reducing the code stream and simplifying the parsing of the second code stream.
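The marking rule described for both macro block classes can be sketched as follows, assuming macro blocks are addressed by their grid position; the flag names are illustrative assumptions:

```python
import numpy as np

def mark_predicted_blocks(blocks: dict) -> dict:
    """Mark macroblocks whose content equals the left or top neighbour.

    blocks maps (row, col) grid positions to 16x16 arrays. Marked blocks
    are carried only as a prediction flag; the decoder copies the pixels
    from the referenced neighbour. Sketch only.
    """
    flags = {}
    for (r, c), mb in blocks.items():
        left, top = blocks.get((r, c - 1)), blocks.get((r - 1, c))
        if left is not None and np.array_equal(mb, left):
            flags[(r, c)] = "copy_left"
        elif top is not None and np.array_equal(mb, top):
            flags[(r, c)] = "copy_top"
        else:
            flags[(r, c)] = "code"   # encode normally with the macroblock's encoder
    return flags
```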
The first encoder is an encoder for encoding an image of a screen scene, such as huffman encoding, and the second encoder is an encoder for encoding an image of a natural scene, such as JPEG encoding.
In this embodiment, format conversion is performed on the acquired target image, and an original image frame in the second format is obtained from the original image frame in the first format. When the target image is determined to belong to a screen scene image, macro block classification is performed on it to obtain text macro blocks and image macro blocks; the text macro blocks are encoded with the first encoder based on the original image frame in the first format, and the image macro blocks are encoded with the second encoder based on the original image frame in the second format, to obtain an encoded code stream. Encoding different types of content differently in this way effectively improves image encoding efficiency.
In addition, when the target image is determined to belong to the natural scene image, the target image can be encoded by adopting a second encoder based on the original image frame in the second format to obtain an encoded code stream.
When the target image is determined to belong to a natural scene image, the target image can be encoded by a natural image encoder (such as h.264) to obtain an encoded code stream, so that the bit rate is kept under control and the displayed picture remains fluent.
Further, when a user quickly drags an element in a screen image scene such as fig. 2A, the frame-to-frame variation is large, so even though the window image in fig. 2A is essentially computer-synthesized, the target image is detected as a "natural scene image" and encoded using h.264. When the web page of fig. 2A is scrolled up and down quickly, the text in the page cannot actually be recognized by human eyes, so the reduced definition is acceptable, while the controllable bit rate improves fluency; recognizing a quickly dragged web page as a natural image therefore does not affect the user's perception.
Once the dragging stops and the user's action pauses, the fonts need to become clear immediately; at that moment the change between frames is very small, so the target image is identified as a screen scene image again and the screen image encoder continues to be used. When the user browses a web page statically or the operations on the computer are small, the picture changes little, the frame is identified as a screen scene image, and the screen image encoder is used for encoding; when the user drags the mouse rapidly and the picture changes rapidly, the frame is identified as a natural scene image and is encoded by the natural image encoder. The natural image encoder used in this scheme may include, but is not limited to, an h.264 encoder.
To remain usable at lower bandwidths, the h.264 coding module uses the YUV420 color space; to keep a still picture sharp and free of text distortion, the screen image encoder must work in the YUV444 color space. In practice, what a user needs to see clearly on a still screen is mainly text, and the user is not sensitive to large areas of background color. In screen content, the text portion occupies a small area while the background color occupies a large one; the text portion is rich in detail, i.e. contains many high-gradient pixels, whereas the background is smooth with gentle pixel variation. When the screen image encoder encodes, text is encoded in the YUV444 space, which guarantees text sharpness, while the background color is encoded in the YUV420 space, so large background areas look similar to the output of the natural image encoder. As a result, when switching between the two encoders only the text lines may show a slight color shift, and the large background areas show essentially no visible brightness flicker, so the switch is imperceptible to the human eye and the user experience is greatly improved.
Fig. 3 is a schematic flowchart of an image decoding method provided in an embodiment, where the image decoding method may be applied to a decoding end, and specifically, may be applied to a decoding end of a server, as exemplarily shown in fig. 3, the image decoding method includes:
and S310, acquiring a code stream and a code stream type corresponding to the target image.
The code stream type may include: a screen image class or a natural image class.
The coding code stream corresponding to the target image is a code stream obtained by coding the character macro blocks divided from the target image by using a first coder based on the original image frame with the first format, and coding the image macro blocks divided from the target image by using a second coder based on the original image frame with the second format.
Specifically, the encoding code stream corresponding to the target image may be obtained by the encoding end executing the following operations.
Acquiring a target image, wherein the target image is an original image frame in a first format; downsampling the original image frame in the first format to obtain an original image frame in a second format; when the target image belongs to a screen scene image, performing macro block classification on the target image to obtain character macro blocks and image macro blocks; based on the original image frame in the first format, encoding the character macro blocks with a first encoder, and based on the original image frame in the second format, encoding the image macro blocks with a second encoder, to obtain an encoded code stream, wherein the encoded code stream comprises a first code stream corresponding to the character macro blocks and a second code stream corresponding to the image macro blocks; the first encoder is an encoder for encoding a screen scene image, and the second encoder is an encoder for encoding a natural scene image.
The encoded code stream corresponding to the target image is sent by the encoding end to the decoding end, where the code stream is parsed so that decoding can proceed in different ways according to the code stream type.
And S320, when the code stream type is a screen image type, decoding a first code stream in the coding code stream based on a first decoder, and decoding a second code stream in the coding code stream based on a second decoder to obtain a decoding macro block.
The decoded macro block may include a plurality of macro blocks, and specifically, the decoded macro blocks may correspond to a plurality of macro blocks decoded by the first code stream and a plurality of macro blocks decoded by the second code stream.
In this embodiment, optionally, decoding a first code stream in the encoded code streams based on a first decoder, and decoding a second code stream in the encoded code streams based on a second decoder to obtain decoded macroblocks, includes:
decoding the first code stream with a first decoder to obtain text macro blocks; decoding the second code stream with a second decoder to obtain image macro blocks; upsampling the image macro blocks to obtain macro blocks in the first format; and determining the decoded macro blocks from the text macro blocks and the macro blocks in the first format.
The text macro block can correspond to a plurality of YUV 444-format macro blocks, the image macro block can correspond to a plurality of YUV 420-format macro blocks, and the up-sampling refers to up-sampling each YUV 420-format macro block into a YUV 444-format macro block.
It should be noted that, during decoding, intra-frame prediction decoding can be performed according to the flag in the code stream, and pixel contents are copied from adjacent reference macroblocks to complete reconstruction of corresponding macroblocks.
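On the decoder side, the two operations just described, copying pixels from the referenced neighbour for predicted macro blocks and upsampling decoded image macro blocks from YUV420 back to YUV444, might look like the following sketch; nearest-neighbour chroma upsampling is an assumption, since the interpolation method is not fixed here:

```python
import numpy as np

def upsample_chroma_420_to_444(plane_420: np.ndarray) -> np.ndarray:
    """Nearest-neighbour upsampling of a YUV420 chroma plane to YUV444 size."""
    return np.repeat(np.repeat(plane_420, 2, axis=0), 2, axis=1)

def reconstruct_block(flags: dict, pixels: dict, pos: tuple):
    """Resolve an intra-predicted macroblock by copying its reference neighbour.

    flags / pixels map grid positions to the decoded flag and pixel data;
    decoding proceeds in raster order, so the reference is already available.
    """
    r, c = pos
    if flags[pos] == "copy_left":
        return pixels[(r, c - 1)]
    if flags[pos] == "copy_top":
        return pixels[(r - 1, c)]
    return pixels[pos]   # was decoded directly by the first or second decoder
```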
And S330, splicing the macro blocks to obtain a decoding image corresponding to the coding code stream.
The plurality of macro blocks are spliced to obtain the decoded image corresponding to the encoded code stream of the target image; since the decoded image belongs to a screen scene image, it can then be conveniently displayed to the user.
In addition, when the code stream type is a natural image type, decoding the coded code stream based on a second decoder to obtain a decoded image frame, wherein the decoded image frame corresponds to an image frame in a second format; and performing up-sampling on the decoded image frame, and determining the image frame in the first format to obtain a decoded image corresponding to the coded code stream.
The second format can correspond to a YUV420 format, and the first format can correspond to a YUV444 format, so that the decoded natural scene image with higher fluency can be conveniently displayed to a user.
The present disclosure provides an image encoding and decoding system, as shown in fig. 4, including: an encoding side and a decoding side.
Wherein, the encoding end includes: the system comprises an image acquisition module 401, an encoding module 402, a scene recognition module 403, a macro block classification module 404, a text macro block encoding module 405, a down-sampling module 406, an encoding frame buffer 407, an intra-frame prediction encoding module 408, an image macro block encoding module 409 and a code stream fusion module 410.
An image acquisition module 401 for acquiring a target image, such as an image acquired from a desktop of a server computer remotely, and outputting a raw image frame in YUV444 space.
The encoding module 402 is configured to encode a target image, and includes two encoders, a screen image encoder and a natural image encoder, where the screen image encoder corresponds to the first encoder and the natural image encoder corresponds to the second encoder.
And a scene recognition module 403, configured to recognize a type of the target image, such as a screen scene image or a natural scene image.
The macro block classifying module 404 is configured to perform macro block classification on the target image to obtain text macro blocks and image macro blocks when the scene recognition module 403 identifies that the target image belongs to a screen scene image.
A text macroblock encoding module 405, configured to encode a text macroblock.
A down-sampling module 406, configured to down-sample the raw image frame in YUV444 space into a raw image frame in YUV420 space.
And the encoding frame buffer 407 is used for storing the raw image frame in the YUV444 space and the raw image frame in the YUV420 space.
And an intra-frame prediction encoding module 408 for performing intra-frame prediction encoding.
The image macroblock encoding module 409 is configured to encode image macro blocks.
And a code stream fusing module 410 for fusing the code streams of the macro blocks generated by the coding of the corresponding modules to form a coded code stream of a frame of screen image, and the output of the module is transmitted to a decoding end for decoding.
Wherein, the decoding end includes: a code stream parsing module 411, a text macroblock parsing module 412, an image macroblock parsing module 413, an intra prediction decoding module 414, a decoding module 415, an overlay reconstruction module 416, and a reconstructed frame module 417.
The code stream analyzing module 411 is configured to receive the encoded code stream sent by the encoding end, analyze the code stream, obtain the type of the code stream, and send the type of the code stream to different decoding components for decoding.
And the text macro block analysis module 412 is configured to decode the text macro block to obtain a YUV444 macro block.
And the image macro block analysis module 413 is configured to decode the image macro block to obtain a YUV420 macro block.
The intra-prediction decoding module 414 copies pixel content from neighboring reference macroblocks to complete the reconstruction.
The decoding module 415 is configured to, when the encoded code stream is of the natural image class, decode the encoded code stream into a complete YUV420 frame, which is then upsampled as a whole to the YUV444 color space and used as the final reconstructed frame.
And a superposition reconstruction module 416, configured to splice the YUV444 reconstructed macroblocks together to form a complete reconstructed frame of YUV 444.
And a reconstructed frame module 417, configured to store the complete reconstructed frame spliced by the superposition reconstruction module 416, where the reconstructed frame may be a screen scene image or a natural scene image.
Wherein the dashed boxes represent the respective modules involved in the screen image encoder.
Fig. 5 is a schematic structural diagram of an image encoding apparatus provided in this embodiment, including a first obtaining module 510, a sampling module 520, a classifying module 530, and an encoding module 540, where:
the first obtaining module 510 is configured to obtain a target image, where the target image is an original image frame in a first format.
The sampling module 520 is configured to perform downsampling on the raw image frame in the first format to obtain a raw image frame in a second format.
The classifying module 530 is configured to perform macroblock classification on the target image to obtain a text macroblock and an image macroblock when the target image belongs to a screen scene image.
The encoding module 540 is configured to encode the text macro block by using a first encoder based on the original image frame in the first format, and encode the image macro block by using a second encoder based on the original image frame in the second format to obtain an encoded code stream, where the encoded code stream includes: a first code stream and a second code stream, wherein the first code stream corresponds to the text macro block, and the second code stream corresponds to the image macro block.
The first encoder is an encoder for encoding a screen scene image, and the second encoder is an encoder for encoding a natural scene image.
In this embodiment, optionally, the encoding module 540 is further configured to, when the target image belongs to a natural scene image, encode the target image by using a second encoder based on the original image frame in the second format to obtain an encoded code stream.
In this embodiment, optionally, the encoding module 540 is specifically configured to:
based on the original image frame in the first format, a first encoder is adopted to encode the text macro block to obtain a first code stream; based on the original image frame in the second format, a second encoder is adopted to encode the image macro block to obtain a second code stream; and the first code stream and the second code stream are fused to obtain an encoded code stream corresponding to one frame of screen scene image.
In this embodiment, optionally, the encoding module 540 is specifically configured to:
when first identical macro blocks exist among the text macro blocks, based on the original image frame in the first format, a first encoder is adopted to encode the text macro blocks that are not first identical macro blocks together with one of the first identical macro blocks, so as to obtain a first code stream, wherein the remaining first identical macro blocks are marked as first predicted macro blocks.
And when second identical macro blocks exist among the image macro blocks, based on the original image frame in the second format, a second encoder is adopted to encode the image macro blocks that are not second identical macro blocks together with one of the second identical macro blocks, so as to obtain a second code stream, wherein the remaining second identical macro blocks are marked as second predicted macro blocks.
In this embodiment, optionally, the method further includes: and determining a module.
The first obtaining module 510 is further configured to obtain a previous frame image of the target image.
The determining module is used for determining that the target image belongs to a screen scene image when the number of the same pixel areas in the target image and the previous frame image exceeds a preset number threshold; and when the number of the same pixel areas in the target image and the previous frame image does not exceed a preset number threshold, determining that the target image belongs to a natural scene image.
The image encoding device provided in this embodiment performs format conversion on the acquired target image and obtains an original image frame in a second format from the original image frame in the first format. When the target image is determined to belong to a screen scene image, macro block classification is performed on the target image to obtain text macro blocks and image macro blocks; the text macro blocks are encoded with a first encoder based on the original image frame in the first format, and the image macro blocks are encoded with a second encoder based on the original image frame in the second format, so as to obtain an encoded code stream.
Fig. 6 is a schematic structural diagram of an image decoding apparatus provided in this embodiment, including a second obtaining module 610, a decoding module 620, and a splicing module 630, where:
a second obtaining module 610, configured to obtain an encoded code stream and a code stream type corresponding to the target image, where the code stream type includes: a screen image class or a natural image class; the encoded code stream corresponding to the target image is a code stream obtained by encoding, with a first encoder and based on an original image frame in a first format, the character macro blocks divided from the target image, and encoding, with a second encoder and based on an original image frame in a second format, the image macro blocks divided from the target image.
And a decoding module 620, configured to, when the code stream type is the screen image type, decode a first code stream in the encoded code stream based on a first decoder, and decode a second code stream in the encoded code stream based on a second decoder to obtain a decoded macro block, where the decoded macro block includes multiple macro blocks.
And a splicing module 630, configured to splice a plurality of macroblocks to obtain a decoded image corresponding to the encoded code stream.
In this embodiment, optionally, the decoding module 620 is specifically configured to:
decoding the first code stream based on a first decoder to obtain a character macro block;
decoding the second code stream based on a second decoder to obtain an image macro block;
the image macro blocks are subjected to up-sampling to obtain macro blocks corresponding to a first format;
and determining a decoding macro block according to the text macro block and the macro block corresponding to the first format.
In this embodiment, optionally, the decoding module 620 is further configured to, when the code stream type is the natural image type, decode the encoded code stream based on a second decoder to obtain a decoded image frame, where the decoded image frame corresponds to an image frame in a second format; and performing up-sampling on the decoded image frame, and determining the image frame in a first format to obtain a decoded image corresponding to the coded code stream.
The image decoding apparatus provided in this embodiment obtains the encoded code stream and code stream type corresponding to the target image; when the code stream type is the screen image class, it decodes the first code stream in the encoded code stream with a first decoder and decodes the second code stream in the encoded code stream with a second decoder to obtain decoded macro blocks, where the decoded macro blocks include a plurality of macro blocks, and splices the plurality of macro blocks to obtain the screen scene image carried by the encoded code stream, which can then be conveniently shown to the user.
The embodiment of the application also provides computer equipment. Referring to fig. 7, fig. 7 is a block diagram of a basic structure of a computer device according to the present embodiment.
The computer device includes a memory 710 and a processor 720 communicatively coupled to each other via a system bus. It should be noted that only a computer device having components 710-720 is shown, but it should be understood that not all of the shown components are required; more or fewer components may be implemented instead. As will be understood by those skilled in the art, the computer device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The computer device may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The computer equipment can carry out man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch panel or voice control equipment and the like.
The memory 710 includes at least one type of readable storage medium including a non-volatile memory (non-volatile memory) or a volatile memory, for example, a flash memory (flash memory), a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a PROM, a magnetic memory, a magnetic disk, an optical disk, etc., and the RAM may include a static RAM or a dynamic RAM. In some embodiments, the storage 710 may be an internal storage unit of the computer device, such as a hard disk or a memory of the computer device. In other embodiments, the memory 710 may also be an external storage device of the computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, or a Flash memory Card (Flash Card) provided on the computer device. Of course, the memory 710 may also include both internal and external storage devices for the computer device. In this embodiment, the memory 710 is generally used for storing an operating system and various application software installed on the computer device, such as the program codes of the above-mentioned methods. In addition, the memory 710 may also be used to temporarily store various types of data that have been output or are to be output.
The processor 720 is generally configured to perform the overall operation of the computer device. In this embodiment, the memory 710 is used for storing program codes or instructions, the program codes including computer operation instructions, and the processor 720 is used for executing the program codes or instructions stored in the memory 710 or processing data, such as program codes for executing the above-mentioned methods.
Herein, the bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus system may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this is not intended to represent only one bus or type of bus.
Another embodiment of the present application also provides a computer readable medium, which may be a computer readable signal medium or a computer readable storage medium. A processor in the computer reads the computer readable program code stored in the computer readable medium, so that the processor can execute the functional actions specified in each step, or combination of steps, of the above-described method, and so that means are generated for implementing the functional operations specified in each block of the block diagram or a combination of blocks.
A computer readable medium includes, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing, the memory storing program code or instructions, the program code including computer-executable instructions, the processor executing the program code or instructions of the above-described method stored by the memory.
The definitions of the memory and the processor can refer to the description of the foregoing embodiments of the computer device, and are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division into modules or units is only a division by logical function, and other divisions are possible in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
Each functional unit or module in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method of the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" as used herein does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means can be embodied by one and the same item of hardware. The use of first, second, third, etc. does not denote any order, and the words may be interpreted as names. The steps in the above embodiments should not be construed as limiting the order of execution unless specified otherwise.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. An image encoding method, comprising:
acquiring a target image, wherein the target image is an original image frame in a first format;
downsampling the original image frame in the first format to obtain an original image frame in a second format;
when the target image belongs to a screen scene image, carrying out macro block classification on the target image to obtain a character macro block and an image macro block;
based on the original image frame in the first format, encoding the character macro block by using a first encoder, and based on the original image frame in the second format, encoding the image macro block by using a second encoder to obtain an encoded code stream, wherein the encoded code stream comprises: a first code stream and a second code stream, the first code stream corresponding to the character macro block, and the second code stream corresponding to the image macro block;
the first encoder is an encoder for encoding a screen scene image, and the second encoder is an encoder for encoding a natural scene image.
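To make the encoding flow of claim 1 concrete, the following is a minimal Python sketch of the dual-encoder split. It assumes a 16×16 macro block size, a 2:1 downsampling ratio between the first and second formats, and placeholder screen_encoder, natural_encoder and is_text_block objects; none of these specifics are fixed by the claims.

```python
# Minimal sketch of the encoding flow of claim 1 (illustrative only).
# Assumptions not taken from the claims: 16x16 macro blocks, 2:1 downsampling
# between the first and second formats, and hypothetical encoder objects that
# expose an encode(block, pos) method. Frames are numpy-style (H, W, C) arrays.
MB = 16  # assumed macro block size

def encode_screen_frame(frame, screen_encoder, natural_encoder, is_text_block):
    """Split a screen-scene frame into a first (character) and second (image) code stream."""
    # Second-format frame: a simple 2:1 nearest-neighbour downsample stands in
    # for whatever downsampling the first/second formats actually imply.
    frame_small = frame[::2, ::2]

    first_stream, second_stream = [], []
    height, width = frame.shape[:2]
    for y in range(0, height, MB):
        for x in range(0, width, MB):
            block = frame[y:y + MB, x:x + MB]
            if is_text_block(block):
                # Character macro block: encoded from the first-format frame.
                first_stream.append(screen_encoder.encode(block, pos=(x, y)))
            else:
                # Image macro block: encoded from the co-located region of the
                # downsampled (second-format) frame.
                small = frame_small[y // 2:(y + MB) // 2, x // 2:(x + MB) // 2]
                second_stream.append(natural_encoder.encode(small, pos=(x, y)))
    return first_stream, second_stream
```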
2. The method of claim 1, further comprising:
and when the target image belongs to a natural scene image, encoding the target image by adopting a second encoder based on the original image frame in the second format to obtain an encoded code stream.
3. The method of claim 1, wherein the encoding the character macro block by a first encoder based on the original image frame in the first format, and encoding the image macro block by a second encoder based on the original image frame in the second format to obtain an encoded code stream comprises:
based on the original image frame in the first format, a first encoder is adopted to encode the character macro block to obtain a first code stream;
based on the original image frame in the second format, a second encoder is adopted to encode the image macro block to obtain a second code stream;
and fusing the first code stream and the second code stream to obtain an encoded code stream corresponding to one frame of the screen scene image.
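One way to realize the fusion step of claim 3 is to tag each encoded unit with the sub-stream it belongs to before concatenation; the container format below is purely illustrative, since the claims do not define a stream syntax.

```python
# Illustrative fusion of the first and second code streams into one encoded
# code stream for a single screen-scene frame (claim 3). The tags are assumed,
# not taken from the application.
def fuse_streams(first_stream, second_stream):
    fused = []
    for payload in first_stream:
        fused.append({"stream": "first", "payload": payload})   # character macro blocks
    for payload in second_stream:
        fused.append({"stream": "second", "payload": payload})  # image macro blocks
    return fused
```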
4. The method of claim 3, wherein said encoding the character macro block with a first encoder based on the original image frame in the first format to obtain a first code stream comprises:
when first identical macro blocks exist among the character macro blocks, based on the original image frame in the first format, a first encoder is adopted to encode the macro blocks other than the first identical macro blocks among the character macro blocks, together with one macro block of the first identical macro blocks, to obtain a first code stream, wherein the remaining macro blocks of the first identical macro blocks are marked as first prediction macro blocks;
the encoding the image macro block by using a second encoder based on the original image frame in the second format to obtain a second code stream comprises:
and when second identical macro blocks exist among the image macro blocks, based on the original image frame in the second format, a second encoder is adopted to encode the macro blocks other than the second identical macro blocks among the image macro blocks, together with one macro block of the second identical macro blocks, to obtain a second code stream, wherein the remaining macro blocks of the second identical macro blocks are marked as second prediction macro blocks.
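Claim 4 can be read as a de-duplication step: only one representative of each group of identical macro blocks is actually encoded, while the remaining ones are marked as prediction macro blocks that reference it. The sketch below uses byte-wise block equality as the identity test, which is an assumption; the claims do not specify how identical macro blocks are detected.

```python
# Illustrative handling of identical macro blocks (claim 4).
# Assumption: byte-wise equality of the raw block contents is used as the
# "identical macro block" test; the representative block is encoded and the
# remaining identical blocks are only marked as prediction macro blocks.
def encode_with_dedup(blocks, encoder):
    """blocks: iterable of (pos, ndarray); returns (code_stream, prediction_marks)."""
    seen = {}            # raw block bytes -> position of the encoded representative
    code_stream = []
    prediction_marks = []
    for pos, block in blocks:
        key = block.tobytes()
        if key in seen:
            # Remaining identical macro blocks only reference the representative.
            prediction_marks.append({"pos": pos, "predict_from": seen[key]})
        else:
            seen[key] = pos
            code_stream.append(encoder.encode(block, pos=pos))
    return code_stream, prediction_marks
```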
5. The method of claim 1, further comprising:
acquiring a previous frame image of the target image;
when the number of identical pixel areas between the target image and the previous frame image exceeds a preset number threshold, determining that the target image belongs to a screen scene image;
and when the number of identical pixel areas between the target image and the previous frame image does not exceed the preset number threshold, determining that the target image belongs to a natural scene image.
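As a rough illustration of claim 5, the sketch below counts how many macro-block-sized regions are pixel-identical between the target image and the previous frame and compares that count with a threshold; the region size and the threshold value are assumptions, not values taken from the claims.

```python
# Illustrative screen/natural scene classification (claim 5).
# Assumptions: a "same pixel area" is a pixel-identical 16x16 region, and the
# preset number threshold is arbitrary; neither value is fixed by the claims.
def classify_scene(frame, prev_frame, threshold=100, mb=16):
    same_regions = 0
    height, width = frame.shape[:2]
    for y in range(0, height, mb):
        for x in range(0, width, mb):
            cur = frame[y:y + mb, x:x + mb]
            ref = prev_frame[y:y + mb, x:x + mb]
            if cur.shape == ref.shape and (cur == ref).all():
                same_regions += 1
    return "screen" if same_regions > threshold else "natural"
```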
6. An image decoding method, comprising:
acquiring an encoded code stream and a code stream type corresponding to a target image, wherein the code stream type comprises a screen image type or a natural image type, and the encoded code stream corresponding to the target image is a code stream obtained by encoding, based on an original image frame in a first format, character macro blocks divided from the target image by using a first encoder, and encoding, based on an original image frame in a second format, image macro blocks divided from the target image by using a second encoder;
when the code stream type is the screen image type, decoding a first code stream in the encoded code stream based on a first decoder, and decoding a second code stream in the encoded code stream based on a second decoder to obtain decoded macro blocks, wherein the decoded macro blocks comprise a plurality of macro blocks;
and splicing the plurality of macro blocks to obtain a decoded image corresponding to the encoded code stream.
7. The method of claim 6, wherein the decoding a first code stream in the encoded code stream based on a first decoder and decoding a second code stream in the encoded code stream based on a second decoder to obtain decoded macro blocks comprises:
decoding the first code stream based on the first decoder to obtain character macro blocks;
decoding the second code stream based on the second decoder to obtain image macro blocks;
up-sampling the image macro blocks to obtain macro blocks corresponding to the first format;
and determining the decoded macro blocks according to the character macro blocks and the macro blocks corresponding to the first format.
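Claim 7 mirrors the encoder: the first decoder recovers character macro blocks at the first-format resolution, the second decoder recovers image macro blocks at the second-format resolution, and the latter are up-sampled before the frame is reassembled. The sketch below assumes a 2:1 up-sampling factor, 16×16 macro blocks, and hypothetical decoders that return (position, block) pairs.

```python
# Illustrative decoding and splicing of a screen-scene code stream (claims 6-7).
# Assumptions: 2:1 up-sampling between the second and first formats, 16x16
# macro blocks, and decoders whose decode(unit) returns (pos, block).
import numpy as np

def decode_screen_frame(first_stream, second_stream, first_decoder, second_decoder,
                        frame_shape, mb=16):
    decoded = np.zeros(frame_shape, dtype=np.uint8)
    # Character macro blocks are already at the first-format resolution.
    for pos, block in (first_decoder.decode(unit) for unit in first_stream):
        x, y = pos
        decoded[y:y + mb, x:x + mb] = block
    # Image macro blocks are up-sampled back to the first format before splicing.
    for pos, small in (second_decoder.decode(unit) for unit in second_stream):
        x, y = pos
        upsampled = small.repeat(2, axis=0).repeat(2, axis=1)  # nearest-neighbour
        decoded[y:y + mb, x:x + mb] = upsampled
    return decoded
```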
8. The method of claim 6, further comprising:
when the code stream type is the natural image type, decoding the encoded code stream based on a second decoder to obtain a decoded image frame, wherein the decoded image frame corresponds to an image frame in the second format;
and up-sampling the decoded image frame to determine an image frame in the first format, thereby obtaining a decoded image corresponding to the encoded code stream.
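For the natural-image path of claim 8, the whole frame is decoded with the second decoder and then up-sampled from the second format back to the first; a minimal sketch under the same assumed 2:1 factor and a hypothetical decoder interface:

```python
# Illustrative decoding of a natural-scene code stream (claim 8), assuming a
# 2:1 relation between the second and first formats and a decoder that returns
# the full second-format frame as an ndarray.
def decode_natural_frame(encoded_stream, second_decoder):
    small_frame = second_decoder.decode_frame(encoded_stream)   # second-format frame
    return small_frame.repeat(2, axis=0).repeat(2, axis=1)      # back to the first format
```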
9. An image encoding device characterized by comprising:
a first acquiring module, configured to acquire a target image, wherein the target image is an original image frame in a first format;
a sampling module, configured to down-sample the original image frame in the first format to obtain an original image frame in a second format;
a classification module, configured to, when the target image belongs to a screen scene image, perform macro block classification on the target image to obtain a character macro block and an image macro block;
an encoding module, configured to encode the character macro block by using a first encoder based on the original image frame in the first format, and encode the image macro block by using a second encoder based on the original image frame in the second format to obtain an encoded code stream, wherein the encoded code stream comprises: a first code stream and a second code stream, the first code stream corresponding to the character macro block, and the second code stream corresponding to the image macro block;
the first encoder is an encoder for encoding a screen scene image, and the second encoder is an encoder for encoding a natural scene image.
10. An image decoding apparatus, comprising:
a second obtaining module, configured to obtain an encoded code stream and a code stream type corresponding to a target image, wherein the code stream type comprises a screen image type or a natural image type, and the encoded code stream corresponding to the target image is a code stream obtained by encoding, based on an original image frame in a first format, character macro blocks divided from the target image by using a first encoder, and encoding, based on an original image frame in a second format, image macro blocks divided from the target image by using a second encoder;
a decoding module, configured to, when the code stream type is the screen image type, decode a first code stream in the encoded code stream based on a first decoder and decode a second code stream in the encoded code stream based on a second decoder to obtain decoded macro blocks, wherein the decoded macro blocks comprise a plurality of macro blocks;
and a splicing module, configured to splice the plurality of macro blocks to obtain a decoded image corresponding to the encoded code stream.
CN202211073763.XA 2022-09-02 2022-09-02 Image encoding method, image decoding method, image encoding device, image decoding device, image encoding program, and image decoding program Pending CN115460404A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211073763.XA CN115460404A (en) 2022-09-02 2022-09-02 Image encoding method, image decoding method, image encoding device, image decoding device, image encoding program, and image decoding program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211073763.XA CN115460404A (en) 2022-09-02 2022-09-02 Image encoding method, image decoding method, image encoding device, image decoding device, image encoding program, and image decoding program

Publications (1)

Publication Number Publication Date
CN115460404A true CN115460404A (en) 2022-12-09

Family

ID=84299841

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211073763.XA Pending CN115460404A (en) 2022-09-02 2022-09-02 Image encoding method, image decoding method, image encoding device, image decoding device, image encoding program, and image decoding program

Country Status (1)

Country Link
CN (1) CN115460404A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116095331A (en) * 2023-03-03 2023-05-09 浙江大华技术股份有限公司 Encoding method and decoding method
CN116095331B (en) * 2023-03-03 2023-07-07 浙江大华技术股份有限公司 Encoding method and decoding method

Similar Documents

Publication Publication Date Title
US10771813B2 (en) Reference frame encoding method and apparatus, and reference frame decoding method and apparatus
CN105163127A (en) Video analysis method and device
US20150186744A1 (en) Transmitting video and sharing content via a network
CN101350929B (en) Enhanced compression in representing non-frame-edge blocks of image frames
WO2018234860A1 (en) Real-time screen sharing
CN107493477B (en) Method, system and computer readable storage medium for encoding and decoding frames
CN104581177B (en) Image compression method and device combining block matching and string matching
CN101291436B (en) Video coding/decoding method and device thereof
WO2023005740A1 (en) Image encoding, decoding, reconstruction, and analysis methods, system, and electronic device
CN104704826A (en) Two-step quantization and coding method and apparatus
CN115460404A (en) Image encoding method, image decoding method, image encoding device, image decoding device, image encoding program, and image decoding program
CN111859210A (en) Image processing method, device, equipment and storage medium
WO2024078066A1 (en) Video decoding method and apparatus, video encoding method and apparatus, storage medium, and device
CN102804783A (en) Image encoder apparatus and camera system
WO2023024832A1 (en) Data processing method and apparatus, computer device and storage medium
CN114938408B (en) Data transmission method, system, equipment and medium of cloud mobile phone
WO2023151365A1 (en) Image filtering method and apparatus, device, storage medium and program product
CN113032062A (en) Image data transmission method and device, electronic equipment and storage medium
CN111813534A (en) Method for reducing CPU occupancy rate in intelligent recording and broadcasting
CN115965616B (en) Iris image processing method and device and electronic equipment
WO2024012249A1 (en) Method and apparatus for coding image including text, and method and apparatus for decoding image including text
US20240114185A1 (en) Video coding for machines (vcm) encoder and decoder for combined lossless and lossy encoding
CN110830744B (en) Safety interaction system
CN118020290A (en) System and method for encoding and decoding video with memory efficient prediction mode selection
CN113329227A (en) Video coding method and device, electronic equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication