CN117528098A - Coding and decoding system, method and equipment for improving image quality based on deep compressed code stream

Coding and decoding system, method and equipment for improving image quality based on deep compressed code stream

Info

Publication number
CN117528098A
CN117528098A
Authority
CN
China
Prior art keywords
component
coding
module
data
code stream
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410022004.3A
Other languages
Chinese (zh)
Other versions
CN117528098B (en)
Inventor
孙立宁
赵建立
李伟
李厚鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Digibird Technology Co ltd
Original Assignee
Beijing Digibird Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Digibird Technology Co ltd filed Critical Beijing Digibird Technology Co ltd
Priority to CN202410022004.3A priority Critical patent/CN117528098B/en
Publication of CN117528098A publication Critical patent/CN117528098A/en
Application granted granted Critical
Publication of CN117528098B publication Critical patent/CN117528098B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention belongs to the field of video coding and decoding processing and in particular relates to a coding and decoding system, method and equipment for improving image quality based on a deep compressed code stream, aiming to solve the problems in this field of complex data splitting and reorganization and of high image quality loss during encoding. The invention comprises: an encoding node, a decoding node and a main network, wherein the encoding node comprises a preprocessing module, an acquisition module, an encoding module and a sending module, and is used for acquiring the original width and height of an input signal, selecting a corresponding chip for preprocessing according to the color sampling format of the video, taking all of the content acquired by the chip as the Y component, constructing a UV component from the Y component, then encoding to obtain deep compressed code stream data, and sending the deep compressed code stream data to the decoding node; the decoding node comprises a receiving module, a decoding module, a post-processing module and a display module, and is used for receiving the deep compressed code stream data sent by the encoding node and decoding, post-processing and displaying it.

Description

Coding and decoding system, method and equipment for improving image quality based on deep compressed code stream
Technical Field
The invention belongs to the field of video coding and decoding processing, and particularly relates to a coding and decoding system, a method and equipment for improving image quality based on a deep compressed code stream.
Background
With the continuous development of Internet technology, coding and decoding technology has matured, higher and higher definition is achieved within a given bandwidth, and video applications are widely used in various fields, such as video conferencing, KVM seats and online education. The main encoding and decoding standards at present, such as H.264, H.265 and AVS2, perform compression coding based on YUV420 and YUV422 sampling; most current ASIC chips use 4:2:0 compression. However, the color reproduction of 4:2:0 and 4:2:2 sampling does not meet the requirements of desktop applications.
In the prior art, coding and decoding are based on YUV420 or YUV422. YUV420 and YUV422 downsample the chrominance components (U, V) to reduce the data volume, so more chrominance information is lost during compression coding and subsequent restoration, which can cause text and table images in the picture to become blurred or show mosaic artifacts. Alternatively, YUV444 or RGB444 is split into two streams by an algorithm for encoding, and the two streams of images are recombined after decoding to reduce the loss; this algorithm is complex, and both streams must come from the same signal source at decoding time, so the delay is high. Splitting and recombining the data by an algorithm may also lose chrominance components, and in some scenes the restored image differs noticeably from the original, for example ghosting on the contours of dynamic objects in some 3D scenes (a typical scene: operating forceps or a scalpel).
Disclosure of Invention
In order to solve the problems in the prior art, namely that data splitting and reorganization are complex, image quality loss during encoding is high, and image quality cannot be effectively improved, the invention provides a coding and decoding system for improving image quality based on a deep compressed code stream.
The system comprises an encoding node, a decoding node and a main network;
the encoding node comprises a preprocessing module, an acquisition module, an encoding module and a sending module, and is used for acquiring the original width and height of an input signal, calculating the processed width and height from the original width and height, selecting, according to the color sampling format of the video, a chip that processes the corresponding color sampling format and performing preprocessing, taking all of the content acquired by the chip as the Y component, constructing a UV component from the Y component, then encoding to obtain deep compressed code stream data, and sending the deep compressed code stream data to the decoding node;
the chip for processing the corresponding color sampling format comprises a chip supporting data acquisition in the YUV444 or RGB444 color sampling format and a chip supporting data acquisition in the YUV420 or YUV422 color sampling format;
the decoding node comprises a receiving module, a decoding module, a post-processing module and a display module, and is used for receiving the deep compression code stream data sent by the encoding node, and decoding, post-processing and displaying the deep compression code stream data.
In some preferred embodiments, the preprocessing module specifically includes:
acquiring the original width and height of an input signal, wherein the original width and height comprises an original width W and an original height H;
all of the content of YUV444 or RGB444 is taken as the Y component, and the UV component is constructed from the Y component;
calculating the processed video width W' and the processed video height H' from the original width and height;
and obtaining a preprocessing instruction containing the memory allocation configuration according to the type of the chip.
In another aspect of the present invention, a coding and decoding method for improving image quality based on a deep compressed code stream is provided, the method comprising:
step S10, preprocessing an input signal through a preprocessing module to obtain a preprocessed signal;
step S20, the preprocessed signal is acquired in real time by the acquisition module to obtain real-time video stream data;
the real-time video stream data comprises sampling mode information, original width W information, original height H information, frame rate information, frame number information and timestamp information; the sampling mode information, the original width W information, the original height H information, the frame rate information, the frame number information and the timestamp information are packed into a fixed-length structure St;
this reduces the complexity of the whole system and reduces the influence of changes in the UV data on the Y component;
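By way of illustration only, the fixed-length structure St described above could be laid out as the following C structure; the field names, field widths and packing are assumptions made for this sketch, since the text only specifies which items St carries and that its length is fixed.

#include <stdint.h>

/* Hypothetical fixed-length layout for the structure St described above.
 * Field names and widths are illustrative assumptions; the description only
 * states that St carries the sampling mode, original width W, original
 * height H, frame rate, frame number and timestamp in a fixed length. */
#pragma pack(push, 1)
typedef struct {
    uint8_t  sampling_mode;  /* e.g. 0 = YUV444, 1 = YVU444, 2 = RGB444 */
    uint16_t orig_width;     /* original width W  */
    uint16_t orig_height;    /* original height H */
    uint8_t  frame_rate;     /* frames per second */
    uint32_t frame_number;   /* running frame counter */
    uint64_t timestamp;      /* capture timestamp, e.g. in microseconds */
} St;
#pragma pack(pop)

/* Total number of bits of St, used later to decide how many UV bytes are
 * needed when one bit of St is embedded per UV byte. */
#define ST_TOTAL_BITS (sizeof(St) * 8)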
step S30, the real-time video stream data is encoded through an encoding module, and deep compression code stream data is obtained;
step S40, the deep compressed code stream is transmitted by the sending module of the encoding node over the network to the receiving module of the decoding node according to a preset protocol; the deep compressed code stream data includes an information description;
step S50, the decoding module selects an encoding mode according to the information description to decode the deep compression code stream data received by the receiving module to obtain decoded data;
step S60, performing post-processing on the decoded data through a post-processing module to obtain video data to be displayed;
step S70, displaying the video data to be displayed through a display module.
In some preferred embodiments, the preprocessing specifically includes:
step S11, acquiring the width and the height of an input signal, and acquiring original width and height data; the original width and height data comprise an original width W and an original height H;
step S12, calculating the processed video width W' and the processed video height H' from the original width and height data;
Step S13, judging the type of the chip: if it is a chip supporting YUV420 or YUV422 data acquisition, jump to step S14; if it is a chip supporting YUV444 or RGB444 data acquisition, jump to step S15;
step S14, generating, according to the processed video width W' and the processed video height H', a timing conforming to the acquisition interface of the encoding chip;
step S15, applying, according to the processed video width W' and the processed video height H', for the memory occupied by acquisition, to obtain a preprocessed signal containing the memory allocation configuration;
the timing is used to transmit the processed data, according to the timing conforming to the acquisition interface of the encoding chip, to the chip corresponding to that timing, and the corresponding chip acquires the video data according to the timing.
In some preferred embodiments, the step S12 specifically includes:
the original memory space occupied by YUV444 or RGB444 is W × H × 3 bytes, and the ratio between the components is 1:1:1;
taking all of the content of the YUV444 data or RGB444 data as the Y component and constructing the UV component from the Y component based on the YUV420 or YUV422 format; the total preprocessed memory space is W' × H' × 1.5 when constructed in the YUV420 format and W' × H' × 2 when constructed in the YUV422 format;
the processed video width W' and the processed video height H' should satisfy W' × H' ≥ W × H × 3.
In some preferred embodiments, the building the UV component from the Y component specifically includes:
dividing the value of the Y component by the maximum possible value, to obtain a normalized Y value,
the normalized Y-values are converted into R-, G-and B-components in the RGB color space,
re-normalizing the obtained R, G and B components to a range of 0 to 1 to obtain normalized R, G and B components,
converting RGB into YUV through a color space conversion formula according to the normalized R component, the normalized G component and the normalized B component to obtain corresponding UV components,
wherein Y_norm represents the normalized Y value, R represents the R component, G represents the G component, B represents the B component, R' represents the normalized R component, G' represents the normalized G component, B' represents the normalized B component, U represents the U component, and V represents the V component.
In some preferred embodiments, the post-processing specifically comprises:
calculating the position of the UV component according to the processed video width W' and the processed video height H', and acquiring the original width W, the original height H, the frame rate, the timestamp and the sampling mode according to the total number of bits of the structure St.
In a third aspect of the present invention, an electronic device is provided, including:
at least one processor; and
a memory communicatively coupled to at least one of the processors; wherein,
the memory stores instructions executable by the processor for execution by the processor to implement a coding and decoding method for improving image quality based on a deep compressed code stream as described above.
In a fourth aspect of the present invention, a computer readable storage medium is provided, in which computer instructions are stored, the computer instructions being executable by a computer to implement the above-mentioned coding and decoding method for improving image quality based on a deep compressed code stream.
The invention has the beneficial effects that:
(1) The invention places the valid data in the Y component and lets the UV component carry invalid data during encoding; after decoding, only the Y data is used and the invalid data is discarded.
(2) The invention reduces the complexity brought by splitting and recombining the data, has lower image quality loss during encoding, and effectively improves the image quality.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings, in which:
FIG. 1 is a schematic diagram of a coding and decoding system for improving image quality based on a deep compressed code stream;
FIG. 2 is a flow chart of a method for improving image quality based on a deep compressed code stream;
FIG. 3 is a schematic diagram of a computer system for a server implementing embodiments of the methods, systems, and apparatus of the present application.
Detailed Description
The present application is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings.
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
In order to more clearly describe a coding and decoding system for improving image quality based on a deep compressed code stream, embodiments of the present invention are described in detail below with reference to the accompanying drawings.
The coding and decoding system for improving image quality based on a deep compressed code stream of the present invention, as shown in fig. 1, comprises an encoding node, a decoding node and a main network;
the encoding node comprises a preprocessing module, an acquisition module, an encoding module and a sending module, and is used for acquiring the original width and height of an input signal, calculating the processed width and height from the original width and height, selecting, according to the color sampling format of the video, a chip that processes the corresponding color sampling format and performing preprocessing, taking all of the content acquired by the chip as the Y component, constructing a UV component from the Y component, then encoding to obtain deep compressed code stream data, and sending the deep compressed code stream data to the decoding node;
in both YUV422 and YUV420 the Y component is not subsampled, and each pixel of the Y component occupies 8 bits (1 byte);
the chip for processing the corresponding color sampling format comprises a chip supporting data acquisition in the YUV444 or RGB444 color sampling format and a chip supporting data acquisition in the YUV420 or YUV422 color sampling format;
the decoding node comprises a receiving module, a decoding module, a post-processing module and a display module, and is used for receiving the deep compression code stream data sent by the encoding node, and decoding, post-processing and displaying the deep compression code stream data.
In this embodiment, the preprocessing module specifically includes:
acquiring the original width and height of an input signal, wherein the original width and height comprises an original width W and an original height H;
all of the content of YUV444 or RGB444 is taken as the Y component, and the UV component is constructed from the Y component;
calculating the processed video width W' and the processed video height H' from the original width and height;
and obtaining a preprocessing instruction containing the memory allocation configuration according to the type of the chip.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the system described above and the related description may refer to the corresponding process in the foregoing method embodiment, which is not repeated here.
It should be noted that, in the coding and decoding system for improving image quality based on deep compressed code stream provided in the above embodiment, only the division of the above functional modules is used as an example, in practical application, the above functional allocation may be performed by different functional modules according to needs, that is, the modules or steps in the embodiment of the present invention are decomposed or combined again, for example, the modules in the embodiment may be combined into one module, or may be further split into a plurality of sub-modules, so as to complete all or part of the functions described above. The names of the modules and steps related to the embodiments of the present invention are merely for distinguishing the respective modules or steps, and are not to be construed as unduly limiting the present invention.
The coding and decoding method for improving image quality based on a deep compressed code stream according to the second embodiment of the present invention, as shown in fig. 2, comprises:
step S10, preprocessing an input signal through a preprocessing module to obtain a preprocessed signal;
in this embodiment, the preprocessing specifically includes:
step S11, acquiring the width and the height of an input signal, and acquiring original width and height data; the original width and height data comprise an original width W and an original height H;
step S12, calculating the processed video width W' and the processed video height H' from the original width and height data;
In this embodiment, the step S12 specifically includes:
the original memory space occupied by YUV444 data or RGB444 data is W × H × 3 bytes, and the ratio between the components is 1:1:1;
taking all of the content of the YUV444 data or RGB444 data as the Y component and constructing the UV component from the Y component based on the YUV420 or YUV422 format; the total preprocessed memory space is W' × H' × 1.5 when constructed in the YUV420 format and W' × H' × 2 when constructed in the YUV422 format;
the processed video width W' and the processed video height H' should satisfy W' × H' ≥ W × H × 3, and specific values of W' and H' are selected accordingly; these values ensure that the aspect ratio of the video remains consistent and that memory usage meets the corresponding specifications.
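A minimal sketch of the size bookkeeping described in step S12, assuming the whole 4:4:4 frame (W × H × 3 bytes) is carried in the Y plane of a larger W' × H' frame; the function and type names are illustrative, not taken from the patent.

#include <stddef.h>

/* Sketch of the step-S12 size calculation.  The original YUV444/RGB444
 * frame occupies w * h * 3 bytes (component ratio 1:1:1).  All of it is
 * treated as the Y plane of a processed frame of size wp x hp, so
 * wp * hp must be at least 3 * w * h.  The UV part is then added on top:
 * 0.5x the Y plane for YUV420, 1.0x for YUV422. */
typedef struct {
    size_t y_plane_bytes;   /* wp * hp, holds all original 4:4:4 data */
    size_t total_yuv420;    /* wp * hp * 1.5 when built as YUV420     */
    size_t total_yuv422;    /* wp * hp * 2   when built as YUV422     */
} preproc_sizes;

static preproc_sizes compute_preproc_sizes(size_t w, size_t h,
                                           size_t wp, size_t hp)
{
    preproc_sizes s = {0};
    size_t original = w * h * 3;          /* 4:4:4 source, 8 bits per component */
    if (wp * hp < original)               /* Y plane must fit the whole source  */
        return s;                         /* invalid choice of wp, hp           */
    s.y_plane_bytes = wp * hp;
    s.total_yuv420  = wp * hp + wp * hp / 2;
    s.total_yuv422  = wp * hp * 2;
    return s;
}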
In this embodiment, the constructing the UV component according to the Y component specifically includes:
dividing the value of the Y component by the maximum possible value to obtain a normalized Y value, the normalized Y value ranging from 0 to 1,
the normalized Y value is converted into R, G and B components in the RGB color space, using standard conversion formulas (typically the BT.709 color space),
re-normalizing the obtained R, G and B components to a range of 0 to 1 to obtain normalized R, G and B components,
converting RGB into YUV through a color space conversion formula according to the normalized R component, the normalized G component and the normalized B component to obtain corresponding UV components,
wherein Y_norm represents the normalized Y value, R represents the R component, G represents the G component, B represents the B component, R' represents the normalized R component, G' represents the normalized G component, B' represents the normalized B component, U represents the U component, and V represents the V component.
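A sketch of the UV construction chain described above for a single sample. Two points are assumptions not spelled out in the text: the expansion of the normalized Y value to R, G and B treats it as an achromatic grey (R = G = B), and the color-space conversion uses full-range BT.709 coefficients (BT.709 is mentioned above as the typical choice).

#include <stdint.h>
#include <math.h>

/* Illustrative construction of a UV sample from a Y sample: normalise Y,
 * expand to R, G, B, re-normalise, and convert back to YUV to obtain U and
 * V.  Assumptions: grey expansion (R = G = B = y_norm) and full-range
 * BT.709 conversion coefficients. */
static void build_uv_from_y(uint8_t y, uint8_t *u_out, uint8_t *v_out)
{
    double y_norm = y / 255.0;                 /* divide by maximum value  */

    double r = y_norm, g = y_norm, b = y_norm; /* assumed grey expansion   */

    /* re-normalisation to [0, 1] (already in range for the grey case) */
    r = fmin(fmax(r, 0.0), 1.0);
    g = fmin(fmax(g, 0.0), 1.0);
    b = fmin(fmax(b, 0.0), 1.0);

    /* BT.709 RGB -> YUV, U and V centred on 0.5 (0x80 after scaling) */
    double u = -0.1146 * r - 0.3854 * g + 0.5    * b + 0.5;
    double v =  0.5    * r - 0.4542 * g - 0.0458 * b + 0.5;

    *u_out = (uint8_t)lround(u * 255.0);       /* grey input yields 0x80   */
    *v_out = (uint8_t)lround(v * 255.0);
}

With the grey assumption the construction reduces to a neutral UV value of 0x80, which is consistent with the neutral-gray recommendation given below for this embodiment.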
In this embodiment, the UV component may be set as:
in order to reduce the influence of the UV component on the Y component, the pixel values of the UV component are set to (0x00, 0x00), (0x80, 0x80), (0x7F, 0x7F), (0x81, 0x81), etc.;
wherein (0x00, 0x00) represents pure black, so the color information of the UV component is completely discarded and only the brightness information of the Y component is retained; (0x80, 0x80) represents neutral gray, so the color information of the UV component is not shifted; (0x7F, 0x7F) and (0x81, 0x81) are near-neutral values that also keep the UV color information balanced, ensuring that the coding of the Y component is not directly affected; the value needs to be chosen according to the usage scene, and (0x80, 0x80) is suggested here;
step S13, judging the type of the chip: if it is a chip supporting YUV420 or YUV422 data acquisition, jump to step S14; if it is a chip supporting YUV444 or RGB444 data acquisition, jump to step S15;
step S14, generating, according to the processed video width W' and the processed video height H', a timing conforming to the acquisition interface of the encoding chip, e.g. BT.1120;
step S15, applying, according to the processed video width W' and the processed video height H', for the memory occupied by acquisition, the specific amount of memory being requested according to the requirements of YUV420 or YUV422; in this step, the memory required by the encoding and decoding SDK of the actual chip is taken into account, and enough memory blocks, generally 5, are requested, obtaining a preprocessed signal containing the memory allocation configuration;
the timing is used to transmit the processed data, according to the timing conforming to the acquisition interface of the encoding chip, to the chip corresponding to that timing, and the corresponding chip acquires the video data according to the timing;
in this embodiment, step S15 further includes memory initialization, in which the UV portion of the acquired memory is entirely set to 0x80 (or to another selected UV color value) to reduce subsequent processing time.
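An illustrative allocation and initialization for step S15, assuming a planar YUV420 layout (Y plane followed by the U and V planes) and plain malloc; a real chip SDK would normally provide its own allocation calls, and the buffer count of 5 comes from the text above.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define NUM_BUFFERS 5   /* "generally 5 blocks" as stated above */

/* Allocate NUM_BUFFERS planar YUV420 buffers of wp x hp and pre-set their
 * UV part to the neutral value 0x80, so that later frames only need to
 * touch the bits that actually carry the structure St.  Returns 0 on
 * success, -1 on failure. */
static int alloc_yuv420_buffers(size_t wp, size_t hp,
                                uint8_t *buffers[NUM_BUFFERS])
{
    size_t y_bytes  = wp * hp;
    size_t uv_bytes = wp * hp / 2;        /* U + V planes at quarter size each */

    for (int i = 0; i < NUM_BUFFERS; i++) {
        buffers[i] = malloc(y_bytes + uv_bytes);
        if (!buffers[i]) {
            while (i--)                   /* free what was already allocated */
                free(buffers[i]);
            return -1;
        }
        memset(buffers[i] + y_bytes, 0x80, uv_bytes);  /* neutral UV */
    }
    return 0;
}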
Step S20, the preprocessed signal is acquired in real time by the acquisition module to obtain real-time video stream data;
the real-time video stream data comprises sampling mode information (YUV444, YVU444, RGB444, etc.), original width W information, original height H information, frame rate information, frame number information and timestamp information; the sampling mode information, the original width W information, the original height H information, the frame rate information, the frame number information and the timestamp information are packed into a fixed-length structure St;
in order to reduce the influence of modifying the UV data on the Y component, only a single bit of each byte of the UV portion is used at this point; this may be the lowest, highest or a middle bit, the total number of bits used is determined by the structure St, and the lowest bit is suggested for the bits carrying the transmitted information;
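A sketch of how the structure St could be carried in the lowest bit of each UV byte as suggested above; the St layout is the hypothetical one sketched earlier, and the LSB-first bit ordering is an assumption made for this illustration.

#include <stddef.h>
#include <stdint.h>

/* Embed the fixed-length structure St into the UV part of the preprocessed
 * frame, one bit per UV byte, overwriting only the lowest bit.  Only
 * st_bytes * 8 UV bytes are touched; the rest keep their neutral value. */
static void embed_st_lsb(uint8_t *uv, size_t uv_bytes,
                         const void *st, size_t st_bytes)
{
    const uint8_t *src = (const uint8_t *)st;
    size_t total_bits = st_bytes * 8;
    if (total_bits > uv_bytes)
        return;                                    /* not enough UV bytes */
    for (size_t i = 0; i < total_bits; i++) {
        uint8_t bit = (src[i / 8] >> (i % 8)) & 1; /* LSB-first ordering  */
        uv[i] = (uint8_t)((uv[i] & 0xFE) | bit);   /* overwrite lowest bit */
    }
}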
step S30, the real-time video stream data is encoded through an encoding module, and deep compression code stream data is obtained;
step S40, the deep compressed code stream is transmitted by the sending module of the encoding node over the network to the receiving module of the decoding node according to a preset protocol; a standard protocol or a custom protocol may be used for the transmission; the deep compressed code stream data includes an information description;
step S50, the decoding module selects an encoding mode according to the information description to decode the deep compression code stream data received by the receiving module to obtain decoded data;
step S60, performing post-processing on the decoded data through a post-processing module to obtain video data to be displayed;
in this embodiment, the post-processing specifically includes:
calculating the position of the UV component according to the processed video width W' and the processed video height H', and acquiring the original width W, the original height H, the frame rate, the timestamp and the sampling mode according to the total number of bits of the structure St.
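A matching decode-side sketch of the post-processing described above, assuming the same planar YUV420 layout and LSB-first bit ordering as the earlier sketches; the names are illustrative.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Locate the UV plane of the decoded wp x hp frame, re-assemble the
 * structure St from the lowest bit of each UV byte, and hand it back to the
 * caller, which can then read the original width, height, frame rate,
 * timestamp and sampling mode from it.  Returns 0 on success. */
static int extract_st_lsb(const uint8_t *frame, size_t wp, size_t hp,
                          void *st_out, size_t st_bytes)
{
    size_t y_bytes    = wp * hp;          /* UV plane starts right after Y */
    size_t uv_bytes   = wp * hp / 2;
    size_t total_bits = st_bytes * 8;
    if (total_bits > uv_bytes)
        return -1;

    const uint8_t *uv = frame + y_bytes;
    uint8_t *dst = (uint8_t *)st_out;
    memset(dst, 0, st_bytes);
    for (size_t i = 0; i < total_bits; i++)
        dst[i / 8] |= (uint8_t)((uv[i] & 1) << (i % 8));
    return 0;
}

After this step, only the Y plane (the first wp * hp bytes, holding the original 4:4:4 data) is used for display; apart from these metadata bits, the UV plane carries invalid data and is discarded.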
Step S70, displaying the video data to be displayed through a display module.
Although the steps are described in the above-described sequential order in the above-described embodiments, it will be appreciated by those skilled in the art that in order to achieve the effects of the present embodiments, the steps need not be performed in such order, and may be performed simultaneously (in parallel) or in reverse order, and such simple variations are within the scope of the present invention.
An electronic device of a third embodiment of the present invention includes:
at least one processor; and
a memory communicatively coupled to at least one of the processors; wherein,
the memory stores instructions executable by the processor for execution by the processor to implement a coding and decoding method for improving image quality based on a deep compressed code stream as described above.
A computer-readable storage medium of a fourth embodiment of the present invention stores computer instructions for execution by the computer to implement the above-described coding and decoding method for improving image quality based on a deep compressed code stream.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the storage device and the processing device described above and the related description may refer to the corresponding process in the foregoing method embodiment, which is not repeated herein.
Those of skill in the art will appreciate that the various illustrative modules and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the programs corresponding to the software modules and method steps may be embodied in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. To clearly illustrate this interchangeability of electronic hardware and software, various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as electronic hardware or software depends upon the particular application and design constraints imposed on the solution. Those skilled in the art may implement the described functionality using different approaches for each particular application, but such implementation is not intended to be limiting.
Referring now to FIG. 3, there is shown a block diagram of a computer system for a server implementing embodiments of the methods, systems, and apparatus of the present application. The server illustrated in fig. 3 is merely an example, and should not be construed as limiting the functionality and scope of use of the embodiments herein.
As shown in fig. 3, the computer system includes a central processing unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for system operation. The CPU 601, ROM 602, and RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, etc.; an output portion 607 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN (local area network) card, a modem, or the like. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read therefrom is installed into the storage section 608 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 609, and/or installed from the removable medium 611. The above-described functions defined in the method of the present application are performed when the computer program is executed by a Central Processing Unit (CPU) 601. It should be noted that the computer readable medium described in the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The terms "first," "second," and the like, are used for distinguishing between similar objects and not for describing a particular sequential or chronological order.
The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus/apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus/apparatus.
Thus far, the technical solution of the present invention has been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of protection of the present invention is not limited to these specific embodiments. Equivalent modifications and substitutions for related technical features may be made by those skilled in the art without departing from the principles of the present invention, and such modifications and substitutions will be within the scope of the present invention.

Claims (9)

1. A coding and decoding system for improving image quality based on a deep compressed code stream, characterized by comprising an encoding node, a decoding node and a main network;
the encoding node comprises a preprocessing module, an acquisition module, an encoding module and a sending module, and is used for acquiring the original width and height of an input signal, calculating the processed width and height from the original width and height, selecting, according to the color sampling format of the video, a chip that processes the corresponding color sampling format and performing preprocessing, taking all of the content acquired by the chip as the Y component, constructing a UV component from the Y component, then encoding to obtain deep compressed code stream data, and sending the deep compressed code stream data to the decoding node;
the chip for processing the corresponding color sampling format comprises a chip supporting data acquisition in the YUV444 or RGB444 color sampling format and a chip supporting data acquisition in the YUV420 or YUV422 color sampling format;
the decoding node comprises a receiving module, a decoding module, a post-processing module and a display module, and is used for receiving the deep compression code stream data sent by the encoding node, and decoding, post-processing and displaying the deep compression code stream data.
2. The coding and decoding system for improving image quality based on deep compressed code stream according to claim 1, wherein the preprocessing module specifically comprises:
acquiring the original width and height of an input signal, wherein the original width and height comprises an original width W and an original height H;
all of the content of YUV444 or RGB444 is taken as the Y component, and the UV component is constructed from the Y component;
calculating the processed video width W' and the processed video height H' from the original width and height;
and obtaining a preprocessing instruction containing the memory allocation configuration according to the type of the chip.
3. A coding and decoding method for improving image quality based on a deep compressed code stream, the method comprising:
step S10, preprocessing an input signal through a preprocessing module to obtain a preprocessed signal;
step S20, the preprocessed signal is acquired in real time by the acquisition module to obtain real-time video stream data;
the real-time video stream data comprises sampling mode information, original width W information, original height H information, frame rate information, frame number information and timestamp information; the sampling mode information, the original width W information, the original height H information, the frame rate information, the frame number information and the timestamp information are packed into a fixed-length structure St;
step S30, the real-time video stream data is encoded through an encoding module, and deep compression code stream data is obtained;
step S40, the deep compressed code stream is transmitted by the sending module of the encoding node over the network to the receiving module of the decoding node according to a preset protocol; the deep compressed code stream data includes an information description;
step S50, the decoding module selects an encoding mode according to the information description to decode the deep compression code stream data received by the receiving module to obtain decoded data;
step S60, performing post-processing on the decoded data through a post-processing module to obtain video data to be displayed;
step S70, displaying the video data to be displayed through a display module.
4. The coding and decoding method for improving image quality based on deep compressed code stream according to claim 3, wherein the preprocessing specifically comprises:
step S11, acquiring the width and the height of an input signal, and acquiring original width and height data; the original width and height data comprise an original width W and an original height H;
step S12, calculating the processed video width W' and the processed video height H' from the original width and height data;
Step S13, judging the type of the chip: if it is a chip supporting YUV420 or YUV422 data acquisition, jump to step S14; if it is a chip supporting YUV444 or RGB444 data acquisition, jump to step S15;
step S14, generating, according to the processed video width W' and the processed video height H', a timing conforming to the acquisition interface of the encoding chip;
step S15, applying, according to the processed video width W' and the processed video height H', for the memory occupied by acquisition, to obtain a preprocessed signal containing the memory allocation configuration;
the timing is used to transmit the processed data, according to the timing conforming to the acquisition interface of the encoding chip, to the chip corresponding to that timing, and the corresponding chip acquires the video data according to the timing.
5. The method for coding and decoding the image based on the deep compressed code stream according to claim 4, wherein the step S12 specifically comprises:
the original memory space occupied by YUV444 data or RGB444 data is W × H × 3 bytes, and the ratio between the components is 1:1:1;
all of the content of the YUV444 data or RGB444 data is taken as the Y component and the UV component is constructed from the Y component based on the YUV420 or YUV422 format; the total preprocessed memory space occupied by the YUV420 format is W' × H' × 1.5, and the total preprocessed memory space occupied by the YUV422 format is W' × H' × 2;
the processed video width W' and the processed video height H' should satisfy W' × H' ≥ W × H × 3.
6. The coding and decoding method for improving image quality based on deep compressed code stream according to claim 5, wherein the constructing the UV component according to the Y component specifically comprises:
dividing the value of the Y component by the maximum possible value, to obtain a normalized Y value,
the normalized Y-values are converted into R-, G-and B-components in the RGB color space,
re-normalizing the obtained R, G and B components to a range of 0 to 1 to obtain normalized R, G and B components,
converting RGB into YUV through a color space conversion formula according to the normalized R component, the normalized G component and the normalized B component to obtain corresponding UV components,
wherein Y_norm represents the normalized Y value, R represents the R component, G represents the G component, B represents the B component, R' represents the normalized R component, G' represents the normalized G component, B' represents the normalized B component, U represents the U component, and V represents the V component.
7. The coding and decoding method for improving image quality based on deep compressed code stream according to claim 6, wherein the post-processing specifically comprises:
calculating the position of the UV component according to the processed video width W' and the processed video height H', and acquiring the original width W, the original height H, the frame rate, the timestamp and the sampling mode according to the total number of bits of the structure St.
8. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to at least one of the processors; wherein,
the memory stores instructions executable by the processor for performing a method of improving picture quality based on a deep compressed bitstream as claimed in any one of claims 3 to 7.
9. A computer readable storage medium storing computer instructions for execution by the computer to implement a deep compressed bitstream-based image quality improvement codec method of any one of claims 3-7.
CN202410022004.3A 2024-01-08 2024-01-08 Coding and decoding system, method and equipment for improving image quality based on deep compressed code stream Active CN117528098B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410022004.3A CN117528098B (en) 2024-01-08 2024-01-08 Coding and decoding system, method and equipment for improving image quality based on deep compressed code stream

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410022004.3A CN117528098B (en) 2024-01-08 2024-01-08 Coding and decoding system, method and equipment for improving image quality based on deep compressed code stream

Publications (2)

Publication Number Publication Date
CN117528098A true CN117528098A (en) 2024-02-06
CN117528098B CN117528098B (en) 2024-03-26

Family

ID=89742415

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410022004.3A Active CN117528098B (en) 2024-01-08 2024-01-08 Coding and decoding system, method and equipment for improving image quality based on deep compressed code stream

Country Status (1)

Country Link
CN (1) CN117528098B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050129130A1 (en) * 2003-12-10 2005-06-16 Microsoft Corporation Color space coding framework
CN111095933A (en) * 2017-09-15 2020-05-01 索尼公司 Image processing apparatus and method
CN112929624A (en) * 2021-01-21 2021-06-08 杭州雾联科技有限公司 Encoding method, encoding device, electronic equipment and computer readable storage medium
CN112995664A (en) * 2021-04-20 2021-06-18 南京美乐威电子科技有限公司 Image sampling format conversion method, computer-readable storage medium, and encoder
CN116320472A (en) * 2019-03-25 2023-06-23 Oppo广东移动通信有限公司 Image prediction method, encoder, decoder, and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050129130A1 (en) * 2003-12-10 2005-06-16 Microsoft Corporation Color space coding framework
CN111095933A (en) * 2017-09-15 2020-05-01 索尼公司 Image processing apparatus and method
CN116320472A (en) * 2019-03-25 2023-06-23 Oppo广东移动通信有限公司 Image prediction method, encoder, decoder, and storage medium
CN112929624A (en) * 2021-01-21 2021-06-08 杭州雾联科技有限公司 Encoding method, encoding device, electronic equipment and computer readable storage medium
CN112995664A (en) * 2021-04-20 2021-06-18 南京美乐威电子科技有限公司 Image sampling format conversion method, computer-readable storage medium, and encoder

Also Published As

Publication number Publication date
CN117528098B (en) 2024-03-26

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant