CN115988206B - Image processing method, processing apparatus, and storage medium

Info

Publication number
CN115988206B
CN115988206B
Authority
CN
China
Prior art keywords
region, sample, luminance, predicted, chroma
Prior art date
Legal status
Active
Application number
CN202310276253.0A
Other languages
Chinese (zh)
Other versions
CN115988206A (en)
Inventor
刘雨田
Current Assignee
Shenzhen Transsion Holdings Co Ltd
Original Assignee
Shenzhen Transsion Holdings Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Transsion Holdings Co Ltd
Priority to CN202310276253.0A
Publication of CN115988206A
Application granted
Publication of CN115988206B

Abstract

The application provides an image processing method, a processing device, and a storage medium. The image processing method can be applied to the processing device and comprises the following step: determining or obtaining a prediction result of a sample to be predicted according to at least one reference area. According to this technical scheme, in the process of video encoding and decoding using an intra-frame prediction mode, at least one reference area is acquired or determined, at least one weight parameter is determined according to the reference area, and the prediction result of the sample to be predicted is then determined or obtained according to the weight parameter. A suitable intra prediction mode can thus be selected when performing intra prediction of chroma or luma during video encoding and decoding, which effectively improves the overall efficiency of video coding.

Description

Image processing method, processing apparatus, and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, processing device, and storage medium.
Background
In the development of video coding technology, the improvements made by successive video coding standards have all focused on improving video coding performance from different aspects; among them, intra prediction is a hot topic of current research.
In the process of designing and implementing the present application, the inventors found that at least the following problem exists: in research on intra prediction, it is still unclear how to select a suitable intra prediction mode, which results in low overall efficiency of video coding.
The foregoing description is provided for general background information and does not necessarily constitute prior art.
Disclosure of Invention
In view of the above technical problems, the present application provides an image processing method, a processing device, and a storage medium, which can select a suitable intra-frame prediction mode for intra prediction of chroma or luma in the video encoding and decoding process, thereby effectively improving the overall efficiency of video coding.
The application provides an image processing method, which can be applied to processing equipment (such as an intelligent terminal or a server and the like), and comprises the following steps:
s1: and determining or obtaining a prediction result of the sample to be predicted according to at least one reference area.
Optionally, the acquiring or determining manner of the reference area includes at least one of the following:
acquiring or determining from a left adjacent region or an upper adjacent region of the co-located sample of the sample to be predicted;
acquiring or determining from a region formed by splicing the left adjacent region and the upper adjacent region of the co-located sample of the sample to be predicted;
acquiring or determining according to a preset flag;
acquiring or determining according to the template area;
and if the intra-frame prediction mode adopted by the co-located sample and/or an adjacent sample of the sample to be predicted is an angle intra-frame prediction mode, acquiring or determining according to the intra-frame angle corresponding to the angle intra-frame prediction mode.
Optionally, the flag is parsed from the encoded bitstream.
Optionally, the acquiring or determining the reference area according to the preset flag includes:
acquiring or determining, according to the value of the flag, the reference area to which that value is mapped.
Optionally, the acquiring or determining the reference area according to intra angle includes:
and acquiring or determining the reference area according to the angle range of the intra-frame angle.
Optionally, the values include a first value and/or a second value.
Optionally, the acquiring or determining according to the template area includes:
and if the value is the third value, determining the reference area according to the template area.
Optionally, the determining the reference area according to the template area includes:
acquiring or determining a first weight parameter and a second weight parameter from the template area;
Acquiring or determining a first sampling prediction result corresponding to the first weight parameter and a second sampling prediction result corresponding to the second weight parameter;
and if the first sampling predicted result is better than the second sampling predicted result, acquiring or determining the reference area according to the first weight parameter, and/or if the second sampling predicted result is better than the first sampling predicted result, acquiring or determining the reference area according to the second weight parameter.
Optionally, the acquiring or determining the reference area according to the first weight parameter includes:
and acquiring or determining the reference region according to the positional relation between the region corresponding to the first weight parameter within the template region and the template region.
Optionally, the step S1 includes the steps of:
s11: determining at least one weight parameter from at least one reference region;
s12: and determining or obtaining a prediction result of the sample to be predicted according to the weight parameter.
Optionally, step S11 includes: at least one of the weight parameters is determined based on at least one reference sample in the reference region.
Optionally, step S12 includes: and determining or obtaining a prediction result of the sample to be predicted according to at least one of the weight parameter, at least one first sample, at least one adjacent sample of the first sample, at least one non-adjacent sample of the first sample and at least one gradient component of the first sample.
The present application also provides a processing apparatus, comprising: a memory and a processor, wherein the memory stores an image processing program which, when executed by the processor, implements the steps of the image processing method described above.
The present application also provides a storage medium storing a computer program which, when executed by a processor, implements the steps of any of the image processing methods described above.
As described above, the image processing method of the present application includes: determining or obtaining a prediction result of the sample to be predicted according to at least one reference area. That is, according to the technical scheme of the present application, at least one reference area is obtained or determined, at least one weight parameter is determined according to the reference area, and the prediction result of the sample to be predicted is then determined or obtained accordingly. Therefore, in the process of video encoding and decoding using an intra-frame prediction mode, a suitable intra prediction mode can be selected for intra prediction of chroma or luma, effectively improving the overall efficiency of video coding.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic hardware structure of a mobile terminal implementing various embodiments of the present application;
fig. 2 is a schematic diagram of a communication network system according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an application scenario involved in a cross-component linear model prediction mode CCLM;
FIG. 4 is a schematic diagram of an application scenario involved in the convolutional cross-component model based prediction mode CCCM;
FIG. 5 is a schematic diagram of another application scenario involved in the convolutional cross-component model based prediction mode CCCM;
fig. 6A is a schematic diagram of an image encoding scene related to an image processing method according to an embodiment of the present application;
fig. 6B is a schematic diagram of an image decoding scenario related to an image processing method according to an embodiment of the present application;
fig. 7a, 7b and 7c are schematic views each of which illustrates a type of a luminance reference region involved in an image processing method according to a second embodiment of the present application;
fig. 7d, 7e and 7f are schematic diagrams each illustrating types of chromaticity reference regions involved in an image processing method according to a second embodiment of the present application;
fig. 8 is a schematic diagram of a YUV image related to an image processing method according to a second embodiment of the present application;
fig. 9a, 9b, 9c and 9d are schematic views each showing a template region including luminance samples according to a second embodiment of the present application;
fig. 10a, 10b, 10c and 10d are schematic diagrams of a template region including chroma samples according to a second embodiment of the present application;
fig. 11 is a schematic view showing an intra prediction direction involved in an image processing method according to a second embodiment of the present application;
fig. 12a, 12b, and 12c are schematic views of luminance reference areas of template areas related to the image processing method according to the third embodiment of the present application;
fig. 13a, 13b, and 13c are schematic diagrams of chromaticity reference areas of template areas related to the image processing method according to the third embodiment of the present application;
fig. 14 is a schematic application flow diagram illustrating an image processing method according to a fourth embodiment of the present application.
The realization, functional characteristics and advantages of the present application will be further described with reference to the embodiments, referring to the attached drawings. Specific embodiments thereof have been shown by way of example in the drawings and will herein be described in more detail. These drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but to illustrate the concepts of the present application to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the accompanying claims.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element. In addition, elements having the same name in different embodiments of the present application may have the same meaning or different meanings, the particular meaning being determined by its interpretation in the particular embodiment or further in connection with the context of that embodiment.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, this information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope herein. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to a determination", depending on the context. Furthermore, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including" specify the presence of stated features, steps, operations, elements, components, items, categories, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, items, categories, and/or groups. The terms "or," "and/or," "including at least one of," and the like, as used herein, may be construed as inclusive, i.e., meaning any one or any combination. For example, "including at least one of A, B, C" means "any one of the following: A; B; C; A and B; A and C; B and C; A and B and C"; as a further example, "A, B or C" or "A, B and/or C" means "any one of the following: A; B; C; A and B; A and C; B and C; A and B and C". An exception to this definition occurs only when a combination of elements, functions, steps or operations is in some way inherently mutually exclusive.
It should be understood that, although the steps in the flowcharts in the embodiments of the present application are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of these steps is not strictly limited and they may be performed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
The word "if", as used herein, may be interpreted as "when" or "upon" or "in response to a determination" or "in response to detection", depending on the context. Similarly, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined" or "in response to determining" or "when (the stated condition or event) is detected" or "in response to detecting (the stated condition or event)", depending on the context.
It should be noted that, in this document, step numbers such as S11 and S12 are used only to describe the corresponding content more clearly and concisely, and do not constitute a substantive limitation on the order of execution; those skilled in the art may, when implementing the application, execute S12 first and then S11, which still falls within the scope of protection of the present application.
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are adopted only to facilitate the description of the present application and have no specific meaning in themselves. Thus, "module," "component," and "unit" may be used interchangeably.
The processing device may be implemented in various forms. For example, the processing device described in this application may be a server, but may also be a smart terminal including devices such as a cell phone, tablet computer, notebook computer, palm top computer, personal digital assistant (Personal Digital Assistant, PDA), portable media player (Portable Media Player, PMP), navigation device, wearable device, smart bracelet, pedometer, and a stationary terminal such as a digital TV, desktop computer, and the like.
The processing apparatus provided in the present application will be described in the following by taking a mobile terminal as an example, and those skilled in the art will understand that, apart from elements specifically used for mobile purposes, the configuration according to the embodiments of the present application can also be applied to fixed-type terminals.
Referring to fig. 1, which is a schematic hardware structure of a mobile terminal implementing various embodiments of the present application, the mobile terminal 100 may include: an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an a/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. Those skilled in the art will appreciate that the mobile terminal structure shown in fig. 1 is not limiting of the mobile terminal and that the mobile terminal may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
The following describes the components of the mobile terminal in detail with reference to fig. 1:
The radio frequency unit 101 may be used for receiving and transmitting signals during information reception or a call; specifically, downlink information from the base station is received and then handed to the processor 110 for processing, and uplink data is transmitted to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Optionally, the radio frequency unit 101 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution), TDD-LTE (Time Division Duplexing-Long Term Evolution), and 5G, among others.
WiFi belongs to a short-distance wireless transmission technology, and a mobile terminal can help a user to send and receive e-mails, browse web pages, access streaming media and the like through the WiFi module 102, so that wireless broadband Internet access is provided for the user. Although fig. 1 shows a WiFi module 102, it is understood that it does not belong to the necessary constitution of a mobile terminal, and can be omitted entirely as required within a range that does not change the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a talk mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output (e.g., a call signal reception sound, a message reception sound, etc.) related to a specific function performed by the mobile terminal 100. The audio output unit 103 may include a speaker, a buzzer, and the like.
The A/V input unit 104 is used to receive an audio or video signal. The A/V input unit 104 may include a graphics processor (Graphics Processing Unit, GPU) 1041 and a microphone 1042, the graphics processor 1041 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sound (audio data) in a phone call mode, a recording mode, a voice recognition mode, and the like, and can process such sound into audio data. In the case of a phone call mode, the processed audio (voice) data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 101 and output. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting the audio signal.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Optionally, the light sensor includes an ambient light sensor and a proximity sensor, optionally, the ambient light sensor may adjust the brightness of the display panel 1061 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 1061 and/or the backlight when the mobile terminal 100 moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the acceleration in all directions (generally three axes), and can detect the gravity and direction when stationary, and can be used for applications of recognizing the gesture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and knocking), and the like; as for other sensors such as fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc. that may also be configured in the mobile phone, the detailed description thereof will be omitted.
The display unit 106 is used to display information input by a user or information provided to the user. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Alternatively, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations thereon or thereabout by a user (e.g., operations of the user on the touch panel 1071 or thereabout by using any suitable object or accessory such as a finger, a stylus, etc.) and drive the corresponding connection device according to a predetermined program. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Optionally, the touch detection device detects the touch azimuth of the user, detects a signal brought by touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, and sends the touch point coordinates to the processor 110, and can receive and execute commands sent from the processor 110. Alternatively, the touch panel 1071 may be implemented in various types of resistive, capacitive, infrared, surface acoustic wave, and the like. The user input unit 107 may include other input devices 1072 in addition to the touch panel 1071. Alternatively, other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, mouse, joystick, etc., as specifically not limited herein.
Alternatively, the touch panel 1071 may overlay the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or thereabout, the touch panel 1071 is transferred to the processor 110 to determine the type of touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of touch event. Although in fig. 1, the touch panel 1071 and the display panel 1061 are two independent components for implementing the input and output functions of the mobile terminal, in some embodiments, the touch panel 1071 may be integrated with the display panel 1061 to implement the input and output functions of the mobile terminal, which is not limited herein.
The interface unit 108 serves as an interface through which at least one external device can be connected with the mobile terminal 100. For example, the external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and an external device.
Memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, and alternatively, the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, phonebook, etc.) created according to the use of the handset, etc. Alternatively, the memory 109 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the mobile terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor, the application processor optionally handling mainly an operating system, a user interface, an application program, etc., the modem processor handling mainly wireless communication. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The mobile terminal 100 may further include a power source 111 (e.g., a battery) for supplying power to the respective components, and preferably, the power source 111 may be logically connected to the processor 110 through a power management system, so as to perform functions of managing charging, discharging, and power consumption management through the power management system.
Although not shown in fig. 1, the mobile terminal 100 may further include a bluetooth module or the like, which is not described herein.
In order to facilitate understanding of the embodiments of the present application, a communication network system on which the processing device of the present application is based will be described below by taking a mobile terminal as an example.
Referring to fig. 2, fig. 2 is a schematic diagram of a communication network system provided in an embodiment of the present application. The communication network system is an LTE system of the general mobile communication technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an operator's IP service 204 that are sequentially connected in communication.
Alternatively, the UE201 may be the terminal 100 described above, which is not described here again.
The E-UTRAN202 includes eNodeB2021 and other eNodeB2022, etc. Alternatively, the eNodeB2021 may connect with other enodebs 2022 over a backhaul (e.g., X2 interface), the eNodeB2021 is connected to the EPC203, and the eNodeB2021 may provide access for the UE201 to the EPC 203.
EPC203 may include MME (Mobility Management Entity ) 2031, HSS (Home Subscriber Server, home subscriber server) 2032, other MMEs 2033, SGW (Serving Gate Way) 2034, PGW (PDN Gate Way) 2035 and PCRF (Policy and Charging Rules Function, policy and tariff function entity) 2036, and the like. Optionally, MME2031 is a control node that handles signaling between UE201 and EPC203, providing bearer and connection management. HSS2032 is used to provide registers to manage functions such as home location registers (not shown) and to hold user specific information about service characteristics, data rates, etc. All user data may be sent through SGW2034 and PGW2035 may provide IP address allocation and other functions for UE201, PCRF2036 is a policy and charging control policy decision point for traffic data flows and IP bearer resources, which selects and provides available policy and charging control decisions for a policy and charging enforcement function (not shown).
IP services 204 may include the internet, intranets, IMS (IP Multimedia Subsystem ), or other IP services, etc.
Although the LTE system is described above as an example, it should be understood by those skilled in the art that the present application is not limited to LTE systems, but may be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, 5G, and future new network systems (e.g., 6G), etc.
Based on the hardware structure of the processing device, such as a mobile terminal, and the communication network system, the overall concept of the image processing method of the present application is proposed.
In the development of video coding technology, the improvements made by successive video coding standards have all focused on improving video coding performance from different aspects; among them, intra prediction is a hot topic of current research.
However, in the process of designing and implementing the present application, the applicant found that at least the following problem exists: in research on intra prediction, it is not yet clear how to select a suitable intra prediction mode, which results in low overall efficiency of video coding.
In view of the above problems, the present application proposes an image processing method, which determines or obtains a prediction result of a sample to be predicted according to at least one reference area. That is, according to the technical scheme of the present application, at least one reference area is obtained or determined, at least one weight parameter is determined according to the reference area, and the prediction result of the sample to be predicted is then determined or obtained accordingly. Therefore, in the process of video encoding and decoding using an intra-frame prediction mode, a suitable intra prediction mode can be selected for intra prediction of chroma or luma, effectively improving the overall efficiency of video coding.
Based on the above general conception of the image processing method provided in the present application, various embodiments of the image processing method of the present application are further proposed.
For ease of understanding, the following description will first explain terms of art that may be relevant to embodiments of the present application.
(I), intra prediction
In the process of encoding or decoding an image, predicting an image block is an indispensable step, and an encoder predicts the image block to obtain a predicted block, constructs a residual block with smaller energy, and reduces transmission bits. The prediction of the image block by the encoder or the decoder can be realized by some preset prediction modes, namely, modes including inter prediction and intra prediction. Alternatively, in intra prediction, the encoder or decoder may typically employ a cross-component linear model prediction mode CCLM, a multi-model CCLM prediction mode MMLM, and a convolutional cross-component model based prediction mode CCCM for intra chroma prediction.
(II), cross component linear model prediction mode CCLM
The core idea of the cross-component linear model prediction mode CCLM is to reduce cross-component redundancy and perform cross-component prediction. CCLM mainly constructs the prediction value of a chroma pixel from the reconstructed luma pixels of the same coding block. The linear model used by CCLM is:
predC(i,j)=α*recL(i,j)+β
Where predC (i, j) represents the chroma prediction pixels of the current image block and recL (i, j) represents the downsampled reconstructed luma pixels of the current CU. Alpha and beta are referred to as linear model parameters. In an embodiment, α and β may be derived from neighboring 4 chroma pixels and corresponding downsampled luma pixels, e.g., α and β may be derived specifically from the left neighboring pixel and the upper neighboring pixel of the current image block in the application scene as shown in fig. 3. Of course, the invention is not limited thereto and other methods may be employed to derive the values of the linear model parameters. In an embodiment, recL (i, j) may also be a reconstructed luma pixel that is not downsampled for the current CU.
(III), multi-model CCLM prediction mode MMLM
The multi-model CCLM prediction mode MMLM classifies reconstructed neighboring samples into two classes using a threshold that is an average value of luminance reconstructed neighboring samples, and then derives a linear model CCLM of each class using a Least Mean Square (LMS) method, thereby predicting using a plurality of linear models CCLMs. For example, the luminance average value is calculated through the adjacent sample pairs of the current block, then the adjacent sample pairs are grouped according to the luminance average value as a threshold value, and finally, after the adjacent sample pairs are grouped, the respective linear models CCLM are fitted through the sample pairs of the respective groups, so that when the chromaticity prediction is carried out on the current block, each point of the current block is also grouped so as to use different linear models CCLM for prediction, and the prediction of the points with abundant texture details in the current block can be ensured to be more accurate.
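The two-class grouping described above can be pictured with the following minimal sketch; least-squares fits stand in for the LMS derivation, and it assumes at least two neighbouring sample pairs fall into each group (function and variable names are illustrative only).

```python
import numpy as np

def fit_linear(luma, chroma):
    # Least-squares fit of chroma = alpha * luma + beta for one group.
    A = np.column_stack([luma, np.ones_like(luma)])
    (alpha, beta), *_ = np.linalg.lstsq(A, chroma, rcond=None)
    return alpha, beta

def mmlm_predict(neigh_luma, neigh_chroma, cur_luma_ds):
    # Group neighbouring sample pairs by the mean of the neighbouring luma,
    # fit one linear model per group, then predict each sample of the
    # current block with the model of its group.
    thr = neigh_luma.mean()
    low = neigh_luma <= thr
    a_lo, b_lo = fit_linear(neigh_luma[low], neigh_chroma[low])
    a_hi, b_hi = fit_linear(neigh_luma[~low], neigh_chroma[~low])
    return np.where(cur_luma_ds <= thr,
                    a_lo * cur_luma_ds + b_lo,
                    a_hi * cur_luma_ds + b_hi)
```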
(IV), convolutional cross-component model based prediction mode CCCM
The CCCM, based on the convolutional cross-component model prediction mode, predicts the chroma of the current image block from already reconstructed luma samples using a filter. In one embodiment, the filter used by CCCM consists of a 5-tap (or 7-tap) plus-sign-shaped spatial component, a nonlinear term, and a bias term. Optionally, the input of the 5-tap plus-sign-shaped spatial component of the filter includes a center (C) luma sample, together with the above sample (also referred to as the north sample, N), the below sample (also referred to as the south sample, S), the left sample (also referred to as the west sample, W) and the right sample (also referred to as the east sample, E) of the center (C) luma sample.
The calculation formula used by the CCCM for chroma prediction is as follows:
predChromaVal = c0*C + c1*N + c2*S + c3*E + c4*W + c5*P + c6*B
where C represents the luma sample at the position corresponding to the current chroma sample, and N, S, E, W are the neighboring samples of that luma sample, as shown in fig. 4.
The nonlinear term is P = (C*C + midVal) >> bitDepth, and the bias term is B = midVal. The bias term B represents a scalar offset between the input and output (similar to the offset term in CCLM) and is set to the middle chroma value (B = 512 for 10-bit video).
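The formula above can be sketched as follows; the weights c0..c6 are assumed to have been derived beforehand (e.g. from a reference region, as described later), integer sample values are assumed so the bit shift is valid, and this is an illustrative sketch rather than the normative computation.

```python
def cccm_predict(C, N, S, E, W, coeffs, bit_depth=10):
    # predChromaVal = c0*C + c1*N + c2*S + c3*E + c4*W + c5*P + c6*B
    # C is the co-located luma sample, N/S/E/W its plus-shaped neighbours.
    mid_val = 1 << (bit_depth - 1)         # 512 for 10-bit content
    P = (C * C + mid_val) >> bit_depth     # nonlinear term
    B = mid_val                            # bias term
    c0, c1, c2, c3, c4, c5, c6 = coeffs
    return c0*C + c1*N + c2*S + c3*E + c4*W + c5*P + c6*B
```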
In another embodiment, the filter used by CCCM consists of center (C) luminance samples, gradient components, non-linear terms, and bias terms. Optionally, the gradient component comprises a vertical gradient component and a horizontal gradient component. For example, the vertical gradient component is Gy and the horizontal gradient component is Gx. As shown in fig. 5, the calculation formula of each of Gy and Gx is as follows:
Gy=(2N+NW+NE)–(2S+SW+SE)
Gx=(2W+NW+SW)–(2E+NE+SE)
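A small sketch of the gradient components above, taking a 3x3 window of luma samples around the centre sample; the window layout and names are assumptions made purely for illustration.

```python
def gradient_components(window):
    # window = [[NW, N, NE],
    #           [W,  C, E ],
    #           [SW, S, SE]]
    (NW, N, NE), (W, C, E), (SW, S, SE) = window
    Gy = (2 * N + NW + NE) - (2 * S + SW + SE)   # vertical gradient
    Gx = (2 * W + NW + SW) - (2 * E + NE + SE)   # horizontal gradient
    return Gy, Gx
```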
In yet another embodiment, the filter used by the CCCM consists of the center (C) luma sample, center luma sample position information, a nonlinear term, and a bias term. Optionally, the center luma sample position information includes a vertical position and a horizontal position. For example, the vertical position of the center luma sample is Y and its horizontal position is X, both calculated relative to the top-left coordinates of the sample (also referred to as the image block) whose chroma currently needs to be predicted.
Alternatively, referring to fig. 6A and fig. 6B, fig. 6A is a schematic view of an image encoding scene related to the processing method provided in the present application, and fig. 6B is a schematic view of an image decoding flow. The encoder at the encoding end generally divides an input video image into at least one image block according to frames, each image block can be subtracted from a prediction block obtained by prediction in a prediction mode to obtain a residual block, and a series of processing is performed on the residual block and related parameters of the prediction mode to obtain an encoded bitstream. Then, at the decoding end, the decoder can obtain the prediction mode parameters by analyzing the bit stream after receiving the bit stream. Furthermore, an inverse transform unit and an inverse quantization unit of the decoder perform inverse transform and inverse quantization processing on the transform coefficients to obtain a residual block. Optionally, the decoding unit of the decoder parses and decodes the encoded bitstream to obtain the prediction parameters and the associated side information. Next, a prediction processing unit of the decoder performs prediction processing using the prediction parameters, thereby determining a prediction block corresponding to the residual block. In this way, the decoder can obtain a reconstructed block by adding the obtained residual block and the corresponding prediction block. Optionally, the loop filtering unit of the decoder performs loop filtering processing on the reconstructed block to reduce distortion and improve video quality. Thus, the reconstructed blocks subjected to the loop filtering process are further combined into a decoded image which is stored in a decoded image buffer or output as a decoded video signal.
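As a highly simplified sketch of the residual/reconstruction relationship described above (transform, quantization, entropy coding and loop filtering are omitted, and all names are illustrative assumptions):

```python
import numpy as np

def encoder_residual(orig_block, pred_block):
    # Encoder side: residual block to be transformed, quantized and entropy coded.
    return orig_block - pred_block

def decoder_reconstruct(residual_block, pred_block):
    # Decoder side: reconstructed block before optional loop filtering.
    return residual_block + pred_block

orig = np.array([[52.0, 55.0], [61.0, 59.0]])
pred = np.array([[50.0, 54.0], [60.0, 60.0]])
recon = decoder_reconstruct(encoder_residual(orig, pred), pred)  # equals orig here
```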
Optionally, the image processing method provided in the embodiment of the present application may be applied to a scene where chroma prediction and/or luminance prediction is performed on an image block in the video image encoding process (for example, a scene where intra prediction is performed in the video image encoding process), and of course, the image processing method provided in the embodiment of the present application may also be applied to a scene where chroma prediction and/or luminance prediction is performed on an image block to be decoded in the video decoding process (for example, a scene where intra prediction is performed in the video image decoding process).
First embodiment
In this embodiment, the execution body of the image processing method provided in the present application may be the above-mentioned processing device, or a cluster formed by the above-mentioned multiple processing devices, where the processing device may be an intelligent terminal (such as the aforementioned mobile terminal 100) or a server. Here, the image processing method provided in the present application will be described with a processing apparatus as an execution subject in the first embodiment of the image processing method provided in the present application.
In this embodiment, the processing method provided in the present application includes the following steps:
s1: and determining or obtaining a prediction result of the sample to be predicted according to at least one reference area.
Alternatively, in the present embodiment, the samples to be predicted may be image blocks in an input video image (i.e., video frame) that are being encoded or decoded so as to require chroma prediction or luminance prediction. Alternatively, the image block may also be referred to as an image sample, which may be simply referred to as a current block, a current sample, or a block to be processed. Under the h.265/high efficiency video coding (High Efficiency Video Coding, HEVC) standard, the sample to be predicted may be a coding tree Unit (Coding Tree Units, CTU) or a coding Unit (Code Unit, CU) in an input video image, and the type of the sample to be predicted is not specifically limited in the embodiment of the present application.
Optionally, the processing device acts as an encoder after receiving a video image from a video source, i.e. dividing the video image into at least one image block, after which the processing device performs a prediction process for each image block, i.e. using the temporal and/or spatial correlation between the video images. And the processing device may determine samples of the current image block to be predicted when performing chroma prediction for the image block using an intra-prediction mode, particularly a cross-component intra-prediction mode. In an embodiment, the chroma prediction is performed according to at least one reference area in the current image frame where the image block is located, so as to determine or obtain a chroma prediction result for the sample to be predicted. Optionally, the processing device may determine the samples to be predicted for the current image block when performing luminance prediction for the image block using intra prediction mode. In an embodiment, the luminance prediction is performed according to at least one reference area in the current image frame where the image block is located, so as to determine or obtain a luminance prediction result for the luminance sample to be predicted.
Alternatively, in this embodiment, the processing device may use, as an encoder, for example, rate distortion optimization to determine the intra prediction mode that the current image block finally adopts. For example, the processing device may calculate a rate-distortion cost corresponding to each prediction mode to determine a minimum rate-distortion cost from the rate-distortion costs corresponding to the multiple prediction modes, where the prediction mode corresponding to the minimum rate-distortion cost is the prediction mode that is finally adopted by the current image block. That is, assuming that a prediction process with respect to an image block currently to be chroma-predicted is performed, a prediction mode that can be used is chroma intra prediction modes 0 to N (including inter-component intra prediction modes), and the processing apparatus determines, when calculating a prediction mode corresponding to a minimum rate distortion cost at which chroma prediction is performed as a mode i among them, the mode i as a mode that is finally used to chroma-predict the current image block. Where i=0,..n.
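The rate-distortion selection described above can be sketched as follows; SAD is used as the distortion measure and the rate values are placeholders, so this is only an assumption-laden illustration of picking the mode with minimum cost J = D + lambda * R, not the encoder's real cost computation.

```python
import numpy as np

def select_mode_by_rd_cost(block, predictions, rates, lam=10.0):
    # J_i = D_i + lambda * R_i; return the index i of the minimum-cost mode.
    costs = [np.abs(block - pred).sum() + lam * rate
             for pred, rate in zip(predictions, rates)]
    return int(np.argmin(costs))

# Hypothetical chroma block and three candidate prediction modes 0..2
block = np.array([[10.0, 12.0], [14.0, 16.0]])
predictions = [block + 3.0, block + 1.0, block - 5.0]
rates = [4.0, 6.0, 3.0]
mode_i = select_mode_by_rd_cost(block, predictions, rates)
```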
Optionally, after determining or obtaining the prediction result of the sample to be predicted (the chroma sample to be predicted or the luminance sample to be predicted) in a sample-by-sample manner according to the above process, the processing device serving as the encoder may further subtract the prediction value of the corresponding pixel (the chroma prediction result of the chroma sample to be predicted or the luminance prediction result of the luminance sample to be predicted) in the predicted image block from the sampling value of the pixel in the current image block, so as to obtain the residual value of the pixel and the residual block corresponding to the image block. Then, the residual block is transformed and quantized, and then encoded by an entropy encoder to finally form an encoded bitstream. In addition, the encoded bitstream may further include a prediction parameter (entropy encoded and packed into the encoded bitstream) and related auxiliary information (side information) corresponding to the prediction mode determined by the processing device through the above process. Alternatively, if the processing device adopts a cross-component intra prediction mode, the prediction parameters include at least indication information about a prediction operation performed using the cross-component intra prediction mode.
Alternatively, the transformed quantized residual block may be added to a corresponding prediction block obtained using a prediction mode to obtain a reconstructed block. After obtaining the reconstructed block, the processing device may further perform loop filtering processing on the reconstructed block to reduce distortion.
Alternatively, the processing device may receive the encoded bitstream transferred by the processing device as the encoder when the processing device acts as the decoder, and the decoding unit of the decoder may parse and decode the bitstream to obtain the prediction parameters after the processing device receives the bitstream encoded by the encoder as the decoder. Optionally, the inverse transform unit and the inverse quantization unit of the decoder perform inverse transform and inverse quantization processing on the transform coefficients to obtain the residual block. Then, the prediction processing unit of the decoder may use the residual block as a block to be processed which is currently required to be decoded and perform prediction processing using the prediction parameters, thereby determining a prediction block corresponding to the residual block.
Alternatively, when the same intra prediction mode as used by the encoder is adopted, the decoder may use the prediction parameters obtained by parsing the bitstream to obtain or determine a prediction mode that needs to be adopted for performing chroma prediction or luminance prediction on a current sample to be predicted (e.g., when the prediction parameters indicate that the corresponding prediction mode is a cross-component intra prediction mode, the decoder uses the cross-component intra prediction mode as a prediction mode for performing chroma prediction or luminance prediction on a residual block obtained by decoding), so that the decoder directly uses the prediction mode to determine one or more luminance samples or chroma samples from the current image in which the image block is located, and uses the one or more samples to perform chroma prediction or luminance prediction to determine or obtain a chroma or luminance prediction result for the sample to be predicted.
Optionally, after determining or obtaining a prediction result of a sample to be predicted (a chroma sample to be predicted or a luminance sample to be predicted) in a sample-by-sample manner, the processing device as the decoder may further add the residual block obtained by parsing and the prediction value of the corresponding pixel in the predicted image block (the chroma prediction result of the chroma sample to be predicted or the luminance prediction result of the luminance sample to be predicted) to obtain the reconstructed block. Finally, the processing device also performs loop filtering processing on the reconstructed block through a loop filtering unit to reduce distortion and improve video quality. And the reconstructed block subjected to the loop filtering process is further combined into a decoded image which is stored in a decoded image buffer or output as a decoded video signal.
Alternatively, in this embodiment, the prediction mode used by the processing device as an encoder or decoder to predict the current image block (e.g., chroma block) may be a convolution cross-component intra prediction model CCCM as shown in the following formula (1):
predChromaVal = c0*C + c1*N + c2*S + c3*E + c4*W + c5*P + c6*B    formula (1)
Alternatively, taking chroma prediction as an example, predChromaVal is the chroma prediction value of the sample to be predicted (here, the sample to be predicted is a chroma sample), c0 to c6 are weight coefficients, C is the luma value of the co-located luma sample of the sample to be predicted, N is the luma value of the luma sample above (north of) the co-located luma sample, S is the luma value of the luma sample below (south of) the co-located luma sample, E is the luma value of the luma sample to the right (east) of the co-located luma sample, W is the luma value of the luma sample to the left (west) of the co-located luma sample, and P is a nonlinear term. B is an offset term representing the scalar offset between input and output (B is set to the chroma median, i.e., 512, for 10-bit video), and P = (C*C + midVal) >> bitDepth (midVal is the chroma median of the chroma samples, bitDepth is the bit depth of the video content). The positional relationship of N, S, E, W and C is shown in FIG. 4.
Alternatively, in order to perform the chroma intra prediction process using the CCCM model shown in formula (1) above, the processing device needs to determine the weight coefficients c0 to c6 and the values of C, N, S, E, W, P and B in formula (1), and then obtains the chroma prediction result predChromaVal of the chroma sample to be predicted based on formula (1). Optionally, the manner in which the processing device determines the weight coefficients in formula (1) may be: at least one reference region is acquired or determined from within the image frame in which the image block currently requiring chroma prediction is located, and the weight coefficients are determined according to the sample values of the luma/chroma samples in the at least one reference region. In one embodiment, at least one luma reference region is obtained or determined when performing chroma prediction. In another embodiment, at least one chroma reference region is obtained or determined when performing chroma prediction.
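One way to picture the weight derivation described above is an ordinary least-squares solve over the reference region; the actual derivation in a codec may differ (for example, an LDL decomposition may be used), and the feature layout and names below are assumptions.

```python
import numpy as np

def solve_cccm_weights(ref_features, ref_chroma, bit_depth=10):
    # Each row of ref_features holds (C, N, S, E, W) for one reference
    # position: the co-located luma sample and its plus-shaped neighbours.
    # ref_chroma holds the reconstructed chroma value at the same position.
    mid_val = 1 << (bit_depth - 1)
    C = ref_features[:, 0].astype(np.int64)
    P = (C * C + mid_val) >> bit_depth              # nonlinear term per row
    B = np.full(C.shape, mid_val)                   # bias term per row
    A = np.column_stack([ref_features, P, B]).astype(float)
    weights, *_ = np.linalg.lstsq(A, ref_chroma.astype(float), rcond=None)
    return weights                                  # c0..c6 of formula (1)
```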
Alternatively, the prediction mode used by the processing device as an encoder or decoder to predict the current image block (e.g., luma block) may be a convolved cross-component intra prediction model CCCM as shown in the following equation (2):
predLumaVal = c0'*C' + c1'*N' + c2'*S' + c3'*E' + c4'*W' + c5'*P' + c6'*B'    formula (2)
Wherein predLumaVal is the luma prediction value of the sample to be predicted (here, the sample to be predicted is a luma sample), c0' to c6' are weight coefficients, C' is the chroma value of the co-located chroma sample of the sample to be predicted, N' is the chroma value of the chroma sample above (north of) the co-located chroma sample, S' is the chroma value of the chroma sample below (south of) the co-located chroma sample, E' is the chroma value of the chroma sample to the right (east) of the co-located chroma sample, W' is the chroma value of the chroma sample to the left (west) of the co-located chroma sample, and P' is a nonlinear term. B' is a bias term representing the scalar offset between input and output (B' is set to the luma median for 10-bit video), and P' = (C'*C' + midVal') >> bitDepth (midVal' is the chroma median of the chroma samples, bitDepth is the bit depth of the video content). The positional relationship of N', S', E', W' and C' is similar to that of N, S, E, W and C as shown in fig. 4.
Alternatively, in order to perform the luma intra prediction process using the CCCM model shown in formula (2) above, the processing device needs to determine the weight coefficients c0' to c6' and the values of C', N', S', E', W', P' and B' in formula (2), and then obtains the luma prediction result predLumaVal of the luma sample to be predicted based on formula (2). Optionally, the manner in which the processing device determines the weight coefficients in formula (2) may be: at least one reference region is acquired or determined from the image frame in which the image block currently requiring luma prediction is located, and the weight coefficients are determined according to the sample values of the luma/chroma samples in the at least one reference region. In one embodiment, at least one luma reference region is obtained or determined when performing luma prediction. In another embodiment, at least one chroma reference region is obtained or determined when performing luma prediction.
In this embodiment, the technical scheme of the present application obtains or determines at least one reference area, determines at least one weight parameter according to the reference area, and then determines or obtains the prediction result of the sample to be predicted. Therefore, in the process of video encoding and decoding using an intra-frame prediction mode, a suitable intra prediction mode can be selected for intra prediction of chroma or luma, effectively improving the overall efficiency of video coding.
Second embodiment
In this embodiment, the execution subject of the image processing method provided in the present application may still be the processing apparatus described above. In this embodiment, the above-mentioned method for acquiring or determining the reference area may include at least one of the following methods:
Mode one: acquiring or determining from a left adjacent region or an upper adjacent region of the sample to be predicted or a co-located sample of the sample to be predicted;
optionally, if the chroma prediction is performed to determine the chroma sample to be predicted, the co-located sample of the chroma sample to be predicted is a co-located luma sample. Optionally, the co-located luminance samples are: and the luminance sample is the same as the position of the chroma sample to be predicted in the image frame. Optionally, the co-located luminance samples have completed luminance prediction so that the luminance value is known.
Optionally, if the luminance prediction is performed to determine a luminance sample to be predicted, the co-located samples of the luminance sample to be predicted are co-located chroma samples. Optionally, the co-located chroma samples are: and chroma sampling which is the same as the position of the brightness sampling to be predicted in the image frame. Optionally, co-located chroma sampling has completed chroma prediction so that the chroma values are known.
Optionally, the processing device determines the chroma sample to be predicted in the process of performing chroma prediction for the current image block. The current image block includes a luminance block and a chrominance block. In an embodiment, at least one reference region is acquired or determined from a left neighboring region or an upper neighboring region of the luminance block in which the co-located luminance sample of the sample to be predicted is located, within the current image frame in which the image block is located. Alternatively, one of the left neighboring region and the upper neighboring region may be referred to as a first reference region a1, and the other may be referred to as a second reference region b1. Optionally, the first reference region a1 and the second reference region b1 are luminance reference regions.
Optionally, the processing device determines the chroma sample to be predicted in the process of performing chroma prediction for the current image block. The current image block includes a luminance block and a chrominance block. In an embodiment, at least one reference region is acquired or determined from a left neighboring region or an upper neighboring region of the chrominance block in which the chroma sample to be predicted is located, within the current image frame in which the image block is located. Alternatively, one of the left neighboring region and the upper neighboring region may be referred to as a first reference region a2, and the other may be referred to as a second reference region b2. Optionally, the first reference region a2 and the second reference region b2 are chrominance reference regions.
Optionally, the processing device determines the luminance sample to be predicted in the process of performing luminance prediction for the current image block. The current image block includes a luminance block and a chrominance block. In an embodiment, at least one reference region is acquired or determined from a left neighboring region or an upper neighboring region of the chrominance block in which the co-located chrominance sample of the luminance sample to be predicted is located, within the current image frame in which the image block is located. Alternatively, one of the left neighboring region and the upper neighboring region may be referred to as a first reference region a3, and the other may be referred to as a second reference region b3. Optionally, the first reference region a3 and the second reference region b3 are chrominance reference regions.
Optionally, the processing device determines the luminance sample to be predicted in the process of performing luminance prediction for the current image block. The current image block includes a luminance block and a chrominance block. In an embodiment, at least one reference region is acquired or determined from a left neighboring region or an upper neighboring region of the luminance block in which the luminance sample to be predicted is located, within the current image frame in which the image block is located. Alternatively, one of the left neighboring region and the upper neighboring region may be referred to as a first reference region a4, and the other may be referred to as a second reference region b4. Optionally, the first reference region a4 and the second reference region b4 are luminance reference regions.
Mode two: acquiring or determining from a region formed by splicing a left adjacent region and an upper adjacent region of the sample to be predicted or the co-located sample of the sample to be predicted;
Optionally, the processing device determines the chroma sample to be predicted in the process of performing chroma prediction for the current image block. The current image block includes a luminance block and a chrominance block. In an embodiment, at least one reference region may also be acquired or determined from a region formed by combining (e.g., stitching) the left neighboring region and the upper neighboring region of the luminance block in which the co-located luminance sample of the chroma sample to be predicted is located, within the current image frame in which the image block is located. The at least one reference region is a third reference region c1. Optionally, the third reference region c1 is a luminance reference region.
Optionally, the processing device determines the chroma sample to be predicted in the process of performing chroma prediction for the current image block. The current image block includes a luminance block and a chrominance block. In an embodiment, at least one reference region may also be acquired or determined from a region formed by combining (e.g., stitching) the left neighboring region and the upper neighboring region of the chrominance block in which the chroma sample to be predicted is located, within the current image frame in which the image block is located. The at least one reference region is a third reference region c2. Optionally, the third reference region c2 is a chrominance reference region.
Optionally, the processing device determines the luminance sample to be predicted in the process of performing luminance prediction for the current image block. The current image block includes a luminance block and a chrominance block. In an embodiment, at least one reference region may also be acquired or determined from a region formed by combining (e.g., stitching) the left neighboring region and the upper neighboring region of the chrominance block in which the co-located chrominance sample of the luminance sample to be predicted is located, within the current image frame in which the image block is located. The at least one reference region is a third reference region c3. Optionally, the third reference region c3 is a chrominance reference region.
Optionally, the processing device determines the luminance sample to be predicted in the process of performing luminance prediction for the current image block. The current image block includes a luminance block and a chrominance block. In an embodiment, at least one reference region may also be acquired or determined from a region formed by combining (e.g., stitching) the left neighboring region and the upper neighboring region of the luminance block in which the luminance sample to be predicted is located, within the current image frame in which the image block is located. The at least one reference region is a third reference region c4. Optionally, the third reference region c4 is a luminance reference region.
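As a rough illustration of modes one and two, the sketch below lays out the first reference region (left neighboring region), the second reference region (upper neighboring region) and the third reference region (their stitched combination) for a block at position (x0, y0) with size w x h. The region thickness t and whether the regions extend beyond the block edges are assumptions chosen for illustration, not values fixed by the present application.

def reference_regions(x0, y0, w, h, t=2, extend=True):
    left_h = h + (t if extend else 0)     # left region may be taller than the block
    above_w = w + (t if extend else 0)    # upper region may be wider than the block
    first = {"x": x0 - t, "y": y0, "w": t, "h": left_h}       # left neighboring region only
    second = {"x": x0, "y": y0 - t, "w": above_w, "h": t}     # upper neighboring region only
    third = [first, second]               # stitched combination of the left and upper regions
    return first, second, third

first, second, third = reference_regions(x0=64, y0=32, w=16, h=16)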
Alternatively, mode one and mode two may be combined with each other; for example, the reference region may be acquired or determined from at least one of the first reference region, the second reference region and the third reference region, for example:
optionally, the reference region is acquired or determined by the first reference region or the second reference region or the third reference region;
optionally, the reference region is acquired or determined by the first reference region and/or the second reference region;
optionally, the reference region is acquired or determined by the second reference region and/or the third reference region;
optionally, the reference region is acquired or determined by the first reference region and/or the third reference region;
optionally, the reference region is acquired or determined by the first reference region and/or the second reference region and/or the third reference region.
By determining the finally adopted reference region among the plurality of reference regions, the selection of the reference region can be made more flexible. In an embodiment, the first reference region may be a region x, the second reference region may be a region y, and the third reference region may be a region z. The finally adopted reference region is acquired or determined from at least one of the first reference region, the second reference region and the third reference region, so the finally adopted reference region can be a combination of the first reference region, the second reference region and the third reference region; the position, size or shape of the finally adopted reference region is therefore more flexible, and/or different region combinations can be adopted according to the computing power available for encoding and decoding so as to adapt to different application scenarios.
Alternatively, as shown in fig. 7a to 7c, when the reference region is a luminance reference region and the co-located luminance block in which the co-located luminance sample of the chroma sample to be predicted is located is the first luminance block, or the luminance block in which the luminance sample to be predicted is located is the first luminance block: the first reference region shown in fig. 7a includes the region adjacent to the left side of the first luminance block but does not include the region adjacent to the top of the first luminance block; the second reference region shown in fig. 7b includes the region adjacent to the top of the first luminance block but does not include the region adjacent to the left side of the first luminance block; and the third reference region shown in fig. 7c includes at least a part of the region formed by combining the region adjacent to the left side of the first luminance block and the region adjacent to the top of the first luminance block.
Alternatively, in this embodiment, the width of the area adjacent above the first luminance block may be greater than the width of the first luminance block, and/or the height of the area adjacent to the left side of the first luminance block may be greater than the height of the first luminance block.
Alternatively, as shown in fig. 7d to 7f, when the reference region is a chrominance reference region and the chrominance block in which the chroma sample to be predicted is located is the first chrominance block, or the co-located chrominance block in which the co-located chrominance sample of the luminance sample to be predicted is located is the first chrominance block: the first reference region shown in fig. 7d includes the region adjacent to the left side of the first chrominance block but does not include the region adjacent to the top of the first chrominance block; the second reference region shown in fig. 7e includes the region adjacent to the top of the first chrominance block but does not include the region adjacent to the left side of the first chrominance block; and the third reference region shown in fig. 7f includes at least a part of the region formed by combining the region adjacent to the left side of the first chrominance block and the region adjacent to the top of the first chrominance block.
Alternatively, in this embodiment, the width of the region adjacent above the first chrominance block may be greater than the width of the first chrominance block, and/or the height of the region adjacent to the left side of the first chrominance block may be greater than the height of the first chrominance block.
Alternatively, in the present embodiment, the above-described luminance reference regions and chrominance reference regions correspond to each other in position and are co-located regions. For example, the first reference region, the second reference region and the third reference region may correspond to a first luminance reference region, a second luminance reference region and a third luminance reference region, and optionally, the first reference region, the second reference region and the third reference region may also correspond to a first chrominance reference region, a second chrominance reference region and a third chrominance reference region. Thus, the first luminance reference region and the first chrominance reference region are co-located regions, the second luminance reference region and the second chrominance reference region are co-located regions, and the third luminance reference region and the third chrominance reference region are co-located regions.
YUV is a color encoding space; terms such as Y'UV, YUV, YCbCr and YPbPr are all commonly referred to as YUV, and their scopes overlap. Y represents luminance, i.e., the gray-scale value, and U and V represent chrominance.
Alternatively, referring to fig. 8, the left side of fig. 8 shows the Y image of a color image that the processing device needs to encode or decode, and the right side of fig. 8 shows the U/V image corresponding to the Y image. Optionally, the reference region a and the reference region b are located in the Y image, and the reference region a' and the reference region b' are located in the U/V image. Since the position of reference region a is the same as that of reference region a', and the position of reference region b is the same as that of reference region b', reference region a and reference region a' are co-located regions, and reference region b and reference region b' are co-located regions.
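For a concrete (hedged) example of co-located regions, the sketch below maps a region in the Y image to its co-located region in the U/V image, assuming 4:2:0 chroma subsampling; the subsampling factors are assumptions about the chroma format and are not stated above.

def co_located_chroma_region(luma_region, sub_x=2, sub_y=2):
    # The co-located U/V region covers the same picture area at half resolution (4:2:0)
    return {
        "x": luma_region["x"] // sub_x,
        "y": luma_region["y"] // sub_y,
        "w": luma_region["w"] // sub_x,
        "h": luma_region["h"] // sub_y,
    }

region_a = {"x": 64, "y": 32, "w": 4, "h": 16}        # reference region a in the Y image
region_a_prime = co_located_chroma_region(region_a)   # reference region a' in the U/V image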
Mode three: acquiring or determining according to a preset flag;
optionally, in this embodiment, the preset flag is a flag carried in the bitstream transmitted to the decoding end and used to indicate the reference region that needs to be used for predicting the sample to be predicted.
Alternatively, the flags may be parsed from the encoded bitstream.
Optionally, the acquiring or determining the reference area according to the preset flag may include:
and acquiring or determining, according to the value of the flag, the reference region to which that value is mapped.
Optionally, after the processing device as the encoding end determines the above reference area, the processing device may encode a flag indicating the reference area into the bitstream transmitted to the decoding end, so that after receiving the bitstream transmitted from the encoding end, the processing device as the decoding end may parse the flag from the bitstream, and further acquire or determine at least one reference area according to the flag.
Alternatively, the processing device as an encoder may indicate the above-described different reference regions by encoding flags with different numbers of bits. For example, the processing device may encode a flag having a first number of bits to instruct the processing device as the decoder to acquire or determine the at least one reference region within the first reference region, and may encode a flag having a second number of bits to instruct the processing device as the decoder to acquire or determine the at least one reference region within the second reference region. Optionally, the first number of bits is less than the second number of bits; optionally, the second number of bits is less than the first number of bits.
Alternatively, the processing device as an encoder may also indicate the above-described different reference regions by encoding flags with different values. For example, the processing device may encode a flag having a first value to instruct the processing device as the decoder to acquire or determine the at least one reference region within the first reference region, and may encode a flag having a second value to instruct the processing device as the decoder to acquire or determine the at least one reference region within the second reference region. Optionally, the first value is different from the second value.
Alternatively, when the processing device as the encoder encodes the above-described flag to indicate the first reference region, the second reference region or the third reference region, it may indicate a reference region with a high probability of use with a smaller number of bits and indicate a reference region with a lower probability of use with a larger number of bits. For example, in most scenarios the probability that the processing device uses the third reference region to acquire or determine the reference region for predicting the sample to be predicted is higher than the probability that it uses the first reference region or the second reference region, so the processing device may encode the flag indicating the third reference region with fewer bits and encode the flag indicating the first reference region or the second reference region with more bits.
In the present embodiment, the processing apparatus encodes the flag indicating the reference area through the above-described process, and can make the bit stream transmitted to the decoder use a smaller number of bits as a whole, thereby effectively improving the compression rate for the flag.
Alternatively, after determining that the luminance reference region used to acquire or determine the reference region is of the type shown in any one of fig. 7a to 7c in the process of performing chroma prediction with respect to the samples to be predicted, the processing device as an encoder may transmit a luminance reference region flag (flag luma_ref_region_flag) to indicate one of the type of the first reference region shown in fig. 7a, the type of the second reference region shown in fig. 7b, or the type of the third reference region shown in fig. 7 c. On the decoding side, the processing device as a decoder may obtain the value corresponding to the flag by parsing the flag luma_ref_region_flag, so as to determine the type of the luminance reference region used for obtaining or determining the reference region based on the value. Optionally, the mapping relationship between the value of the flag luma_ref_region_flag and the type of the luminance reference region and the binarization of the flag luma_ref_region_flag is as follows in table 1:
TABLE 1
Value of luma_ref_region_flag | Luminance reference region type | Binarization
0 | Third luminance reference region (fig. 7c) | 0
1 | First luminance reference region (fig. 7a) | 10
2 | Second luminance reference region (fig. 7b) | 11
As shown in fig. 7a to 7c, the reference region shown in fig. 7a is the first luminance reference region, the reference region shown in fig. 7b is the second luminance reference region, and the reference region shown in fig. 7c is the third luminance reference region. Alternatively, if the third luminance reference region as shown in fig. 7c is used, the encoder sets the value of the flag luma_ref_region_flag to 0 and packs the flag luma_ref_region_flag into the bitstream after binarizing it to 0. And/or, if the first luminance reference region as shown in fig. 7a is used, the encoder sets the value of the flag luma_ref_region_flag to 1 and packs the flag luma_ref_region_flag into the bitstream after binarizing it to 10. And/or, if the second luminance reference region as shown in fig. 7b is used, the encoder sets the value of the flag luma_ref_region_flag to 2 and packs the flag luma_ref_region_flag into the bitstream after binarizing it to 11.
For the decoder, if the binary value of the flag luma_ref_region_flag is 0, which is obtained by parsing the bitstream, the value of the flag luma_ref_region_flag is 0, which indicates that the reference region is the third luminance reference region shown in fig. 7 c; and/or, if the binary value of the flag luma_ref_region_flag is 10, the value of the flag luma_ref_region_flag is 1, which indicates that the reference region is the first luminance reference region shown in fig. 7 a; and/or, if the binary value of the flag luma_ref_region_flag is 11, the value of the flag luma_ref_region_flag is 2, which indicates that the reference region is the second luminance reference region shown in fig. 7 b.
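A minimal sketch of the binarization implied by Table 1 follows: the more probable third reference region gets the single bit "0", while the first and second reference regions get two bits each. The BitReader helper and the bypass-style (non-context-coded) handling are illustrative assumptions.

ENC_TABLE = {0: "0", 1: "10", 2: "11"}   # luma_ref_region_flag value -> binarization

def encode_luma_ref_region_flag(value):
    return ENC_TABLE[value]

class BitReader:
    def __init__(self, bits):
        self.bits, self.pos = bits, 0
    def read_bit(self):
        bit = self.bits[self.pos]
        self.pos += 1
        return bit

def decode_luma_ref_region_flag(reader):
    if reader.read_bit() == 0:
        return 0                                  # third luminance reference region
    return 1 if reader.read_bit() == 0 else 2     # first / second luminance reference region

assert decode_luma_ref_region_flag(BitReader([1, 0])) == 1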
Alternatively, the value of the flag luma_ref_region_flag, the type of the luminance reference region and the binarization of the flag luma_ref_region_flag are not fixed, and their mapping relationship may also be as shown in the following table 2:
TABLE 2
Optionally, the number of binarized bits of the flag luma_ref_region_flag corresponding to the third reference region type in the above table 1 and the above table 2 is smaller than the number of binarized bits of the flag luma_ref_region_flag corresponding to the first reference region type and the second reference region type. The reason for this is that, in the case of sufficient calculation power, since the prediction processing by the third reference region type is better in accuracy than the prediction processing by the first reference region type and the second reference region type, the probability of use of the third reference region type is larger than the probability of use of the other two reference region types. Setting the number of binarized bits of the flag luma_ref_region_flag corresponding to the third reference region type with high probability to be smaller than the number of binarized bits of the flag luma_ref_region_flag corresponding to the other reference region types can reduce the number of encoding bits under the condition of sufficient calculation power.
Alternatively, after determining that the Chroma reference region used to acquire or determine the reference region is of the type shown in any one of fig. 7d to 7f in the process of Chroma prediction for the sample to be predicted, the processing device as an encoder may send a Chroma reference region flag (flag chroma_ref_region_flag) to indicate one of the type of the first reference region shown in fig. 7d, the type of the second reference region shown in fig. 7e, or the type of the third reference region shown in fig. 7 f. On the decoding side, the processing device as a decoder can obtain the value corresponding to the flag by analyzing the flag chroma_ref_region_flag, so as to determine the type of the Chroma reference region used for acquiring or determining the reference region based on the value. Optionally, the mapping relationship between the value of the flag chroma_ref_region_flag, the type of the Chroma reference region and the binarization of the flag chroma_ref_region_flag is as follows in table 3:
TABLE 3
Value of chroma_ref_region_flag | Chrominance reference region type | Binarization
0 | Third chrominance reference region (fig. 7f) | 0
1 | First chrominance reference region (fig. 7d) | 10
2 | Second chrominance reference region (fig. 7e) | 11
As shown in fig. 7d to 7f, the reference region shown in fig. 7d is the first chrominance reference region, the reference region shown in fig. 7e is the second chrominance reference region, and the reference region shown in fig. 7f is the third chrominance reference region. Alternatively, if the third chrominance reference region as shown in fig. 7f is used, the encoder sets the value of the flag chroma_ref_region_flag to 0 and packs the flag chroma_ref_region_flag into the bitstream after binarizing it to 0. And/or, if the first chrominance reference region as shown in fig. 7d is used, the encoder sets the value of the flag chroma_ref_region_flag to 1 and packs the flag chroma_ref_region_flag into the bitstream after binarizing it to 10. And/or, if the second chrominance reference region as shown in fig. 7e is used, the encoder sets the value of the flag chroma_ref_region_flag to 2 and packs the flag chroma_ref_region_flag into the bitstream after binarizing it to 11.
For the decoder, if the binary value of the flag chroma_ref_region_flag is 0, which is obtained by parsing the bitstream, the value of the flag chroma_ref_region_flag is 0, which indicates that the reference region is the third Chroma reference region shown in fig. 7 f; and/or, if the binary value of the flag chroma_ref_region_flag is 10, the value of the flag chroma_ref_region_flag is 1, which indicates that the reference region is the first Chroma reference region shown in fig. 7 d; and/or, if the binary value of the flag chroma_ref_region_flag is 11, the value of the flag chroma_ref_region_flag is 2, which indicates that the reference region is the second Chroma reference region shown in fig. 7 e.
Alternatively, the value of the flag chroma_ref_region_flag, the type of the Chroma reference region and the binarization of the flag chroma_ref_region_flag are not fixed, and their mapping relationship may also be as shown in the following table 4:
TABLE 4
Optionally, the number of binarized bits of the flag chroma_ref_region_flag corresponding to the third reference region type in the above table 3 and the above table 4 is smaller than the number of binarized bits of the flag chroma_ref_region_flag corresponding to the first reference region type and the second reference region type. The reason for this is that, in the case of sufficient calculation power, since the prediction processing by the third reference region type is better in accuracy than the prediction processing by the first reference region type and the second reference region type, the probability of use of the third reference region type is larger than the probability of use of the other two reference region types. Setting the number of binarized bits of the flag chroma_ref_region_flag corresponding to the third reference region type with high probability to be smaller than the number of binarized bits of the flag chroma_ref_region_flag corresponding to the other reference region types can reduce the number of encoding bits under the condition of sufficient calculation power.
Similarly, in the process of performing luminance prediction for the sample to be predicted, the processing device as the encoder may use a luminance reference region type similar to the above luminance reference region types, a flag similar to the above flag luma_ref_region_flag, and a mapping relationship similar to Table 1 or Table 2 above; it may also use a chrominance reference region type similar to the above chrominance reference region types, a flag similar to the above flag chroma_ref_region_flag, and a mapping relationship similar to Table 3 or Table 4 above.
Alternatively, a flag ref_region_select_flag may indicate whether the luminance reference region type or the chrominance reference region type is employed. For example, if the value of the flag ref_region_select_flag is 1, the luminance reference region type is employed, and/or if the value of the flag ref_region_select_flag is 0, the chrominance reference region type is employed.
Mode four: acquiring or determining according to the template area;
optionally, the processing device as a decoder may also obtain or determine at least one reference region from among the template regions.
Optionally, in this embodiment, the above-mentioned flag may further have a third value in addition to the first value and/or the second value. In one embodiment, the first value and the second value are each one of 0, 1 and 2, and the third value is a value other than the first value and the second value.
In one embodiment, the first value and the second value are one value in a first range of values, and the third value is a value outside the first range of values.
Optionally, the fourth mode of acquiring or determining the reference area according to the template area may include:
and if the value is the third value, determining the reference area according to the template area.
Alternatively, the processing device at the encoding end may instruct the processing device at the decoding end to acquire or determine at least one reference region from the first reference region, the second reference region or the third reference region based on the value of the flag by encoding a flag having the first value or the second value, and may instruct the processing device at the decoding end to acquire or determine the at least one reference region directly using the template region by encoding a flag having the third value. That is, when the flag with the third value encoded by the encoder is parsed from the bitstream, the processing device as the decoder directly and autonomously derives the template region at the local end, so as to further acquire or determine at least one reference region from the template region for performing the prediction operation on the sample to be predicted.
Optionally, in the process of performing chroma prediction on the sample to be predicted, after parsing the received bitstream and obtaining a flag with the third value, the processing device as the decoder determines a template region around the sample to be predicted, derives the weight coefficients of the CCCM model for the chroma samples in the template region using each of the three reference region types (i.e., the types of the first reference region, the second reference region and the third reference region), thereby obtaining the weight coefficients corresponding to the different reference region types, and calculates the predicted values of the chroma samples in the template region according to the different weight coefficients. Finally, the processing device determines the optimal reference region type by analyzing the prediction error of the predicted values under each reference region type, and sets the optimal reference region type as the reference region type of the sample to be predicted.
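The selection logic just described can be sketched as follows. The helpers derive_weights and predict_template are hypothetical placeholders standing in for the CCCM weight derivation and the template prediction; only the per-type error comparison is shown, and SAD is used as the error measure purely for illustration.

def select_reference_region_type(template_samples, reconstructed, candidate_types,
                                 derive_weights, predict_template):
    best_type, best_err = None, float("inf")
    for region_type in candidate_types:            # e.g. ("first", "second", "third")
        weights = derive_weights(region_type, template_samples)
        predicted = predict_template(weights, template_samples)
        err = sum(abs(p - r) for p, r in zip(predicted, reconstructed))  # SAD over the template
        if err < best_err:
            best_type, best_err = region_type, err
    return best_type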
Alternatively, the template area determined by the processing device around the sample to be predicted may be: the left and/or upper adjacent regions of the sample to be predicted.
Optionally, in the chroma prediction process for the sample to be predicted, the processing device as the encoder uses a luminance reference region derivation flag luma_ref_region_deriv_flag to instruct the decoding end to determine the type of the luminance reference region according to the template region, thereby determining the luminance reference region. For example, when the value of the luminance reference region derivation flag luma_ref_region_deriv_flag is 1, it indicates that the type of the luminance reference region may be derived from the template region, and/or when the value of the luminance reference region derivation flag luma_ref_region_deriv_flag is 0, it indicates that deriving the type of the luminance reference region from the template region is prohibited.
Alternatively, as shown in fig. 9a to 9d, in a possible embodiment in which the processing device performs chroma prediction on the chroma sample to be predicted, the template region may be an area formed by one or more adjacent rows of luminance samples above a first luminance block (the co-located luminance block of the chrominance block in which the chroma sample to be predicted is located) and one or more adjacent columns of luminance samples to its left. The template region is a luminance template region.
Alternatively, the template region may be L-shaped, and the right boundary of the template region is aligned with the right boundary of the first luminance block described above, and the lower boundary of the template region is aligned with the lower boundary of the first luminance block. The number of rows above the template area and the number of columns to the left may or may not be the same.
Alternatively, as shown in fig. 10a to 10d, in a possible embodiment in which the processing device performs chroma prediction on the chroma sample to be predicted, the template region may be an area formed by one or more adjacent rows of chrominance samples above a first chrominance block (the chrominance block in which the chroma sample to be predicted is located) and one or more adjacent columns of chrominance samples to its left. The template region is a chrominance template region.
Alternatively, the template region may be L-shaped, with the right boundary of the template region aligned with the right boundary of the first chroma block described above and the lower boundary of the template region aligned with the lower boundary of the first chroma block. The number of rows above the template area and the number of columns to the left may or may not be the same.
Alternatively, a portion of the L-shaped template region is located above the first luminance block or the first chrominance block and another portion of the L-shaped template region is located to the left of the first luminance block or the first chrominance block; therefore, the pixel characteristics of the L-shaped template region are similar to those of the first luminance block/first chrominance block, so that a reference region suitable for the L-shaped template region is also the most suitable for the first luminance block/first chrominance block.
Alternatively, the template region may be a block region. For example, the template region may be a block region located on a diagonal of the first luminance block or the first chrominance block. Optionally, the diagonal line is a diagonal line connecting the upper left corner and the lower right corner of the first luminance block/first chrominance block.
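A sketch of how an L-shaped template region of this kind could be enumerated is given below for a block at (x0, y0) with size w x h; the choice of two rows above and two columns to the left is illustrative only.

def l_shaped_template(x0, y0, w, h, rows=2, cols=2):
    # Rows above the block; the right boundary is aligned with the block's right boundary
    above = [(x, y) for y in range(y0 - rows, y0) for x in range(x0 - cols, x0 + w)]
    # Columns to the left of the block; the lower boundary is aligned with the block's lower boundary
    left = [(x, y) for y in range(y0, y0 + h) for x in range(x0 - cols, x0)]
    return above + left

template_positions = l_shaped_template(x0=64, y0=32, w=16, h=16)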
Similarly, in a possible embodiment in which the processing device performs luminance prediction on the luminance sample to be predicted, the template region may be an area formed by one or more adjacent rows of luminance samples above the first luminance block (the luminance block in which the luminance sample to be predicted is located) and one or more adjacent columns of luminance samples to its left; this template region is a luminance template region. In another possible embodiment in which the processing device performs luminance prediction on the luminance sample to be predicted, the template region may be an area formed by one or more adjacent rows of chrominance samples above a first chrominance block (the co-located chrominance block of the luminance block in which the luminance sample to be predicted is located) and one or more adjacent columns of chrominance samples to its left; this template region is a chrominance template region.
Mode five: if the intra-frame prediction mode adopted by the co-located sample of the sample to be predicted and/or an adjacent sample of the sample to be predicted is an angular intra-frame prediction mode, acquiring or determining according to the intra-frame angle corresponding to the angular intra-frame prediction mode.
Optionally, the processing device may further acquire or determine at least one reference region for performing the prediction operation on the sample to be predicted according to the intra-frame prediction mode used by the co-located sample of the sample to be predicted and/or an adjacent sample of the sample to be predicted. That is, when the intra-frame prediction mode adopted by the co-located sample and/or the adjacent sample of the sample to be predicted is an angular intra-frame prediction mode, the processing device may directly acquire or determine, according to the intra-frame angle corresponding to the angular intra-frame prediction mode, at least one reference region for predicting the sample to be predicted.
Optionally, acquiring or determining the reference region according to intra angle may include:
and acquiring or determining the reference area according to the angle range of the intra-frame angle.
Optionally, as indicated by the solid arrows in fig. 11, the intra-frame angles corresponding to the intra-frame prediction modes used by the above co-located samples and/or adjacent samples include: intra-frame angle 2 (INTRA_ANGULAR2) to intra-frame angle 66 (INTRA_ANGULAR66). Optionally, for wide-angle modes (wide-angle modes are applied only to non-square blocks), the angular intra-frame prediction modes further include intra-frame angle -14 (INTRA_ANGULAR-14) to intra-frame angle -1 (INTRA_ANGULAR-1), and intra-frame angle 67 (INTRA_ANGULAR67) to intra-frame angle 80 (INTRA_ANGULAR80).
Optionally, if a sample or an image block adopts an angular intra-frame prediction mode within a first angle range, the reference region type of the sample to be predicted is a reference region type X; and if the sample or the image block adopts an angular intra-frame prediction mode within a second angle range, the reference region type of the sample to be predicted is a reference region type Y. Optionally, the reference region type X and the reference region type Y are different.
Optionally, in the process of performing chroma prediction on the sample to be predicted, if the processing device determines that the luminance block in which the co-located luminance sample of the sample to be predicted or an adjacent luminance sample of the sample to be predicted is located adopts an angular intra-frame prediction mode, and the intra-frame angle corresponding to that angular intra-frame prediction mode is greater than 2 and less than or equal to 18, the processing device may acquire or determine at least one reference region directly from the left neighboring region of the co-located luminance sample or the adjacent luminance sample.
Optionally, if the processing device determines that the luminance block in which the co-located luminance sample of the sample to be predicted or an adjacent luminance sample of the sample to be predicted is located adopts an angular intra-frame prediction mode, and the intra-frame angle corresponding to that angular intra-frame prediction mode is greater than or equal to 50 and less than 66, the processing device may acquire or determine at least one reference region directly from the upper neighboring region of the co-located luminance sample or the adjacent luminance sample.
Optionally, if the processing device determines that the luminance block in which the co-located luminance sample of the sample to be predicted or an adjacent luminance sample of the sample to be predicted is located adopts an angular intra-frame prediction mode, and the intra-frame angle corresponding to that angular intra-frame prediction mode is greater than 21 and less than 49, the processing device may acquire or determine at least one reference region directly from the region formed by stitching the left neighboring region and the upper neighboring region of the co-located luminance sample or the adjacent luminance sample.
Optionally, as shown in Table 5 below, in the process of performing chroma prediction on the sample to be predicted, if the luminance block in which the co-located luminance sample corresponding to the sample to be predicted is located adopts an angular intra-frame prediction mode, and the intra-frame angle of the angular intra-frame prediction mode is intra-frame angle 50 (INTRA_ANGULAR50) to intra-frame angle 66 (INTRA_ANGULAR66), the processing device may determine that the type of the luminance reference region of the sample to be predicted is the type of the second reference region. If the intra-frame angle of the angular intra-frame prediction mode is intra-frame angle 2 (INTRA_ANGULAR2) to intra-frame angle 18 (INTRA_ANGULAR18), the type of the luminance reference region of the sample to be predicted is the type of the first reference region. If the intra-frame angle of the angular intra-frame prediction mode is intra-frame angle 21 (INTRA_ANGULAR21) to intra-frame angle 49 (INTRA_ANGULAR49), the type of the luminance reference region of the sample to be predicted is the type of the third reference region.
TABLE 5
Intra-frame angle of the angular intra-frame prediction mode | Luminance reference region type
INTRA_ANGULAR2 to INTRA_ANGULAR18 | First reference region
INTRA_ANGULAR21 to INTRA_ANGULAR49 | Third reference region
INTRA_ANGULAR50 to INTRA_ANGULAR66 | Second reference region
Optionally, as shown in Table 6 below, for the wide-angle modes, if the luminance block in which the co-located luminance sample corresponding to the sample to be predicted is located adopts an angular intra-frame prediction mode, and the intra-frame angle of the angular intra-frame prediction mode is INTRA_ANGULAR67 to INTRA_ANGULAR80, the type of the luminance reference region of the chrominance block to be predicted is the type of the second reference region. If the intra-frame angle of the angular intra-frame prediction mode is INTRA_ANGULAR-14 to INTRA_ANGULAR-1, the type of the luminance reference region of the chrominance block to be predicted is the type of the first reference region.
TABLE 6
Intra-frame angle of the angular intra-frame prediction mode (wide-angle) | Luminance reference region type
INTRA_ANGULAR-14 to INTRA_ANGULAR-1 | First reference region
INTRA_ANGULAR67 to INTRA_ANGULAR80 | Second reference region
Optionally, if the intra-frame angle of the angular intra-frame prediction mode used by the luminance block in which the co-located luminance sample is located is INTRA_ANGULAR50 to INTRA_ANGULAR66, it may be determined that the predicted values of the luminance samples in the luminance block are derived from the boundary pixels above the luminance block; in this case, the processing device may determine that the chrominance value of the sample to be predicted has a high correlation with the region above it. If the intra-frame angle of the angular intra-frame prediction mode used by the luminance block in which the co-located luminance sample is located is INTRA_ANGULAR2 to INTRA_ANGULAR18, it may be confirmed that the predicted values of the luminance samples in the luminance block are derived from the boundary pixels to the left of the luminance block; in this case, the processing device may determine that the chrominance value of the sample to be predicted has a high correlation with the region to its left. If the intra-frame angle of the angular intra-frame prediction mode used by the luminance block in which the co-located luminance sample is located is INTRA_ANGULAR21 to INTRA_ANGULAR49, it may be confirmed that the predicted values of the luminance samples in the luminance block are derived from both the boundary pixels to the left of and above the luminance block; in this case, the processing device may determine that the chrominance value of the sample to be predicted has a very high correlation with the region formed by combining and stitching the left region and the upper region.
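The mapping of Tables 5 and 6 can be sketched as the function below; the mode numbering follows the angular mode indices used above, and the handling of modes outside the listed ranges (e.g. modes 19 and 20, or planar and DC) is an assumption left open by the text.

def luma_ref_region_type_from_intra_mode(mode):
    if 2 <= mode <= 18 or -14 <= mode <= -1:
        return "first"     # left neighboring region (Table 5 / Table 6)
    if 50 <= mode <= 66 or 67 <= mode <= 80:
        return "second"    # upper neighboring region (Table 5 / Table 6)
    if 21 <= mode <= 49:
        return "third"     # stitched left + upper region (Table 5)
    return "third"         # assumed fallback for modes not listed above

assert luma_ref_region_type_from_intra_mode(10) == "first"
assert luma_ref_region_type_from_intra_mode(55) == "second"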
Optionally, the upper region includes the upper luminance neighboring region and its co-located chrominance neighboring region. The left region includes the left luminance neighboring region and its co-located chrominance neighboring region.
Optionally, when the processing device used as the encoder or the decoder performs luminance prediction, the operation means for acquiring or determining the reference area in the fifth mode is the same as that used when the processing device performs chrominance prediction, and will not be described in detail herein.
In this embodiment, when determining the reference region for performing chroma/luminance prediction on the sample to be predicted in the above-described various manners, if the feature of the sample to be predicted is similar to the feature of the left neighboring region only and dissimilar to the feature of the upper neighboring region, the processing apparatus may perform chroma/luminance prediction by using the above-described first reference region to obtain a better weight coefficient; if the features of the sample to be predicted are similar to the features of the upper adjacent region and dissimilar to the features of the left adjacent region, the processing device may obtain a better weight coefficient by using the second reference region to perform chromaticity/brightness prediction; and if the features of the sample to be predicted are similar to those of the upper adjacent region and those of the left adjacent region, the processing device can use the third reference region to obtain a better weight coefficient for chroma/brightness prediction.
Third embodiment
In this embodiment, the execution subject of the image processing method provided in the present application may still be the processing apparatus described above. In this embodiment, the fourth mode of acquiring or determining the reference area according to the template area may further include:
acquiring or determining a first weight parameter and a second weight parameter from the template area;
acquiring or determining a first sampling prediction result corresponding to the first weight parameter and a second sampling prediction result corresponding to the second weight parameter;
and if the first sampling prediction result is better than the second sampling prediction result, acquiring or determining the reference region as a fourth reference region according to the first weight parameter, and/or if the second sampling prediction result is better than the first sampling prediction result, acquiring or determining the reference region as a fifth reference region according to the second weight parameter. Optionally, the fourth reference region is one of: a region including the upper neighboring region of the template region but not the left neighboring region of the template region; a region including the left neighboring region of the template region but not the upper neighboring region of the template region; and a region formed by combining (e.g., stitching) the left neighboring region and the upper neighboring region of the template region. The fifth reference region is another one of these three regions. The template region may be a luminance template region or a chrominance template region.
Alternatively, the processing device as an encoder or as a decoder may each acquire or determine at least one reference region at the local end using the above-described template region. That is, the processing device first acquires or determines at least two weight parameters, namely the first weight parameter and the second weight parameter, from the template region; then, the processing device sequentially acquires or determines the first sampling prediction result corresponding to the first weight parameter and the second sampling prediction result corresponding to the second weight parameter, and compares the first sampling prediction result with the second sampling prediction result. If the first sampling prediction result is better than the second sampling prediction result, the processing device acquires or determines at least one reference region according to the first weight parameter; and/or if the second sampling prediction result is better than the first sampling prediction result, the processing device acquires or determines at least one reference region according to the second weight parameter.
Optionally, the acquiring or determining the reference area according to the first weight parameter may include:
and acquiring or determining the reference region according to the positional relationship between the region in the template region that corresponds to the first weight parameter and the template region.
Optionally, when determining that the first sampling prediction result corresponding to the first weight parameter is better than the second sampling prediction result corresponding to the second weight parameter, the processing device may first acquire or determine, according to the first weight parameter, the fourth reference region corresponding to the first weight parameter in the template region, and then acquire or determine at least one reference region according to the positional relationship between the fourth reference region and the template region. In an embodiment, if the first weight parameter is derived from a fourth reference region that includes the left neighboring region of the template region but not the upper neighboring region of the template region, the at least one reference region is a region including the left neighboring region of the first luminance block/first chrominance block and not including the upper neighboring region of the first luminance block/first chrominance block, as shown in fig. 7a and 7d. In an embodiment, if the first weight parameter is derived from a fourth reference region that includes the upper neighboring region of the template region but not the left neighboring region of the template region, the at least one reference region is a region including the upper neighboring region of the first luminance block/first chrominance block and not including the left neighboring region of the first luminance block/first chrominance block, as shown in fig. 7b and 7e. In an embodiment, if the first weight parameter is derived from a fourth reference region formed by combining (e.g., stitching) the left neighboring region and the upper neighboring region of the template region, the at least one reference region is a region formed by combining (e.g., stitching) the left neighboring region and the upper neighboring region of the first luminance block/first chrominance block, as shown in fig. 7c and 7f.
Optionally, the operation means of the processing device for acquiring or determining the reference area according to the second weight parameter is the same as the process for acquiring or determining the reference area according to the first weight parameter, which is not described herein.
Optionally, in this embodiment, the processing device at the encoding end and/or the decoding end may, in the process of chroma prediction for the sample to be predicted, determine a first weight parameter related to the template region by using a fourth luminance reference region and determine a second weight parameter related to the template region by using a fifth luminance reference region, so as to determine at least one luminance reference region according to the first weight parameter and the second weight parameter; optionally, the at least one luminance reference region is acquired or determined according to the positional relationship between the fourth luminance reference region and the template region. Optionally, the types of the fourth luminance reference region and the fifth luminance reference region are different from each other.
Alternatively, as shown in fig. 12a to 12c, as the processing device of the encoder and/or decoder, in the process of performing chroma prediction for a sample to be predicted, assuming that a template region is a luminance template region, a luminance reference region of the template region is located on the left side and/or above and adjacent to the template region, the processing device may determine the three types of weight coefficients by respectively applying the three types of luminance reference regions to the template region.
In one embodiment, the three types of luminance reference regions are left reference regions only to the left of the template region as shown in fig. 12a, upper reference regions only above the template region as shown in fig. 12b, and/or regions spliced by a combination of the left and upper regions of the template region as shown in fig. 12 c.
In another embodiment, the three types of luminance reference regions are a first reference region, a second reference region, and a third reference region. The first reference region includes a region adjacent to the left of the template region but not including a region adjacent to the upper of the template region, the second reference region includes a region adjacent to the upper of the template region but not including a region adjacent to the left of the template region, and the third reference region includes a region of at least a part of a region formed by combining a region adjacent to the left of the template region and a region adjacent to the upper of the template region.
After the processing device determines the weight coefficients corresponding to the three types, the processing device calculates predicted values of the co-located chroma samples of the luminance samples in the template region under the weight coefficients corresponding to the three types, and compares the predicted values with reconstructed values of the co-located chroma samples, thereby using the luminance reference region type corresponding to the weight coefficient with the smallest error between the predicted values and the reconstructed values as the type of the luminance reference region of the first luminance block.
Alternatively, assuming that the processing apparatus determines weight coefficients ca10 to ca16 from the luminance reference region shown in fig. 12a, weight coefficients ca20 to ca26 from the luminance reference region shown in fig. 12b, and weight coefficients ca30 to ca36 from the luminance reference region shown in fig. 12c, the processing apparatus then calculates the predicted value PredCx1 of the co-located chroma sample of a luminance sample in the template region when the weight coefficients ca10 to ca16 are employed, the predicted value PredCx2 when the weight coefficients ca20 to ca26 are employed, and the predicted value PredCx3 when the weight coefficients ca30 to ca36 are employed. Thus, after obtaining the predicted values of the co-located chroma samples corresponding to each of the plurality of luminance samples in the template region, the processing apparatus compares the errors between the predicted values and the reconstructed values of the co-located chroma samples, and takes the luminance reference region type corresponding to the predicted value with the smallest error as the luminance reference region type of the first luminance block. For example, for a plurality of co-located chroma samples Xn, n = 1, 2, ..., num, where num is the number of co-located chroma samples, the processing apparatus compares the errors between the predicted values PredCnx1 to PredCnx3 and the reconstructed value recCnx of each co-located chroma sample. For example, the sum of absolute differences (SAD) between the predicted values PredCnxi (i = 1, 2, 3) and the reconstructed values recCnx of the plurality of co-located chroma samples Xn is compared, and the luminance reference region type corresponding to the predicted values with the smallest SAD is selected as the luminance reference region type of the first luminance block.
Alternatively, as shown in fig. 13a to 13c, in performing chroma prediction for a sample to be predicted, assuming that a template region is a chroma template region, a chroma reference region of the template region is located on the left side and/or above and adjacent to the template region, the processing apparatus may determine the weight coefficients corresponding to the three types by applying three types of chroma reference regions to the template region, respectively (i.e., a left reference region only on the left side of the template region as shown in fig. 13a, an upper reference region only above the template region as shown in fig. 13b, and/or a region combined and spliced by the left and upper regions of the template region as shown in fig. 13 c).
After the processing device determines the weight coefficients corresponding to the three types, the processing device calculates predicted values of chroma sampling in the template area under the weight coefficients corresponding to the three types, and compares the predicted values with reconstructed values of the chroma sampling, so that the type of the chroma reference area corresponding to the weight coefficient with the smallest error between the predicted values and the reconstructed values is used as the type of the chroma reference area of the first chroma block.
Similarly, as a processing device of an encoder and/or a decoder, luminance prediction for the sample to be predicted may be performed in a manner similar to the above-described chrominance prediction process, which a person skilled in the art will understand how to implement after reading the present application; it is therefore not repeated here.
Alternatively, assuming that the processing apparatus determines weight coefficients ca10 to ca16 from the chrominance reference region shown in fig. 13a, weight coefficients ca20 to ca26 from the chrominance reference region shown in fig. 13b, and weight coefficients ca30 to ca36 from the chrominance reference region shown in fig. 13c, the processing apparatus then calculates the predicted value PredCx1 of a chroma sample in the template region when the weight coefficients ca10 to ca16 are employed, the predicted value PredCx2 when the weight coefficients ca20 to ca26 are employed, and the predicted value PredCx3 when the weight coefficients ca30 to ca36 are employed. Thus, after obtaining the predicted values of the plurality of chroma samples in the template region, the processing apparatus compares the errors between the predicted values and the reconstructed values of the chroma samples, and takes the chrominance reference region type corresponding to the predicted value with the smallest error as the chrominance reference region type of the first chrominance block. For example, for a plurality of chroma samples Xn, n = 1, 2, ..., num, where num is the number of chroma samples, the processing apparatus compares the errors between the predicted values PredCnx1 to PredCnx3 of the chroma sample Xn and the reconstructed value recCnx of the chroma sample Xn. For example, the sum of absolute differences (SAD) between the predicted values PredCnxi (i = 1, 2, 3) and the reconstructed values recCnx of the plurality of chroma samples Xn is compared, and the chrominance reference region type corresponding to the predicted values with the smallest SAD is selected as the chrominance reference region type of the first chrominance block.
Optionally, in this embodiment, the third mode of acquiring or determining the reference area according to the preset flag may further include:
acquiring or determining an intra-frame prediction mode corresponding to the mark;
and determining or obtaining a prediction result of the sample to be predicted according to the intra-frame prediction mode.
Optionally, in the process of performing chroma prediction on chroma samples to be predicted, the processing device serving as the encoder may use the luma reference region derivation flag luma_ref_region_deriv_flag1 to instruct the processing device at the decoding end to determine the type of the luma reference region from the luma block in which the co-located luma samples corresponding to the chroma samples to be predicted are located, or from the neighboring luma blocks of that luma block, so as to determine the luma reference region of the chroma samples to be predicted that require chroma prediction.
Alternatively, the processing device at the encoding end may use a different flag to instruct the processing device at the decoding end how to derive the reference region type.
Alternatively, when the value of the luma reference region derivation flag luma_ref_region_deriv_flag1 is 1, the processing device at the decoding end is instructed to determine the luma block in which the co-located luma sample corresponding to the chroma sample to be predicted is located, and to derive the type of the luma reference region of the chroma sample to be predicted using the intra prediction mode of that luma block. In an embodiment, if the intra prediction mode of the luma block is the angular intra prediction mode X1, the luma reference region type of the chroma sample to be predicted is the reference region type X1. The reference region type X1 is one of FIG. 7a to FIG. 7c. Alternatively, the angular intra prediction mode X1 and the reference region type X1 correspond according to Table 5. For example, if the angular intra prediction mode X1 is INTRA_ANGULAR2 to INTRA_ANGULAR18, the reference region type X1 is the first reference region; and/or, if the angular intra prediction mode X1 is INTRA_ANGULAR50 to INTRA_ANGULAR66, the reference region type X1 is the second reference region; and/or, if the angular intra prediction mode X1 is INTRA_ANGULAR21 to INTRA_ANGULAR49, the reference region type X1 is the third reference region.
Alternatively, when the value of the luma reference region derivation flag luma_ref_region_deriv_flag1 is 2, the processing device at the decoding end is instructed to determine a neighboring luma block of the luma block in which the co-located luma sample corresponding to the sample to be predicted is located, and to derive the type of the luma reference region of the sample to be predicted using the intra prediction mode of that neighboring luma block. In an embodiment, if the intra prediction mode of the neighboring luma block is the angular intra prediction mode X2, the luma reference region type of the chroma sample to be predicted is the reference region type X2. The reference region type X2 corresponds to one of the following regions: the region adjacent to the left of the neighboring luma block, the region adjacent to the top of the neighboring luma block, or at least part of the region formed by combining the region adjacent to the left and the region adjacent to the top of the neighboring luma block. Alternatively, the angular intra prediction mode X2 and the reference region type X2 correspond according to Table 5. For example, if the angular intra prediction mode X2 is INTRA_ANGULAR2 to INTRA_ANGULAR18, the reference region type X2 is the first reference region; and/or, if the angular intra prediction mode X2 is INTRA_ANGULAR50 to INTRA_ANGULAR66, the reference region type X2 is the second reference region; and/or, if the angular intra prediction mode X2 is INTRA_ANGULAR21 to INTRA_ANGULAR49, the reference region type X2 is the third reference region. Optionally, the first reference region, the second reference region, and the third reference region are each one of the following regions: the region adjacent to the left of the neighboring luma block, the region adjacent to the top of the neighboring luma block, or at least part of the region formed by combining the two. Optionally, the first reference region, the second reference region, and the third reference region are different from each other.
Alternatively, when the value of the luma reference region derivation flag luma_ref_region_deriv_flag1 is 0, the processing device at the decoding end is instructed not to derive the type of the luma reference region from the luma block in which the above-mentioned co-located luma samples are located or from the neighboring luma blocks of that luma block.
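A minimal sketch of the decoder-side handling of luma_ref_region_deriv_flag1 described above is shown below; the mode ranges follow the Table 5 mapping quoted in the text, while the function names and return values are placeholders, not standard syntax.

```python
# Hypothetical decoder-side derivation of the luma reference region type from
# luma_ref_region_deriv_flag1 and the intra prediction mode of the co-located
# luma block (flag == 1) or its neighboring luma block (flag == 2).

FIRST_REGION, SECOND_REGION, THIRD_REGION = 1, 2, 3

def region_type_from_angular_mode(mode):
    # INTRA_ANGULAR2..18 -> first region, 50..66 -> second, 21..49 -> third (per Table 5 ranges)
    if 2 <= mode <= 18:
        return FIRST_REGION
    if 50 <= mode <= 66:
        return SECOND_REGION
    if 21 <= mode <= 49:
        return THIRD_REGION
    return None  # non-angular or unmapped modes are handled elsewhere

def derive_luma_ref_region(flag, colocated_luma_mode, neighbor_luma_mode):
    if flag == 1:
        return region_type_from_angular_mode(colocated_luma_mode)
    if flag == 2:
        return region_type_from_angular_mode(neighbor_luma_mode)
    return None  # flag == 0: derivation from these blocks is disabled
```

The chroma flag chroma_ref_region_deriv_flag1 described next can be handled with the same structure, substituting the co-located luma block of the chroma block and its co-located neighboring luma block.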
Optionally, in the process of performing chroma prediction on the chroma samples to be predicted, the processing device serving as the encoder may use a chroma reference region derivation flag chroma_ref_region_deriv_flag1 to instruct the processing device at the decoding end to determine the type of the chroma reference region from the chroma block in which the chroma samples to be predicted are located, or from the neighboring chroma blocks of that chroma block, so as to determine the chroma reference region of the chroma samples to be predicted that require chroma prediction.
Alternatively, the processing device at the encoding end may use a different flag to instruct the processing device at the decoding end how to derive the reference region type.
Alternatively, when the value of the chroma reference region derivation flag chroma_ref_region_deriv_flag1 is 1, the processing device at the decoding end is instructed to determine the chroma block in which the chroma samples to be predicted are located, and to derive the type of the chroma reference region of the chroma samples to be predicted using the intra prediction mode of the co-located luma block of that chroma block. In an embodiment, if the intra prediction mode of the co-located luma block is the angular intra prediction mode X1', the chroma reference region type of the chroma samples to be predicted is the reference region type X1'. The reference region type X1' is one of FIG. 7a to FIG. 7c. Alternatively, the angular intra prediction mode X1' and the reference region type X1' correspond according to Table 5. For example, if the angular intra prediction mode X1' is INTRA_ANGULAR2 to INTRA_ANGULAR18, the reference region type X1' is the first reference region; and/or, if the angular intra prediction mode X1' is INTRA_ANGULAR50 to INTRA_ANGULAR66, the reference region type X1' is the second reference region; and/or, if the angular intra prediction mode X1' is INTRA_ANGULAR21 to INTRA_ANGULAR49, the reference region type X1' is the third reference region.
Alternatively, when the value of the chroma reference region derivation flag chroma_ref_region_deriv_flag1 is 2, the processing device at the decoding end is instructed to determine the co-located neighboring luma block of a neighboring chroma block of the chroma block in which the chroma sample to be predicted is located, and to derive the type of the chroma reference region of the chroma sample to be predicted using the intra prediction mode of that co-located neighboring luma block. The neighboring chroma block and the co-located neighboring luma block are co-located blocks of each other. In an embodiment, if the intra prediction mode of the co-located neighboring luma block is the angular intra prediction mode X2', the chroma reference region type of the chroma samples to be predicted is the reference region type X2'. The reference region type X2' corresponds to one of the following regions: the region adjacent to the left of the co-located neighboring luma block, the region adjacent to the top of the co-located neighboring luma block, or at least part of the region formed by combining the region adjacent to the left and the region adjacent to the top of the co-located neighboring luma block. Alternatively, the angular intra prediction mode X2' and the reference region type X2' correspond according to Table 5. For example, if the angular intra prediction mode X2' is INTRA_ANGULAR2 to INTRA_ANGULAR18, the reference region type X2' is the first reference region; and/or, if the angular intra prediction mode X2' is INTRA_ANGULAR50 to INTRA_ANGULAR66, the reference region type X2' is the second reference region; and/or, if the angular intra prediction mode X2' is INTRA_ANGULAR21 to INTRA_ANGULAR49, the reference region type X2' is the third reference region. Optionally, the first reference region, the second reference region, and the third reference region are each one of the following regions: the region adjacent to the left of the co-located neighboring luma block, the region adjacent to the top of the co-located neighboring luma block, or at least part of the region formed by combining the two. Optionally, the first reference region, the second reference region, and the third reference region are different from each other.
Alternatively, when the value of the chroma reference region derivation flag chroma_ref_region_deriv_flag1 is 0, the processing device at the decoding end is instructed not to derive the type of the chroma reference region from the chroma block in which the chroma sample to be predicted is located or from the neighboring chroma blocks of that chroma block.
Optionally, based on the same principle, in the process of performing luma prediction on the luma samples to be predicted, the processing device serving as the encoder may use the luma reference region derivation flag luma_ref_region_deriv_flag1' to instruct the processing device at the decoding end to determine the type of the luma reference region from the luma block in which the luma samples to be predicted are located, or from the neighboring luma blocks of that luma block, so as to determine the luma reference region of the luma samples to be predicted that require luma prediction.
Optionally, based on the same principle, in the process of performing luma prediction on the luma samples to be predicted, the processing device serving as the encoder may use the chroma reference region derivation flag chroma_ref_region_deriv_flag1' to instruct the processing device at the decoding end to determine the type of the chroma reference region from the chroma block in which the co-located chroma samples corresponding to the luma samples to be predicted are located, or from the neighboring chroma blocks of that chroma block, so as to determine the chroma reference region of the luma samples to be predicted that require luma prediction.
In this embodiment, the processing device at the encoding end may determine the reference region used for deriving the weight coefficients of the CCCM model by evaluating the similarity between the adjacent luma/chroma region above the region in which the sample to be predicted is located and/or the adjacent luma/chroma region to the left of that region, and the luma/chroma region corresponding to (or co-located with) the region in which the sample to be predicted is located, so as to obtain better coding performance. After the processing device at the encoding end determines the reference region, it may instruct the processing device at the decoding end by transmitting a flag, so that the processing device at the decoding end autonomously derives which of the following three types of reference region is used for the weight coefficients of the CCCM model: the upper adjacent luma/chroma region, the left adjacent luma/chroma region, or the region formed by combining (e.g., splicing) the upper adjacent luma/chroma region with the left adjacent luma/chroma region.
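An illustrative encoder-side sketch of this selection is given below; the SAD similarity measure and the flag encoding (0 = above, 1 = left, 2 = combined) are assumptions for illustration only and do not reflect the actual bitstream syntax.

```python
# Compare each candidate reference region with the corresponding co-located
# region and return the flag value signalling the most similar candidate.

def sad(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def choose_and_signal_ref_region(candidates):
    """candidates maps a flag value to (candidate_region_samples, colocated_region_samples)."""
    costs = {flag: sad(cand, colo) for flag, (cand, colo) in candidates.items()}
    return min(costs, key=costs.get)  # flag value to be transmitted to the decoder

flag = choose_and_signal_ref_region({
    0: ([510, 512], [511, 513]),                        # above-adjacent region vs co-located region
    1: ([498, 505], [511, 513]),                        # left-adjacent region vs co-located region
    2: ([510, 512, 498, 505], [511, 513, 511, 513]),    # combined (spliced) region vs co-located region
})
print(flag)  # -> 0 in this toy example
```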
Fourth embodiment
In this embodiment, the execution subject of the image processing method provided in the present application may still be the processing apparatus described above. In this embodiment, the step S1 may include the following steps:
S11: determining at least one weight parameter from at least one reference region;
S12: determining or obtaining a prediction result of the sample to be predicted according to the weight parameter.
Optionally, after acquiring or determining at least one reference region of any of the first reference region, second reference region, or third reference region types, the processing device may further use the reference region to perform chroma prediction or luma prediction on the sample to be predicted. That is, the processing device first determines the weight parameters of at least one intra prediction model based on the obtained at least one reference region, and then uses the weight parameters together with the corresponding intra prediction model to calculate and thereby determine or obtain the chroma prediction result of the chroma prediction currently performed on the sample to be predicted, or the luma prediction result of the luma prediction currently performed on the sample to be predicted.
Optionally, during chroma prediction for the sample to be predicted, the processing device determines a set of samples for each reference region of the at least one reference region having the at least one reference region type. For the at least one reference region there is at least one set of samples, and each set of samples includes at least one sample point. The processing device then determines, for each set of samples, the values of the corresponding weights for which the loss function of that set of samples takes its minimum value. Finally, the most suitable reference region is determined from the at least one reference region having the at least one reference region type by comparing the distortion of the predicted values obtained under the corresponding weight values of each set of samples.
This is now described with reference to formula (1). For example, 7 chroma reference samples are determined for each of the first, second, and third luma reference regions, with predicted values predCa1 to predCa7 and reconstructed values recCa1 to recCa7, respectively (the reconstructed values of the 7 chroma reference samples are known values). In an embodiment, the first luma reference region may be the luma reference region shown in fig. 12a, the second luma reference region may be the luma reference region shown in fig. 12b, and the third luma reference region may be the luma reference region shown in fig. 12c. The predicted values predCa1 to predCa7 are expressions in terms of formula (1).
For example, 7 luma reference samples corresponding to the above 7 chroma reference samples are determined in the luma reference region shown in fig. 12a as the luma samples Ca1 to Ca7 in formula (1); then the luma samples Na1 to Na7, Sa1 to Sa7, Ea1 to Ea7, and Wa1 to Wa7 around the selected 7 luma samples, as well as the values of the nonlinear terms Pa1 to Pa7 and the bias terms Ba1 to Ba7, are determined.
Then, the processing device constructs the vector predC from predCa1 to predCa7, predC = [predCa1, predCa2, predCa3, predCa4, predCa5, predCa6, predCa7]^T, and the vector recC from recCa1 to recCa7, recC = [recCa1, recCa2, recCa3, recCa4, recCa5, recCa6, recCa7]^T. Alternatively, the processing device characterizes the magnitude of the error e by a loss function L, optionally L = ||recC − predC||^2, and obtains the values of the weights for which the loss function L takes its minimum value (that is, the derivative of the loss function L is 0). For example, the 7 luma samples Ca1 to Ca7, the surrounding luma samples Na1 to Na7, Sa1 to Sa7, Ea1 to Ea7 and Wa1 to Wa7, and the values of the nonlinear terms Pa1 to Pa7 and the bias terms Ba1 to Ba7 are substituted into formula (1); the predicted values predCa1 to predCa7 are thereby obtained as expressions in terms of formula (1), and these expressions are substituted into the loss function L = ||recC − predC||^2 to solve for the values of the weight coefficients in formula (1) at which the loss function takes its minimum value. These weight coefficients are the weight coefficients corresponding to the luma reference region shown in fig. 12a (also referred to as the weight parameter w1).
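A minimal sketch of this weight derivation is shown below, assuming formula (1) is a CCCM-style linear model predC = w · [C, N, S, E, W, P, B]. All sample values are placeholders, and numpy's least-squares solver stands in for setting the derivative of the loss L = ||recC − predC||^2 to zero.

```python
import numpy as np

# One row per reference sample (7 rows), one column per model term (7 columns):
# [Ca, Na, Sa, Ea, Wa, Pa, Ba] for each of the 7 luma reference samples.
# Pa is computed as (Ca*Ca + 512) >> 10 for 10-bit content; Ba is the bias 512.
A = np.array([
    [520, 518, 522, 519, 521, 264, 512],
    [530, 528, 531, 527, 532, 274, 512],
    [510, 511, 509, 512, 508, 254, 512],
    [540, 538, 541, 539, 542, 285, 512],
    [500, 502, 499, 501, 498, 244, 512],
    [525, 524, 526, 523, 527, 269, 512],
    [515, 514, 516, 513, 517, 259, 512],
], dtype=float)
recC = np.array([400, 410, 395, 420, 388, 405, 398], dtype=float)  # reconstructed chroma values

weights, *_ = np.linalg.lstsq(A, recC, rcond=None)  # weight coefficients minimising the loss
predC = A @ weights                                  # predicted chroma on the reference samples
```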
The weight coefficients corresponding to the luma reference region shown in fig. 12b (also referred to as the weight parameter w2) and the weight coefficients corresponding to the luma reference region shown in fig. 12c (also referred to as the weight parameter w3) can be determined in the same way. Next, by comparing the distortion of the predicted values under the corresponding weight coefficients of each set of samples (weight parameters w1 to w3), the most suitable reference region is determined from among the plurality of reference regions having the plurality of reference region types. For example, using the values of the weight parameters w1 to w3 corresponding to each set of samples, the predicted values of the plurality of co-located chroma samples corresponding to the plurality of luma samples in the luma reference regions shown in fig. 12a to 12c are calculated with formula (1). Then, the sum of absolute differences (SAD) and/or the Hadamard-transformed sum of absolute differences (SATD) between the predicted values and the reconstructed values of the co-located chroma samples are compared to determine the distortion of the predicted values of the co-located chroma samples corresponding to the luma reference regions shown in fig. 12a to 12c. Finally, among the luma reference regions shown in fig. 12a to 12c, the luma reference region corresponding to the smallest SAD/SATD value is determined.
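The sketch below illustrates one common way to realise the SAD/SATD comparison mentioned above, assuming a 4x4 residual block and a 4x4 Hadamard transform; it is not the normative distortion definition of any particular codec.

```python
import numpy as np

H4 = np.array([[1,  1,  1,  1],
               [1, -1,  1, -1],
               [1,  1, -1, -1],
               [1, -1, -1,  1]])

def sad(pred, rec):
    return np.abs(np.asarray(rec, dtype=float) - np.asarray(pred, dtype=float)).sum()

def satd4x4(pred, rec):
    diff = np.asarray(rec, dtype=float) - np.asarray(pred, dtype=float)
    transformed = H4 @ diff @ H4.T        # 2-D Hadamard transform of the 4x4 residual
    return np.abs(transformed).sum()      # sum of absolute transformed differences

# The reference region (w1, w2 or w3) with the smallest sad()/satd4x4() value
# between predicted and reconstructed co-located chroma samples is selected.
```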
For example, if the sum of absolute differences corresponding to the luma reference region of the template region shown in fig. 12a is the minimum among the sums of absolute differences corresponding to the luma reference regions shown in fig. 12a to 12c, then, since the luma reference region shown in fig. 12a includes the left reference region of the template region and does not include the upper reference region of the template region, the left luma reference region of the image block in which the sample to be predicted is located, or of the co-located image block of that image block (e.g., the first luma block), is taken as the determined reference region, and the upper luma reference region of the image block or its co-located image block is not. That is, the reference region corresponding to the sample to be predicted includes the left luma reference region of the image block or its co-located image block and does not include the upper luma reference region of the image block or its co-located image block.
If the sum of absolute differences corresponding to the luma reference region of the template region shown in fig. 12b is the minimum among the sums of absolute differences corresponding to the luma reference regions shown in fig. 12a to 12c, then, since the luma reference region shown in fig. 12b includes the upper reference region of the template region and does not include the left reference region of the template region, the upper luma reference region of the image block in which the sample to be predicted is located, or of the co-located image block of that image block (e.g., the first luma block), is taken as the determined reference region, and the left luma reference region of the image block or its co-located image block is not. That is, the reference region corresponding to the sample to be predicted includes the upper luma reference region of the image block or its co-located image block and does not include the left luma reference region of the image block or its co-located image block.
If the sum of absolute differences corresponding to the luma reference region of the template region shown in fig. 12c is the minimum among the sums of absolute differences corresponding to the luma reference regions shown in fig. 12a to 12c, then, since the luma reference region shown in fig. 12c includes both the left reference region and the upper reference region of the template region, the region formed by combining the left luma reference region and the upper luma reference region of the image block in which the sample to be predicted is located, or of the co-located image block of that image block (e.g., the first luma block), is taken as the determined reference region. That is, the reference region corresponding to the sample to be predicted includes the left luma reference region and the upper luma reference region of the image block or its co-located image block.
Finally, the weight coefficients used for predicting the sample to be predicted are derived from the determined reference region, and intra prediction is performed according to the derived weight coefficients.
Alternatively, the above embodiment may be combined with fig. 13a to 13c. For example, the distortion of the predicted values of the chroma samples in the chroma reference regions shown in fig. 13a to 13c can be determined by a method similar to that described above. Then, among the chroma reference regions shown in fig. 13a to 13c, the chroma reference region corresponding to the smallest sum of absolute differences (SAD) and/or Hadamard-transformed sum of absolute differences (SATD) is determined.
For example, if the sum of absolute differences corresponding to the chroma reference region of the template region shown in fig. 13a is the minimum among the sums of absolute differences corresponding to the chroma reference regions shown in fig. 13a to 13c, then, since the chroma reference region shown in fig. 13a includes the left reference region of the template region and does not include the upper reference region of the template region, the left chroma reference region of the image block in which the sample to be predicted is located, or of the co-located image block of that image block (e.g., the first chroma block), is taken as the determined reference region, and the upper chroma reference region of the image block or its co-located image block is not. That is, the reference region corresponding to the sample to be predicted includes the left chroma reference region of the image block or its co-located image block and does not include the upper chroma reference region of the image block or its co-located image block.
If the sum of absolute differences corresponding to the chroma reference region of the template region shown in fig. 13b is the minimum among the sums of absolute differences corresponding to the chroma reference regions shown in fig. 13a to 13c, then, since the chroma reference region shown in fig. 13b includes the upper reference region of the template region and does not include the left reference region of the template region, the upper chroma reference region of the image block in which the sample to be predicted is located, or of the co-located image block of that image block (e.g., the first chroma block), is taken as the determined reference region, and the left chroma reference region of the image block or its co-located image block is not. That is, the reference region corresponding to the sample to be predicted includes the upper chroma reference region of the image block or its co-located image block and does not include the left chroma reference region of the image block or its co-located image block.
If the sum of absolute differences corresponding to the chroma reference region of the template region shown in fig. 13c is the minimum among the sums of absolute differences corresponding to the chroma reference regions shown in fig. 13a to 13c, then, since the chroma reference region shown in fig. 13c includes both the left reference region and the upper reference region of the template region, the region formed by combining the left chroma reference region and the upper chroma reference region of the image block in which the sample to be predicted is located, or of the co-located image block of that image block (e.g., the first chroma block), is taken as the determined reference region. That is, the reference region corresponding to the sample to be predicted includes the left chroma reference region and the upper chroma reference region of the image block or its co-located image block.
Optionally, in this embodiment, the reason why the processing device selects the predicted values and reconstructed values of 7 chroma reference sample points from the above chroma reference regions is that 7 equations can be constructed from the predicted values and reconstructed values of the 7 chroma reference sample points, so that the 7 weight coefficients to be solved can be determined. That is, the number of chroma reference samples in the chroma reference region is the same as the number of weight coefficients to be solved.
Optionally, if a sample or an image block adopts an angular intra prediction mode within the first angular range, the reference region type of the sample to be predicted is the reference region type X'. In this embodiment, the weight coefficients of the sample to be predicted may be determined according to the reference region type X', and the prediction result of the current prediction may be determined or obtained according to the determined weight coefficients.
Optionally, step S11 described above: determining at least one weight parameter from at least one reference region may include:
determining at least one of the weight parameters based on at least one reference sample in the reference region;
optionally, in this embodiment, the position of the reference sample meets a preset position condition, and/or the value of the reference sample meets a preset value condition.
Alternatively, the preset position condition may be: the position of the reference sample in the reference region is far from, or farthest from, the sample to be predicted (in one implementation, the linear distance between the reference sample and the sample to be predicted is greater than or equal to a first preset threshold, or is the largest among the linear distances between all samples in the reference region and the sample to be predicted); optionally, the first preset threshold may be set, based on the design requirements of the practical application, to be greater than or equal to any distance value among the width, the height, or the diagonal length of the image block in which the sample to be predicted is located; optionally, the width, the height, or the diagonal length of the image block may be the width, height, or diagonal length corresponding to a coding unit used in VVC, HEVC, or another coding standard, such as 64, 32, or 16. Alternatively, the preset position condition may be: the position of the reference sample in the reference region is close to, or closest to, the sample to be predicted (in one implementation, the linear distance between the reference sample and the sample to be predicted is less than or equal to a second preset threshold, or is the smallest among the linear distances between all samples in the reference region and the sample to be predicted); optionally, the second preset threshold may be set, based on the design requirements of the practical application, according to the width, the height, or the diagonal length of the image block in which the sample to be predicted is located; optionally, the horizontal distance and/or the vertical distance between the reference sample and the sample to be predicted may be compared with a preset value N, and N may, for example, be 1.
Alternatively, the first preset threshold and the second preset threshold may be the same or different.
Alternatively, the reference samples may be equally spaced horizontally and/or vertically in the reference region.
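A sketch of the position conditions described above is given below: selecting reference samples that are farthest from, nearest to, or equally spaced around the sample to be predicted. The thresholds, coordinates, and spacing step are illustrative assumptions.

```python
import math

def linear_distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def pick_by_position(ref_positions, target, first_thr=None, second_thr=None):
    dists = [(linear_distance(p, target), p) for p in ref_positions]
    farthest = max(dists)[1]   # farthest reference sample from the sample to be predicted
    nearest = min(dists)[1]    # nearest reference sample to the sample to be predicted
    far_set = [p for d, p in dists if first_thr is not None and d >= first_thr]
    near_set = [p for d, p in dists if second_thr is not None and d <= second_thr]
    return farthest, nearest, far_set, near_set

def equally_spaced(ref_positions, step=2):
    # Take every `step`-th reference sample along the region (horizontal/vertical spacing).
    return ref_positions[::step]
```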
Alternatively, the preset value condition may be: the chroma or luma value of the reference sample in the reference region is the maximum chroma or luma value; alternatively, the chroma or luma value of the reference sample in the reference region is the minimum chroma or luma value; alternatively, the chroma or luma value of the reference sample in the reference region is close to one of the N-equal-division chroma values between the maximum chroma value and the minimum chroma value.
Alternatively, when determining the weight parameters from the above chroma reference region, the processing device may determine, from the chroma reference region, the chroma samples having the maximum chroma value, the minimum chroma value, and a value close to the average of the maximum and minimum chroma values, and determine the weight parameters in the intra prediction model based on these chroma samples.
Alternatively, the average of the maximum chroma value and the minimum chroma value may be the 2-equal-division chroma value. Alternatively, based on the same principle, the division values may also be the two 3-equal-division chroma values between the maximum chroma value and the minimum chroma value, namely (maximum chroma value − minimum chroma value) × 1/3 and (maximum chroma value − minimum chroma value) × 2/3; the remaining N-equal-division chroma values follow by analogy.
Optionally, when determining the weight parameters from the luma reference region, the processing device may determine, from the luma reference region, the luma samples having the maximum luma value, the minimum luma value, and a value close to the average of the maximum and minimum luma values, and determine the weight parameters in the intra prediction model based on these luma samples.
Alternatively, the average of the maximum luma value and the minimum luma value may be the 2-equal-division luma value. Alternatively, based on the same principle, the division values may also be the two 3-equal-division luma values between the maximum luma value and the minimum luma value, namely (maximum luma value − minimum luma value) × 1/3 and (maximum luma value − minimum luma value) × 2/3; the remaining N-equal-division luma values follow by analogy.
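The sketch below illustrates the value-based selection described above: picking the maximum, the minimum, and the samples closest to the N-equal-division values between them (interpreting the division values as lying between the minimum and maximum). N = 2 gives the average; N = 3 gives the two tri-section values.

```python
def pick_by_value(samples, n=3):
    vmax, vmin = max(samples), min(samples)
    targets = [vmin + (vmax - vmin) * k / n for k in range(1, n)]  # N-equal-division values
    picked = [vmax, vmin]
    for t in targets:
        picked.append(min(samples, key=lambda s: abs(s - t)))  # sample nearest each division value
    return picked

print(pick_by_value([410, 455, 430, 480, 402, 467], n=3))  # -> [480, 402, 430, 455]
```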
Alternatively, in the present embodiment, step S12 described above: determining or obtaining a prediction result of the sample to be predicted according to the weight parameter may include:
and determining or obtaining a prediction result of the sample to be predicted according to at least one of the weight parameter, at least one first sample, at least one adjacent sample of the first sample, at least one non-adjacent sample of the first sample and at least one gradient component of the first sample.
Optionally, after determining the weight parameter from the above reference area, the processing device may further substitute one or more of the weight parameter, the first sample corresponding to the weight parameter, at least one neighboring sample of the first sample, at least one non-neighboring sample of the first sample, and at least one gradient component of the first sample into a corresponding intra prediction mode to perform calculation, so as to determine or obtain a chroma prediction result or a luminance prediction result of the sample to be predicted.
Alternatively, the processing device may utilize at least one intra-prediction mode to determine a chroma prediction result or a luma prediction result of the sample to be predicted. Alternatively, the at least one intra-prediction mode may be at least one CCCM mode and/or at least one non-CCCM mode.
Alternatively, as shown in fig. 14, the processing device at the decoding end may determine whether to use the CCCM mode by parsing a first flag. For example, if the first flag is 0, the CCCM mode is not used; and/or, if the first flag is 1, the CCCM mode is used. When it is determined that the CCCM mode is not used, the processing device uses a non-CCCM mode. In this case, the processing device further parses a second flag to determine which non-CCCM mode (e.g., the first non-CCCM mode) to use. The processing device may then use the first non-CCCM mode to determine the chroma prediction result or luma prediction result of the sample to be predicted. In the other case, if the processing device determines to use the CCCM mode, it further parses a third flag to determine which CCCM mode (e.g., the first CCCM mode) to use. The processing device may then substitute the weight parameters into the first CCCM mode to determine the chroma prediction result or luma prediction result of the sample to be predicted.
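A hypothetical sketch of this parsing order is shown below; the flag names and the way flags are read are placeholders, not the actual bitstream syntax of any standard.

```python
def select_intra_mode(read_flag):
    """read_flag(name) is assumed to return the next parsed flag value."""
    if read_flag("first_flag") == 0:
        non_cccm_idx = read_flag("second_flag")     # which non-CCCM mode to use
        return ("non_cccm", non_cccm_idx)
    cccm_idx = read_flag("third_flag")              # which CCCM mode to use
    return ("cccm", cccm_idx)

# Example with a canned flag sequence: first_flag = 1, third_flag = 2.
flags = iter([1, 2])
mode = select_intra_mode(lambda name: next(flags))  # -> ("cccm", 2)
```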
Alternatively, the processing device may also determine or derive the predicted value of the sample to be predicted using a prediction model other than the one shown in formula (1) above. For example, the processing device may determine the predicted value of the sample to be predicted according to the prediction model shown in formula (3) and/or formula (4) below.
predChromaVal = c0×C + c1×Gy + c2×Gx + c3×Y + c4×X + c5×P + c6×B    formula (3)
Where Gy = (2N + NW + NE) − (2S + SW + SE) and Gx = (2W + NW + SW) − (2E + NE + SE). Y is the vertical position of the luma/chroma sample C corresponding to the sample to be predicted, and X is its horizontal position. P is a nonlinear term, expressed as the square of the luma/chroma sample C corresponding to the sample to be predicted and scaled to the bit-depth range, namely P = (C×C + midVal) >> bitDepth, where bitDepth is the bit depth of the samples and ">>" is the right-shift operator. For example, for 10-bit video content, P is calculated as P = (C×C + 512) >> 10. B is a bias term representing a scalar offset between input and output, and is set to the chroma value 512 for 10-bit video content in the video coding standard.
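The sketch below illustrates the terms of formula (3): the vertical/horizontal gradients Gy and Gx over the 3x3 luma neighbourhood, and the nonlinear term P. The weight values c0 to c6 are placeholders that would come from the derivation described earlier.

```python
def predict_chroma_formula3(win, x, y, c, bit_depth=10):
    # win is a 3x3 window of luma samples [[NW, N, NE], [W, C, E], [SW, S, SE]].
    (NW, N, NE), (W, C, E), (SW, S, SE) = win
    Gy = (2 * N + NW + NE) - (2 * S + SW + SE)      # vertical gradient
    Gx = (2 * W + NW + SW) - (2 * E + NE + SE)      # horizontal gradient
    mid_val = 1 << (bit_depth - 1)                  # 512 for 10-bit content
    P = (C * C + mid_val) >> bit_depth              # nonlinear term scaled to the bit-depth range
    B = mid_val                                     # bias term (512 for 10-bit content)
    c0, c1, c2, c3, c4, c5, c6 = c
    return c0 * C + c1 * Gy + c2 * Gx + c3 * y + c4 * x + c5 * P + c6 * B
```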
predChromaVal = c0×C + c1×L1 + c2×L2 + c3×A1 + c4×A2 + c5×P + c6×B    formula (4)
Where, in one case, L1 is the luma sample adjacent to the left of the luma sample corresponding to the sample to be predicted, L2 is the luma sample in the horizontal direction of the luma sample corresponding to the sample to be predicted on the edge of the luma block adjacent to the left of the current sample to be predicted, A1 is the luma sample adjacent above the luma sample corresponding to the sample to be predicted, and A2 is the luma sample in the vertical direction of the luma sample corresponding to the sample to be predicted on the edge of the luma block adjacent above the current sample to be predicted.
Optionally, in another case, L1 is the chroma sample adjacent to the left of the chroma sample corresponding to the sample to be predicted, L2 is the chroma sample in the horizontal direction of the chroma sample corresponding to the sample to be predicted on the edge of the chroma block adjacent to the left of the current sample to be predicted, A1 is the chroma sample adjacent above the chroma sample corresponding to the sample to be predicted, and A2 is the chroma sample in the vertical direction of the chroma sample corresponding to the sample to be predicted on the edge of the chroma block adjacent above the current sample to be predicted.
In this embodiment, the processing device predicts the chroma/luma of the sample to be predicted by selecting modes corresponding to different gradient algorithms, which can further improve the accuracy of chroma prediction for the image block during image encoding and decoding. Optionally, the processing device may also adapt the calculation of the gradient component in the formula used to calculate the predicted chroma/luma value in CCCM so that it uses the luma/chroma samples to the left of, above, and/or to the upper left of the co-located luma/chroma sample of the sample to be predicted. In this way, the processing device can still compute the gradient component for chroma prediction when the luma/chroma samples to the right of, below, and/or to the lower right of the co-located luma/chroma sample are unavailable. Optionally, the processing device may also calculate the predicted value of the sample to be predicted sample by sample. For example, instead of waiting for all samples in the current luma block to be calculated (at least the luma) before calculating the sample values of the chroma block corresponding to the current luma block in sequence, the processing device may calculate the chroma sample value corresponding to a luma sample value of the current luma block immediately after that luma sample value has been calculated.
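An illustrative sketch of the sample-by-sample ordering mentioned above is given below: as soon as one luma sample of the current block is reconstructed, the corresponding chroma sample can be predicted, without waiting for the whole luma block. The helper functions reconstruct_luma() and predict_chroma() are hypothetical.

```python
def predict_block_sample_by_sample(block_positions, reconstruct_luma, predict_chroma):
    chroma_pred = {}
    for pos in block_positions:
        luma_value = reconstruct_luma(pos)                   # reconstruct this luma sample first
        chroma_pred[pos] = predict_chroma(pos, luma_value)   # then immediately predict the chroma sample
    return chroma_pred
```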
The embodiment of the application also provides a processing device, which comprises a memory and a processor, wherein the memory stores an image processing program, and the image processing program realizes the steps of the image processing method in any embodiment when being executed by the processor.
The present application also provides a storage medium having stored thereon an image processing program which, when executed by a processor, implements the steps of the image processing method in any of the above embodiments.
The embodiments of the processing apparatus and the storage medium provided in the present application may include all technical features of any one of the embodiments of the image processing method described above; the expansion and explanation of their contents are substantially the same as those of each embodiment of the method and are not repeated here.
The present embodiments also provide a computer program product comprising computer program code which, when run on a computer, causes the computer to perform the method in the various possible implementations as above.
The embodiments also provide a chip including a memory for storing a computer program and a processor for calling and running the computer program from the memory, so that a device on which the chip is mounted performs the method in the above possible embodiments.
It can be understood that the above scenario is merely an example, and does not constitute a limitation on the application scenario of the technical solution provided in the embodiments of the present application, and the technical solution of the present application may also be applied to other scenarios. For example, as one of ordinary skill in the art can know, with the evolution of the system architecture and the appearance of new service scenarios, the technical solutions provided in the embodiments of the present application are equally applicable to similar technical problems.
The foregoing embodiment numbers of the present application are merely for describing, and do not represent advantages or disadvantages of the embodiments.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs.
The units in the device of the embodiment of the application can be combined, divided and pruned according to actual needs.
In this application, the same or similar term concept, technical solution, and/or application scenario description will generally be described in detail only when first appearing, and when repeated later, for brevity, will not generally be repeated, and when understanding the content of the technical solution of the present application, etc., reference may be made to the previous related detailed description thereof for the same or similar term concept, technical solution, and/or application scenario description, etc., which are not described in detail later.
In this application, the descriptions of the embodiments are focused on, and the details or descriptions of one embodiment may be found in the related descriptions of other embodiments.
The technical features of the technical solutions of the present application may be arbitrarily combined, and for brevity of description, all possible combinations of the technical features in the above embodiments are not described, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the present application.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as above, including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a controlled terminal, or a network device, etc.) to perform the method of each embodiment of the present application.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions in accordance with embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable devices. The computer instructions may be stored in a storage medium or transmitted from one storage medium to another storage medium, for example, from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, digital subscriber line), or wireless (e.g., infrared, wireless, microwave, etc.) means. The storage media may be any available media that can be accessed by a computer or a data storage device such as a server, data center, or the like that contains an integration of one or more available media. Usable media may be magnetic media (e.g., floppy disks, storage disks, magnetic tape), optical media (e.g., DVD), or semiconductor media (e.g., solid State Disk (SSD)), among others.
The foregoing description is only of the preferred embodiments of the present application, and is not intended to limit the scope of the claims, and all equivalent structures or equivalent processes using the descriptions and drawings of the present application, or direct or indirect application in other related technical fields are included in the scope of the claims of the present application.

Claims (9)

1. An image processing method, characterized by comprising the steps of:
S1: determining or obtaining a prediction result of the sample to be predicted according to at least one reference area;
the obtaining or determining mode of the reference area comprises the following steps:
if the value of the flag is a preset third value, acquiring or determining a first weight parameter and a second weight parameter from the template area;
acquiring or determining a first sampling prediction result corresponding to the first weight parameter and a second sampling prediction result corresponding to the second weight parameter;
and if the first sampling predicted result is better than the second sampling predicted result, acquiring or determining the reference area according to the first weight parameter, and/or if the second sampling predicted result is better than the first sampling predicted result, acquiring or determining the reference area according to the second weight parameter.
2. The method of claim 1, wherein the reference region is obtained or determined in a manner that further comprises at least one of:
acquiring or determining from a left adjacent region or an upper adjacent region of the sample to be predicted or a co-located sample of the sample to be predicted;
acquiring or determining from a region formed by splicing a left adjacent region and an upper adjacent region of the sample to be predicted or the co-located sample of the sample to be predicted;
and if the intra prediction mode adopted by the co-located sample and/or the adjacent sample of the sample to be predicted is an angular intra prediction mode, acquiring or determining the intra angle corresponding to the angular intra prediction mode.
3. The method of claim 2, further comprising at least one of:
the flag is obtained by parsing the encoded bitstream;
acquiring or determining the reference region mapped to by the value, according to the value of the flag;
acquiring or determining the reference region according to the intra angle, including: acquiring or determining the reference region according to the angle range of the intra angle.
4. The method of claim 3, further comprising at least one of:
The values include a first value and/or a second value.
5. The method of claim 1, wherein the acquiring or determining the reference region from the first weight parameter comprises:
and acquiring or determining the reference region according to the positional relationship between the region in the template region corresponding to the first weight parameter and the template region.
6. The method according to any one of claims 1 to 5, wherein the step S1 comprises the steps of:
S11: determining at least one weight parameter from at least one reference region;
S12: determining or obtaining a prediction result of the sample to be predicted according to the weight parameter.
7. The method of claim 6, wherein step S11 comprises:
determining at least one of the weight parameters based on at least one reference sample in the reference region;
and/or, step S12 includes:
and determining or obtaining a prediction result of the sample to be predicted according to at least one of the weight parameter, at least one first sample, at least one adjacent sample of the first sample, at least one non-adjacent sample of the first sample and at least one gradient component of the first sample.
8. A processing apparatus, comprising: a memory, a processor, the memory having stored thereon an image processing program which, when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 7.
9. A storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the image processing method according to any of claims 1 to 7.
CN202310276253.0A 2023-03-21 2023-03-21 Image processing method, processing apparatus, and storage medium Active CN115988206B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310276253.0A CN115988206B (en) 2023-03-21 2023-03-21 Image processing method, processing apparatus, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310276253.0A CN115988206B (en) 2023-03-21 2023-03-21 Image processing method, processing apparatus, and storage medium

Publications (2)

Publication Number Publication Date
CN115988206A CN115988206A (en) 2023-04-18
CN115988206B true CN115988206B (en) 2024-03-26

Family

ID=85970554

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310276253.0A Active CN115988206B (en) 2023-03-21 2023-03-21 Image processing method, processing apparatus, and storage medium

Country Status (1)

Country Link
CN (1) CN115988206B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116456102B (en) * 2023-06-20 2023-10-03 深圳传音控股股份有限公司 Image processing method, processing apparatus, and storage medium
CN116847088B (en) * 2023-08-24 2024-04-05 深圳传音控股股份有限公司 Image processing method, processing apparatus, and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101557514A (en) * 2008-04-11 2009-10-14 华为技术有限公司 Method, device and system for inter-frame predicting encoding and decoding
WO2019198997A1 (en) 2018-04-11 2019-10-17 LG Electronics Inc. Intra-prediction-based image coding method and apparatus thereof
CN111587574A (en) * 2018-11-23 2020-08-25 Lg电子株式会社 Method for decoding image based on CCLM prediction in image coding system and device thereof
CN111630856A (en) * 2018-01-26 2020-09-04 交互数字Vc控股公司 Method and apparatus for video encoding and decoding based on linear models responsive to neighboring samples
CN113261282A (en) * 2018-12-28 2021-08-13 有限公司B1影像技术研究所 Video encoding/decoding method and apparatus based on intra prediction
CN113273199A (en) * 2019-01-06 2021-08-17 腾讯美国有限责任公司 Method and apparatus for video encoding
CN113365067A (en) * 2021-05-21 2021-09-07 中山大学 Chroma linear prediction method, device, equipment and medium based on position weighting
CN114503570A (en) * 2020-04-07 2022-05-13 腾讯美国有限责任公司 Video coding and decoding method and device
CN115767100A (en) * 2022-10-14 2023-03-07 浙江大华技术股份有限公司 Prediction method, image encoding method, image decoding method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7036628B2 (en) * 2017-03-10 2022-03-15 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Encoding device, decoding device, coding method and decoding method
WO2020130745A1 (en) 2018-12-21 2020-06-25 Samsung Electronics Co., Ltd. Encoding method and device thereof, and decoding method and device thereof

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101557514A (en) * 2008-04-11 2009-10-14 华为技术有限公司 Method, device and system for inter-frame predicting encoding and decoding
CN111630856A (en) * 2018-01-26 2020-09-04 交互数字Vc控股公司 Method and apparatus for video encoding and decoding based on linear models responsive to neighboring samples
WO2019198997A1 (en) 2018-04-11 2019-10-17 LG Electronics Inc. Intra-prediction-based image coding method and apparatus thereof
CN111587574A (en) * 2018-11-23 2020-08-25 Lg电子株式会社 Method for decoding image based on CCLM prediction in image coding system and device thereof
CN113261282A (en) * 2018-12-28 2021-08-13 有限公司B1影像技术研究所 Video encoding/decoding method and apparatus based on intra prediction
CN113273199A (en) * 2019-01-06 2021-08-17 腾讯美国有限责任公司 Method and apparatus for video encoding
CN114503570A (en) * 2020-04-07 2022-05-13 腾讯美国有限责任公司 Video coding and decoding method and device
CN113365067A (en) * 2021-05-21 2021-09-07 中山大学 Chroma linear prediction method, device, equipment and medium based on position weighting
CN115767100A (en) * 2022-10-14 2023-03-07 浙江大华技术股份有限公司 Prediction method, image encoding method, image decoding method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Hardware implementation of HEVC intra prediction mode selection; Zhao Min et al.; Cable TV Technology (Issue 05); pp. 17-21 *

Also Published As

Publication number Publication date
CN115988206A (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN115988206B (en) Image processing method, processing apparatus, and storage medium
US11882300B2 (en) Low complexity affine merge mode for versatile video coding
US6782053B1 (en) Method and apparatus for transferring video frame in telecommunication system
CN115834897B (en) Processing method, processing apparatus, and storage medium
US10986332B2 (en) Prediction mode selection method, video encoding device, and storage medium
CN114598880B (en) Image processing method, intelligent terminal and storage medium
CN114422781B (en) Image processing method, intelligent terminal and storage medium
US9129409B2 (en) System and method of compressing video content
JP2007135219A6 (en) Video frame transfer method and apparatus in communication system
JP2007135219A (en) Method and apparatus for video frame transfer in communication system
CN115002463B (en) Image processing method, intelligent terminal and storage medium
CN116456102B (en) Image processing method, processing apparatus, and storage medium
CN113709504B (en) Image processing method, intelligent terminal and readable storage medium
CN116668704B (en) Processing method, processing apparatus, and storage medium
WO2019233423A1 (en) Motion vector acquisition method and device
CN116847088B (en) Image processing method, processing apparatus, and storage medium
KR100828378B1 (en) Method and apparatus for transferring video frame in telecommunication system
CN117979010A (en) Image processing method, processing apparatus, and storage medium
CN115955565B (en) Processing method, processing apparatus, and storage medium
CN116095322B (en) Image processing method, processing apparatus, and storage medium
CN115379214B (en) Image processing method, intelligent terminal and storage medium
WO2024087604A1 (en) Image processing method, intelligent terminal and storage medium
CN115422986B (en) Processing method, processing apparatus, and storage medium
WO2023019567A1 (en) Image processing method, mobile terminal and storage medium
CN117176959B (en) Processing method, processing apparatus, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant