CN114422781B - Image processing method, intelligent terminal and storage medium - Google Patents

Image processing method, intelligent terminal and storage medium

Info

Publication number
CN114422781B
CN114422781B (application CN202210317793.4A)
Authority
CN
China
Prior art keywords
partition
angle
index
parameter
mapping table
Prior art date
Legal status
Active
Application number
CN202210317793.4A
Other languages
Chinese (zh)
Other versions
CN114422781A (en)
Inventor
刘雨田 (Liu Yutian)
Current Assignee
Shenzhen Transsion Holdings Co Ltd
Original Assignee
Shenzhen Transsion Holdings Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Transsion Holdings Co Ltd filed Critical Shenzhen Transsion Holdings Co Ltd
Priority to CN202210317793.4A priority Critical patent/CN114422781B/en
Publication of CN114422781A publication Critical patent/CN114422781A/en
Application granted granted Critical
Publication of CN114422781B publication Critical patent/CN114422781B/en
Priority to PCT/CN2023/078559 priority patent/WO2023185351A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102: characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/119: Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H04N19/169: characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17: the unit being an image region, e.g. an object
    • H04N19/176: the region being a block, e.g. a macroblock

Abstract

The application discloses an image processing method, an intelligent terminal and a storage medium, wherein the image processing method comprises the following steps: determining a target division mode parameter of a target image block based on a first division mode parameter set; and determining a prediction result of the target image block based on the target division mode parameter. According to the embodiments of the application, the degree of matching between the division mode parameter and the target image block can be effectively improved, which in turn improves the accuracy of the prediction result.

Description

Image processing method, intelligent terminal and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, an intelligent terminal, and a storage medium.
Background
In the development of video coding technology, the improvements made by successive video coding standards have all been dedicated to improving the coding effect of video in different respects. Coding the boundary portion of a moving object in a video or an image is also a hot topic of current research. For coding the boundary portion of a moving object in an image, a partition mode may be used, for example the Geometric Partitioning Mode (GPM) introduced by the next-generation Versatile Video Coding (VVC) standard, in which a dividing line included in the partition mode is fitted to the boundary of the moving object, so that the boundary of the moving object in the image can be represented more flexibly and finely.
In the course of conceiving and implementing the present application, the inventors found at least the following problem: the degree of matching between the partition modes used for different images or image blocks and those images or image blocks remains to be improved.
The foregoing description is provided for general background information and is not admitted to be prior art.
Disclosure of Invention
In order to solve the above technical problems, the application provides an image processing method, an intelligent terminal and a storage medium, which can effectively improve the degree of matching between the partition mode parameter and the target image block, and thereby improve the accuracy of the prediction result.
In order to solve the above technical problem, the present application provides an image processing method, which is applicable to an intelligent terminal, and includes:
s1: determining a target division mode parameter of the target image block based on the first division mode parameter set;
s2: determining a prediction result of the target image block based on the target division mode parameter.
The present application also provides an image processing apparatus including:
the determining module is used for determining a target division mode parameter of the target image block based on the first division mode parameter set;
the determining module is further configured to determine a prediction result of the target image block based on the target partition mode parameter.
Optionally, the first set of partition mode parameters includes partition mode parameters corresponding to one or more partition lines.
Optionally, at least one of the following is included: the partition mode parameter is determined based on the usage of the dividing lines by encoded images; and the included angles formed between adjacent dividing lines among the plurality of dividing lines are unequal.
Optionally, the partition mode parameter is determined by adjusting, based on the usage of the dividing lines by encoded images, the angle interval between a dividing line in a target direction and a dividing line adjacent to it.
Optionally, an angular interval between a dividing line of the target direction and an adjacent dividing line is smaller than an angular interval between a dividing line of the reference direction and an adjacent dividing line; and/or the application probability of the dividing line of the target direction is larger than that of the dividing line of the reference direction.
Optionally, at least one of the following is also included: the partition mode parameter comprises at least one of a partition index, an angle index and a distance index corresponding to the dividing line; the first partition mode parameter set is a parameter mapping table representing a mapping relationship among the partition index, the angle index and the distance index.
In one embodiment, the image processing apparatus further comprises an acquisition module and an update module.
The acquisition module is configured to acquire the usage of the dividing lines by encoded images;
the determining module is configured to determine the application probability of each dividing line based on that usage;
the updating module is configured to update the parameter mapping table based on the application probability.
In one embodiment, the determining module is further configured to: acquire the number of times the first partition mode has been used, in response to the number of encoded images reaching a preset threshold; and determine the application probability of each dividing line based on the usage record parameters and the number of uses.
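For illustration only, the statistics just described could be kept roughly as follows; the names (usage_counts, PRESET_THRESHOLD) and the layout are assumptions, not the patent's normative procedure:

```python
# Hypothetical sketch of the usage record parameters and application
# probability described above; all names are assumptions.
from collections import Counter

PRESET_THRESHOLD = 3     # preset threshold on the number of encoded images
usage_counts = Counter() # usage record parameter: per-dividing-line use count

def record_image(lines_used_in_image):
    """Call once per encoded image with the dividing lines its blocks used."""
    usage_counts.update(lines_used_in_image)

def application_probabilities(num_encoded_images):
    """Turn counts into per-line probabilities once enough images are encoded."""
    if num_encoded_images < PRESET_THRESHOLD:
        return None      # not enough statistics yet
    total = sum(usage_counts.values())  # times the first partition mode was used
    return {idx: n / total for idx, n in usage_counts.items()} if total else {}
```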
In one embodiment, the update module is further configured to: update the angle mapping table based on the application probability of each dividing line to obtain an updated angle mapping table; and update the angle index and/or the distance index in the parameter mapping table based on the updated angle mapping table.
Optionally, the angle mapping table is configured to represent a mapping relationship between a query index and an angle parameter, where the query index includes the partition index or the angle index, and an association relationship exists between the angle index and the angle parameter.
In one embodiment, the update module is further configured to: update the angle mapping table based on the angle parameter corresponding to a first query index and/or the angle parameter corresponding to a second query index, to obtain an updated angle mapping table.
Optionally, the first query index is the query index corresponding to the dividing line with the highest application probability; the second query index is the query index corresponding to the dividing line with the second-highest application probability, or a query index adjacent to the first query index.
In one embodiment, the update module is further configured to: determine a target angle parameter based on the angle parameter corresponding to the first query index and the angle parameter corresponding to the second query index; and update the angle mapping table based on the target angle parameter to obtain an updated angle mapping table.
In another embodiment, the second query index is a query index adjacent to the first query index, and the update module is further configured to: adjust the angle parameter corresponding to the second query index in the angle mapping table according to a preset adjustment rule to obtain an updated angle mapping table.
In one embodiment, the update module is further configured to: in response to the angle parameter corresponding to the adjacent query index being greater than the angle parameter corresponding to the first query index, decrease the angle parameter corresponding to the adjacent query index to obtain an updated angle mapping table; and/or, in response to the angle parameter corresponding to the adjacent query index being smaller than the angle parameter corresponding to the first query index, increase the angle parameter corresponding to the adjacent query index to obtain an updated angle mapping table.
Optionally, the difference between the adjusted angle parameter corresponding to the adjacent query index and the angle parameter corresponding to the first query index lies within a preset range; and/or the preset range is determined based on the application probability of the dividing line corresponding to the first query index.
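A minimal sketch of the adjustment rule above, assuming hypothetical tuning constants STEP and MIN_GAP (the patent does not fix concrete values): the neighboring angle parameter is pulled toward the best line's angle while their difference stays within the preset range.

```python
# Sketch of the rule above; STEP and MIN_GAP are assumed constants.
STEP = 2.0      # adjustment step per update, in the table's angle units
MIN_GAP = 1.0   # lower bound of the preset range on the remaining difference

def adjust_neighbor(angle_table, first_idx, neighbor_idx):
    best = angle_table[first_idx]
    cur = angle_table[neighbor_idx]
    if cur > best + MIN_GAP:     # neighbor's angle is larger: decrease it
        angle_table[neighbor_idx] = max(cur - STEP, best + MIN_GAP)
    elif cur < best - MIN_GAP:   # neighbor's angle is smaller: increase it
        angle_table[neighbor_idx] = min(cur + STEP, best - MIN_GAP)
    return angle_table
```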
In one embodiment, the image processing apparatus further comprises a sending module, configured to: send a bitstream to a decoding end, where the bitstream includes indication information of the updated parameter mapping table, and the indication information is used to instruct the decoding end to determine the prediction result of the target image block by using the updated parameter mapping table.
In one embodiment, the determining module is further configured to: based on parameters of the target image block, perform predictive coding on the target image block using at least one prediction mode, and determine the rate-distortion cost corresponding to each prediction mode; and when the prediction mode with the lowest rate-distortion cost is the first partition mode, execute the step of determining the target partition mode parameter of the target image block based on the first partition mode parameter set.
In one embodiment, the first partition mode includes at least one partition mode, and the determining module is further configured to: determine the rate-distortion cost of applying each partition mode included in the first partition mode to the target image block; determine the partition mode with the minimum rate-distortion cost as the target partition mode of the target image block; and determine the mode parameter corresponding to the target partition mode from the first partition mode parameter set as the target partition mode parameter of the target image block.
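The selection logic described in the two embodiments above can be sketched as follows; rd_cost stands in for a real encoder's cost evaluation (distortion plus lambda times rate) and is hypothetical:

```python
# Sketch only: pick the partition mode with the lowest rate-distortion cost.
def select_target_params(target_block, first_param_set, rd_cost):
    best_params, best_cost = None, float("inf")
    for params in first_param_set:        # one entry per dividing line
        cost = rd_cost(target_block, params)
        if cost < best_cost:
            best_cost, best_params = cost, params
    return best_params                    # the target partition mode parameter
```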
In one embodiment, the target image block includes a first partition and a second partition, where the first partition and the second partition are the two image areas obtained by dividing the target image block with a dividing line, and the determining module is further configured to: determine partition weights based on the target partition mode parameter; determine a prediction result of the first partition and a prediction result of the second partition based on at least one of the partition weights, a first prediction result set, and a second prediction result set; and determine the prediction result of the first partition and the prediction result of the second partition as the prediction result of the target image block. Optionally, before the partition weights are determined based on the target partition mode parameter, a first prediction result set and a second prediction result set of the target image block may be determined.
In one embodiment, the partition weights include a first weight corresponding to the first partition and a second weight corresponding to the second partition, where the first weight or the second weight is determined based on the distance between a pixel included in the target image block and the dividing line, and the determining module is further configured to: determine the prediction result of the pixel based on at least one of the first weight, the second weight, a first prediction result of the pixel in the first prediction result set, and a second prediction result of the pixel in the second prediction result set.
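A minimal sketch of such a distance-driven blend, assuming a linear ramp of width RAMP around the dividing line (the ramp shape and width are illustrative, not taken from the patent text):

```python
import numpy as np

RAMP = 8.0   # assumed blending width, in pixels

def blend_predictions(pred1, pred2, signed_dist):
    """pred1, pred2: prediction arrays of the two partitions;
    signed_dist: per-pixel signed distance to the dividing line
    (positive on the first partition's side)."""
    w1 = np.clip(signed_dist / RAMP + 0.5, 0.0, 1.0)   # first weight
    w2 = 1.0 - w1                                      # second weight
    return w1 * pred1 + w2 * pred2                     # per-pixel prediction
```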
The application also provides an intelligent terminal, including: a memory, a processor, wherein the memory has stored thereon an image processing program, which when executed by the processor implements the steps of any of the methods described above.
The present application also provides a computer-readable storage medium, which stores a computer program that, when executed by a processor, implements the steps of the method as described in any one of the above.
As described above, the image processing method of the present application, which is applicable to an intelligent terminal, includes the steps of: determining a target division mode parameter of the target image block based on the first division mode parameter set; and determining a prediction result of the target image block based on the target division mode parameter. With this technical solution, a partition mode parameter that is highly matched to the target image block can be determined from the first partition mode parameter set, and a more accurate prediction result of the target image block can be obtained based on that parameter. This solves the problem that the partition mode parameter corresponding to a partition mode matches the image block poorly, improves the coding effect, and further improves the user experience.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application. In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly described below; other drawings can be obtained by those skilled in the art from these drawings without inventive effort.
Fig. 1 is a schematic hardware structure diagram of a mobile terminal implementing various embodiments of the present application;
fig. 2 is a communication network system architecture diagram provided in an embodiment of the present application;
FIG. 3a is a schematic diagram of an angle quantization provided by an embodiment of the present application;
FIG. 3b is a schematic diagram of various offsets at an angle φi according to an embodiment of the present application;
FIG. 3c is a schematic diagram of a partition mode provided in an embodiment of the present application;
fig. 4 is a flowchart illustrating an image processing method according to the first embodiment;
FIG. 5 is a schematic view of an adjusted dividing line according to the first embodiment;
FIG. 6a is a schematic flow chart diagram illustrating a method of image processing according to a second embodiment;
FIG. 6b is a diagram illustrating the effect of applying different dividing lines to an image block according to the second embodiment;
fig. 7 is a flowchart illustrating an image processing method according to a third embodiment;
FIG. 8a is a schematic diagram illustrating a distance analysis between a pixel and a partition line according to a third embodiment;
FIG. 8b is a schematic flow chart diagram illustrating a method of image processing according to a fourth embodiment;
FIG. 8c is a diagram illustrating neighboring block locations of a spatial merge candidate list according to a fourth embodiment;
fig. 8d is a schematic diagram of a merge candidate list according to the fourth embodiment;
fig. 9 is a schematic diagram of an image processing apparatus according to an embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings. With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the recitation of an element preceded by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element. Further, similarly-named elements, features, or components in different embodiments of the disclosure may have the same meaning or different meanings; their particular meaning is determined by their explanation in the embodiment concerned or from the context of that embodiment.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope herein. The word "if," as used herein, may be interpreted as "upon" or "when" or "in response to a determination," depending on the context. Also, as used herein, the singular forms "a," "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, items, species, and/or groups thereof. The terms "or," "and/or," and "including at least one of the following," as used herein, are to be construed as inclusive, meaning any one or any combination. For example, "includes at least one of: A, B, C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C"; likewise, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C". An exception to this definition occurs only when a combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
It should be understood that, although the steps in the flowcharts in the embodiments of the present application are shown in sequence as indicated by the arrows, these steps are not necessarily performed in the order indicated by the arrows; unless explicitly stated herein, they are not restricted to that exact order and may be performed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or multiple stages, which are not necessarily performed at the same time but may be performed at different times, in different orders, alternately or in turn with other steps or with at least some of the sub-steps or stages of other steps.
The words "if", as used herein, may be interpreted as "at … …" or "when … …" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrase "if determined" or "if detected (a stated condition or event)" may be interpreted as "upon determining" or "in response to determining" or "upon detecting (a stated condition or event)" or "in response to detecting (a stated condition or event)", depending on the context.
It should be noted that step numbers such as S401 and S402 are used herein to describe the corresponding contents more clearly and briefly, and do not constitute a substantive limitation on the sequence; in specific implementations, those skilled in the art may perform S402 before S401, and this still falls within the scope of the present application.
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only for the convenience of description of the present application, and have no specific meaning in themselves. Thus, "module", "component" or "unit" may be used mixedly.
The smart terminal may be implemented in various forms. For example, the smart terminal described in the present application may include smart terminals such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, a pedometer, and the like, and fixed terminals such as a Digital TV, a desktop computer, and the like.
The following description will be given by way of example of a mobile terminal, and it will be understood by those skilled in the art that the configuration according to the embodiment of the present application can be applied to a fixed type terminal, in addition to elements particularly used for mobile purposes.
Referring to fig. 1, which is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present application, the mobile terminal 100 may include: an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an a/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 1 is not intended to be limiting of mobile terminals, which may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile terminal in detail with reference to fig. 1:
The radio frequency unit 101 may be configured to receive and transmit signals during information transmission and reception or during a call. Specifically, it receives downlink information from a base station and delivers it to the processor 110 for processing, and it transmits uplink data to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplex-Long Term Evolution), TDD-LTE (Time Division Duplex-Long Term Evolution), 5G, and so on.
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help a user receive and send e-mails, browse web pages, access streaming media and the like, providing the user with wireless broadband internet access. Although fig. 1 shows the WiFi module 102, it is understood that the module is not an essential component of the mobile terminal and may be omitted as needed without changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The a/V input unit 104 is for receiving an audio or video signal. The a/V input Unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sounds (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sounds into audio data. In the phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101 for output. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Optionally, the light sensor includes an ambient light sensor that may adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that may turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Alternatively, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect a touch operation performed by a user on or near the touch panel 1071 (e.g., an operation performed by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory), and drive a corresponding connection device according to a predetermined program. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Optionally, the touch detection device detects a touch orientation of a user, detects a signal caused by a touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Optionally, other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like, and are not limited thereto.
Alternatively, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation on or near the touch panel 1071, the touch operation is transmitted to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although the touch panel 1071 and the display panel 1061 are shown in fig. 1 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 108 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a program storage area and a data storage area, and optionally, the program storage area may store an operating system, an application program (such as a sound playing function, an image playing function, and the like) required by at least one function, and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby integrally monitoring the mobile terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor, optionally, the application processor mainly handles operating systems, user interfaces, application programs, etc., and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The mobile terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 111 may be logically connected to the processor 110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system.
Although not shown in fig. 1, the mobile terminal 100 may further include a bluetooth module or the like, which is not described in detail herein.
In order to facilitate understanding of the embodiments of the present application, a communication network system on which the mobile terminal of the present application is based is described below.
Referring to fig. 2, fig. 2 is an architecture diagram of a communication network system according to an embodiment of the present disclosure. The communication network system is an LTE system of the universal mobile telecommunications technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an operator's IP services 204, which are communicatively connected in sequence.
Optionally, the UE201 may be the mobile terminal 100 described above, and details are not described here.
The E-UTRAN 202 includes an eNodeB 2021 and other eNodeBs 2022, among others. Alternatively, the eNodeB 2021 may be connected with the other eNodeBs 2022 through a backhaul (e.g., an X2 interface); the eNodeB 2021 is connected to the EPC 203 and may provide the UE 201 with access to the EPC 203.
The EPC 203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, and the like. Optionally, the MME 2031 is a control node that handles signaling between the UE 201 and the EPC 203, providing bearer and connection management. The HSS 2032 provides registration management functions such as those of a home location register (not shown) and holds subscriber-specific information about service characteristics, data rates, and the like. All user data may be sent through the SGW 2034; the PGW 2035 may provide IP address assignment for the UE 201 and other functions; and the PCRF 2036 is the policy and charging control decision point for traffic data flows and IP bearer resources, which selects and provides available policy and charging control decisions for a policy and charging enforcement function (not shown).
The IP services 204 may include the internet, intranets, IMS (IP Multimedia Subsystem), or other IP services, among others.
Although the LTE system is described as an example, it should be understood by those skilled in the art that the present application is not limited to the LTE system, but may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems (e.g. 5G), and the like.
Based on the above mobile terminal hardware structure and communication network system, various embodiments of the present application are proposed.
For the sake of understanding, the following first explains the terms of art to which the embodiments of the present application may be related.
One, inter prediction mode
In the process of coding an image, predicting an image block is an indispensable step: a prediction block is obtained by predicting the image block, so that a residual block with smaller energy can be constructed and the transmitted bits reduced. The prediction of the image block may be implemented by preset prediction modes, which may include an inter prediction mode and an intra prediction mode. Compared with the intra prediction mode, the inter prediction mode has higher coding efficiency; it removes temporal redundancy by exploiting the correlation between pixels of temporally adjacent frames.
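A toy illustration (not from the patent) of why prediction reduces the transmitted bits: the residual block carries far less energy than the block itself.

```python
import numpy as np

block = np.array([[120, 121], [119, 122]], dtype=np.int16)   # current block
pred  = np.array([[119, 120], [120, 121]], dtype=np.int16)   # prediction block
residual = block - pred   # small values cost few bits after transform and entropy coding
print(residual)           # [[ 1  1] [-1  1]]
```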
Second, Geometric Partitioning Mode (GPM)
The GPM mode is a prediction mode adopted by the next-generation Versatile Video Coding (VVC) standard for the boundary portion of a moving object in an image. Compared with other prediction modes, the GPM mode can divide the boundary of a moving object in an image more finely: by fitting a dividing line to the boundary of the moving object, it divides the edge coding unit (Coding Unit, CU) of the moving object into non-rectangular sub coding units for unidirectional prediction, thereby obtaining the prediction value of the whole coding unit.
FIG. 3a is a schematic diagram of angle quantization. It can be seen that the angle corresponding to each dividing line is different; the angles are denoted φi (i = 1~24). FIG. 3a does not illustrate all of the angles. Each angle has a maximum of 4 offsets, as shown in fig. 3b. FIG. 3b is a schematic diagram, according to an exemplary embodiment of the present application, of the 4 offsets corresponding to an angle φi, where the offsets are denoted ρj (j = 0~3). Optionally, 64 partition modes can be combined, as shown in fig. 3c. Fig. 3c is a schematic diagram of the partition modes according to an embodiment of the present application. The partition modes include at least one partition mode formed by combining different angles and offsets. It can be seen that at different angles and different offsets, the positions of the dividing lines differ, and the number of dividing lines that can be used for an image block also differs. Applying any one of all these dividing lines to an image block is one implementation of the GPM mode. Optionally, the offset ρj and the angle φi may be determined from the GPM mode index.
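As a sketch of how angles and offsets combine into partition modes (the counts follow the figures, but the pairing below is illustrative and not the normative 64-entry GPM mode table of the standard):

```python
NUM_ANGLES = 24    # phi_i, i = 1..24 (fig. 3a)
NUM_OFFSETS = 4    # rho_j, j = 0..3 (fig. 3b)

raw_modes = [(i, j) for i in range(1, NUM_ANGLES + 1) for j in range(NUM_OFFSETS)]
print(len(raw_modes))   # 96 raw (angle, offset) pairs; the standard keeps a
                        # 64-mode subset of such combinations (fig. 3c)
```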
First embodiment
Referring to fig. 4, fig. 4 is a schematic flowchart of an image processing method according to the first embodiment. The execution subject in this embodiment may be a single computer device or a cluster formed by multiple computer devices, and the computer device may be an intelligent terminal (such as the aforementioned mobile terminal 100) or a server. Here, the description takes the execution subject being an intelligent terminal as an example.
It should be noted that, in the process of video coding, an input video frame is usually divided into a plurality of image blocks for processing. For each image block, the prediction block obtained through a prediction mode may be subtracted from the image block to obtain a residual block, and a series of processing is then performed on the residual block to obtain a coded bitstream. The image processing scheme provided by the embodiments of the present application can be applied to the prediction stage of the video coding process, that is, to the scenario in which an image block is predicted using a corresponding prediction mode to obtain a prediction block.
S401, determining target division mode parameters of the target image block based on the first division mode parameter set.
The target image block refers to an image block currently being encoded in an input Video image (i.e., a Video frame), and may be referred to as a current block or a current image block for short, and under the h.265/High Efficiency Video Coding (HEVC) standard, the target image block may be a Coding Tree Unit (CTU) or a Coding Unit (CU) in the input Video image, which is not limited herein.
The first partition mode parameter set refers to the set of partition mode parameters corresponding to the first partition mode. The first partition mode is a prediction mode that performs prediction after dividing the image block into different image regions with a dividing line, for example the geometric partitioning mode. When the geometric partitioning mode is employed, the image block may be divided into a rectangular area, a triangular area, or a trapezoidal area. Optionally, the first partition mode parameter set includes the partition mode parameters corresponding to one or more dividing lines.
The dividing line is a line used to divide an image block into regions, for example a straight line. A dividing line applied to an image block corresponds to one partition mode, and each partition mode corresponds to a partition mode parameter; in other words, the first partition mode comprises at least one partition mode, and the first partition mode parameter set comprises the partition mode parameter corresponding to each partition mode. Accordingly, the target partition mode parameter is the partition mode parameter in the first partition mode parameter set corresponding to a target dividing line; optionally, the rate-distortion cost of encoding the target image block with the target dividing line is the minimum. For the detailed manner of determining the target partition mode parameter, reference may be made to the description of the following embodiments, which will not be detailed here.
The following description focuses on the content related to the first partition mode parameter set.
The partition mode parameter is the mode parameter corresponding to applying a dividing line to an image block. In one embodiment, the partition mode parameter includes at least one of a partition index, an angle index, and a distance index corresponding to the dividing line. That is, the partition mode parameter may include any one of the partition index, the angle index and the distance index corresponding to the dividing line, any two of them, or all three. It should be noted that, in actual development or application, the contents included in the partition mode parameter may be flexibly combined according to actual needs, and any such combination belongs to the technical solution of the present application and is covered by its protection scope. The partition indexes corresponding to the dividing lines can be used to mark different dividing lines; since different dividing lines correspond to different partition modes, the partition index may also be called a partition mode index, which uniquely identifies one partition mode (that is, uniquely identifies one dividing line). The angle index corresponding to the dividing line can be used to indicate the direction of the dividing line, and the distance index corresponding to the dividing line can be used to indicate the offset position of the dividing line. The position of the dividing line in the image block can be determined from the angle index and the distance index corresponding to the dividing line.
It should be noted that the partition index, the angle index and the distance index may be numbers, other characters, or other distinguishable markers, which are not limited herein. In order to distinguish different partition mode parameters, the partition indexes corresponding to the dividing lines are different from one another and unique (the same applies to the angle indexes). For each partition index, its corresponding angle index and/or distance index may be partially identical to the angle index and/or distance index corresponding to another partition index. For example, in the GPM mode, the partition index merge_gpm_partition_idx is represented by a non-repeating number; for example, merge_gpm_partition_idx takes a value from 0 to 64. The angle index angleIdx indexes the angle φi, for example with i from 0 to 30. The distance index distanceIdx indexes the offset ρj, for example with j from 0 to 3.
In one embodiment, if the partition index merge_gpm_partition_idx is 0, the angle index angleIdx is 0 and the distance index distanceIdx is 1; and/or, if the partition index merge_gpm_partition_idx is 1, the angle index angleIdx is 0 and the distance index distanceIdx is 3. For partition index 0 and partition index 1, the corresponding angle indexes are the same. If the partition index merge_gpm_partition_idx is 0, the angle index angleIdx is 0 and the distance index distanceIdx is 1; and/or, if the partition index merge_gpm_partition_idx is 3, the angle index angleIdx is 2 and the distance index distanceIdx is 1. For partition index 0 and partition index 3, the corresponding distance indexes are the same.
When the partition mode parameter includes a partition index, an angle index and a distance index, the first partition mode parameter set is a parameter mapping table representing the mapping relationship among the partition index, the angle index and the distance index. The adjustment of the partition mode parameter can be realized by updating the parameter mapping table; reference may be made to the corresponding description in the second embodiment, which will not be detailed here.
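A sketch of such a parameter mapping table, filled only with the example rows given above (a real table would enumerate every partition index):

```python
PARAM_TABLE = {
    # merge_gpm_partition_idx: (angleIdx, distanceIdx)
    0: (0, 1),
    1: (0, 3),
    3: (2, 1),
}

angle_idx, distance_idx = PARAM_TABLE[1]
print(angle_idx, distance_idx)   # 0 3: partition index 1 shares angleIdx 0 with index 0
```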
In an image or a video, it is often unavoidable that the boundary of an object is a curve. For such images, a dividing line in the first partition mode can be used to fit the boundary curve, and the image block to be coded is divided into two parts that are predicted to obtain a prediction block. In different scenarios, when division is performed using the first partition mode (for example, the geometric partitioning mode), the usage of the dividing lines is inconsistent. In order to better fit the curve of the object boundary in the target image block, the partition mode parameters corresponding to the dividing lines in the first partition mode parameter set can be set adaptively based on the usage of the dividing lines, so that the degree of fit is improved and the prediction result is more accurate.
Optionally, the partition mode parameter is determined based on the usage of the dividing lines by encoded images. An encoded image is an image (or frame) encoded before the image in which the target image block is located, and the dividing lines used by an encoded image may be the dividing lines indicated by the partition mode parameters in the latest first partition mode parameter set. Since the video encoding process includes multiple prediction modes, image blocks in an encoded image may, in addition to the first partition mode, adopt other prediction modes when being encoded, such as an intra prediction mode. Therefore, the usage of the dividing lines by an encoded image refers to the use of dividing lines by some or all of the image blocks in that encoded image; the dividing lines used by different image blocks may be the same or different, and the partition mode parameter may be determined by counting the usage of the dividing lines in the encoded images.
In one embodiment, the latest first partition mode parameter set is not updated until an update condition is satisfied, as with the geometric partitioning mode defined in the VVC standard. In another embodiment, the partition mode parameters included in the latest first partition mode parameter set may be determined based on the usage of a preset number of encoded images. After the partition mode parameters are determined, they are not updated until the update condition is satisfied.
In an embodiment, the update condition may be that, after the update flag is received, the first partition mode parameter set is updated according to the division statistics of the images encoded before the update flag was received. For example, encoded images are encoded using the first partition mode parameter set S1 before the update flag is received. After the update flag is received, the first partition mode parameter set S1 is updated based on the division statistics counted up to the time the update flag was received, to obtain a first partition mode parameter set S2. Subsequent images are then encoded using the first partition mode parameter set S2.
In another embodiment, the update condition may be that the first partition mode parameter set is updated according to the division statistics of a preset number of images encoded after the update flag is received. For example, if the preset number is 3, images are encoded using the first partition mode parameter set S1 before the update flag is received. After the update flag is received, the first partition mode parameter set S1 is updated based on the division statistics of the 3 consecutive images encoded from the time the update flag was received, to obtain a first partition mode parameter set S2. Subsequent images are then encoded using the first partition mode parameter set S2. Based on this scheme, the parameters of the first partition mode can be adjusted according to the update flag, achieving a balance between saving computing resources and enhancing the applicability of the first partition mode in different scenarios.
In another embodiment, the partition mode parameters in the latest first partition mode parameter set may be adjusted dynamically based on the usage of a preset number of encoded images; that is, each time a preset number of images have been encoded, the partition mode parameters may be determined based on the usage of those encoded images, so as to update the latest first partition mode parameter set. Illustratively, suppose the image in which the target image block is located is the 7th image and the preset number is 3. The first partition mode parameter set S1 is used when encoding the 1st to 3rd images; based on the usage of the dividing lines by the first 3 encoded images, a first partition mode parameter set S2 is obtained and used when encoding the 4th to 6th images; based on the usage of the dividing lines by the 4th to 6th encoded images, a first partition mode parameter set S3 is obtained, and the partition mode parameters in S3 are then used for the target image block. In subsequent encoding, the image in which the target image block is located likewise becomes an encoded image, and new partition mode parameters are determined based on the usage of the dividing lines by the 7th to 9th encoded images; this cycle repeats, so the first partition mode parameters can be dynamically and continuously adjusted based on the most recently encoded images. In this way, the partition mode parameters used when encoding each video frame indirectly reference the usage of encoded images, which are correlated to some extent with the image currently being encoded. By following the usage of the dividing lines in the most recent encoded images in real time, different partition mode parameter sets are used for the preset images at different stages, and the resulting parameter sets can cover the various scene images within the same video, thereby improving the applicability of the first partition mode in different scenarios.
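The sliding update just illustrated can be sketched as follows; encode_image and derive_param_set are hypothetical stand-ins for encoder internals:

```python
PRESET_N = 3   # the preset number of images per update window

def encode_sequence(images, s1, encode_image, derive_param_set):
    """Every PRESET_N encoded images, derive the next parameter set
    (S1 -> S2 -> S3 -> ...) from that window's dividing-line usage."""
    param_set = s1
    window_usage = []
    for image in images:
        usage = encode_image(image, param_set)   # returns dividing-line usage stats
        window_usage.append(usage)
        if len(window_usage) == PRESET_N:        # window complete: update the set
            param_set = derive_param_set(window_usage)
            window_usage = []
    return param_set
```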
Alternatively, the partition mode parameter may also be determined in units of image blocks, that is, the partition mode parameter is determined based on the usage of the dividing lines by encoded image blocks. In this way, when the target image block is encoded later in the image in which it is located, the usage of the dividing lines by all preceding image blocks can be counted, which further extends the statistical range of dividing-line usage to the image in which the current block is located. This can improve the applicability of the partition mode parameters used by the target image block and the accuracy of the prediction result, thereby reducing the distortion of video encoding.
The first partition mode may include at least one partition mode, and each partition mode corresponds to one dividing line. Different image blocks in an encoded image may use different dividing lines, or the same dividing line may be used repeatedly; the usage of the same dividing line by the image blocks in an encoded image differs, for example in number of uses or frequency of use. Suppose there are two dividing lines, L1 and L2: dividing line L1 may not be used by any image block in the encoded image, while dividing line L2 is used 10 times by image blocks in the encoded image. Since different dividing lines are used differently in an encoded image, the application of the partition mode parameters to the target image block also differs. Because there is a correlation between the encoded image and the target image in which the target image block is located (for example, an object present in the encoded image may also be present in the target image, with its boundary unchanged or only slightly changed), the partition mode parameter can be determined based on the usage of the dividing lines in the encoded image and used for encoding the target image block.
For example, usage frequencies differ: some dividing lines are used frequently and others rarely. A dividing line used by many image blocks in the encoded image indicates that the encoded image contains many object boundaries of the type that the line fits, so the probability that the corresponding partition mode parameter applies to the target image block is high. Conversely, a dividing line used by few image blocks indicates few object boundaries of that type in the image, and the probability that its partition mode parameter applies to the target image block is low.
Based on this idea, the partition mode parameters of the dividing lines used by the encoded image can be set or adjusted to obtain the first partition mode parameter set. If part of an object boundary lies between the most-used dividing line and its adjacent dividing line, the angle around the most-used dividing line can be further subdivided, for example by adding more dividing lines to split the angle at finer granularity, providing more candidates from which a dividing line that better fits the object boundary can be selected. This is because the dividing line used most often in the encoded image is not necessarily the best fit for the image block currently being encoded; fine angular adjustment via dividing lines distributed around it makes it possible to find one that conforms to the boundary more closely. In addition, a dividing line that originally fit a boundary can be adjusted to fit it better. These adjustments may involve generating new partition mode parameters or adjusting existing ones.
Optionally, the included angles formed between adjacent dividing lines in the at least one dividing line are not all equal. Among the at least one dividing line corresponding to the at least one partition mode in the first partition mode, the included angle formed between any two adjacent dividing lines may be unequal. For example, the angle formed between line L1 and its left-hand neighbour is greater than the angle formed between line L1 and its right-hand neighbour. Adjacency is relative; for example, dividing line L1 and dividing line L2 are adjacent dividing lines. Because the included angles between pairs of adjacent dividing lines differ, the dividing lines are distributed unevenly. Since this uneven distribution is matched to the characteristics of the object boundaries in the image block, it improves, to a certain degree, how closely a dividing line fits a boundary.
In one possible embodiment, the partition mode parameter is determined by adjusting, based on the usage of the dividing lines by the encoded image, the angular interval between the dividing line in the target direction and its adjacent dividing line.
The dividing lines corresponding to the partition modes included in the first partition mode may lie in different directions, for example horizontal/vertical dividing lines and dividing lines in the +/-45-degree directions. The dividing line in the target direction may be determined from the usage of the dividing lines in the encoded image; for example, the target-direction dividing line is the horizontal (90-degree) dividing line used most often in the encoded image. In the embodiments of the present application, a dividing line adjacent to a given dividing line is simply called its adjacent dividing line; for example, the adjacent dividing line of the target-direction dividing line is a dividing line adjacent to the dividing line in the target direction. For the division of the same image block, an angular interval is formed between the target-direction dividing line and its adjacent dividing line, i.e., there is an included angle between two intersecting straight lines. As noted above, for the partition mode parameters included in the first partition mode parameter set, a new dividing line may be added, or the position of an existing dividing line may be adjusted, based on the usage of the dividing lines by the encoded image; the position of an existing dividing line may be adjusted by changing the angular interval between the target-direction dividing line and its adjacent dividing line, and the partition mode parameters determined accordingly. In one implementation, adding a new dividing line based on the angle between the target-direction dividing line and its adjacent dividing line may work as follows: add a dividing line based on the average value, that is, average the angles of the target-direction dividing line and its adjacent dividing line to obtain a new angle, and add the dividing line corresponding to this new angle between them. Illustratively, if the angle of the target-direction dividing line is φ11 and the angle of the adjacent dividing line is φ12, the new angle is φ13 = (φ11 + φ12)/2, and the new dividing line determined by φ13 is added between the target-direction dividing line and its adjacent dividing line.
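A minimal sketch of this averaging rule (the function name and the degree-based angle representation are illustrative assumptions, not part of any standard):

```python
def insert_average_dividing_line(angles, target_idx):
    """Insert a new dividing line whose angle is the average of the
    target-direction line and its right-hand neighbour.

    angles: sorted list of dividing-line angles in degrees.
    target_idx: position of the target-direction dividing line.
    Implements phi_new = (phi_11 + phi_12) / 2 as described above."""
    phi_11 = angles[target_idx]          # angle of the target-direction line
    phi_12 = angles[target_idx + 1]      # angle of its adjacent line
    phi_new = (phi_11 + phi_12) / 2.0    # averaged angle for the new line
    return angles[:target_idx + 1] + [phi_new] + angles[target_idx + 1:]

# Example: a horizontal (90-degree) line used most often, neighbour at 101.25
print(insert_average_dividing_line([78.75, 90.0, 101.25], 1))
# -> [78.75, 90.0, 95.625, 101.25]
```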
Alternatively, the rule followed by the angular interval between the target-direction dividing line and its adjacent dividing line may be: the angular interval between the target-direction dividing line and its adjacent dividing line is smaller than the angular interval between the reference-direction dividing line and its adjacent dividing line; and/or the application probability of the target-direction dividing line is greater than that of the reference-direction dividing line.
The reference-direction dividing line and the target-direction dividing line lie in two different directions, and their adjacent dividing lines may or may not be the same line. By making the angular interval between the target-direction dividing line and its adjacent dividing line smaller than that between the reference-direction dividing line and its adjacent dividing line, the dividing lines around the target direction become denser than those around the reference direction. The dense dividing lines divide a local region centred on the target-direction dividing line more finely, improving how closely an applied dividing line fits the object boundary in the image.
Please refer to fig. 5, which is a schematic diagram of adjusted dividing lines according to an embodiment of the present application. As shown in fig. 5, the angular interval between the horizontal dividing line and its adjacent dividing line is smaller than the angular interval between the +/-45-degree dividing lines and their adjacent dividing lines, and likewise for the vertical dividing line. For example, the angular interval between dividing line φ8 and adjacent dividing line φ6 is smaller than the angular interval between dividing line φ4 and adjacent dividing line φ6 (some dividing lines are not shown in fig. 5). The dividing lines around the vertical and horizontal directions are therefore denser, while those in the +/-45-degree directions are sparser.
Alternatively, since the adjustments mentioned above involve adjusting angular intervals, a lookup table mapping an index i to an angle-related parameter cos(φ) may be provided, where i is an index with 0 ≤ i ≤ n, and a0~an are the cos(φ) values corresponding to the different partition indexes. The index i may correspond to the partition index, and cos(φ) may correspond to the angle index. The partition mode parameters can then be adjusted by adjusting cos(φ). It should be noted that this embodiment is suitable for scenes containing many approximately horizontal and approximately vertical boundaries. In this embodiment, the angle index and the distance index corresponding to each dividing line are preset and are not dynamically adjusted during encoding and decoding.
Exemplarily, if index i = ia corresponds to 0 degrees, then a_ia takes the value 1; and/or, if index i = ib corresponds to 90 degrees, then a_ib takes the value 0 or approximately 0; and/or, if index i = ic corresponds to 45 degrees, then a_ic takes the value cos(45°); where ia, ib and ic are all within 0~n. That is, there are cosine values of the angles corresponding to the dividing lines in the horizontal, vertical and 45-degree directions respectively. Following the rule for the angular interval between the target-direction dividing line and its adjacent dividing line, the difference between the angle corresponding to index ia and the angle corresponding to its adjacent index ia+1 may be smaller than the difference between the angle corresponding to index ic and the angle corresponding to its adjacent index ic+1, and the difference between the angles corresponding to indexes ia and ia-1 may be smaller than the difference between the angles corresponding to indexes ic and ic-1. Similarly, the difference between the angles corresponding to indexes ib and ib+1 is smaller than the difference between the angles corresponding to indexes ic and ic+1, and the difference between the angles corresponding to indexes ib and ib-1 is smaller than the difference between the angles corresponding to indexes ic and ic-1.
The application probability of a dividing line refers to the likelihood that the target-direction dividing line is used in an image block; for videos of certain specific scenes, the application probability of the target-direction dividing line is greater than that of the reference-direction dividing line. Illustratively, if the object boundaries contained in a video are mostly approximately horizontal or approximately vertical, the probability that dividing lines close to the horizontal/vertical directions are used is greater than the probability that dividing lines in other directions (for example, the +/-45-degree directions) are used. In one implementation, the angle between the dividing line with the lowest application probability and its adjacent dividing line is larger than the angle between the dividing line with the highest application probability and its adjacent dividing line.
Alternatively, the application probability of the target-direction dividing line may be determined as follows: count in advance the number of times each dividing line is used across a preset number of encoded images, and determine the application probability of each dividing line based on these counts.
In this way, denser dividing lines can be arranged within angular intervals where the application probability of dividing lines is high. Alternatively, the target-direction and reference-direction dividing lines may be separated using an application probability threshold: the target-direction dividing lines are those in all directions whose application probability is greater than the threshold, and the reference-direction dividing lines are those whose application probability is less than the threshold. The target direction may comprise several (at least two) directions, for example horizontal and vertical dividing lines, while the reference direction may comprise 45-degree and 60-degree dividing lines. The angular intervals between all target-direction dividing lines, all reference-direction dividing lines and their adjacent dividing lines can be set according to the above rules, so that the high-probability target-direction dividing lines are arranged more densely and the low-probability reference-direction dividing lines more sparsely, improving the effect of the partition mode parameters in the first partition mode parameter set when used by subsequent image blocks during encoding, as sketched below.
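A sketch of this threshold-based split (the threshold value and the direction-to-probability layout are assumptions for illustration):

```python
def split_by_probability(app_prob, threshold=0.05):
    """Separate dividing-line directions into target directions (probability
    above the threshold, to be spaced densely) and reference directions
    (at or below it, spaced sparsely or dropped)."""
    target = [d for d, p in app_prob.items() if p > threshold]
    reference = [d for d, p in app_prob.items() if p <= threshold]
    return target, reference

# Example: horizontal/vertical lines dominate, 45/60-degree lines are rare
probs = {0: 0.30, 90: 0.40, 45: 0.02, 60: 0.01, 135: 0.27}
print(split_by_probability(probs))  # ([0, 90, 135], [45, 60])
```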
There may be target-direction dividing lines but no reference-direction dividing lines; illustratively, the dividing lines may be distributed within an angular range around the horizontal and/or vertical lines, with no dividing line in the +/-45-degree directions. This speeds up the adjustment of the existing dividing lines, and for videos of certain specific scenes, for example where the boundaries of most objects lie in the target direction, the target-direction dividing lines are used directly to fit the object boundaries in the image block while the reference-direction dividing lines are omitted, so encoding can proceed more efficiently.
S402, determining a prediction result of the target image block based on the target division mode parameter.
In one embodiment, the target partition mode parameter includes any one or more of the partition index, angle index and distance index corresponding to the target dividing line, where applying the target dividing line to the target image block yields the minimum rate-distortion cost. Each index included in the target partition mode parameter is a mode parameter used when the target dividing line is applied to the target image block. Optionally, the target partition mode parameter includes an angle index and a distance index, and the process of predicting the target image block based on the target partition mode parameter may roughly be: determine a prediction result within the image areas of the target image block divided by the dividing line using the angle index and the distance index, and then take that as the prediction result of the target image block. Optionally, the weight of each pixel in the target image block may be determined from the angle index and the distance index, and a weighted sum of the weights and the corresponding pixel values yields weighted prediction pixel values, which may serve as the prediction result of the target image block. Each pixel in the prediction block of the target image block adopts this prediction result. The detailed implementation of step S402 may refer to the corresponding description of the third embodiment and is not detailed here.
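A simplified sketch of this per-pixel weighted blending (the floating-point weight ramp below is an illustrative assumption; an actual codec derivation would use integer arithmetic and precomputed tables):

```python
import math

def gpm_blend(pred0, pred1, phi_deg, rho, width, height, ramp=2.0):
    """Blend two predictor blocks along the dividing line
    x*cos(phi) + y*sin(phi) = rho (coordinates centred on the block);
    each pixel's weight follows its signed distance to the line, ramping
    linearly from 0 to 1 over +/- `ramp` pixels."""
    phi = math.radians(phi_deg)
    out = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # signed distance of the pixel to the dividing line
            d = ((x - width / 2) * math.cos(phi)
                 + (y - height / 2) * math.sin(phi) - rho)
            w0 = min(max(0.5 + d / (2 * ramp), 0.0), 1.0)  # weight of partition 0
            out[y][x] = w0 * pred0[y][x] + (1 - w0) * pred1[y][x]
    return out

# 4x4 example: blend a flat block of 100s with a flat block of 50s
p0 = [[100] * 4 for _ in range(4)]
p1 = [[50] * 4 for _ in range(4)]
blended = gpm_blend(p0, p1, phi_deg=90, rho=0, width=4, height=4)
```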
It should be noted that, at the encoding end, the partition mode parameter corresponding to the first partition mode may be binarized, packed into the bitstream and transmitted to the decoding end; for example, the partition index may be packed into the bitstream. After receiving the bitstream, the decoding end parses it to obtain the partition mode parameter related to the first partition mode. Optionally, the angle index and distance index corresponding to the partition index can be determined by looking them up in the parameter mapping table, and the prediction results of the target image block in the different image areas divided by the dividing line are then determined from the angle index and distance index. The parameter mapping table needs to be set in the decoder in advance. In the first partition mode parameter set obtained in the above manner, since the dividing lines in the partition modes conform more closely to the object boundaries of the image block, the partition mode parameters used by an image block match it better, and a more accurate prediction result can be obtained at both the encoding and decoding sides.
It should be noted that step numbers such as S401 and S402 are used herein to describe the corresponding contents more clearly and briefly and do not substantially limit the order; in specific implementations, those skilled in the art may perform S402 before S401, which still falls within the scope of protection of the present application. The same applies to the embodiments below.
According to the image processing scheme provided by the embodiments of the present application, a target partition mode parameter matching the target image block is determined from a first partition mode parameter set containing the partition mode parameters corresponding to at least one dividing line, and the prediction result of the target image block can be obtained based on it. Since the partition mode parameters in the first partition mode parameter set are determined based on the usage of the dividing lines by encoded images, they provide a good reference for the target image block's use of dividing lines. In addition, by adjusting the angular interval between the target direction and its adjacent dividing lines according to the above rules based on that usage, the dividing lines around the target direction can be set more densely, improving how closely the dividing line used by the target image block fits the boundaries of the objects it contains and reducing the error of the prediction result.
Second embodiment
Referring to fig. 6a, fig. 6a is a schematic flowchart of an image processing method according to the second embodiment. The executing entity in this embodiment may be a computer device or a cluster of computer devices, where the computer device may be an intelligent terminal (such as the aforementioned mobile terminal 100) or a server; here, the executing entity is taken to be an intelligent terminal as an example.
The partition mode parameters included in the first partition mode parameter set may be determined based on the usage of each dividing line by the encoded image, and the first partition mode parameter set may be a parameter mapping table representing the mapping relationship among the partition index, the angle index and the distance index. The adjustment of the first partition mode parameter set is described in detail below; see steps S601 to S603.
S601, obtaining the usage of the dividing lines by the encoded image.
The encoded image is an image encoded before the image in which the target image block is located, i.e., an encoded frame; it may have been processed with any of multiple prediction modes. The usage of the first partition mode in the encoded image specifically refers to the usage of the dividing line corresponding to each partition mode of the first partition mode, for example the number of times the dividing line corresponding to a given partition mode is used in the encoded image. In this way, the usage of all dividing lines corresponding to the first partition mode in the encoded image can be obtained, and the partition mode parameters in the first partition mode parameter set can be determined from it (see steps S602 and S603). Because there may be a correlation between the encoded image and the image being encoded, for example an object in the encoded image may reappear in the current image, the usage of the dividing lines by the encoded image is a useful reference for encoding subsequent images; adjusting the partition mode parameters in real time according to that usage allows the dividing lines to adaptively match the object boundaries in the image and improves how closely they fit.
Note that the dividing lines used by the encoded image are the dividing lines corresponding to the partition mode parameters in a first partition mode parameter set, where the first partition mode parameter set is a parameter mapping table representing the mapping relationship among the partition index, the angle index and the distance index. In one embodiment, the parameter mapping table is preset and never changed, or it is adjusted once after a preset number of images have been encoded, or it is adjusted after an update flag is received. In another embodiment, the parameter mapping table is the latest parameter mapping table: once the number of encoded images reaches a predetermined count, for example after each frame is encoded, the table is adjusted from the current table to the latest one, which is a dynamic adjustment.
S602, determining the application probability of the dividing lines based on the usage.
The application probability of a dividing line refers to the likelihood of its being used during image encoding, i.e., the probability that the dividing line appears in the encoded image. In one embodiment, the application probability may be determined as follows: in response to the number of encoded images reaching a preset threshold, acquire the number of times the first partition mode has been used; then determine the application probability of each dividing line based on the usage recording parameters and that number of uses.
During the encoding of the encoded images, an image block may be predicted using the first partition mode or another prediction mode. Counting the number of times the first partition mode is used across a preset number of encoded images provides more data support from the encoded images, and hence an application probability of higher reliability.
The number of uses of the first partition mode refers to the total number of times the first partition mode is used by the encoded images. The first partition mode includes at least one partition mode, and each partition mode has a usage recording parameter recording the total number of times its dividing line is used by the encoded images. The number of uses of the first partition mode can be determined from the usage recording parameters, namely by summing the usage recording parameters of the individual partition modes. The ratio of each dividing line's usage recording parameter to the total number of uses can then serve as that dividing line's application probability.
In practical applications, a history parameter recording how often the corresponding dividing line is used, i.e., the usage recording parameter of the partition mode, may be kept separately at the encoding end for each index (for example, each partition index) in the first partition mode parameter set. Starting from the first input frame F1, whenever an image block in a frame adopts the first partition mode, the corresponding history parameter is incremented by 1, so that when the number of encoded images reaches the preset threshold, the number of times each partition mode has been used is available, and the total number of times the first partition mode has been used can be obtained at the same time. Illustratively, table 1 below shows the usage recording parameters obtained by applying the geometric partition mode to image blocks, and table 2 shows the application probabilities of the individual dividing lines in the geometric partition mode.
Table 1 Usage recording parameter statistics

i                           0    1    2    …    n
gpm_partition_idx_hist[i]   b0   b1   b2   …    bn
Table 2 Application probability statistics

i                           0    1    2    …    n
gpm_partition_idx_pro[i]    c0   c1   c2   …    cn
The partition modes adopted by the image blocks of multiple frames can be recorded continuously; after a preset number of frames have been recorded, the history parameters of all indexes are summed to obtain the total historical count (i.e., the number of uses of the first partition mode), and the history count of each index is then divided by this total to obtain the probability of each index, i.e., the probability of each dividing line being applied, per the following expression.
ci = bi / (b0 + b1 + … + bn), 0 ≤ i ≤ n
Where bi represents the usage record parameter of the ith dividing line, and ci represents the application probability of the ith dividing line.
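A minimal sketch of this computation, mirroring tables 1 and 2 (the array name follows the tables; the concrete counts are made up):

```python
def application_probabilities(hist):
    """Turn the per-index usage counts b_i (Table 1) into application
    probabilities c_i = b_i / sum(b) (Table 2)."""
    total = sum(hist)  # total number of times the first partition mode was used
    if total == 0:
        return [0.0] * len(hist)
    return [b / total for b in hist]

# Example: dividing line 1 was used most often among four lines
gpm_partition_idx_hist = [2, 10, 5, 3]
print(application_probabilities(gpm_partition_idx_hist))  # [0.1, 0.5, 0.25, 0.15]
```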
S603, updating the parameter mapping table based on the application probability.
Optionally, the parameter mapping table is a Look-Up Table (LUT) representing the mapping relationship among the partition index, the angle index and the distance index; the indexes corresponding to any one index can be found in the table. For example, once the partition index is determined, the angle index and distance index corresponding to it can be looked up in the parameter mapping table.
For the parameter mapping table, see table 3 below for an example: a lookup table representing the mapping relationship between the GPM partition index merge_gpm_partition_idx, the angle index angleIdx and the distance index distanceIdx in the geometric partition mode.
Table 3 parameter mapping table
merge_gpm_partition_idx   0    1    2    3    4    …   59    60    61    62    63
angleIdx                  a0   a1   a2   a3   a4   …   a59   a60   a61   a62   a63
distanceIdx               ρ0   ρ1   ρ2   ρ3   ρ0   …   ρ1    ρ2    ρ3    ρ0    ρ1
As table 3 shows, the distance index values repeat cyclically, while the partition index and angle index entries differ. In one embodiment, the angle index angleIdx corresponds to the sine or cosine value of the angle, and the distance index distanceIdx corresponds to ρj (j = 0, 1, 2, 3).
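A sketch of such a lookup in the shape of table 3 (the entries below are placeholders, not the normative values):

```python
# Illustrative parameter mapping table: each GPM partition index maps to an
# (angleIdx, distanceIdx) pair, as in Table 3.
PARAM_MAPPING_TABLE = {
    0: (0, 1),
    1: (0, 3),
    2: (2, 0),
    3: (2, 1),
    # ... up to partition index 63 in a full table
}

def lookup(merge_gpm_partition_idx):
    """Return (angleIdx, distanceIdx) for a decoded partition index."""
    return PARAM_MAPPING_TABLE[merge_gpm_partition_idx]

print(lookup(2))  # (2, 0)
```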
By counting the application probability of each dividing line in the encoded image, the dividing line with the highest application probability, or a region where dividing lines have high application probability, can be determined, and a new dividing line can then be set by interpolation. Optionally, the new angle index and new distance index corresponding to the newly set dividing line are determined from it, and the mapping relationship between the partition index and the angle and distance indexes is adjusted accordingly, thereby constructing a dynamically adjusted parameter mapping table.
For an image block, multiple dividing lines φ0, φ1, φ2, φ3 … φi, φj … φn may exist. By matching these dividing lines against the object boundaries in the image block, the optimal dividing line for the block can be determined by rate-distortion optimization (RDO), and its partition mode parameter is the target partition mode parameter.
For example, please refer to fig. 6b, a schematic diagram of the effect of applying different dividing lines to image blocks according to an embodiment of the present application. Part (1) of fig. 6b shows the effect of applying two kinds of dividing lines (including φi and φj; others are not shown) to different image blocks, where the dotted lines represent dividing lines rejected for the corresponding image block by the RDO process and the solid lines represent the dividing lines finally adopted. As shown in part (1) of fig. 6b, φi is adopted 2 times and φj 3 times. If φi and φj are dividing lines that appear with high frequency over a period of time, the probability that the object boundaries in the image blocks (such as the curves in fig. 6b) select angles within [φi, φj] in the first partition mode is higher. To bring the dividing lines of the first partition mode closer to the object boundaries, the angles within [φi, φj] can be further subdivided, and adding more dividing lines makes them fit the object boundaries better. For example, after adding a new dividing line φnew, more candidate dividing lines are available for the high-probability object edges in the image. Evidently, after adding φnew, the fit is better for the image blocks in the corresponding 2nd, 3rd and 4th columns of parts (1) and (2) of fig. 6b.
In one embodiment, since the most critical part of updating the partition mode parameters corresponding to the dividing lines is updating the angles, introducing an angle mapping table containing angle parameters that describe the angle information allows the parameter mapping table to be updated quickly and accurately. On this basis, the implementation of step S603 includes the following steps S6031 and S6032.
S6031, updating the angle mapping table based on the application probability of each dividing line to obtain an updated angle mapping table; and S6032, updating the angle index and/or the distance index in the parameter mapping table based on the updated angle mapping table.
Optionally, the angle mapping table is configured to represent a mapping relationship between a query index and an angle parameter, where the query index includes the partition index or the angle index, and an association exists between the angle index and the angle parameter.
The angle mapping table is a lookup table representing the mapping relationship between a query index and an angle parameter. The query index is used to associate the angle mapping table with the parameter mapping table; because both the partition index and the angle index in the parameter mapping table are unique, the query index of the angle mapping table can be set to either. The angle parameter in the angle mapping table may describe the angle information of a dividing line in an image block, so the angle parameter and the angle index are associated: the angle index in the parameter mapping table may be set to the angle parameter or to a function of it. Denoting the query index as i and the angle parameter as cos(φ), the angle mapping table can be as shown in table 4 below.
Table 4 Angle mapping table

i        0    1    2    3    4    …    n
cos(φ)   a0   a1   a2   a3   a4   …    an
The table includes the angle parameters corresponding to n+1 partition modes, with the angle parameters cos(φ) corresponding to the angle indexes angleIdx.
Owing to the association between the angle mapping table and the parameter mapping table, updating the angle mapping table based on the application probabilities of the dividing lines mainly means updating its angle parameters; and given the association between the angle parameters and the angle indexes in the parameter mapping table, the parameter mapping table, including its angle indexes and distance indexes, can be updated based on the updated angle mapping table.
Optionally, an implementation manner of step S6031 includes: and updating the angle mapping table based on the angle parameter corresponding to the first query index and/or the angle parameter corresponding to the second query index to obtain an updated angle mapping table.
The first query index and/or the second query index may be determined based on the application probabilities of the dividing lines: the first query index is the query index corresponding to the dividing line with the highest application probability; the second query index is the query index corresponding to the dividing line with the second highest application probability, or an adjacent query index of the first query index.
The first query index and the second query index are two different query indexes in the angle mapping table. Assuming the query indexes of the angle mapping table are partition indexes, an application probability can be associated with a dividing line through its partition index, so the first and/or second query index can be determined from the angle mapping table based on the application probabilities; for example, the query index with the highest application probability can serve as the first query index and the one with the second highest as the second query index. Then, according to a preset update rule, the update is performed based on the angle parameters corresponding to the first and/or second query index, for example by inserting a new angle parameter between the angle parameters of the two query indexes, or by adjusting the size of each query index's angle parameter, and so on. See the two embodiments described below for details.
In one embodiment, a target angle parameter may be determined based on the angle parameter corresponding to the first query index and the angle parameter corresponding to the second query index; and updating the angle mapping table based on the target angle parameter to obtain an updated angle mapping table.
The target angle parameter is a new angle parameter whose value may lie between the angle parameter corresponding to the first query index and that corresponding to the second query index; the angle mapping table may be updated according to the target angle parameter to obtain the updated angle mapping table. This embodiment includes modes 1 and 2 below.
Mode 1: the first query index is the query index with the highest application probability, and the second query index is an adjacent query index of the first query index, i.e., the query index to its left and/or to its right. The target angle parameter is a new angle parameter set between the angle parameters corresponding to the first and second query indexes, and can be understood as an angle parameter newly added to the angle mapping table; it indicates an angle lying between the angles indicated by the angle parameters of the two query indexes. Thus, the target angle parameter includes an angle parameter adjacent on the left and/or on the right to the angle parameter corresponding to the first query index in the angle mapping table. Optionally, to add the target angle parameter to the angle mapping table, a corresponding angle parameter needs to be deleted from it; for example, if the target angle parameter contributes one new angle parameter, the angle parameter with the lowest application probability is deleted accordingly.
For convenience of description, the first query index is hereinafter referred to as index M, and the second query index includes the two adjacent indexes, referred to as index MN1 and index MN2. A new value cos(φnew), i.e., the target angle parameter, may be inserted between the angle parameters cos(φM) and cos(φMN1) corresponding to index M and index MN1, where φnew lies between the angles corresponding to indexes M and MN1 respectively. For example, averaging cos(φM) and cos(φMN1) yields the target angle parameter cos(φ'), so that the angle corresponding to φ' lies between the angles corresponding to indexes M and MN1, i.e., between the angles of the dividing lines corresponding to those partition indexes. Similarly, a new value cos(φ'new) may be inserted between the angle parameters cos(φM) and cos(φMN2) corresponding to index M and index MN2, where φ'new lies between the angles corresponding to indexes M and MN2 respectively.
Assuming the index corresponding to φ' is i', the values of i' and cos(φ') may be inserted into the angle mapping table shown in table 4, and the index with the lowest application probability may be deleted from table 4 to obtain the updated angle mapping table; note that inserting i' leaves the number of query indexes unchanged. Illustratively, taking the angle mapping table shown in table 4 as an example, if the cos(φ) value a2 corresponding to index 2 has the highest application probability, then interpolate between the angle parameters a1 and a2 to obtain anew1, and/or interpolate between a2 and a3 to obtain anew2, and insert anew1 and/or anew2 into the angle mapping table.
Optionally, the angle parameters to be deleted may be screened out based on the application probabilities; if both new angle parameters are to be inserted into the angle mapping table, two angle parameters are screened out for deletion. For example, if the dividing line corresponding to query index n has the lowest application probability, the cos(φ) value corresponding to index n, i.e., an, is deleted from the table; if the dividing line corresponding to query index n-1 has the second lowest application probability, an-1 is deleted from the angle mapping table as well. Optionally, the angle mapping table is reordered to obtain table 5 below.
Table 5 updated angle mapping table 1
i        0    1       2    3       4    …    n
cos(φ)   a0   anew1   a2   anew2   a4   …    an
Optionally, the angle index angleIdx in the parameter mapping table may be updated based on the updated angle mapping table shown in table 5.
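A sketch of this mode-1 update, assuming the table stores cos(φ) values in a consistent descending order and must keep a fixed size (the interpolate-and-drop policy follows the description above; details such as tie-breaking are assumptions):

```python
def update_angle_table_mode1(angle_params, probs):
    """Insert interpolated cos(phi) values on both sides of the most-used
    entry and delete the two least-used entries so the table size is kept."""
    m = max(range(len(probs)), key=probs.__getitem__)       # highest probability
    left, right = max(m - 1, 0), min(m + 1, len(angle_params) - 1)
    a_new1 = (angle_params[left] + angle_params[m]) / 2.0   # left-side insert
    a_new2 = (angle_params[m] + angle_params[right]) / 2.0  # right-side insert
    drop = sorted(range(len(probs)), key=probs.__getitem__)[:2]  # two least used
    kept = [a for i, a in enumerate(angle_params) if i not in drop]
    return sorted(set(kept + [a_new1, a_new2]), reverse=True)

# Example: entry 2 is used most, entries 4 and 5 least
table = [1.0, 0.8, 0.6, 0.4, 0.2, 0.0]
probs = [0.15, 0.20, 0.40, 0.15, 0.06, 0.04]
print(update_angle_table_mode1(table, probs))
# [1.0, 0.8, 0.7, 0.6, 0.5, 0.4]
```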
Mode 2: the first query index and the second query index are the query indexes with the highest and second highest application probabilities respectively, and dividing lines can be interpolated or adjusted within the angular region formed by the angles indicated by their angle parameters, thereby updating the parameter mapping table.
For convenience of description, the first query index is denoted M1 and the second query index M2. If the first query index M1 and the second query index M2 indicate angles φ1 and φ2, and the probability of applying a dividing line within the corresponding angular interval [φ1, φ2] is considered higher, a new dividing line is inserted between the first and second query indexes; the specific insertion method may be as described in mode 1 and is not repeated here. The number of inserted dividing lines may be 1 or more.
Illustratively, if the probability of applying a dividing line within the angular interval [φ1, φ2] is high, and this interval corresponds to the range of cos(φ) between a2 and a3, then anew is inserted between a2 and a3. Alternatively, if the dividing line corresponding to an has a lower application probability, an is deleted from the angle mapping table. Finally, the angle parameters of the remaining dividing lines are rearranged and associated with the query indexes i. Table 6 below shows the updated angle mapping table obtained by updating the table shown in table 4 according to this example.
Table 6 updated angle mapping table 2
i        0    1    2    3      4    …    n
cos(φ)   a0   a1   a2   anew   a3   …    an-1
It should be noted that at least one dividing line may be inserted between the query index with the highest probability and the one with the second highest probability; the corresponding angular region can accommodate at least one inserted dividing line with angle φ, forming at least one angle parameter. For example, if a2 and a5 are the angle parameters corresponding to the first and second query indexes respectively, angle parameters that do not repeat a3 or a4 can be inserted between a2 and a5 and associated with dividing lines. In this way, the dividing lines are distributed more densely around the dividing lines with higher application probability, the angles within the region containing the boundaries that dividing lines are most likely to fit are divided more finely, and the dividing lines fit the object boundaries in the image block as closely as possible, as in the sketch below.
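A corresponding sketch of the mode-2 update (the midpoint interpolation and drop-the-least-used policy follow the description; treating the table as already ordered is an assumption):

```python
def update_angle_table_mode2(angle_params, probs):
    """Interpolate a new cos(phi) value inside the interval spanned by the
    two most-used entries and drop the least-used entry to keep the size."""
    order = sorted(range(len(probs)), key=probs.__getitem__, reverse=True)
    m1, m2 = order[0], order[1]                 # highest, second highest
    a_new = (angle_params[m1] + angle_params[m2]) / 2.0
    kept = [a for i, a in enumerate(angle_params) if i != order[-1]]
    return sorted(set(kept + [a_new]), reverse=True)

# Example: entries 2 and 3 are the two most-used, entry 5 the least
table = [1.0, 0.8, 0.6, 0.4, 0.2, 0.0]
probs = [0.15, 0.15, 0.30, 0.25, 0.10, 0.05]
print(update_angle_table_mode2(table, probs))
# [1.0, 0.8, 0.6, 0.5, 0.4, 0.2]
```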
In another embodiment, where the second query index is an adjacent query index of the first query index, the update may be: adjust the angle parameter corresponding to the second query index in the angle mapping table according to a preset adjustment rule to obtain the updated angle mapping table.
In this embodiment, the first query index may be the partition index corresponding to the dividing line with the highest application probability, and the second query index may be the query index to its left and/or right. The goal of the preset adjustment rule is to make the dividing lines around the most-used dividing line as dense as possible: the angle parameter corresponding to the second query index is adjusted so that the difference between the angle parameters of the first and second query indexes is further reduced, yielding the updated angle mapping table.
Illustratively, still taking the angle mapping table shown in table 4 as an example, if the first query index is query index 2 and the second query indexes are query indexes 1 and 3, the cos(φ) values corresponding to query indexes 1 and 3 may be adjusted to a'1 and a'3, so that the differences between them and the cos(φ) value corresponding to query index 2 decrease, as shown in table 7 below.
Table 7 updated angle mapping table 3
i        0    1     2    3     4    …    n
cos(φ)   a0   a'1   a2   a'3   a4   …    an
The corresponding angle index angleIdx in the parameter mapping table can then be updated according to the updated angle mapping table shown in table 7.
It should be noted that, in this approach, the original angle parameters are adjusted without adding new angle parameters or deleting original ones from the angle mapping table, while still achieving a denser arrangement of the dividing lines.
Optionally, the manner of updating the angle mapping table according to the preset adjustment rule may be: in response to the angle parameter corresponding to the adjacent query index being greater than the angle parameter corresponding to the first query index, reducing the angle parameter corresponding to the adjacent query index to obtain an updated angle mapping table; and/or, in response to the angle parameter corresponding to the adjacent query index being smaller than the angle parameter corresponding to the first query index, increasing the angle parameter corresponding to the adjacent query index to obtain an updated angle mapping table.
As described above, the adjustment aims to reduce the difference between the first query index and its adjacent query index as much as possible. Therefore, when the angle parameter of the adjacent query index is greater than that of the first query index, it is adjusted towards the first query index's angle parameter, i.e., reduced; when it is smaller, it is likewise adjusted towards the first query index's angle parameter, i.e., increased. The updated angle mapping table is obtained by applying one or both of these adjustment rules. Note that, to preserve the ordering of the angle mapping table, the adjusted angle parameter of the adjacent query index remains greater (or smaller) than the angle parameter of the first query index, but the difference between the two is reduced compared with before the adjustment.
Optionally, there may be a preset limit on the adjustment of the adjacent index's angle parameter, namely: the difference between the adjusted angle parameter of the adjacent query index and the angle parameter of the first query index lies within a preset range; and/or the preset range is determined based on the application probability of the dividing line corresponding to the first query index.
The preset range controls the adjustment limit of the adjacent query index's angle parameter; keeping the difference between the adjusted angle parameter and the first query index's angle parameter within the preset range ensures the adjustment stays within a reasonable bound. Optionally, the preset range may be determined from the application probability of the dividing line corresponding to the first query index, for example via a mapping function between the application probability and the preset range in which a larger application probability yields a smaller preset range. In that case, the smaller the difference between the adjusted angle parameter of the adjacent query index and the angle parameter of the first query index, the smaller the angle difference between the corresponding dividing lines. The adjustment limit is thus tied to the application probability of the first query index's dividing line: the greater that probability, the more likely the target image block is to use the line, and the finer the division of the area around it achieved by adjusting the adjacent angle parameters; if an adjacent dividing line is then applied to the target image block, it can fit the object boundary more finely, improving, to a certain degree, the conformity with the object boundary.
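A sketch of this clamped adjustment (the linear mapping from application probability to preset range is an illustrative assumption):

```python
def adjust_neighbour(a_first, a_neigh, prob_first, base_range=0.2):
    """Move the neighbour's cos(phi) value towards the most-used line's
    value without crossing it; the residual gap is kept within a preset
    range that shrinks as the application probability grows."""
    preset_range = base_range * (1.0 - prob_first)  # higher prob -> tighter range
    gap = a_neigh - a_first
    if abs(gap) > preset_range:
        # pull the neighbour in while keeping it on its own side of a_first
        a_neigh = a_first + preset_range * (1 if gap > 0 else -1)
    return a_neigh

# Example: a neighbour above the most-used value gets pulled closer
print(adjust_neighbour(a_first=0.50, a_neigh=0.80, prob_first=0.6))  # ~0.58
```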
It should be noted that the parameter mapping table may be updated in real time before the current block is encoded; after the target image containing the target image block has been encoded, the latest parameter mapping table may also be updated based on that image's usage of the dividing lines, for use by subsequent image blocks.
In one embodiment, after updating the parameter mapping table based on the application probability of the partition line, a bitstream including indication information of the updated parameter mapping table may be further transmitted to the decoding side.
That is, after the angle mapping table is updated based on the application probabilities of the dividing lines and the parameter mapping table is updated from the updated angle mapping table per the above embodiments, indication information for the updated parameter mapping table may be transmitted in the bitstream. The indication information instructs the decoding end to use the updated parameter mapping table when determining the prediction result of the target image block, so that when decoding the coded bitstream of the target image block, the decoding end uses the same parameter mapping table as the encoding end in the prediction stage and determines the corresponding prediction result.
It should be noted that, at the decoding end, the partition index related to the partition mode parameter may be obtained from the received bitstream; optionally, the angle index and distance index corresponding to the partition index can be determined by looking up the parameter mapping table, and the prediction results for the different image areas of the target image block can then be determined from the angle index and distance index. In the above embodiments, the mapping relationship between the partition index and the angle and distance indexes is dynamically adjusted throughout the encoding process, which can effectively improve adaptability to different scenes.
According to the image processing scheme provided by the embodiments of the present application, an angle mapping table representing the mapping relationship between the query index and the angle parameter is introduced, and the angle index corresponding to a dividing line is extracted from the partition mode parameters and adjusted separately; the parameter mapping table is updated through the update of the angle mapping table, so the partition mode parameters can be determined more efficiently. The application probability of each dividing line is determined by reference to the usage of the dividing lines by encoded images, which quantifies that usage; the query indexes whose angle parameters need adjusting or adding are then determined based on the application probabilities, providing a sounder basis for updating the angle mapping table. In addition, the parameter mapping table can be updated, via the angle mapping table, whenever the usage statistics over the preset number of encoded images become available, so the table is refreshed continuously during encoding; this dynamic adjustment yields different partition mode parameters at different stages and covers more varied scenes, improving the adaptability of the first partition mode to different scenes.
Third embodiment
Referring to fig. 7, fig. 7 is a schematic flowchart of an image processing method according to the third embodiment. The executing entity in this embodiment may be a computer device or a cluster of computer devices, where the computer device may be an intelligent terminal (such as the aforementioned mobile terminal 100) or a server; here, the executing entity is taken to be an intelligent terminal as an example.
S701, determining the target partition mode parameter of the target image block based on the first partition mode parameter set.
In one embodiment, since the target image block may adopt any of the inter prediction and intra prediction modes, the image processing scheme provided by the embodiments of the present application may have a precondition: based on the parameters of the target image block, predictively encode it using at least one prediction mode and determine the rate-distortion cost corresponding to each prediction mode; when the prediction mode with the smallest rate-distortion cost is the first partition mode, step S701 is performed.
First, the encoder may determine the colour component of the target image block, which includes a luminance component and/or a chrominance component. Then, based on the parameters of the target image block (the parameters used in predictive encoding), the colour component is predictively encoded using at least one prediction mode, and the rate-distortion cost of each prediction mode is determined; the at least one prediction mode includes intra and/or inter prediction modes, and if multiple (at least two) prediction modes are used, the colour component is predictively encoded with each and the corresponding rate-distortion costs are computed. The minimum rate-distortion cost is then found among them, the prediction mode achieving it is taken as the target prediction mode of the target image block, and the corresponding mode parameter is taken as the prediction mode parameter of the target image block.
When the prediction mode with the minimum rate-distortion cost is a first partition mode, for example the geometric partition mode (GPM), the first partition mode may serve as the target prediction mode of the target image block, and the target partition mode parameter of the target image block is determined based on the first partition mode parameter set.
It should be noted that the scheme provided by the embodiments of the present application may be executed when the target image block is predictively encoded using the first partition mode; when the prediction mode with the smallest rate-distortion cost is determined to be the first partition mode, the partition mode parameter corresponding to the first partition mode may be taken directly as the prediction mode parameter of the target image block, and the prediction result obtained by the embodiments of the present application may be used directly.
The target partition mode parameter of the target image block is a partition mode parameter in the first partition mode parameter set that satisfies a certain condition; the first partition mode includes at least one partition mode, and in one embodiment the implementation of step S701 includes the following steps:
S7011: determining the rate-distortion cost of applying each partition mode included in the first partition mode to the target image block;
S7012: determining the partition mode with the minimum rate-distortion cost as the target partition mode of the target image block;
S7013: determining the mode parameter corresponding to the target partition mode from the first partition mode parameter set, and taking it as the target partition mode parameter of the target image block.
That is, all partition modes included in the first partition mode may be traversed to determine the rate-distortion cost the target image block incurs under each. To achieve optimal encoding performance, i.e., to reduce video distortion as much as possible at a given bit rate or to compress the video as much as possible within an allowed distortion, the rate-distortion costs of the partition modes are compared, the partition mode with the minimum cost is identified, and it is taken as the target partition mode of the target image block, achieving optimal encoding of the block. Optionally, the mode parameter corresponding to the target partition mode is looked up in the first partition mode parameter set and used as the target partition mode parameter of the target image block; the block is then divided and predicted according to this parameter to obtain its prediction result.
For ease of understanding, the following describes the above process by taking the first partition mode as the GPM mode, the partition mode as any one of the 64 partition modes corresponding to the GPM mode, and the first partition mode parameter set as the GPM parameter mapping table as an example: by traversing the 64 partition modes corresponding to the GPM mode, the partition mode with the minimum rate-distortion cost can be determined and used as the target partition mode of the current block. Optionally, according to the target partition mode, the target GPM partition index merge_gpm_partition_idxT, the target angle index angleIdxT, and the target distance index distanceIdxT corresponding to the target partition mode are determined through the mapping table of the GPM partition index merge_gpm_partition_idx, the angle index angleIdx, and the distance index distanceIdx. For the determination of the parameter mapping table of the GPM partition index, the angle index, and the distance index, reference may be made to the description of the foregoing embodiments, which will not be detailed herein.
It should be noted that when the angle index angleIdx takes different values, it corresponds to different angles φ (as shown in fig. 3a), and when the distance index distanceIdx takes different values, it corresponds to different distances ρ (as shown in fig. 3b). In one embodiment, a lookup table may be used to represent the relationship between the GPM partition index merge_gpm_partition_idx and the angle index angleIdx and the distance index distanceIdx, such as the parameter mapping table introduced above (as shown in Table 3).
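For illustration only, the following C++ sketch shows how the traversal and table lookup described above might be organized. Block, computeRdCost(), and the two g_*Table arrays are hypothetical names standing in for the encoder's block type, its rate-distortion evaluation, and the parameter mapping table; they are assumptions of this sketch, not part of the scheme itself.

```cpp
#include <array>
#include <cstdint>
#include <limits>

struct Block;  // the image block type is assumed to be defined elsewhere

// Illustrative tables only: the real 64-entry mapping of
// merge_gpm_partition_idx to (angleIdx, distanceIdx) is the parameter
// mapping table of the foregoing embodiments (Table 3).
extern const std::array<uint8_t, 64> g_angleIdxTable;
extern const std::array<uint8_t, 64> g_distanceIdxTable;

// Hypothetical helper: predicts the block with one GPM partition mode
// and returns its rate-distortion cost J = D + lambda * R.
double computeRdCost(const Block& blk, int partitionIdx);

struct GpmModeParams {
    int partitionIdx;  // merge_gpm_partition_idxT
    int angleIdx;      // angleIdxT
    int distanceIdx;   // distanceIdxT
};

GpmModeParams selectGpmPartitionMode(const Block& blk) {
    double bestCost = std::numeric_limits<double>::max();
    int bestIdx = 0;
    for (int idx = 0; idx < 64; ++idx) {  // traverse the 64 GPM modes
        const double cost = computeRdCost(blk, idx);
        if (cost < bestCost) {            // keep the minimum-cost mode
            bestCost = cost;
            bestIdx = idx;
        }
    }
    // Look up the target angle/distance indices in the mapping table.
    return { bestIdx, g_angleIdxTable[bestIdx], g_distanceIdxTable[bestIdx] };
}
```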
In one embodiment, the target image block includes a first partition and a second partition, where the first partition and the second partition are two image areas obtained by dividing the target image block with a dividing line, the dividing line being the one indicated by the target partition mode parameter of the target image block. The first partition and the second partition are relative concepts: for example, if the target image block is divided by a horizontal dividing line, the image area above the line may be referred to as the first partition and the image area below it as the second partition, or, conversely, the image area below the line may be referred to as the first partition and the image area above it as the second partition. The horizontal dividing line coincides, entirely or partially, with the boundary region in the target image block. The implementation of step S402 in the first embodiment may be as the following steps S702 to S704.
S702, determining partition weight based on the target partition mode parameter.
Optionally, the partition weight is determined based on the angle index and the distance index included in the target partition mode parameter. The partition weight refers to the weights corresponding to the two partitions included in the image block; each pixel point in the corresponding partition is weighted with the partition weight to obtain a weighted prediction pixel value, denoted predSamples, and the weighted prediction pixel values of the corresponding partition can be used as its prediction result.
The linear equation of the dividing line can be obtained according to the angle index angleIdx and the distance index distanceIdx: optionally, cos(φ) and sin(φ) in the linear equation can be determined according to the angle index angleIdx, and ρ in the linear equation can be determined according to the distance index distanceIdx. The expression of the straight-line equation is as follows:

x·cos(φ) + y·sin(φ) − ρ = 0

where (x, y) are the coordinates of a pixel point, at a continuous position, relative to the central position of the target image block.
As shown in fig. 8a, fig. 8a is a schematic diagram of the distance analysis between a pixel point and the dividing line according to an embodiment of the present disclosure. If a pixel point (x0, y0) is a pixel in the current block, then based on the above linear equation, the distance from the pixel point (x0, y0) to the dividing line is:

d(x0, y0) = x0·cos(φ) + y0·sin(φ) − ρ
when ρ is 0, the dividing line is as shown in FIG. 3 a.
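As a minimal C++ sketch of this computation, assuming cosPhi, sinPhi, and rho have already been derived from angleIdx and distanceIdx, and that pixel coordinates are given relative to the block center:

```cpp
// Signed distance from a pixel (x0, y0), expressed relative to the
// block center, to the dividing line x*cos(phi) + y*sin(phi) - rho = 0.
// The sign indicates the side of the line (and hence the partition);
// the magnitude is the distance used for the weight decision below.
double distanceToDividingLine(double x0, double y0,
                              double cosPhi, double sinPhi, double rho) {
    return x0 * cosPhi + y0 * sinPhi - rho;
}
```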
Therefore, based on the angle index and the distance index included in the target partition mode parameter, the distance between each pixel point in the target image block and the dividing line can be determined, and the weight corresponding to each pixel of the target image block can be determined according to this distance.
Optionally, different weights are set according to the distance of a pixel point (x0, y0) from the dividing line. For example, if the distance from the pixel point (x0, y0) to the dividing line is greater than or equal to a set distance threshold, the weight corresponding to the pixel point is set to K1; otherwise, the weight corresponding to the pixel point is set to K2. Apparently, the pixel points whose distance is smaller than the set distance threshold are all located near the dividing line, while the pixel points whose distance is larger than the set distance threshold are far away from it. Weights are set in this way for the pixel points corresponding to the first partition and the second partition in the target image block, yielding the partition weights. In addition, different fixed weights may be set for the pixel points on the two sides of the dividing line, that is, for the pixel points included in different partitions, so that the weight of each pixel is related not only to its distance from the dividing line but also to the partition in which the pixel is located.
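A C++ sketch of this threshold rule follows; K1, K2, and distThreshold are the illustrative quantities of this example rather than values fixed by the scheme:

```cpp
#include <cmath>

// Weight of a pixel for its own partition: pixels at or beyond the
// distance threshold get the fixed weight K1, pixels near the dividing
// line get K2; the other partition's prediction then receives 1 - w,
// so the two weights of a pixel always sum to 1.
double pixelWeight(double signedDist, double distThreshold,
                   double K1, double K2) {
    return (std::fabs(signedDist) >= distThreshold) ? K1 : K2;
}
```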
By setting the partition weights in this way, different degrees of attention can be given to the pixel points in the two image areas of the target image block bounded by the dividing line, and the weights of pixel points near the dividing line and of those away from it can also differ. In the partitions on either side of the dividing line, the closer a pixel point is to the dividing line, the larger its weight, so that the two partitions of the target image block can be better fused along the edge of the dividing line to obtain the prediction result.
S703, determining the prediction result of the first partition and the prediction result of the second partition based on at least one of the partition weight, the first prediction result set and the second prediction result set.
The first prediction result set and the second prediction result set are obtained by referring to motion vectors of different reference frames, the first prediction result set comprises a plurality of prediction results, and the prediction results can be prediction values obtained by performing unidirectional prediction on a target image block and performing motion compensation after the unidirectional prediction motion vectors are obtained. The specific process for determining the first set of predicted results and the second set of predicted results can be referred to the following description of embodiments, which will not be described in detail herein.
In one embodiment, the partition weight includes a first weight corresponding to the first partition and a second weight corresponding to the second partition, and the first weight or the second weight is determined based on a distance between a pixel point included in the target image block and a partition line. In detail, the first weight is determined based on a distance between a pixel point within the first partition in the target image block and the dividing line, and the second weight is determined based on a distance between a pixel point within the second partition in the target image block and the dividing line. The manner of determining the distance may refer to the above description, and is not described herein.
Optional implementation manners of step S703 include: and determining the prediction result of the pixel point based on at least one of the first weight, the second weight, a first prediction result of the pixel point included in the target image block in a first prediction result set and a second prediction result of the pixel point in a second prediction result set.
For the first partition, it is possible to: and determining the prediction result of the pixel points in the first partition included by the target image block based on at least one of the first weight, the first prediction result of the pixel points included by the target image block in the first prediction result set and the second prediction result of the pixel points in the second prediction result set.
The first weight may be a weight set corresponding to the first partition, which may be referred to as a first weight set, including {w11, w12}; these respectively represent the weight of the area of the first partition close to the dividing line and the weight of the area far from it, determined by the rule that a weight depends on the distance between a pixel point and the dividing line, with the close and far areas divided by a distance threshold. The first prediction result corresponding to the pixel points of the first partition in the first prediction result set and the second prediction result corresponding to those pixel points in the second prediction result set are weighted and summed using the first weight to obtain a fused prediction value, that is, the prediction result of the pixel points in the first partition. The specific expression is as follows:

P1(x, y) = w11·P11(x, y) + w12·P12(x, y)

where (x, y) represents a pixel point in the first partition, P11 is the first prediction result of the pixel point of the first partition in the first prediction result set, P12 is the second prediction result of the pixel point of the first partition in the second prediction result set, and w11 and w12 sum to 1. According to the weight setting rule above, w11 may be set to K1 with w12 correspondingly (1 − K1), or w11 may be set to K2 with w12 correspondingly (1 − K2).
For the second partition, it may be: and determining the prediction result of the pixel points in the second partition included by the target image block based on at least one of the second weight, the first prediction result of the pixel points included by the target image block in the first prediction result set and the second prediction result of the pixel points in the second prediction result set.
In the same way, the second weight is a weight set corresponding to the second partition, which may be referred to as a second weight set, including {w21, w22}; these respectively represent the weights of the areas of the second partition close to and far from the dividing line, the close and far areas being divided by a distance threshold that may be the same as or different from the one used for the first partition. Optionally, the pixel points in the second partition of the target image block have corresponding prediction results in both the first prediction result set and the second prediction result set, so the first prediction result corresponding to the pixel points of the second partition in the first prediction result set and the second prediction result corresponding to those pixel points in the second prediction result set may be weighted and summed using the second weight to obtain a fused prediction value, that is, the prediction result of the pixel points in the second partition. The specific expression is as follows:

P2(x, y) = w21·P21(x, y) + w22·P22(x, y)

where (x, y) represents a pixel point in the second partition, P21 represents the first prediction result, and P22 represents the second prediction result. According to the weight setting rule above, w21 may be set to K1 with w22 correspondingly (1 − K1); or w21 may be set to K2 with w22 correspondingly (1 − K2); or w21 may be set to a weight different from K1 or K2, as long as w21 and w22 sum to 1.
It should be noted that, for each pixel point in the first partition and the second partition, the corresponding prediction result may be determined according to the above manner, so as to obtain the prediction result of the first partition and the prediction result of the second partition, where the prediction result of the first partition includes the prediction results of all pixel points in the first partition, and the prediction result of the second partition includes the prediction results of all pixel points in the second partition.
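Putting the two expressions together, the following C++ sketch blends the two prediction result sets into predSamples pixel by pixel. P1, P2, and w are illustrative names: P1 and P2 stand for the first and second prediction result sets, and w holds each pixel's first weight, so the second weight is 1 − w.

```cpp
#include <cstddef>
#include <vector>

// Per-pixel blending: predSamples = w * P1 + (1 - w) * P2, so pixels
// of the first partition lean on the first prediction result set and
// pixels of the second partition on the second, with a soft transition
// along the dividing line.
std::vector<double> blendPartitions(const std::vector<double>& P1,
                                    const std::vector<double>& P2,
                                    const std::vector<double>& w) {
    std::vector<double> predSamples(P1.size());
    for (std::size_t i = 0; i < P1.size(); ++i) {
        predSamples[i] = w[i] * P1[i] + (1.0 - w[i]) * P2[i];
    }
    return predSamples;
}
```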
S704, determining the prediction result of the first partition and the prediction result of the second partition as the prediction results of the target image block.
Based on the above manner, the prediction results of the first partition and the second partition of the target image block are combined to obtain the prediction results of the pixel points of all the partitions, and the prediction results are the prediction results of the target image block. Or performing edge fusion on the prediction results of the two partitions to obtain the prediction result of the target image block. In this embodiment of the application, the target image block may obtain the prediction block by using any one of the first partition modes, for example, a geometric partition mode, and if the first partition mode is an inter-frame prediction mode, the prediction result of the target image block refers to an inter-frame prediction result, which may be an inter-frame prediction value.
It should be noted that the above scheme may be applied to an encoding end; when the scheme is applied to a decoding end, the target partition mode parameter of the target image block may be obtained by parsing the received bitstream. In one embodiment, the prediction mode parameters may be parsed from the received bitstream. If the prediction mode parameter indicates that a first partition mode (e.g., the geometric partition mode GPM) is used, the partition mode parameter of the current block is determined, for example, the GPM partition index merge_gpm_partition_idx among the GPM parameters. Optionally, the corresponding angle index angleIdx and distance index distanceIdx (also referred to as a step index) may be determined through the GPM partition index merge_gpm_partition_idx. In addition, other parameters may be included, for example, index information of the motion vector (MV) information of the first partition and index information of the MV information of the second partition. Optionally, the index of the partition mode may be signaled in a coding-unit-layer syntax element, e.g., in the merge data. Illustratively, the GPM partition index merge_gpm_partition_idx is signaled in the coding-unit-layer syntax element; specifically, it is signaled in the merge data merge_data(). In addition to the GPM partition index merge_gpm_partition_idx, merge_gpm_idx0[x0][y0] and merge_gpm_idx1[x0][y0] are also signaled in the merge data merge_data().
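On the decoding side, the same indices are recovered from the parsed syntax rather than from a rate-distortion search. The following C++ sketch assumes that merge_data() has already been parsed into a plain struct; the struct and its field names are illustrative, not the normative syntax, and the mapping tables are those of the earlier sketch:

```cpp
#include <array>
#include <cstdint>

extern const std::array<uint8_t, 64> g_angleIdxTable;     // see earlier sketch
extern const std::array<uint8_t, 64> g_distanceIdxTable;  // see earlier sketch

struct MergeData {                 // illustrative container for parsed syntax
    int merge_gpm_partition_idx;   // GPM partition index
    int merge_gpm_idx0;            // MV position index of the first partition
    int merge_gpm_idx1;            // MV position index of the second partition
};

struct GpmIndices { int angleIdx; int distanceIdx; };

// Map the signaled partition index to the angle/distance indices via
// the same parameter mapping table that the encoder used.
GpmIndices decodeGpmIndices(const MergeData& md) {
    return { g_angleIdxTable[md.merge_gpm_partition_idx],
             g_distanceIdxTable[md.merge_gpm_partition_idx] };
}
```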
According to the image processing scheme provided by the embodiment of the present application, the partition mode parameter used by the currently coded image block can be selected, according to the rate-distortion cost, from the partition mode parameters respectively corresponding to a plurality of partition modes, so that the distortion generated in the encoding process is reduced as much as possible. For the two partitions of the target image block divided by the dividing line, appropriate partition weights can be determined for the pixel points of each partition based on the target partition mode parameter, and the prediction results of the corresponding partitions are determined using the different partition weights and the prediction result sets; since the partition weights give different degrees of attention to pixel points in different partitions and/or at different positions within a partition, the prediction result of the target image block is more accurate.
Fourth embodiment
Referring to fig. 8b, fig. 8b is a schematic flowchart of an image processing method according to a fourth embodiment. The execution body in this embodiment may be a computer device or a cluster formed by multiple computer devices; the computer device may be an intelligent terminal (such as the foregoing mobile terminal 100) or a server. Here, the description takes an intelligent terminal as the execution body as an example.
S801, determining target division mode parameters of the target image block based on the first division mode parameter set.
The specific implementation manner of this step may refer to step S701 in the foregoing third embodiment, which is not described herein again.
S802, determining a first prediction result set and a second prediction result set of the target image block.
In one embodiment, it is possible to: constructing a unidirectional prediction candidate list of the target image block, and acquiring a first motion vector and a second motion vector based on the unidirectional prediction candidate list; a first set of predictors is determined based on the first motion vector, and a second set of predictors is determined based on the second motion vector. The uni-directional prediction candidate list may be a merge candidate list, the first motion vector may be a uni-directional prediction motion vector (e.g., uni-directional prediction motion vector 1 described below) with respect to the first predictor set, and the second motion vector may be a uni-directional prediction motion vector (e.g., uni-directional prediction motion vector 2 described below) with respect to the second predictor set.
The first prediction result set and the second prediction result set are obtained with reference to motion vectors of different reference frames; the first prediction result set includes multiple prediction results, and a prediction result may be a prediction value obtained by performing unidirectional prediction on the target image block to obtain a unidirectional prediction motion vector and then performing motion compensation. The same applies to the second prediction result set. Optionally, the unidirectional prediction motion vector of the corresponding partition may be determined by the positions of the motion vector of the first partition and the motion vector of the second partition in the merge candidate list. For example, in the GPM mode, merge_gpm_idx0[x0][y0] indicates the position of the motion vector of the first partition (i.e., the A partition) in the merge candidate list, and merge_gpm_idx1[x0][y0] indicates the position of the motion vector of the second partition (i.e., the B partition) in the merge candidate list. The unidirectional prediction motion vector of the corresponding partition can be determined through merge_gpm_idx0[x0][y0] and merge_gpm_idx1[x0][y0].
Before the first prediction result set and the second prediction result set are determined, the unidirectional prediction candidate list of the target image block needs to be constructed first; the construction process is described below.
Fig. 8c is a schematic diagram of the neighboring block positions of the spatial merge candidate list according to this embodiment. At most 4 candidate motion vectors can be selected for the spatial merge candidate list, and the construction order is: the motion vector information of the left neighboring block A1, the upper neighboring block B1, the upper-right neighboring block B0, the lower-left neighboring block A0, the upper-left neighboring block B2, and the block col at the corresponding position of the reference frame, in sequence. It should be noted that B2 is considered only if some other position is not available. After the lower-left neighboring block A0 is added, redundancy detection is needed to ensure that candidates in the list do not carry the same motion information. In addition, the historical reference block his, the average motion vector avg of the first candidate motion vector and the second candidate motion vector, and the zero motion vector 0 may be added to the merge candidate list.
Fig. 8d is a schematic diagram of a merge candidate list provided in this embodiment. The merge candidate list includes the motion information of the 5 neighboring blocks shown in fig. 8d, with sequence numbers 0, 1, 2, 3, and 4, and each neighboring block includes bidirectional prediction motion vector information, i.e., the motion vector information corresponding to list 0 (list0) and list 1 (list1), respectively. For the GPM prediction mode, each partition uses only unidirectional prediction. Since each entry in the merge candidate list may be bidirectional prediction motion vector information, a unidirectional prediction motion vector needs to be derived for use. Let the variable X be (m & 0x01), where m = merge_gpm_idx0[xCb][yCb], merge_gpm_idx0[x0][y0] indicates the position of the motion vector of the first partition in the merge candidate list, and & is the bitwise AND operation, i.e., the last bit of m is taken (similar to a parity check). In this way, the motion vector information corresponding to the reference frame in the reference list corresponding to X is used preferentially, for example, the options corresponding to the shaded areas shown in fig. 8d. If the flag predFlagLXM indicates that the prediction list corresponding to X is unavailable, the motion vector of list LX (X being 0 or 1) cannot be used (the motion vectors of the neighboring blocks themselves may also be unidirectional). In this case, the motion vector at the opposite position is used (the motion vector indicated in the blank area horizontally corresponding to the shaded area), that is, X is set to 1 − X.
Next, motion vectors for the first set of predictors and the second set of predictors are obtained as follows:
the first set of predicted results and the second set of predicted results may be sets of predicted values, and thus, the first set of predicted results and the second set of predicted results may also be referred to as a first set of predicted values and a second set of predicted values, respectively. In general, motion vectors for a first set of predictors may be obtained, followed by motion vectors for a second set of predictors. The specific process is as follows:
First, let m be merge_gpm_idx0[xCb][yCb]. merge_gpm_idx0[xCb][yCb] represents the position in the merge candidate list of the motion vector of the first prediction value set, so m is that position. merge_gpm_idx1[x0][y0] represents the signaled position in the merge candidate list of the motion vector of the second prediction value set; because the candidate already chosen for the first prediction value set is excluded when the second index is signaled, the signaled position may have been decreased by 1 relative to the actual one. Thus, let

n = merge_gpm_idx1[xCb][yCb] + (merge_gpm_idx1[xCb][yCb] >= m ? 1 : 0)

and n denotes the actual position of the motion vector of the second prediction value set in the merge candidate list.
Next, let M be mergeCandList[m], that is, take the m-th entry from the merge candidate list for the motion vector construction of the first prediction value set. Specifically, mvA[0] = mvLXM[0], mvA[1] = mvLXM[1], refIdxA = refIdxLXM, and predListFlagA = X, where mvLXM represents the motion vector of the merge candidate at position m in the merge candidate list, mvA is the motion vector in the two directions corresponding to the first prediction value set, refIdxA is the reference frame corresponding to the motion vector, and predListFlagA indicates which list of the candidate's motion vectors is currently selected. As can be seen from the above, the selected unidirectional prediction motion vector 1 is determined by letting predListFlagA = X (i.e., the unidirectional prediction motion vector is determined from mvA[0] and mvA[1]), where the variable X = (m & 0x01).
Similarly, let N be mergeCandList[n], that is, take the n-th entry from the merge candidate list for the motion vector construction of the second prediction value set. Specifically, mvB[0] = mvLXN[0], mvB[1] = mvLXN[1], refIdxB = refIdxLXN, and predListFlagB = X, where mvLXN denotes the motion vector of the merge candidate at position n in the merge candidate list, and mvB is the motion vector in the two directions corresponding to the second prediction value set.
refIdxB is the reference frame corresponding to the motion vector, and predListFlagB indicates which list of the candidate's motion vectors is currently selected. As can be seen from the above, the selected unidirectional prediction motion vector 2 is determined by letting predListFlagB = X (i.e., the unidirectional prediction motion vector is determined from mvB[0] and mvB[1]), where the variable X = (n & 0x01). In this process, if the preferred motion vector is not available, the motion vector at the opposite position is likewise used, corresponding to the options in the shaded areas in fig. 8d.
After determining the uni-directional predicted motion vector 1 and the uni-directional predicted motion vector 2, motion compensation is performed according to the uni-directional predicted motion vector 1 to obtain a first set of prediction results, and motion compensation is performed according to the uni-directional predicted motion vector 2 to obtain a second set of prediction results.
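The derivation above can be summarized in the following C++ sketch. MvCand, Mv, and the helper names are illustrative, and the adjustment of n reflects the assumption stated above that the second index is signaled with the candidate already chosen for the first prediction value set excluded:

```cpp
#include <vector>

struct Mv { int x = 0, y = 0; };

struct MvCand {                          // illustrative merge candidate entry
    Mv   mv[2];                          // motion vectors of list0 / list1
    int  refIdx[2]  = { -1, -1 };        // reference indices of list0 / list1
    bool predFlag[2] = { false, false }; // availability of list0 / list1
};

struct UniPrediction { Mv mv; int refIdx; int listFlag; };

// Prefer list X = pos & 0x01; if that list is unavailable (the candidate
// itself may be unidirectional), fall back to the opposite list 1 - X.
UniPrediction deriveUniMv(const std::vector<MvCand>& mergeCandList, int pos) {
    int X = pos & 0x01;
    const MvCand& cand = mergeCandList[pos];
    if (!cand.predFlag[X]) X = 1 - X;
    return { cand.mv[X], cand.refIdx[X], X };
}

// Derive the two unidirectional prediction motion vectors of a GPM block.
void deriveGpmMvs(const std::vector<MvCand>& mergeCandList,
                  int merge_gpm_idx0, int merge_gpm_idx1,
                  UniPrediction& mvA, UniPrediction& mvB) {
    const int m = merge_gpm_idx0;
    // The second index is signaled with the first chosen candidate
    // excluded, so the actual position shifts up by one when idx1 >= idx0.
    const int n = merge_gpm_idx1 + (merge_gpm_idx1 >= merge_gpm_idx0 ? 1 : 0);
    mvA = deriveUniMv(mergeCandList, m);  // unidirectional prediction MV 1
    mvB = deriveUniMv(mergeCandList, n);  // unidirectional prediction MV 2
}
```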
S803, based on the target partition mode parameter, determines a partition weight.
S804, determining the prediction result of the first partition and the prediction result of the second partition based on at least one of the partition weight, the first prediction result set and the second prediction result set.
And S805, determining the prediction result of the first partition and the prediction result of the second partition as the prediction results of the target image block.
The above steps S803 to S805 can refer to the corresponding contents introduced in the steps S702 to S704 in the embodiment corresponding to fig. 7, and are not described herein again.
According to the image processing scheme provided by the embodiment of the present application, the partition mode parameter used by the currently coded image block can be selected, according to the rate-distortion cost, from the partition mode parameters respectively corresponding to a plurality of partition modes, so that the distortion generated in the encoding process can be reduced as much as possible. For the two partitions of the target image block divided by the dividing line, the unidirectional prediction motion vectors of the target image block can be derived through the merge candidate list, and accurate prediction results can then be obtained and combined.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. The image processing apparatus may be a computer program (including program code) running in a server, for example, application software, and the apparatus may be used to perform the corresponding steps in the methods provided by the embodiments of the present application. The image processing apparatus 900 includes: a determining module 901, an obtaining module 902, an updating module 903, and a sending module 904.
A determining module 901, configured to determine a target division mode parameter of the target image block based on the first division mode parameter set;
the determining module 901 is further configured to determine a prediction result of the target image block based on the target partition mode parameter.
Optionally, the first set of partition mode parameters includes partition mode parameters corresponding to one or more partition lines.
Optionally, at least one of: the dividing mode parameter is determined based on the use condition of the coded image to the dividing line; and the included angles formed between the adjacent dividing lines in the plurality of dividing lines are not equal in size.
Optionally, the dividing mode parameter is determined by adjusting an angle interval between a dividing line in a target direction and an adjacent dividing line of the dividing line in the target direction based on the usage of the encoded image for the dividing line.
Optionally, an angular interval between a dividing line of the target direction and an adjacent dividing line is smaller than an angular interval between a dividing line of the reference direction and an adjacent dividing line; and/or the application probability of the segmentation line of the target direction is greater than the application probability of the segmentation line of the reference direction.
Optionally, at least one of: the dividing mode parameter comprises at least one of a dividing index, an angle index and a distance index corresponding to the dividing line; the first division pattern parameter set is a parameter mapping table representing a mapping relationship among the division index, the angle index, and the distance index.
In one embodiment, the obtaining module 902 is configured to obtain a usage of the encoded image to the partition line; the determining module 901 is configured to determine an application probability of the dividing line based on the usage; the updating module 903 is configured to update the parameter mapping table based on the application probability.
In one embodiment, the determining module 901 is configured to: acquiring the number of times of use of the first division mode in response to the number of the coded images reaching a preset threshold; and determining the application probability of each dividing line based on the usage recording parameters and the usage times.
In an embodiment, the update module 903 is specifically configured to: updating the angle mapping table based on the application probability of each parting line to obtain an updated angle mapping table; and updating the angle index and/or the distance index in the parameter mapping table based on the updated angle mapping table.
Optionally, the angle mapping table is configured to represent a mapping relationship between a query index and an angle parameter, where the query index includes the segmentation index or the angle index, and an association relationship exists between the angle index and the angle parameter.
In an embodiment, the update module 903 is specifically configured to: and updating the angle mapping table based on the angle parameter corresponding to the first query index and/or the angle parameter corresponding to the second query index to obtain an updated angle mapping table.
Optionally, the first query index is a query index corresponding to a partition line with the highest application probability; the second query index is a query index corresponding to a partition line with the highest application probability or an adjacent query index of the first query index.
In an embodiment, the update module 903 is specifically configured to: determining a target angle parameter based on the angle parameter corresponding to the first query index and the angle parameter corresponding to the second query index; and updating the angle mapping table based on the target angle parameter to obtain an updated angle mapping table.
In another embodiment, the second query index is an adjacent query index to the first query index, and the updating module 903 is specifically configured to: and adjusting the angle parameter corresponding to the second query index in the angle mapping table according to a preset adjustment rule to obtain an updated angle mapping table.
In an embodiment, the update module 903 is specifically configured to: in response to the fact that the angle parameter corresponding to the adjacent query index is larger than the angle parameter corresponding to the first query index, reducing the angle parameter corresponding to the adjacent query index to obtain an updated angle mapping table; and/or, in response to the angle parameter corresponding to the adjacent query index being smaller than the angle parameter corresponding to the first query index, increasing the angle parameter corresponding to the adjacent query index to obtain an updated angle mapping table.
Optionally, a difference between the adjusted angle parameter corresponding to the adjacent query index and the angle parameter corresponding to the first query index is within a preset range; and/or the preset range is determined based on the application probability of the segmentation line corresponding to the first query index.
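For illustration, the following C++ sketch shows one way the updating module might carry out this neighbor adjustment. The table layout, the step size, and the clamping range are assumptions of the sketch; the scheme itself fixes only the direction of the adjustment and the preset-range constraint:

```cpp
#include <algorithm>
#include <initializer_list>
#include <vector>

// Move the angle parameters of the query indices adjacent to the
// most-probable (first) query index toward its angle parameter, while
// keeping the adjusted difference from that angle within maxDelta.
void updateAngleMappingTable(std::vector<double>& angleTable,
                             int firstIdx, double step, double maxDelta) {
    const double target = angleTable[firstIdx];
    for (int nb : { firstIdx - 1, firstIdx + 1 }) {
        if (nb < 0 || nb >= static_cast<int>(angleTable.size())) continue;
        double& a = angleTable[nb];
        if (a > target) {
            // larger than the first index's angle: decrease it
            a = std::clamp(a - step, target, target + maxDelta);
        } else if (a < target) {
            // smaller than the first index's angle: increase it
            a = std::clamp(a + step, target - maxDelta, target);
        }
    }
}
```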
In one embodiment, the sending module 904 is configured to: and sending a bit stream to a decoding end, wherein the bit stream comprises indication information of the updated parameter mapping table, and the indication information is used for indicating the decoding end to determine a prediction result of the target image block by using the updated parameter mapping table.
In one embodiment, the determining module 901 is further configured to: based on parameters of a target image block, performing predictive coding on the target image block by using at least one prediction mode, and determining rate-distortion cost corresponding to each prediction mode; and when the prediction mode with the lowest rate-distortion cost is the first division mode, executing the step of determining the target division mode parameters of the target image block based on the first division mode parameter set.
In an embodiment, the first partition mode includes at least one partition mode, and the determining module 901 is specifically configured to: determining a rate-distortion cost applied to the target image block by each of the division modes included in the first division mode; determining the division mode with the minimum rate-distortion cost as the target division mode of the target image block; determining a mode parameter corresponding to the target division mode from the first division mode parameter set, and determining the mode parameter corresponding to the target division mode as a target division mode parameter of the target image block.
In an embodiment, the target image block includes a first partition and a second partition, where the first partition and the second partition are two image areas obtained by dividing the target image block by a dividing line, and the determining module 901 is specifically further configured to: determining partition weights based on the target partition mode parameters; determining a prediction result of the first partition and a prediction result of the second partition based on at least one of the partition weight, a first set of prediction results, and a second set of prediction results; and determining the prediction result of the first partition and the prediction result of the second partition as the prediction results of the target image block. Optionally, before determining the partition weights based on the target partition mode parameter, it is also possible to: and determining a first prediction result set and a second prediction result set of the target image block.
In an embodiment, the partition weight includes a first weight corresponding to the first partition and a second weight corresponding to the second partition, where the first weight or the second weight is determined based on a distance between a pixel point included in the target image block and a partition line, and the determining module 901 is further configured to: and determining the prediction result of the pixel point based on at least one of the first weight, the second weight, a first prediction result of the pixel point included in the target image block in a first prediction result set and a second prediction result of the pixel point in a second prediction result set.
It can be understood that the functions of the functional modules of the image processing apparatus described in this embodiment may be specifically implemented according to the method in the foregoing method embodiment, and the specific implementation process may refer to the description related to the foregoing method embodiment, which is not described herein again. In addition, the beneficial effects of the same method are not described in detail.
The embodiment of the application also provides an image processing method which can be applied to an intelligent terminal, and the method comprises the following steps:
s10: determining a first division mode according to a first preset strategy;
s20: determining a target division mode parameter of the target image block based on a second preset strategy related to the first division mode;
s30: determining a prediction result of the target image block based on the target division mode parameter.
In one embodiment, the step of S10 includes: the method comprises the steps of carrying out predictive coding on a target image block by utilizing at least one prediction mode based on parameters of the target image block, determining rate distortion cost corresponding to each prediction mode, and determining the prediction mode with the minimum rate distortion cost as a first division mode.
In one embodiment, the step of S20 includes: determining a mode parameter corresponding to the target division mode from the first division mode parameter set, and determining the mode parameter corresponding to the target division mode as a target division mode parameter of the target image block.
In one embodiment, the target image block includes a first partition and a second partition, the first partition and the second partition are two image areas obtained by dividing the target image block by dividing lines, and the step S30 includes: determining a first set of prediction results and a second set of prediction results for the target image block based on the target partition mode parameter; determining a prediction result of the first partition and a prediction result of the second partition based on at least one of a partition weight, the first set of prediction results, and the second set of prediction results; and determining the prediction result of the first partition and the prediction result of the second partition as the prediction results of the target image block.
In one embodiment, the partition weight includes a first weight corresponding to the first partition and a second weight corresponding to the second partition, the first weight or the second weight being determined based on a distance between a pixel point included in the target image block and a partition line, the method further includes: and determining the prediction result of the pixel point based on at least one of the first weight, the second weight, a first prediction result of the pixel point included in the target image block in a first prediction result set and a second prediction result of the pixel point in a second prediction result set.
Optionally, the first partition mode parameter set includes partition mode parameters corresponding to one or more partition lines.
Optionally, the partition mode parameter is determined based on use of the coded image to the partition line.
Optionally, the included angles formed between adjacent dividing lines in the plurality of dividing lines are not equal in size.
Optionally, the dividing mode parameter is determined by adjusting an angle interval between a dividing line in a target direction and an adjacent dividing line of the dividing line in the target direction based on the usage of the encoded image for the dividing line.
Optionally, an angular interval between a dividing line of the target direction and an adjacent dividing line is smaller than an angular interval between a dividing line of the reference direction and an adjacent dividing line.
Optionally, the application probability of the dividing line of the target direction is greater than the application probability of the dividing line of the reference direction.
Optionally, the dividing mode parameter includes at least one of a dividing index, an angle index, and a distance index corresponding to the dividing line.
Optionally, the first partition pattern parameter set is a parameter mapping table representing a mapping relationship among the partition index, the angle index, and the distance index.
In one embodiment, the method further comprises: acquiring the use condition of the coded image to the dividing line; determining an application probability of the segmentation line based on the use case; updating the parameter mapping table based on the application probability.
In one embodiment, the determining the application probability of the partition line based on the usage includes: acquiring the number of times of use of the first division mode in response to the number of the coded images reaching a preset threshold; and determining the application probability of each segmentation line based on the usage record parameters and the usage times.
In one embodiment, said updating said parameter mapping table based on said application probability comprises: updating the angle mapping table based on the application probability of each parting line to obtain an updated angle mapping table; and updating the angle index and/or the distance index in the parameter mapping table based on the updated angle mapping table.
In an embodiment, the updating the angle mapping table based on the application probability of each partition line to obtain an updated angle mapping table includes: and updating the angle mapping table based on the angle parameter corresponding to the first query index and/or the angle parameter corresponding to the second query index to obtain an updated angle mapping table.
In an embodiment, the updating the angle mapping table based on the angle parameter corresponding to the first query index and/or the angle parameter corresponding to the second query index to obtain an updated angle mapping table includes: determining a target angle parameter based on the angle parameter corresponding to the first query index and the angle parameter corresponding to the second query index; and updating the angle mapping table based on the target angle parameter to obtain an updated angle mapping table.
In an embodiment, the updating the angle mapping table based on the angle parameter corresponding to the first query index and/or the angle parameter corresponding to the second query index to obtain an updated angle mapping table includes: and adjusting the angle parameter corresponding to the second query index in the angle mapping table according to a preset adjustment rule to obtain an updated angle mapping table.
In an embodiment, the adjusting the angle parameter corresponding to the second query index in the angle mapping table according to a preset adjustment rule to obtain an updated angle mapping table includes: in response to the angle parameter corresponding to the adjacent query index being greater than the angle parameter corresponding to the first query index, reducing the angle parameter corresponding to the adjacent query index to obtain an updated angle mapping table; and/or, in response to the angle parameter corresponding to the adjacent query index being smaller than the angle parameter corresponding to the first query index, increasing the angle parameter corresponding to the adjacent query index to obtain an updated angle mapping table.
Optionally, a difference between the adjusted angle parameter corresponding to the adjacent query index and the angle parameter corresponding to the first query index is within a preset range; and/or the preset range is determined based on the application probability of the segmentation line corresponding to the first query index.
In one embodiment, after updating the parameter mapping table based on the application probability, the method further comprises:
and sending a bit stream to a decoding end, wherein the bit stream comprises indication information of the updated parameter mapping table, and the indication information is used for indicating the decoding end to determine a prediction result of the target image block by using the updated parameter mapping table.
It can be understood that, for the image processing method described in this embodiment, the specific implementation process may refer to the description related to the other method embodiments, and details are not described here again. In addition, the beneficial effects of the same method are not described in detail.
The embodiment of the application also provides an image processing method which can be applied to an intelligent terminal, and the method comprises the following steps:
s100: determining a first division mode according to a first preset strategy and determining a target division mode parameter of a target image block based on a second preset strategy;
s200: determining a prediction result of the target image block based on the target division mode parameter.
In one embodiment, the S100 step includes: the method comprises the steps of carrying out predictive coding on a target image block by utilizing at least one prediction mode based on parameters of the target image block, determining rate distortion cost corresponding to each prediction mode, and determining the prediction mode with the minimum rate distortion cost as a first division mode. Determining a mode parameter corresponding to the target division mode from the first division mode parameter set, and determining the mode parameter corresponding to the target division mode as a target division mode parameter of the target image block.
In one embodiment, the target image block includes a first partition and a second partition, where the first partition and the second partition are two image areas obtained by dividing the target image block by a dividing line, and the S200 includes: determining a first set of prediction results and a second set of prediction results for the target image block based on the target partition mode parameter; determining a prediction result of the first partition and a prediction result of the second partition based on at least one of a partition weight, the first set of prediction results, and the second set of prediction results; and determining the prediction result of the first partition and the prediction result of the second partition as the prediction results of the target image block.
In one embodiment, the partition weight includes a first weight corresponding to the first partition and a second weight corresponding to the second partition, the first weight or the second weight being determined based on a distance between a pixel point included in the target image block and a partition line, the method further includes: and determining the prediction result of the pixel point based on at least one of the first weight, the second weight, a first prediction result of the pixel point included in the target image block in a first prediction result set and a second prediction result of the pixel point in a second prediction result set.
Optionally, the first partition mode parameter set includes partition mode parameters corresponding to one or more partition lines.
Optionally, the partition mode parameter is determined based on use of the coded image to the partition line.
Optionally, the included angles formed between adjacent dividing lines in the plurality of dividing lines are not equal in size.
Optionally, the dividing mode parameter is determined by adjusting an angle interval between a dividing line in the target direction and an adjacent dividing line of the dividing line in the target direction based on a usage of the encoded image for the dividing line.
Optionally, an angular interval between a dividing line of the target direction and an adjacent dividing line is smaller than an angular interval between a dividing line of the reference direction and an adjacent dividing line.
Optionally, the application probability of the dividing line of the target direction is greater than the application probability of the dividing line of the reference direction.
Optionally, the dividing mode parameter includes at least one of a dividing index, an angle index, and a distance index corresponding to the dividing line.
Optionally, the first partition pattern parameter set is a parameter mapping table representing a mapping relationship among the partition index, the angle index, and the distance index.
In one embodiment, the method further comprises: acquiring the use condition of the coded image to the dividing line; determining an application probability of the segmentation line based on the use case; updating the parameter mapping table based on the application probability.
In one embodiment, the determining the application probability of the partition line based on the usage includes: acquiring the number of times of use of the first division mode in response to the number of the coded images reaching a preset threshold; and determining the application probability of each segmentation line based on the usage record parameters and the usage times.
In one embodiment, said updating said parameter mapping table based on said application probability comprises: updating the angle mapping table based on the application probability of each parting line to obtain an updated angle mapping table; and updating the angle index and/or the distance index in the parameter mapping table based on the updated angle mapping table.
In an embodiment, the updating the angle mapping table based on the application probability of each partition line to obtain an updated angle mapping table includes: and updating the angle mapping table based on the angle parameter corresponding to the first query index and/or the angle parameter corresponding to the second query index to obtain an updated angle mapping table.
In an embodiment, the updating the angle mapping table based on the angle parameter corresponding to the first query index and/or the angle parameter corresponding to the second query index to obtain an updated angle mapping table includes: determining a target angle parameter based on the angle parameter corresponding to the first query index and the angle parameter corresponding to the second query index; and updating the angle mapping table based on the target angle parameter to obtain an updated angle mapping table.
In an embodiment, the updating the angle mapping table based on the angle parameter corresponding to the first query index and/or the angle parameter corresponding to the second query index to obtain an updated angle mapping table includes: and adjusting the angle parameter corresponding to the second query index in the angle mapping table according to a preset adjustment rule to obtain an updated angle mapping table.
In an embodiment, the adjusting the angle parameter corresponding to the second query index in the angle mapping table according to a preset adjustment rule to obtain an updated angle mapping table includes: in response to the fact that the angle parameter corresponding to the adjacent query index is larger than the angle parameter corresponding to the first query index, reducing the angle parameter corresponding to the adjacent query index to obtain an updated angle mapping table; and/or, in response to the angle parameter corresponding to the adjacent query index being smaller than the angle parameter corresponding to the first query index, increasing the angle parameter corresponding to the adjacent query index to obtain an updated angle mapping table.
Optionally, a difference between the adjusted angle parameter corresponding to the adjacent query index and the angle parameter corresponding to the first query index is within a preset range; and/or the preset range is determined based on the application probability of the segmentation line corresponding to the first query index.
In one embodiment, after updating the parameter mapping table based on the application probability, the method further comprises:
and sending a bit stream to a decoding end, wherein the bit stream comprises indication information of the updated parameter mapping table, and the indication information is used for indicating the decoding end to determine a prediction result of the target image block by using the updated parameter mapping table.
It can be understood that, for the image processing method described in this embodiment, the specific implementation process may refer to the description related to the other method embodiments, and details are not described here again. In addition, the beneficial effects of the same method are not described in detail.
The embodiment of the present application further provides an intelligent terminal, where the intelligent terminal includes a memory and a processor, and the memory stores an image processing program, and when the image processing program is executed by the processor, the image processing method in any of the embodiments is implemented. The intelligent terminal may be a mobile terminal 100 as shown in fig. 1.
In a possible embodiment, the processor 110 of the mobile terminal 100 shown in fig. 1 may be configured to call the image processing program stored in the memory 109 to perform the following operations: s1: determining a target division mode parameter of the target image block based on the first division mode parameter set; s2: determining a prediction result of the target image block based on the target division mode parameter.
Optionally, the first set of partition mode parameters includes partition mode parameters corresponding to one or more partition lines.
Optionally, at least one of the following is included: the dividing mode parameter is determined based on the use condition of the coded image to the dividing line; and the included angles formed between the adjacent dividing lines in the plurality of dividing lines are not equal in size.
Optionally, the dividing mode parameter is determined by adjusting an angle interval between a dividing line in a target direction and an adjacent dividing line of the dividing line in the target direction based on the usage of the encoded image for the dividing line.
Optionally, an angular interval between a dividing line of the target direction and an adjacent dividing line is smaller than an angular interval between a dividing line of the reference direction and an adjacent dividing line; and/or the application probability of the dividing line of the target direction is larger than that of the dividing line of the reference direction.
Optionally, at least one of: the dividing mode parameter comprises at least one of a dividing index, an angle index and a distance index corresponding to the dividing line; the first partition mode parameter set is a parameter mapping table representing a mapping relationship among the partition index, the angle index and the distance index.
In one embodiment, the processor 110 is configured to obtain a usage of the encoded image for the partition line; determining an application probability of the segmentation line based on the use case; updating the parameter mapping table based on the application probability.
In one embodiment, the processor 110 is configured to: acquiring the number of times of use of the first division mode in response to the number of the coded images reaching a preset threshold; and determining the application probability of each dividing line based on the usage recording parameters and the usage times.
In an embodiment, the processor 110 is specifically configured to: updating the angle mapping table based on the application probability of each parting line to obtain an updated angle mapping table; and updating the angle index and/or the distance index in the parameter mapping table based on the updated angle mapping table.
Optionally, the angle mapping table is configured to represent a mapping relationship between a query index and an angle parameter, where the query index includes the partition index or the angle index, and the angle index is associated with the angle parameter.
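One possible way to carry the updated angle mapping table back into the angle index of the parameter mapping table is sketched below; re-pointing each partition index at the updated angle index whose angle parameter lies closest to the one it referred to before is an illustrative rule, not the only update the text allows:

```python
def update_angle_indexes(param_table: dict[int, tuple[int, int]],
                         old_angles: dict[int, float],
                         updated_angles: dict[int, float]) -> dict[int, tuple[int, int]]:
    """Re-point each partition index at the angle index whose updated angle
    parameter lies closest to the angle parameter it referred to before the
    update; the distance index is left unchanged in this sketch."""
    def closest(angle: float) -> int:
        return min(updated_angles, key=lambda idx: abs(updated_angles[idx] - angle))
    return {part_idx: (closest(old_angles[angle_idx]), dist_idx)
            for part_idx, (angle_idx, dist_idx) in param_table.items()}
```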
In an embodiment, the processor 110 is specifically configured to: update the angle mapping table based on the angle parameter corresponding to a first query index and/or the angle parameter corresponding to a second query index to obtain an updated angle mapping table.
Optionally, the first query index is the query index corresponding to the partition line with the highest application probability; the second query index is the query index corresponding to a partition line with the highest application probability, or an adjacent query index of the first query index.
In an embodiment, the processor 110 is specifically configured to: determine a target angle parameter based on the angle parameter corresponding to the first query index and the angle parameter corresponding to the second query index; and update the angle mapping table based on the target angle parameter to obtain an updated angle mapping table.
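A simple instantiation of this determination, assuming the target angle parameter is placed at the midpoint of the two existing angle parameters, is:

```python
def target_angle_parameter(angle_first: float, angle_second: float) -> float:
    """Derive a target angle parameter from the angle parameters of the first
    and second query indexes; taking the midpoint is one illustrative choice."""
    return (angle_first + angle_second) / 2.0
```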
In another embodiment, the second query index is an adjacent query index of the first query index, and the processor 110 is specifically configured to: adjust the angle parameter corresponding to the second query index in the angle mapping table according to a preset adjustment rule to obtain an updated angle mapping table.
In an embodiment, the processor 110 is specifically configured to: in response to the angle parameter corresponding to the adjacent query index being greater than the angle parameter corresponding to the first query index, decrease the angle parameter corresponding to the adjacent query index to obtain an updated angle mapping table; and/or, in response to the angle parameter corresponding to the adjacent query index being smaller than the angle parameter corresponding to the first query index, increase the angle parameter corresponding to the adjacent query index to obtain an updated angle mapping table.
Optionally, the difference between the adjusted angle parameter corresponding to the adjacent query index and the angle parameter corresponding to the first query index is within a preset range; and/or the preset range is determined based on the application probability of the partition line corresponding to the first query index.
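Putting the adjustment rule and the preset range together, a sketch is given below; the fixed step size and the assumption that the range narrows as the application probability of the first query index's partition line grows are illustrative choices, not requirements of the text:

```python
def adjust_adjacent_angle(angle_adjacent: float, angle_first: float,
                          prob_first: float, step: float = 1.0) -> float:
    """Move the adjacent query index's angle parameter toward the first query
    index's angle parameter, keeping the remaining difference within a preset
    range derived from the application probability (both the step size and
    the probability-to-range formula are assumptions)."""
    if angle_adjacent > angle_first:
        adjusted = max(angle_adjacent - step, angle_first)   # decrease if larger
    elif angle_adjacent < angle_first:
        adjusted = min(angle_adjacent + step, angle_first)   # increase if smaller
    else:
        adjusted = angle_adjacent
    max_diff = max(0.5, (1.0 - prob_first) * 10.0)  # assumed preset range width
    low, high = angle_first - max_diff, angle_first + max_diff
    return min(max(adjusted, low), high)
```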
In one embodiment, the processor 110 is further configured to: send a bit stream to a decoding end, where the bit stream includes indication information of the updated parameter mapping table, and the indication information is used for instructing the decoding end to determine a prediction result of the target image block by using the updated parameter mapping table.
In one embodiment, the processor 110 is further configured to: perform predictive coding on a target image block by using at least one prediction mode based on parameters of the target image block, and determine the rate-distortion cost corresponding to each prediction mode; and perform step S1 when the prediction mode with the smallest rate-distortion cost is the first partition mode.
In an embodiment, the first partition mode includes at least one partition mode, and the processor 110 is specifically configured to: determine the rate-distortion cost of applying each partition mode included in the first partition mode to the target image block; determine the partition mode with the smallest rate-distortion cost as the target partition mode of the target image block; and determine, from the first partition mode parameter set, the mode parameter corresponding to the target partition mode as the target partition mode parameter of the target image block.
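Both this selection and the prediction mode selection of the preceding embodiment reduce to choosing the candidate with the smallest rate-distortion cost, as the sketch below shows; the mode names and cost values are hypothetical:

```python
def select_by_rd_cost(rd_costs: dict[str, float]) -> str:
    """Return the candidate with the smallest rate-distortion cost."""
    return min(rd_costs, key=lambda mode: rd_costs[mode])

# Usage sketch with hypothetical modes: step S1 is entered only when the
# winner among the prediction modes is the first partition mode.
costs = {"intra": 120.5, "merge": 98.2, "first_partition_mode": 91.7}
if select_by_rd_cost(costs) == "first_partition_mode":
    pass  # proceed with S1: pick the partition mode with the smallest cost
```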
In an embodiment, the target image block includes a first partition and a second partition, where the first partition and the second partition are two image areas obtained by dividing the target image block with a partition line, and the processor 110 is further configured to: determine partition weights based on the target partition mode parameter; determine a prediction result of the first partition and a prediction result of the second partition based on at least one of the partition weights, a first prediction result set, and a second prediction result set; and determine the prediction result of the first partition and the prediction result of the second partition as the prediction result of the target image block. Optionally, before the partition weights are determined based on the target partition mode parameter, a first prediction result set and a second prediction result set of the target image block may be determined.
In one embodiment, the partition weights include a first weight corresponding to the first partition and a second weight corresponding to the second partition, where the first weight or the second weight is determined based on the distance between a pixel point included in the target image block and the partition line, and the processor 110 is further configured to: determine the prediction result of the pixel point based on at least one of the first weight, the second weight, a first prediction result of the pixel point in the first prediction result set, and a second prediction result of the pixel point in the second prediction result set.
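A minimal sketch of such distance-based weighting is given below, assuming the partition line is parameterized by an angle and an offset from the block center and that the first weight ramps linearly with the signed distance of each pixel to the line; the ramp width and the linear profile are illustrative choices:

```python
import numpy as np

def blend_partitions(pred_first: np.ndarray, pred_second: np.ndarray,
                     angle_deg: float, offset: float,
                     ramp: float = 2.0) -> np.ndarray:
    """Blend two prediction result sets of a block across a partition line.

    Each pixel's first weight grows with its signed distance to the line and
    the second weight is the complement, so pixels deep inside one partition
    take that partition's prediction while pixels near the line are mixed.
    """
    h, w = pred_first.shape
    ys, xs = np.mgrid[0:h, 0:w]
    theta = np.deg2rad(angle_deg)
    # Signed distance from each pixel to the partition line through the block.
    dist = ((xs - (w - 1) / 2.0) * np.cos(theta)
            + (ys - (h - 1) / 2.0) * np.sin(theta) - offset)
    weight_first = np.clip(0.5 + dist / (2.0 * ramp), 0.0, 1.0)
    return weight_first * pred_first + (1.0 - weight_first) * pred_second
```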
It should be understood that the mobile terminal described in this embodiment of the present application may perform the method described in any of the foregoing embodiments, and may also perform what is described for the image processing apparatus in the corresponding embodiments; the details, and the shared beneficial effects, are not repeated here.
The embodiment of the present application further provides a computer-readable storage medium, where an image processing program is stored on the storage medium, and when the image processing program is executed by a processor, the image processing program implements the steps of the image processing method in any of the above embodiments.
The intelligent terminal and computer-readable storage medium embodiments provided in the embodiments of the present application may include all technical features of any of the image processing method embodiments; their expanded and explanatory content is substantially the same as that of the method embodiments and is not repeated here.
Embodiments of the present application also provide a computer program product, which includes computer program code, when the computer program code runs on a computer, the computer is caused to execute the method in the above various possible embodiments.
Embodiments of the present application further provide a chip, which includes a memory and a processor, where the memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that a device in which the chip is installed executes the method in the above various possible embodiments.
It is to be understood that the foregoing scenarios are only examples, and do not constitute a limitation on application scenarios of the technical solutions provided in the embodiments of the present application, and the technical solutions of the present application may also be applied to other scenarios. For example, as can be known by those skilled in the art, with the evolution of system architecture and the emergence of new service scenarios, the technical solution provided in the embodiments of the present application is also applicable to similar technical problems.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs.
The units in the device in the embodiment of the application can be merged, divided and deleted according to actual needs.
In the present application, the same or similar terms, technical solutions, and/or application scenario descriptions are generally described in detail only at their first occurrence; for brevity, the detailed description is generally not repeated at later occurrences. When understanding the technical solutions of the present application, for any same or similar term, technical solution, and/or application scenario description that is not detailed later, reference may be made to the earlier related detailed description.
In the present application, each embodiment is described with emphasis, and reference may be made to the description of other embodiments for parts that are not described or illustrated in any embodiment.
The technical features of the technical solutions of the present application may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the embodiments are described; however, as long as a combination of these technical features is not contradictory, it should be considered to fall within the scope described in the present application.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method of the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but in many cases, the former is a better implementation. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, a controlled terminal, or a network device) to execute the method of each embodiment of the present application.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (20)

1. An image processing method, characterized by comprising the steps of:
S1: determining a target partition mode parameter of a target image block based on a first partition mode parameter set;
S2: determining a prediction result of the target image block based on the target partition mode parameter;
wherein the first partition mode parameter set is a parameter mapping table representing a mapping relationship among a partition index, an angle index, and a distance index, and the angle index and/or the distance index corresponding to one or more partition indexes are adjusted based on the usage of the partition lines.
2. The method of claim 1, wherein the first partition mode parameter set includes partition mode parameters corresponding to one or more partition lines.
3. The method of claim 2, wherein the included angles formed between adjacent partition lines among the plurality of partition lines are unequal.
4. The method of claim 3, wherein the partition mode parameter is determined by adjusting, based on the usage of the partition lines by encoded images, the angle interval between a partition line in a target direction and its adjacent partition line.
5. The method of claim 4, wherein the angle interval between a partition line in the target direction and its adjacent partition line is smaller than the angle interval between a partition line in a reference direction and its adjacent partition line; and/or the application probability of the partition line in the target direction is greater than that of the partition line in the reference direction.
6. The method of claim 1, wherein the method further comprises:
acquiring the usage of the partition lines by encoded images;
determining the application probability of the partition line based on the usage;
updating the parameter mapping table based on the application probability.
7. The method of claim 6, wherein the determining the application probability of the partition line based on the usage comprises:
acquiring the number of times a first partition mode has been used in response to the number of encoded images reaching a preset threshold;
determining the application probability of each partition line based on the usage recording parameters and the number of uses, wherein the usage recording parameter records the total number of times each partition line corresponding to the partition mode has been used by encoded images.
8. The method of claim 6, wherein said updating the parameter mapping table based on the application probability comprises:
updating the angle mapping table based on the application probability of each partition line to obtain an updated angle mapping table;
and updating the angle index and/or the distance index in the parameter mapping table based on the updated angle mapping table.
9. The method as claimed in claim 8, wherein said updating the angle mapping table based on the application probability of each partition line to obtain an updated angle mapping table comprises:
and updating the angle mapping table based on the angle parameter corresponding to the first query index and/or the angle parameter corresponding to the second query index to obtain an updated angle mapping table.
10. The method of claim 9, wherein the updating the angle mapping table based on the angle parameter corresponding to the first query index and/or the angle parameter corresponding to the second query index to obtain an updated angle mapping table comprises:
determining a target angle parameter based on the angle parameter corresponding to the first query index and the angle parameter corresponding to the second query index;
and updating the angle mapping table based on the target angle parameter to obtain an updated angle mapping table.
11. The method of claim 9, wherein the second query index is an adjacent query index of the first query index, and the updating the angle mapping table based on the angle parameter corresponding to the first query index and/or the angle parameter corresponding to the second query index to obtain the updated angle mapping table comprises:
and adjusting the angle parameter corresponding to the second query index in the angle mapping table according to a preset adjustment rule to obtain an updated angle mapping table.
12. The method as claimed in claim 11, wherein the adjusting the angle parameter corresponding to the second query index in the angle mapping table according to a preset adjustment rule to obtain an updated angle mapping table comprises:
in response to the angle parameter corresponding to the adjacent query index being greater than the angle parameter corresponding to the first query index, reducing the angle parameter corresponding to the adjacent query index to obtain an updated angle mapping table; and/or,
in response to the angle parameter corresponding to the adjacent query index being smaller than the angle parameter corresponding to the first query index, increasing the angle parameter corresponding to the adjacent query index to obtain an updated angle mapping table.
13. The method according to claim 12, wherein the difference between the adjusted angle parameter corresponding to the adjacent query index and the angle parameter corresponding to the first query index is within a preset range; the preset range is determined based on the application probability of the partition line corresponding to the first query index.
14. The method of any of claims 6 to 13, wherein after updating the parameter mapping table based on the application probability, the method further comprises:
and sending a bit stream to a decoding end, wherein the bit stream comprises indication information of the updated parameter mapping table, and the indication information is used for instructing the decoding end to determine a prediction result of the target image block by using the updated parameter mapping table.
15. The method according to any one of claims 1 to 13, wherein before step S1, the method further comprises:
performing predictive coding on a target image block by using at least one prediction mode based on parameters of the target image block, and determining the rate-distortion cost corresponding to each prediction mode;
performing step S1 when the prediction mode with the smallest rate-distortion cost is the first partition mode.
16. The method according to any one of claims 1 to 13, wherein the first partition mode includes at least one partition mode, and step S1 comprises:
determining the rate-distortion cost of applying each partition mode included in the first partition mode to the target image block;
determining the partition mode with the smallest rate-distortion cost as the target partition mode of the target image block;
determining, from the first partition mode parameter set, the mode parameter corresponding to the target partition mode, and determining the mode parameter corresponding to the target partition mode as the target partition mode parameter of the target image block.
17. The method according to any one of claims 1 to 13, wherein the target image block includes a first partition and a second partition, the first partition and the second partition being two image areas obtained by dividing the target image block with a partition line, and step S2 comprises:
determining partition weights based on the target partition mode parameter;
determining a first prediction result set and a second prediction result set of the target image block, wherein the first prediction result set and the second prediction result set are obtained by referring to motion vectors of different reference frames;
determining a prediction result of the first partition and a prediction result of the second partition based on at least one of the partition weights, the first set of prediction results, and the second set of prediction results;
and determining the prediction result of the first partition and the prediction result of the second partition as the prediction results of the target image block.
18. The method of claim 17, wherein the partition weights include a first weight corresponding to the first partition and a second weight corresponding to the second partition, the first weight or the second weight being determined based on the distance between a pixel point included in the target image block and the partition line, the method further comprising:
and determining the prediction result of the pixel point based on at least one of the first weight, the second weight, a first prediction result of the pixel point included in the target image block in the first prediction result set and a second prediction result of the pixel point in the second prediction result set.
19. An intelligent terminal, characterized in that the intelligent terminal comprises: a memory and a processor, wherein the memory stores an image processing program which, when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 18.
20. A computer-readable storage medium, characterized in that the storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 18.
CN202210317793.4A 2022-03-29 2022-03-29 Image processing method, intelligent terminal and storage medium Active CN114422781B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210317793.4A CN114422781B (en) 2022-03-29 2022-03-29 Image processing method, intelligent terminal and storage medium
PCT/CN2023/078559 WO2023185351A1 (en) 2022-03-29 2023-02-27 Image processing method, intelligent terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210317793.4A CN114422781B (en) 2022-03-29 2022-03-29 Image processing method, intelligent terminal and storage medium

Publications (2)

Publication Number Publication Date
CN114422781A CN114422781A (en) 2022-04-29
CN114422781B (en) 2022-07-12

Family

ID=81263240

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210317793.4A Active CN114422781B (en) 2022-03-29 2022-03-29 Image processing method, intelligent terminal and storage medium

Country Status (2)

Country Link
CN (1) CN114422781B (en)
WO (1) WO2023185351A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114422781B (en) * 2022-03-29 2022-07-12 深圳传音控股股份有限公司 Image processing method, intelligent terminal and storage medium
CN114598880B (en) * 2022-05-07 2022-09-16 深圳传音控股股份有限公司 Image processing method, intelligent terminal and storage medium
CN115379214B (en) * 2022-10-26 2023-05-23 深圳传音控股股份有限公司 Image processing method, intelligent terminal and storage medium
CN117176959B (en) * 2023-11-02 2024-04-09 深圳传音控股股份有限公司 Processing method, processing apparatus, and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109905702A (en) * 2017-12-11 2019-06-18 腾讯科技(深圳)有限公司 The method, apparatus and storage medium that reference information determines in a kind of Video coding

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2464116A1 (en) * 2010-12-13 2012-06-13 Thomson Licensing Method and device for video encoding using geometry adaptive block partitioning
CN102611884B (en) * 2011-01-19 2014-07-09 华为技术有限公司 Image encoding and decoding method and encoding and decoding device
WO2019001733A1 (en) * 2017-06-30 2019-01-03 Huawei Technologies Co., Ltd. Encoder, decoder, computer program and computer program product for processing a frame of a video sequence
US20220109835A1 (en) * 2019-03-11 2022-04-07 Apple Inc. Method for encoding/decoding image signal, and device therefor
US11570434B2 (en) * 2019-08-23 2023-01-31 Qualcomm Incorporated Geometric partition mode with harmonized motion field storage and motion compensation
WO2021040572A1 (en) * 2019-08-30 2021-03-04 Huawei Technologies Co., Ltd. Method and apparatus of high-level signaling for non-rectangular partitioning modes
US11317094B2 (en) * 2019-12-24 2022-04-26 Tencent America LLC Method and apparatus for video coding using geometric partitioning mode
CN114586366A (en) * 2020-04-03 2022-06-03 Oppo广东移动通信有限公司 Inter-frame prediction method, encoder, decoder, and storage medium
CN115280778A (en) * 2020-04-03 2022-11-01 Oppo广东移动通信有限公司 Inter-frame prediction method, encoder, decoder, and storage medium
WO2021253373A1 (en) * 2020-06-19 2021-12-23 Alibaba Group Holding Limited Probabilistic geometric partitioning in video coding
WO2022019613A1 (en) * 2020-07-20 2022-01-27 한국전자통신연구원 Method, apparatus, and recording medium for encoding/decoding image by using geometric partitioning
CN114422781B (en) * 2022-03-29 2022-07-12 深圳传音控股股份有限公司 Image processing method, intelligent terminal and storage medium

Also Published As

Publication number Publication date
WO2023185351A1 (en) 2023-10-05
CN114422781A (en) 2022-04-29

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant