CN111932463B - Image processing method, device, equipment and storage medium - Google Patents

Image processing method, device, equipment and storage medium

Info

Publication number
CN111932463B
Authority
CN
China
Prior art keywords
image
gradient
target
resolution
convolution
Prior art date
Legal status
Active
Application number
CN202010872723.6A
Other languages
Chinese (zh)
Other versions
CN111932463A (en)
Inventor
宋晨光
熊诗尧
龙泉
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202010872723.6A
Publication of CN111932463A
Application granted
Publication of CN111932463B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4053Super resolution, i.e. output image resolution higher than sensor resolution
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Abstract

The application discloses an image processing method, apparatus, device and storage medium, belonging to the field of multimedia technologies. The embodiments of the application can be implemented by computer vision technology in artificial intelligence, and specifically realize the image processing method using image processing technology and video processing technology. In the embodiments of the application, gradient features are introduced as a new factor, and the target convolution parameters are determined directly from an existing correspondence according to this factor, so the determination of the target convolution parameters takes little time; no fine feature extraction or nonlinear mapping of the image is needed, which greatly reduces the time consumed by feature extraction and processing. The high-resolution image is then obtained by a single convolution step using the target convolution parameters, so that, compared with reconstruction based on extracted image features, the image processing steps are effectively simplified and image processing efficiency is improved.

Description

Image processing method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of multimedia technologies, and in particular, to an image processing method, apparatus, device, and storage medium.
Background
With the development of multimedia technology, image processing technology is being applied more and more widely. For example, a low-resolution image can be processed into a high-resolution image for display by performing super-resolution processing on it, thereby improving the image quality.
At present, image processing methods are usually implemented by super-resolution algorithms, such as interpolation-based super-resolution algorithms and learning-based super-resolution algorithms. For example, take a machine-learning-based super-resolution algorithm such as SRCNN (Super-Resolution Convolutional Neural Network): its structure is very simple, with only three network layers. The first convolution layer has 64 convolution kernels and is responsible for extracting features from the interpolated low-resolution image; the second convolution layer is responsible for nonlinear mapping of the features extracted by the first layer; and the third convolution layer performs feature reconstruction and generates the final high-resolution image.
In this method, feature extraction, nonlinear mapping, and feature-based reconstruction of the image are time-consuming; even on a device with good performance, such as a server, one frame can only be computed in seconds. Image processing therefore takes a long time and is very inefficient, and in particular, when super-resolving the images in a video, real-time processing cannot be achieved.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, image processing equipment and a storage medium, which can reduce time consumption and improve image processing efficiency. The technical scheme is as follows:
in one aspect, there is provided an image processing method, the method including:
up-sampling a first image according to a target resolution to obtain a second image with the target resolution, wherein the resolution of the first image is smaller than the target resolution;
extracting features of the second image to obtain gradient features of the second image, wherein the gradient features are used for indicating the relation between pixel points in the second image and adjacent pixel points;
determining a target convolution parameter corresponding to the gradient feature of the second image according to the corresponding relation between the gradient feature and the convolution parameter;
and carrying out convolution processing on the second image based on the target convolution parameter to obtain a third image, wherein the resolution of the third image is the target resolution.
In one possible implementation manner, the smoothing the first gradient information of the pixel point in the second image to obtain second gradient information includes:
and carrying out Gaussian blur processing on the first gradient information of the pixel points in the second image to obtain second gradient information.
In one possible implementation manner, the obtaining the pixel value variation at the pixel point according to the pixel value of any pixel point in the second image and the pixel value of the first adjacent pixel point includes:
and obtaining the differential at the pixel point according to the pixel value of any pixel point in the second image and the pixel value of the first adjacent pixel point, and taking the differential as the first gradient information.
In one possible implementation manner, the acquiring the first gradient information of the pixel point in the second image includes:
and acquiring first gradient information of the pixel points in the second image on the brightness channel.
In one possible implementation manner, the target resolution is less than or equal to a resolution threshold, and the method is applied to a terminal;
the method further comprises the steps of:
and rendering the third image.
In one possible implementation, the target resolution is greater than a resolution threshold, and the method is applied to a server;
the method further comprises the steps of:
compressing the third image to obtain compressed data of the third image;
and sending the compressed data to a terminal, and rendering the third image by the terminal based on the compressed data.
In one possible implementation, the method includes:
inputting the first image into an image processing model, performing the steps of up-sampling, feature extraction, determining target convolution parameters and convolution processing by the image processing model, and outputting the third image.
In one possible implementation, the training process of the image processing model includes:
acquiring a first sample image and a target sample image, wherein the resolution of the target sample image is the target resolution, and the resolution of the first sample image is smaller than the target resolution;
upsampling the first sample image to obtain a second sample image of the target resolution;
extracting features of the second sample image to obtain gradient features of the second sample image, wherein the gradient features are used for indicating the relation between pixel points in the second sample image and adjacent pixel points;
determining a target convolution parameter corresponding to the gradient features according to the gradient features of the second sample image and the target sample image;
and generating the corresponding relation between the gradient characteristic and the convolution parameter based on the target convolution parameter.
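For illustration, one way such a correspondence could be generated is to bucket the pixels of the second sample image by a quantized gradient feature and fit one convolution kernel per bucket against the target sample image. The sketch below uses only the quantized gradient angle as the feature and a least-squares fit; the solver, patch size, bucket count, and all names are assumptions, not the patent's prescribed procedure:

```python
import numpy as np

def train_filter_bank(second_sample, target_sample, patch=5, n_buckets=24):
    # Bucket each pixel by its quantized gradient angle, then solve a
    # least-squares problem per bucket: patch around the pixel in the
    # (upsampled) second sample image -> pixel in the target sample image.
    r = patch // 2
    gy, gx = np.gradient(second_sample.astype(np.float64))
    patches = {b: [] for b in range(n_buckets)}
    targets = {b: [] for b in range(n_buckets)}
    for i in range(r, second_sample.shape[0] - r):
        for j in range(r, second_sample.shape[1] - r):
            angle = np.arctan2(gy[i, j], gx[i, j]) % np.pi
            b = min(int(angle / np.pi * n_buckets), n_buckets - 1)
            patches[b].append(second_sample[i - r:i + r + 1, j - r:j + r + 1].ravel())
            targets[b].append(target_sample[i, j])
    bank = {}
    for b in range(n_buckets):
        if targets[b]:
            k, *_ = np.linalg.lstsq(np.asarray(patches[b], dtype=np.float64),
                                    np.asarray(targets[b], dtype=np.float64),
                                    rcond=None)
            bank[b] = k.reshape(patch, patch)
    return bank  # gradient-feature -> convolution-parameter correspondence
```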
In one possible implementation, the image processing model is trained based on a first sample image and at least two target sample images, the resolutions of the at least two target sample images including at least one target resolution.
In one possible implementation, the image processing model includes a plurality of filters in series;
the steps of performing the upsampling, feature extraction, determining target convolution parameters, and convolution processing by the image processing model, outputting the third image, comprising:
the step of upsampling is performed by the image processing model and the steps of feature extraction and convolution processing are performed by the plurality of serial filters.
In one possible implementation, the step of performing the feature extraction and convolution processing by the plurality of serial filters includes:
creating at least one object;
the steps of feature extraction, determining target convolution parameters, and convolution processing are performed by the at least one object, the number of objects being less than the number of filters.
In one possible implementation, the method further includes:
acquiring the frame rate of the target video in real time;
updating the rendering time of each frame of image according to the frame rate;
and responding to the rendering time length being larger than a second target threshold value, and carrying out frame loss processing on the target video.
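The following sketch illustrates one possible reading of this frame-control logic; the 1.5x budget rule, the callback signatures, and the drop-one-frame policy are assumptions for illustration only:

```python
import time

def render_with_frame_drop(frames, get_frame_rate, render, factor=1.5):
    # The per-frame rendering budget is refreshed from the frame rate that is
    # acquired in real time; a frame whose rendering exceeds the threshold
    # triggers frame-loss processing (here: skipping the next frame).
    it = iter(frames)
    for frame in it:
        budget = 1.0 / max(get_frame_rate(), 1e-6)  # updated rendering duration
        start = time.monotonic()
        render(frame)
        if time.monotonic() - start > factor * budget:
            next(it, None)  # drop one frame to catch up
```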
In one aspect, there is provided an image processing apparatus including:
The up-sampling module is used for up-sampling the first image according to the target resolution to obtain a second image with the target resolution, and the resolution of the first image is smaller than the target resolution;
the feature extraction module is used for extracting features of the second image to obtain gradient features of the second image, wherein the gradient features are used for indicating the relation between pixel points in the second image and adjacent pixel points;
the determining module is used for determining a target convolution parameter corresponding to the gradient characteristic of the second image according to the corresponding relation between the gradient characteristic and the convolution parameter;
and the convolution module is used for carrying out convolution processing on the second image based on the target convolution parameter to obtain a third image, and the resolution of the third image is the target resolution.
In one possible implementation manner, the feature extraction module includes a first acquisition unit, a smoothing unit, and a second acquisition unit;
the first acquisition unit is used for acquiring first gradient information of pixel points in the second image;
the smoothing unit is used for carrying out smoothing processing on the first gradient information of the pixel points in the second image to obtain second gradient information of the pixel points;
The second obtaining unit is configured to obtain a gradient feature corresponding to the second gradient information.
In one possible implementation manner, the first obtaining unit is configured to obtain, according to a pixel value of any pixel point in the second image and a pixel value of a first neighboring pixel point, a pixel value variation at the pixel point, and use the pixel value variation as the first gradient information, where a distance between the first neighboring pixel point and the pixel point is smaller than a first distance threshold.
In one possible implementation manner, the first obtaining unit is configured to obtain a differential at a pixel point according to a pixel value of any pixel point in the second image and a pixel value of a first adjacent pixel point, and use the differential as the first gradient information.
In one possible implementation manner, the first obtaining unit is configured to obtain first gradient information of a pixel point in the second image on a luminance channel.
In one possible implementation manner, the smoothing unit is configured to perform weighted summation on the first gradient information of any pixel point in the second image and the first gradient information of a second adjacent pixel point, so as to obtain second gradient information of the pixel point, where a distance between the second adjacent pixel point and the pixel point is smaller than a second distance threshold.
In one possible implementation, the second gradient information of one pixel point includes gradient information in different directions;
the gradient features include at least one of angle, intensity, and correlation;
the second obtaining unit is used for obtaining at least one of the angle, intensity and correlation of the gradient of the pixel point according to the gradient information of any pixel point in different directions.
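One common way to derive these three quantities, offered here as an assumption rather than the patent's stated construction, is an eigen-decomposition of the 2x2 structure tensor formed from a pixel's gradient information in different directions:

```python
import numpy as np

def gradient_features(dxdx, dydy, dxdy):
    # Assemble the structure tensor from the (smoothed) gradient products of
    # one pixel point; its eigen-decomposition yields the angle of the
    # dominant gradient direction, an intensity, and a correlation measure.
    tensor = np.array([[dxdx, dxdy],
                       [dxdy, dydy]], dtype=np.float64)
    evals, evecs = np.linalg.eigh(tensor)            # ascending eigenvalues
    l1, l2 = max(evals[1], 0.0), max(evals[0], 0.0)
    angle = np.arctan2(evecs[1, 1], evecs[0, 1]) % np.pi
    intensity = np.sqrt(l1)
    correlation = (np.sqrt(l1) - np.sqrt(l2)) / (np.sqrt(l1) + np.sqrt(l2) + 1e-12)
    return angle, intensity, correlation
```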
In one possible implementation manner, the smoothing unit is configured to perform gaussian blur processing on the first gradient information of the pixel point in the second image, so as to obtain second gradient information.
In one possible implementation, the determining module is configured to:
quantifying the gradient features;
the step of determining a target convolution parameter is performed based on the quantified gradient characteristics.
In one possible implementation manner, the feature extraction module is configured to:
windowing the second image to obtain at least one image block;
extracting the characteristics of the at least one image block to obtain gradient characteristics of the at least one image block;
the determining module is used for determining a target convolution parameter corresponding to the gradient feature of the at least one image block according to the corresponding relation between the gradient feature and the convolution parameter.
In one possible implementation, the apparatus further includes:
the sharpening module is used for carrying out sharpening processing on the third image to obtain a fourth image;
and the first rendering module is used for rendering the fourth image.
In one possible implementation, the sharpening module is configured to:
acquiring difference information of the third image and the first image;
and acquiring a fourth image based on the difference information, the target coefficient and the first image, wherein the sharpness of a target area in the fourth image is greater than the sharpness of the target area in the first image.
In one possible implementation, the apparatus further includes:
the first updating module is used for responding to a coefficient setting instruction and updating the target coefficient;
the sharpening module is used for executing the step of acquiring the fourth image based on the updated target coefficient.
In one possible implementation, the apparatus further includes:
the first acquisition module is used for acquiring at least two frames of images of the target video;
the up-sampling module, the feature extraction module, the determination module and the convolution module are respectively used for performing the steps of up-sampling, feature extraction, target convolution parameter determination and convolution processing on at least one frame of image in the at least two frames of images to obtain at least one frame of target image corresponding to the at least one frame of image;
And the second rendering module is used for rendering the images except the at least one frame of image and the at least one frame of target image in the at least two frames of images.
In one possible implementation, the apparatus further includes:
the second acquisition module is used for acquiring the time interval between the rendering time of any two frames of images in the target video in real time;
and the first frame dropping module is used for dropping any one of the two frames of images in response to the time interval being smaller than a first target threshold.
In one possible implementation, the apparatus further includes:
the third acquisition module is used for acquiring the frame rate of the target video in real time;
the second updating module is used for updating the rendering time of each frame of image according to the frame rate;
and the second frame loss module is used for responding to the rendering time length being larger than a second target threshold value and carrying out frame loss processing on the target video.
In one possible implementation, the target resolution is less than or equal to a resolution threshold, and the apparatus is applied to a terminal;
the apparatus further comprises:
and the third rendering module is used for rendering the third image.
In one possible implementation, the target resolution is greater than a resolution threshold, and the apparatus is applied to a server;
The apparatus further comprises:
the compression module is used for compressing the third image to obtain compressed data of the third image;
and the sending module is used for sending the compressed data to a terminal, and the terminal renders the third image based on the compressed data.
In a possible implementation manner, the device is configured to input the first image into an image processing model, perform the steps of upsampling, feature extraction, determining a target convolution parameter and convolution processing by the image processing model, and output the third image.
In one possible implementation, the training process of the image processing model includes:
acquiring a first sample image and a target sample image, wherein the resolution of the target sample image is the target resolution, and the resolution of the first sample image is smaller than the target resolution;
upsampling the first sample image to obtain a second sample image of the target resolution;
extracting features of the second sample image to obtain gradient features of the second sample image, wherein the gradient features are used for indicating the relation between pixel points in the second sample image and adjacent pixel points;
determining a target convolution parameter corresponding to the gradient features according to the gradient features of the second sample image and the target sample image;
and generating the corresponding relation between the gradient characteristic and the convolution parameter based on the target convolution parameter.
In one possible implementation, the image processing model is trained based on a first sample image and at least two target sample images, the resolutions of the at least two target sample images including at least one target resolution.
In one possible implementation, the image processing model includes a plurality of filters in series;
the steps of performing the upsampling, feature extraction, determining target convolution parameters, and convolution processing by the image processing model, outputting the third image, comprising:
the step of upsampling is performed by the image processing model and the steps of feature extraction and convolution processing are performed by the plurality of serial filters.
In one possible implementation, the step of performing the feature extraction and convolution processing by the plurality of serial filters includes:
creating at least one object;
the steps of feature extraction, determining target convolution parameters, and convolution processing are performed by the at least one object, the number of objects being less than the number of filters.
In one aspect, an electronic device is provided that includes one or more processors and one or more memories having stored therein at least one piece of program code that is loaded and executed by the one or more processors to implement various alternative implementations of the above-described image processing methods.
In one aspect, a computer readable storage medium having stored therein at least one program code loaded and executed by a processor to implement various alternative implementations of the image processing method described above is provided.
In one aspect, a computer program product or computer program is provided, the computer program product or computer program comprising one or more program codes, the one or more program codes being stored in a computer readable storage medium. The one or more processors of the electronic device are capable of reading the one or more program codes from the computer readable storage medium, the one or more processors executing the one or more program codes so that the electronic device can perform the image processing method of any one of the possible embodiments described above.
In the embodiments of the application, gradient features are introduced as a new factor, and the target convolution parameters are determined directly from an existing correspondence according to this factor, so the determination of the target convolution parameters takes little time; no fine feature extraction or nonlinear mapping of the image is needed, which greatly reduces the time consumed by feature extraction and processing. The high-resolution image is then obtained by a single convolution step using the target convolution parameters, so that, compared with reconstruction based on extracted image features, the image processing steps are effectively simplified and image processing efficiency is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of an implementation environment of an image processing method according to an embodiment of the present application;
FIG. 2 is a flowchart of an image processing method according to an embodiment of the present application;
FIG. 3 is a flowchart of an image processing method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an upsampling method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an upsampling method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a pixel location according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of execution time of a drawing command for each filter according to an embodiment of the present application;
FIG. 8 is an overall architecture diagram of an image processing method provided in an embodiment of the present application;
FIG. 9 is an exploded view of a filter layer according to an embodiment of the present disclosure;
fig. 10 is a schematic view of an image display effect after each image processing step according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a training process of an image processing model according to an embodiment of the present application;
FIG. 12 is a schematic diagram of an image processing model using process according to an embodiment of the present application;
FIG. 13 is a schematic diagram of a sharpening process according to an embodiment of the present disclosure;
fig. 14 is a schematic view of a terminal interface provided in an embodiment of the present application;
FIG. 15 is a schematic illustration of a different gear interface display provided by an embodiment of the present application;
FIG. 16 is a schematic diagram of a GPU occupation and frame rate control scheme according to an embodiment of the present application;
Fig. 17 is a schematic structural view of an image processing apparatus according to an embodiment of the present application;
fig. 18 is a block diagram of a terminal according to an embodiment of the present application;
fig. 19 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The terms "first," "second," and the like in this application are used to distinguish between identical or similar items that have substantially the same function and function, and it should be understood that there is no logical or chronological dependency between the "first," "second," and "nth" terms, nor is it limited to the number or order of execution. It will be further understood that, although the following description uses the terms first, second, etc. to describe various elements, these elements should not be limited by the terms. These terms are only used to distinguish one element from another element. For example, a first image can be referred to as a second image, and similarly, a second image can be referred to as a first image, without departing from the scope of the various described examples. The first image and the second image can both be images, and in some cases, can be separate and distinct images.
The term "at least one" in this application means one or more, the term "plurality" in this application means two or more, for example, a plurality of data packets means two or more.
It is to be understood that the terminology used in the description of the various examples described herein is for the purpose of describing particular examples only and is not intended to be limiting. As used in the description of the various described examples and in the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. The term "and/or" is an association relationship describing an associated object, meaning that three relationships can exist, e.g., a and/or B, can be represented: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" in the present application generally indicates that the front-rear association object is an or relationship.
It should also be understood that, in the embodiments of the present application, the sequence numbers of the processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and the sequence numbers should not constitute any limitation on the implementation process of the embodiments of the present application.
It should also be understood that determining B from A does not mean determining B from A alone; B can also be determined from A and/or other information.
It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "if" may be interpreted to mean "when" or "upon" or "in response to a determination" or "in response to detecting". Similarly, the phrase "if [a stated condition or event] is detected" may be interpreted to mean "upon determining" or "in response to determining" or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]", depending on the context.
The terms referred to in this application are described below.
Image super-resolution: i.e., Image Super Resolution, refers to restoring a high-resolution image from a low-resolution image.
LR (Low Resolution): low resolution, in the present embodiment, a low resolution image is referred to by LR.
HR (High Resolution): high resolution, in the present embodiment, high resolution images are referred to by HR.
Super-Resolution (SR) is a low-level image processing task that maps a low-resolution image to a high resolution in order to enhance image detail. Image blur has many causes, such as various kinds of noise, lossy compression, and downsampling. Super resolution is a classical application of computer vision. SR means reconstructing a corresponding high-resolution image from an observed low-resolution image by software or hardware methods (that is, improving the resolution), and has important application value in fields such as surveillance equipment, satellite remote sensing, digital high definition, microscopic imaging, video coding communication, video restoration and medical imaging.
Super-resolution tasks may include Image Super-Resolution (ISR) and Video Super-Resolution (VSR). Video super-resolution can be achieved by performing image super-resolution on every frame or on some of the frames in the video, or by combining multiple frames of the video.
Image quality enhancement: including resolution enhancement and color enhancement, i.e., improving the image quality by algorithms.
On-device super-resolution: a super-resolution algorithm running on a mobile terminal. Compared with a super-resolution algorithm running on a server, it needs to balance the algorithm's effect against performance and power consumption, and is therefore more difficult.
SRCNN (Super-Resolution Convolutional Neural Network) algorithm: a classical super-resolution algorithm implemented with a three-layer convolutional neural network.
The image processing method provided by the embodiment of the application can be realized through artificial intelligence, and the related content of the artificial intelligence is explained below.
Artificial Intelligence (AI) is the theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use the knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision-making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technology mainly includes directions such as computer vision, speech processing, natural language processing, and machine learning/deep learning.
Computer Vision (CV) is a science that studies how to make machines "see"; more specifically, it replaces human eyes with cameras and computers to perform machine vision tasks such as recognition, tracking and measurement on a target, and further performs graphic processing so that the computer produces an image more suitable for human eyes to observe or for transmission to an instrument for detection. As a scientific discipline, computer vision studies related theories and technologies in an attempt to build artificial intelligence systems that can acquire information from images or multidimensional data. Computer vision techniques typically include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D techniques, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric recognition techniques such as face recognition and fingerprint recognition.
Machine Learning (ML) is a multi-field interdiscipline involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory and other disciplines. It specializes in studying how computers simulate or implement human learning behaviors to acquire new knowledge or skills and to reorganize existing knowledge structures so as to continuously improve their own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent; it is applied throughout all areas of artificial intelligence. Machine learning and deep learning typically include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and teaching learning.
With the research and advancement of artificial intelligence technology, it is being studied and applied in many fields, such as common smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, autonomous driving, unmanned aerial vehicles, robots, smart medical care and smart customer service. It is believed that, with the development of technology, artificial intelligence technology will be applied in more fields and show increasing value.
The scheme provided by the embodiment of the application relates to technologies of image processing, video processing, machine learning and the like in computer vision of artificial intelligence, and is specifically described by the following embodiments.
The following describes the environment in which the present application is implemented.
Fig. 1 is a schematic diagram of an implementation environment of an image processing method according to an embodiment of the present application. The implementation environment includes a terminal 101 or the implementation environment includes a terminal 101 and an image processing platform 102. The terminal 101 is connected to the image processing platform 102 via a wireless network or a wired network.
The terminal 101 can be at least one of a smart phone, a game console, a desktop computer, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, or a laptop computer. The terminal 101 installs and runs an application program supporting image processing, which can be, for example, a system application, an instant messaging application, a news push application, a shopping application, an online video application, or a social application.
The terminal 101 can have an image processing function; for example, it can process an image and render the image according to the processing result. Illustratively, in this embodiment, the terminal 101 is capable of receiving an image or video sent by a server and processing one or more frames of the image or video. The terminal 101 can complete this work independently, or the image processing platform 102 can provide data services for it. For example, the image processing platform 102 can process an image and send the processed image to the terminal 101, which then renders it. The embodiments of the present application are not limited in this regard.
The image processing platform 102 includes at least one of a server, a plurality of servers, a cloud computing platform, and a virtualization center. The image processing platform 102 is used to provide background services for applications that support image processing. Optionally, the image processing platform 102 takes on primary processing work and the terminal 101 takes on secondary processing work; alternatively, the image processing platform 102 performs a secondary processing job, and the terminal 101 performs a primary processing job; alternatively, the image processing platform 102 or the terminal 101 can each independently undertake processing work. Alternatively, the image processing platform 102 and the terminal 101 perform collaborative computing by using a distributed computing architecture.
Optionally, the image processing platform 102 includes at least one server 1021 and a database 1022, where the database 1022 is configured to store data. In this embodiment of the present application, the database 1022 can store a first sample image or a target sample image and provide a data service for the at least one server 1021.
The server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, basic cloud computing services such as big data and artificial intelligence platforms. The terminal can be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc.
Those skilled in the art will appreciate that the number of terminals 101 and servers 1021 can be greater or fewer. For example, the number of the terminals 101 and the servers 1021 can be only one, or the number of the terminals 101 and the servers 1021 can be tens or hundreds, or more, and the number and the device type of the terminals or the servers are not limited in the embodiment of the present application.
Fig. 2 is a flowchart of an image processing method provided in an embodiment of the present application, where the method is applied to an electronic device, and the electronic device is a terminal or a server, and referring to fig. 2, the method includes the following steps.
201. And the electronic equipment upsamples the first image according to the target resolution to obtain a second image with the target resolution, wherein the resolution of the first image is smaller than the target resolution.
The resolution of the first image is low; when it needs to be processed into a high-resolution image, a high-resolution second image can be obtained by upsampling the first image.
It will be appreciated that the density of pixels in the high resolution second image is higher than the density of pixels in the first image, and that the number of pixels in the second image is greater than the number of pixels in the first image. The resolution of the first image may be increased by adding pixels in the first image in an upsampling manner to obtain a second image.
In this embodiment of the application, the second image is obtained directly by upsampling the first image, so the interpolated pixels may not reflect the original image content as well as the original pixels do, or the pixel values of the interpolated pixels may not transition smoothly to those of adjacent pixels. The electronic device can further process the second image to obtain a third image with a better display effect. For the specific processing, see steps 202 to 204 below.
202. And the electronic equipment performs feature extraction on the second image to obtain gradient features of the second image, wherein the gradient features are used for indicating the relation between the pixel points in the second image and the adjacent pixel points.
The electronic device needs to fine-tune the pixel values of the pixel points in the second image, so that the relationships between the pixel points better present the image content. The relationships between pixel points can therefore be characterized by the gradient features of the second image, and how to process the second image is determined based on these gradient features, so as to optimize the relationships between adjacent pixel points and thereby optimize the gradient features, making the super-resolved image more natural and giving it a better display effect.
203. And the electronic equipment determines a target convolution parameter corresponding to the gradient characteristic of the second image according to the corresponding relation between the gradient characteristic and the convolution parameter.
The corresponding relation between the gradient characteristics and the convolution parameters is stored in the electronic equipment, and when the convolution parameters need to be determined, the corresponding relation is queried through the gradient characteristics, so that the corresponding target convolution parameters can be determined. The convolution parameter is used for carrying out convolution processing on the image so as to change the pixel value of the pixel point.
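For illustration only, the stored correspondence can be pictured as a lookup table keyed by quantized gradient features; all keys and kernel values below are made up:

```python
# Toy correspondence: quantized (angle, intensity, correlation) buckets
# mapped to convolution parameters (3x3 kernels). Purely hypothetical values.
correspondence = {
    (0, 1, 0): [[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.0]],
    (1, 1, 0): [[0.0, 0.1, 0.0], [0.1, 0.6, 0.1], [0.0, 0.1, 0.0]],
}

def target_convolution_parameter(quantized_feature):
    # Determining the target convolution parameter is a direct lookup; no
    # fine feature extraction or nonlinear mapping is involved.
    return correspondence[quantized_feature]
```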
204. And the electronic equipment carries out convolution processing on the second image based on the target convolution parameter to obtain a third image, wherein the resolution of the third image is the target resolution.
The convolution processing is a process of updating the pixel values of the pixel points in the second image, so the resolution of the resulting third image is the same as that of the second image, namely the target resolution. By querying the corresponding target convolution parameters through the gradient features, the pixel values of the pixel points can be processed to obtain a third image that is more natural and has a better display effect.
In the embodiments of the application, gradient features are introduced as a new factor, and the target convolution parameters are determined directly from an existing correspondence according to this factor, so the determination of the target convolution parameters takes little time; no fine feature extraction or nonlinear mapping of the image is needed, which greatly reduces the time consumed by feature extraction and processing. The high-resolution image is then obtained by a single convolution step using the target convolution parameters, so that, compared with reconstruction based on extracted image features, the image processing steps are effectively simplified and image processing efficiency is improved.
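Read together, steps 201 to 204 amount to the following end-to-end sketch (Python with NumPy and OpenCV, assuming a single-channel input, a precomputed filter_bank, and a quantize function that buckets per-pixel gradient features; these names are illustrative, not from the patent):

```python
import cv2
import numpy as np

def super_resolve(first_image, target_hw, filter_bank, quantize):
    h, w = target_hw
    # 201: upsample the first image to the target resolution (bilinear here;
    # the method allows other interpolation schemes)
    second = cv2.resize(first_image, (w, h),
                        interpolation=cv2.INTER_LINEAR).astype(np.float32)
    # 202: extract per-pixel gradient features
    gy, gx = np.gradient(second)
    buckets = quantize(gx, gy)          # integer feature bucket per pixel
    # 203 + 204: look up the target convolution parameters for each bucket
    # and convolve; one filter2D pass per bucket keeps the work vectorized
    third = np.empty_like(second)
    for b in np.unique(buckets):
        filtered = cv2.filter2D(second, -1, filter_bank[int(b)])
        mask = buckets == b
        third[mask] = filtered[mask]
    return third                        # same target resolution as the second image
```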
In the embodiments of the present application, the electronic device can acquire an image and perform image super-resolution on it, processing a low-resolution image into a high-resolution image to improve image quality. In one possible implementation, the image may be any frame of a video; that is, the electronic device may acquire multiple frames of a video and perform image super-resolution on one or more of those frames to improve the quality of the video. The flow of the image processing method is described below through the embodiment shown in fig. 3. Fig. 3 is a flowchart of an image processing method provided in an embodiment of the present application; referring to fig. 3, the method includes the following steps.
301. The electronic device obtains a first image and a target resolution.
The resolution of the first image is less than the target resolution. The target resolution is the resolution of the processed image, that is, the resolution which is expected to be achieved after the processing.
In one possible implementation, the target resolution may be set by the relevant technician as required; for example, the target resolution may be 1080P, where P stands for Progressive (progressive scanning). In a specific scenario, a related technician sets a target resolution, and when the electronic device acquires the first image, the electronic device can process the first image into an image with the target resolution.
In another possible implementation manner, the target resolution may be set by the user according to the user's own needs. For example, the user sets the target resolution desired for viewing (such as 1080P) in the electronic device; after acquiring the first image, the electronic device processes it into an image with the target resolution desired by the user and displays that image to meet the user's needs.
In one possible implementation, the number of target resolutions may be one or more. That is, the electronic device may acquire a target resolution, and process the first image to obtain an image of the target resolution. The electronic device may also acquire a plurality of target resolutions, and process the first image based on each target resolution, to obtain an image of each target resolution. The electronic device processes the first image based on a plurality of target resolutions to obtain a plurality of images, and the resolutions of the plurality of images are respectively the plurality of target resolutions.
The foregoing provides several possible setting manners and numbers of target resolutions, and the setting manners and numbers are not particularly limited in the embodiments of the present application.
For the electronic device, the electronic device may be a terminal or a server. That is, the image processing method provided in the embodiments of the present application may be applied to a terminal or a server. It can be appreciated that the server has better processing performance than the terminal and can run algorithms with a high computational load. Compared with a server, the terminal directly faces the user and can directly present the processing result to the user.
In one possible implementation, the electronic device may provide an image processing switch control for turning the image processing function on and off. For example, the image processing method may be understood as a super-resolution algorithm, and the image processing switch control may also be called a super-resolution switch.
In this implementation, the electronic device may determine whether to perform the steps of the image processing method based on the state of the image processing switch control. Specifically, when the electronic device acquires the first image, it may detect the state of the image processing switch control and, in response to the state being the on state, perform the following steps 302 to 309. In response to the state of the image processing switch control being the off state, the terminal does not perform the subsequent steps and directly renders the first image.
302. And the electronic equipment upsamples the first image according to the target resolution to obtain a second image with the target resolution, wherein the resolution of the first image is smaller than the target resolution.
Upsampling is in fact a process of enlarging an image; its main purpose is to enlarge the original image so that it can be displayed on a higher-resolution display device. In one possible implementation, the upsampling process may be implemented by image interpolation, that is, by computing interpolated pixel values. Image interpolation refers to inserting new pixels between the pixel points of the original image using a suitable interpolation algorithm. It will be appreciated that upsampling increases the number of pixels in the image and can thereby improve the image quality.
The upsampling process may be implemented by a variety of interpolation methods, and several possible interpolation methods are provided below, and the embodiment of the present application is not limited to what manner is specifically adopted.
In one possible implementation, the electronic device may upsample the first image using the nearest neighbor method to obtain the second image. The nearest neighbor method requires no calculation: among the four pixels adjacent to the pixel to be added, the gray value of the nearest one is assigned to the pixel to be added. For example, as shown in fig. 4, let (i+u, j+v) be the coordinates of the pixel to be added, where i and j are positive integers and u and v are fractions greater than zero and less than 1; the gray value of the pixel to be added is f(i+u, j+v). If (i+u, j+v) falls in region A, i.e., u < 0.5 and v < 0.5, the gray value of the upper-left pixel is assigned to the pixel to be added; similarly, the gray value of the upper-right pixel is assigned when it falls in region B, that of the lower-left pixel when it falls in region C, and that of the lower-right pixel when it falls in region D.
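The region test above reduces to two threshold comparisons, one per coordinate. A minimal sketch in Python, assuming f is a 2D gray-value array indexed as f[row][column] (the function name is illustrative):

```python
def nearest_neighbor(f, i, j, u, v):
    # Regions A/B/C/D from fig. 4 collapse to two tests: u picks the row
    # (i for regions A/B, i+1 for C/D), v picks the column (j or j+1).
    ii = i if u < 0.5 else i + 1
    jj = j if v < 0.5 else j + 1
    return f[ii][jj]
```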
In another possible implementation, the electronic device can upsample the first image using bilinear interpolation to obtain the second image. Specifically, the electronic device may linearly interpolate the gray values of the four neighboring pixels of the pixel to be added in two directions. As shown in fig. 5, for (i, j+v), the gray scale changes linearly from f(i, j) to f(i, j+1), giving f(i, j+v) = [f(i, j+1) - f(i, j)]·v + f(i, j). Similarly, for (i+1, j+v), f(i+1, j+v) = [f(i+1, j+1) - f(i+1, j)]·v + f(i+1, j). The gray scale also changes linearly from f(i, j+v) to f(i+1, j+v), so the gray value of the pixel to be added is f(i+u, j+v) = (1-u)(1-v)·f(i, j) + (1-u)v·f(i, j+1) + u(1-v)·f(i+1, j) + uv·f(i+1, j+1).
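Under the same indexing assumption as the previous sketch, the reconstructed bilinear formula transcribes directly:

```python
def bilinear(f, i, j, u, v):
    # Two linear interpolations along v, then one along u; identical to the
    # closed-form f(i+u, j+v) given above.
    return ((1 - u) * (1 - v) * f[i][j]
            + (1 - u) * v * f[i][j + 1]
            + u * (1 - v) * f[i + 1][j]
            + u * v * f[i + 1][j + 1])
```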
Of course, the upsampling process may also be implemented by other interpolation methods, for example cubic interpolation, Inverse Distance to a Power (inverse distance weighted interpolation), Kriging interpolation, Minimum Curvature, Modified Shepard's Method, Natural Neighbor interpolation, Nearest Neighbor interpolation, Polynomial Regression, Radial Basis Function interpolation, Triangulation with Linear Interpolation, Moving Average, or Local Polynomial methods; the embodiments of the present application do not specifically limit how the upsampling process is implemented.
303. And the electronic equipment acquires the first gradient information of the pixel points in the second image.
After the electronic equipment obtains a second image through rapid up-sampling, a proper target convolution parameter can be determined for the second image according to the gradient characteristics of the second image, so that convolution processing is carried out on the second image, and a third image with a better display effect is obtained. The electronic device may first obtain the gradient characteristics of the pixel points in the second image through the step 303 and the following steps 304 and 305, and then execute the step of determining the target convolution parameter based on the gradient characteristics.
The electronic device may acquire first gradient information of the pixel point first, and then process the first gradient information to extract a gradient feature. The first gradient information of each pixel point is used for representing the change condition of the pixel value at the pixel point. Thus, when the first gradient information is acquired, the pixel value of the currently calculated pixel point and the pixel value of the pixel point adjacent to the pixel point may be referred to.
Specifically, the electronic device may obtain a pixel value variation at the pixel point according to a pixel value of any pixel point in the second image and a pixel value of a first adjacent pixel point, and use the pixel value variation as the first gradient information, where a distance between the first adjacent pixel point and the pixel point is smaller than a first distance threshold. The first gradient information represents pixel value variation between adjacent pixels, and thus, the first gradient information may also be referred to as a neighborhood gradient.
For the first distance threshold, the first distance threshold may be set by a related technician according to requirements, which is not limited in the embodiment of the present application. For example, the first distance threshold may be one pixel, or a value greater than the first pixel and less than two pixels.
In one possible implementation, the first distance threshold is used here only to describe the position of the first adjacent pixel points; the threshold need not actually be set in the electronic device, and the distances between pixels need not be calculated. The electronic device may simply take the pixel points positionally adjacent to a pixel point as its first adjacent pixel points. For example, as shown in fig. 6, for pixel point 601, the electronic device takes the pixel points 602 above, below, to the left of, and to the right of pixel point 601 as the first adjacent pixel points.
The pixel value variation may be represented by differentials. Accordingly, in step 303, the electronic device may obtain the differentials at a pixel point according to the pixel value of any pixel point in the second image and the pixel values of its first adjacent pixel points, and use the differentials as the first gradient information. For example, the electronic device may calculate the neighborhood gradient of the Y channel of the second image, resulting in the differentials (dxdx, dydy, dxdy), i.e., the first gradient information. The differentials represent the gradient and are gradient statistics of one pixel point, and may therefore also be called gradient statistics components.
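As an illustration, the neighborhood gradient and its statistics components can be computed with finite differences; a minimal NumPy sketch (the function name and the choice of central differences are assumptions):

```python
import numpy as np

def neighborhood_gradient(y_channel):
    # Finite differences between adjacent pixel points give dx and dy on the
    # Y channel; their products are the per-pixel gradient statistics
    # components (dxdx, dydy, dxdy) named in the text.
    y = y_channel.astype(np.float32)
    dy, dx = np.gradient(y)
    return dx * dx, dy * dy, dx * dy
```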
In one possible implementation, in step 303, the electronic device obtains the first gradient information of the pixel points in the second image on the luminance channel. The format of the first image is the YUV format; that is, the image data of the first image is data in the YUV format, where Y is the luminance channel and U and V are the chrominance channels. The pixel value on the luminance channel is a gray value; from the gray values of the pixel points in the second image, the shape of each object in the second image can be clearly discerned, and only the color information is missing, which has little effect on how the final image content is presented. Therefore, using the information on the luminance channel as the basis for calculation allows the super-resolution to be performed accurately while effectively reducing the amount of computation, thereby improving image processing efficiency.
304. And the electronic equipment performs smoothing processing on the first gradient information of the pixel point in the second image to obtain second gradient information of the pixel point.
After the electronic equipment obtains the first gradient information, the first gradient information can be further processed, so that pixel points in the image corresponding to the second gradient information are more coherent, and transition between the pixel points is more natural.
Smoothing, also called blurring, is a simple and frequently used image processing method. The process can be understood as follows: for a given pixel point, the content expressed by its adjacent pixel points in the image tends to be similar. Smoothing an image can thus be a process of determining the pixel value of a pixel point from the pixel values of its adjacent pixel points, so that the pixel value of the pixel point is more strongly correlated with its neighbors, transitions between pixel points are more natural, and the connection is more coherent.
The smoothing process may include a variety of processing methods, such as gaussian blur, normalized block filtering, median filtering, bilateral filtering, and the like. The embodiment of the present application is not limited to what mode is specifically adopted.
In one possible implementation, the smoothing process may be a Gaussian blur process. Gaussian Blur, also called Gaussian smoothing, is a commonly used smoothing method, widely applied in image processing software such as Adobe Photoshop, GIMP (GNU Image Manipulation Program) and Paint. GNU is a free operating system whose name is a recursive acronym for "GNU's Not Unix!". Gaussian blur can reduce image noise and lower the level of detail; colloquially, an image after Gaussian blur looks as if it were being viewed through frosted glass. From a mathematical point of view, Gaussian blurring an image is convolving the image with a normal distribution; since the normal distribution is also called the Gaussian distribution, this image processing technique is called Gaussian blur. Gaussian blur can be understood as a low-pass filter on the image.
Specifically, the smoothing process may be: the electronic device performs a weighted summation of the first gradient information of any pixel point in the second image and the first gradient information of second adjacent pixel points, obtaining the second gradient information of that pixel point, where the distance between a second adjacent pixel point and the pixel point is smaller than a second distance threshold. The second distance threshold may be set by a skilled person according to requirements, for example two pixels, which is not limited in the embodiment of the present application. For example, as shown in fig. 6, for the pixel point 601, the electronic device may take the pixel points 602 above, below, to the left of, and to the right of the pixel point 601, together with the surrounding pixel points 603, as second adjacent pixel points; fig. 6 is merely an exemplary illustration, and the second adjacent pixel points may also include only the pixel points 602, or may include other pixel points as well. In the Gaussian blur implementation, the second distance threshold can be understood as the Gaussian blur radius: the weighted summation refers to the pixel values within the Gaussian blur radius centered on the pixel point. The weights of the second adjacent pixel points may be the same or different; in Gaussian blur, the weights satisfy a normal distribution.
After the electronic device obtains the first gradient information in step 303, the first gradient information of all pixel points can be regarded as a gradient map, and the electronic device may perform Gaussian blur on this gradient map. If the image is divided into image blocks of size d × d, the corresponding Gaussian blur kernel has size (d-1) × (d-1), where d is a positive integer.
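Continuing the sketch above, the smoothing of step 304 could look as follows. SciPy's `gaussian_filter` is used for illustration, and the sigma heuristic is an assumption, since the patent only relates the kernel size (d-1) × (d-1) to the d × d block size.

```python
# A sketch of smoothing the gradient products (step 304) with Gaussian blur.
from scipy.ndimage import gaussian_filter

def smooth_gradients(dxdx, dydy, dxdy, d: int = 6):
    sigma = max((d - 2) / 4.0, 0.5)   # rough sigma for a (d-1) x (d-1) kernel
    return (gaussian_filter(dxdx, sigma),
            gaussian_filter(dydy, sigma),
            gaussian_filter(dxdy, sigma))
```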
305. And the electronic equipment acquires the gradient characteristics corresponding to the second gradient information.
After the second gradient information is obtained through the smoothing processing, it can be converted into gradient features that express the gradient explicitly and better reflect how the pixel values change around each pixel point.
In an implementation in which the second gradient information of each pixel point includes gradient information in different directions, the electronic device may acquire the gradient feature from that directional gradient information. In one possible implementation, the gradient feature includes at least one of angle, intensity, and correlation; that is, the gradient feature may be any one, any two, or all three of them. Accordingly, in step 305, the electronic device may obtain at least one of the angle, the intensity, and the correlation of the gradient of any pixel point according to its gradient information in different directions.
The manner in which these three gradient features are obtained is exemplified below. In a specific possible embodiment, for a pixel point k, the gradients dx and dy of the pixel point in the x and y directions are obtained when calculating the first gradient information. After the smoothing process, the second gradient information, i.e., the smoothed products dx·dx, dy·dy and dx·dy, is obtained, from which two eigenvalues λ1 and λ2 are computed by the following formula I and formula II.
$\lambda_{1} = \dfrac{(d_x d_x + d_y d_y) + \sqrt{(d_x d_x - d_y d_y)^{2} + 4\,(d_x d_y)^{2}}}{2}$ (Formula I)

$\lambda_{2} = \dfrac{(d_x d_x + d_y d_y) - \sqrt{(d_x d_x - d_y d_y)^{2} + 4\,(d_x d_y)^{2}}}{2}$ (Formula II)

Here $d_x d_x$, $d_y d_y$ and $d_x d_y$ denote the smoothed gradient products, and $\lambda_{1} \geq \lambda_{2}$ are the two eigenvalues of the corresponding 2 × 2 gradient covariance matrix.
The electronic device may further obtain the gradient features through formulas III, IV and V below: the angle θ_k, the strength s_k, and the coherence μ_k. These three gradient features represent the gradient conditions near a pixel point, and thus may also be referred to as local gradient statistics.
$\theta_{k} = \tan^{-1}\left(\dfrac{\lambda_{1} - d_x d_x}{d_x d_y}\right)$ (Formula III)

$s_{k} = \lambda_{1}$ (Formula IV)

$\mu_{k} = \dfrac{\sqrt{\lambda_{1}} - \sqrt{\lambda_{2}}}{\sqrt{\lambda_{1}} + \sqrt{\lambda_{2}}}$ (Formula V)
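Continuing the sketches above, formulas I to V translate directly into NumPy; the epsilon guard, the clamping, and the function name are illustrative additions, not part of the patent.

```python
# A sketch of computing the local gradient statistics (step 305).
import numpy as np

def local_gradient_features(dxdx, dydy, dxdy, eps=1e-8):
    trace = dxdx + dydy
    root = np.sqrt((dxdx - dydy) ** 2 + 4.0 * dxdy ** 2)
    lam1 = (trace + root) / 2.0                    # formula I
    lam2 = np.maximum((trace - root) / 2.0, 0.0)   # formula II (clamped for numerics)
    theta = np.arctan2(lam1 - dxdx, dxdy)          # formula III, via arctan2
    strength = lam1                                # formula IV
    coherence = (np.sqrt(lam1) - np.sqrt(lam2)) / \
                (np.sqrt(lam1) + np.sqrt(lam2) + eps)  # formula V
    return theta, strength, coherence
```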
Steps 303 to 305 are processes of extracting features of the second image to obtain gradient features of the second image. In this process, gradient features can be extracted by acquiring first gradient information, smoothing processing, and feature transformation. The process may also be implemented in other ways, for example, the electronic device convolves the second image to obtain the gradient feature. The embodiments of the present application are not limited in this regard.
In one possible implementation manner, in the above feature extraction process, the electronic device may divide the image into different image blocks, and perform feature extraction on the image blocks respectively to obtain gradient features of each image block. Specifically, the electronic device may perform windowing processing on the second image to obtain at least one image block, and perform feature extraction on the at least one image block to obtain gradient features of the at least one image block. Accordingly, in step 306 described below, the electronic device may determine the target convolution parameter corresponding to the gradient feature of the at least one image block according to the correspondence between the gradient feature and the convolution parameter. Through the division of the image blocks, the characteristics of the image can be extracted more finely, the extracted gradient characteristics are more accurate, and further, the subsequent image processing method is executed according to the gradient characteristics, so that a better processing effect can be achieved.
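The windowing described above could be sketched as follows, assuming non-overlapping d × d blocks; the generator name and the stride choice are illustrative.

```python
# A sketch of dividing the second image into d x d image blocks.
import numpy as np

def iter_blocks(img: np.ndarray, d: int = 6):
    """Yield (top, left, block) for each d x d block of a 2-D image."""
    h, w = img.shape
    for top in range(0, h - d + 1, d):
        for left in range(0, w - d + 1, d):
            yield top, left, img[top:top + d, left:left + d]
```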
306. And the electronic equipment determines a target convolution parameter corresponding to the gradient characteristic of the second image according to the corresponding relation between the gradient characteristic and the convolution parameter.
Different gradient features correspond to different convolution parameters, and it can be appreciated that when the gradient features of the second image are different, the convolution processing required to be performed on the pixel values of the pixel points is different, and the convolution parameters adopted in the convolution processing are different. After the electronic equipment acquires the gradient characteristics of the second image, the target convolution parameters can be determined according to the gradient characteristics, namely, how to process the second image is determined, and the image with good display effect can be obtained.
The correspondence between the gradient features and the convolution parameters can be stored in the electronic device, and when the convolution parameters corresponding to a certain gradient feature need to be determined, the correspondence is queried. Therefore, the electronic equipment can perform quick query by taking the corresponding relation as a reference, and other complex calculation modes are not needed, so that the image processing efficiency can be effectively improved.
The correspondence may be obtained in a variety of ways, and may be set empirically by a relevant technician, or may be obtained by analyzing a large number of images.
In one possible implementation, the correspondence may be determined during image processing model training. In this implementation, the above-described image processing method is implemented by an image processing model.
In this implementation, after step 301, the electronic device may input the acquired first image into an image processing model, perform an image processing step by the image processing model, and output a third image. Accordingly, the image processing step may be: the electronic device inputs the first image into an image processing model, performs the steps of upsampling, feature extraction, determining target convolution parameters, and convolution processing by the image processing model, and outputs the third image.
Alternatively, the image processing model can process the input image into an image of one specific target resolution. In that case, the electronic device may acquire only the first image in step 301 and input it into the image processing model for processing, the target resolution having been determined during the model training process.
Alternatively, the image processing model is capable of processing an input image into images of multiple target resolutions; that is, the image processing model is configured to process the input image according to an input target resolution and output an image of that resolution. In this case, after the electronic device acquires the first image and the target resolution in step 301, both may be input into the image processing model. The target resolution may be set by a relevant technician according to requirements, or set in response to a resolution setting instruction.
During training of the image processing model, the electronic device can derive convolution parameters from the gradient features extracted from low-resolution sample images and the corresponding high-resolution sample images, thereby establishing the correspondence between gradient features and convolution parameters.
Each convolution parameter may be identified by corresponding identification information. The correspondence may store identification information of the convolution parameter and the corresponding gradient feature. In step 306, the electronic device can obtain the identification information of the target convolution parameter through the gradient feature, and obtain the target convolution parameter according to the identification information to perform convolution processing.
In the mode of dividing an image into image blocks and extracting features, the electronic device acquires the gradient features of each image block, can classify the image blocks accordingly, determines the identification information of the convolution parameter corresponding to each image block, and then acquires the corresponding convolution parameters to perform the convolution processing with them.
For example, in one specific example, image blocks may be analyzed according to their gradient features, similar image blocks determined, and each such class convolved with the same parameters. This can be understood as sorting the image blocks into buckets for processing, each bucket corresponding to its own convolution parameters. The identification information of a convolution parameter may then be understood as a bucket index, and in step 306 the electronic device may calculate the bucket index corresponding to an image block from the obtained local gradient statistics. The convolution parameters corresponding to each bucket index (which may also be referred to as convolution kernel parameters) may be fitted during the image processing model training described above; for example, in one specific example, the convolution parameters may be obtained by solving a linear fitting problem with a least squares method.
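The least-squares fit for a single bucket could be sketched as follows. Here `patches` is an (n, d·d) matrix of flattened upsampled-LR patches that hashed into this bucket and `hr_pixels` is the (n,) vector of corresponding HR center pixels; both names are illustrative.

```python
# A sketch of fitting one bucket's convolution kernel by least squares.
import numpy as np

def fit_bucket_kernel(patches: np.ndarray, hr_pixels: np.ndarray, d: int = 5):
    q, *_ = np.linalg.lstsq(patches, hr_pixels, rcond=None)  # min ||Aq - b||^2
    return q.reshape(d, d)                                   # d x d kernel
```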
In one possible implementation, the correspondence may be stored by way of a convolution table. The target convolution parameter determination step may be implemented by table look-up. The table look-up mode is fast and convenient, and the image processing efficiency can be improved.
In one possible implementation, after determining the gradient feature, the electronic device may further process the gradient feature and then perform the step of determining the target convolution parameters. Specifically, the electronic device quantizes the gradient feature, and performs the step of determining the target convolution parameters based on the quantized gradient feature. In one specific possible embodiment, the angle in the gradient feature may be quantized to one of 24 angles, the intensity to one of 9 intensity levels, and the correlation to one of 9 correlation levels; that is, the quantization step divides the gradient feature space using a 24 × 9 × 9 structure. This finer division of the gradient features allows better edge processing of image details.
For example, denote angle as A, intensity as B, and coherence as C. The angles include A1, A2, …, A24; the intensities include B1, B2, …, B9; the coherences include C1, C2, …, C9. Different combinations of the three gradient features may correspond to different convolution parameters, i.e., to different buckets.
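A mapping from quantized (angle, strength, coherence) to a flat bucket index in the 24 × 9 × 9 structure could look like the sketch below. The patent fixes only the bin counts; the uniform bin edges (with inputs assumed normalized to [0, 1]) are an assumption.

```python
# A sketch of computing the bucket index from quantized gradient features.
import numpy as np

def bucket_index(theta, strength, coherence,
                 n_angle=24, n_strength=9, n_coherence=9):
    a = np.floor((theta % np.pi) / np.pi * n_angle).astype(int) % n_angle
    s_edges = np.linspace(0.0, 1.0, n_strength + 1)[1:-1]    # 8 interior edges
    c_edges = np.linspace(0.0, 1.0, n_coherence + 1)[1:-1]
    s = np.digitize(strength, s_edges)                       # 0 .. 8
    c = np.digitize(coherence, c_edges)                      # 0 .. 8
    return (a * n_strength + s) * n_coherence + c            # 0 .. 24*9*9 - 1
```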
307. And the electronic equipment carries out convolution processing on the second image based on the target convolution parameter to obtain a third image, wherein the resolution of the third image is the target resolution.
After the electronic device obtains the target convolution parameters, convolution processing can be performed on the second image, and pixel values of pixel points in the second image can be updated through the convolution processing, so that image quality is improved. In one possible implementation, the convolution parameter may be a convolution matrix, through which the image block is convolved, so as to obtain the third image.
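The convolution of step 307 could be sketched as follows: each output pixel is the dot product of its surrounding patch with the kernel selected by its bucket index. `kernels` is assumed to be the convolution table of shape (n_buckets, 5, 5); the names and the edge padding are illustrative.

```python
# A sketch of applying the looked-up convolution parameters (step 307).
import numpy as np

def apply_convolution_table(img: np.ndarray, indices: np.ndarray,
                            kernels: np.ndarray) -> np.ndarray:
    r = kernels.shape[-1] // 2
    padded = np.pad(img.astype(np.float32), r, mode="edge")
    out = np.empty(img.shape, dtype=np.float32)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * r + 1, j:j + 2 * r + 1]
            out[i, j] = np.sum(patch * kernels[indices[i, j]])
    return out
```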
In the manner in which the image processing method is implemented by the image processing model, the convolution processing is also performed by the image processing model.
The following explains the training process of the image processing model and the manner of determining the corresponding relationship in the training process.
Specifically, the training process of the image processing model can be implemented through steps one to five.
Step one, an electronic device acquires a first sample image and a target sample image, wherein the resolution of the target sample image is a target resolution, and the resolution of the first sample image is smaller than the target resolution.
In step one, the first sample image is a low-resolution image, which may be referred to as an LR image. The target sample image is the high-resolution image corresponding to the low-resolution image, which may be referred to as an HR image.
Step two, the electronic equipment carries out up-sampling on the first sample image to obtain a second sample image with the target resolution.
Step three, the electronic equipment performs feature extraction on the second sample image to obtain gradient features of the second sample image, wherein the gradient features are used for indicating the relation between the pixel points in the second sample image and their adjacent pixel points.
And step four, the electronic equipment determines target convolution parameters corresponding to the gradient features according to the gradient features of the second sample image and the target sample image.
The second to fourth steps are similar to the steps 303 to 305, and are not described herein.
And fifthly, the electronic equipment generates the corresponding relation between the gradient characteristic and the convolution parameter based on the target convolution parameter.
In one possible implementation, the image processing model is trained based on a sample first image and at least two sample target images, the resolutions of the at least two sample target images including at least one target resolution.
In this step five, the convolution parameters determined by the electronics may be used by a single convolution layer to perform the convolution process, i.e., in this step 307, the convolution process step may be implemented by a single convolution layer.
Because there are a plurality of first sample images, through step five the electronic device can fit different convolution parameters, forming a convolution table or a single-layer convolution group. The single-layer convolution group comprises a plurality of single convolution layers, each representing one convolution parameter with which one convolution processing can be performed on an image.
In one possible implementation, the size of the single convolutional layer may be 5 × 5. Although the convolution is simplified to a single layer, it remains the most computation-intensive part of the processing. Fig. 7 is a schematic diagram of the execution time of each filter drawing command provided in the embodiment of the present application. As shown in fig. 7, experiments found that a convolution of size 7 × 7 takes approximately 30 milliseconds (ms), while the other processing steps take about 3 ms; with all other operations unchanged, replacing the convolution kernel with a 5 × 5 one directly reduces the operation time by about 30%. By changing the size of the convolution layer, the algorithm's effect is preserved as much as possible while the amount of computation is effectively reduced, improving image processing efficiency. In the figure, eos is image processing software, rgbtoyuv denotes conversion from RGB format to YUV format, and yuvtorgb denotes conversion from YUV format to RGB format.
In one possible implementation, the image processing model includes a plurality of serial filters; after the image processing model performs the upsampling step, the feature extraction and convolution processing steps may be performed by the plurality of serial filters. For example, as shown in fig. 8, the overall architecture of image processing includes: a player 801, a decoding layer 802, a filter layer 803, and a rendering layer 804. The player 801 acquires an image, the decoding layer 802 decodes it, the filter layer 803 filters the decoded image data, and finally the rendering layer 804 renders and displays the filtered image. Taking 540P YUV data as the decoded image data as an example, fig. 9 shows the exploded structure of the filter layer 803 in fig. 8, in which each filter is referred to as a Filter. The plurality of serial filters may include a gradient filter 901, a gaussian filter 902, a feature filter 903, and a convolution filter 904. The decoded 540P YUV data 905 (i.e., the first image) is amplified to obtain 1080P amplified YUV data 906 (i.e., the second image), which is then split into two paths: one path passes through the gradient filter 901, the gaussian filter 902, and the feature filter 903, which respectively perform on the second image the first gradient information acquisition step, the gaussian blur, and the gradient feature acquisition step, while the other path goes directly to the convolution filter 904. The convolution filter 904 queries the convolution table 907 based on the extracted gradient features to obtain the corresponding target convolution parameters, and convolves the input 1080P YUV data 906 to obtain the third image. Before rendering, the electronic device may also format-convert the convolved image through the RGB conversion filter 908 and then render it on the screen 909, where "on screen" means displayed on the screen. As can be seen from fig. 9, the filter layer provided in the present application realizes multiple inputs and multiple outputs, making the application more flexible. As shown in fig. 10, LR, YUV, gradient, gaussian, characteristic, and HR are the image display effects before processing and after each filter's processing, respectively.
It should be noted that, the plurality of filters process the image in a pipeline manner, so that the filters can be conveniently plugged and unplugged, and the effect of each filter can be further debugged, so that the method is a convenient and quick implementation mode.
In one possible implementation, in the process of performing the feature extraction and convolution processing by the plurality of serial filters, the electronic device may create at least one object, and the steps of feature extraction, determining the target convolution parameters, and convolution processing may be performed by the at least one object, where the number of objects is less than the number of filters. In this way, an object does not need to be created for every filter: by combining filters, the number of object creations can be reduced, time consumption lowered, and image processing efficiency improved. Fig. 10 shows the intermediate result generated by each Filter.
For example, in one specific example, when optimizing the filter layer, some or all of the plurality of filters may be combined; the objects required for rendering (pipeline controllers, frame buffers, textures, and the like) can then be created once, and the functionality of the combined filters implemented in a single object. By combining multiple filter renderers, the number of creations of pipeline controllers, frame buffers, and textures on the GPU, as well as the number of texture submissions, is reduced, effectively lowering GPU occupancy.
A specific example is provided below with fig. 11 and 12. As shown in fig. 11, during training of the image processing model, a low-resolution image (LR) 1101 is input, the LR is rapidly upsampled 1102 and block-based features are extracted 1103; the input LR and the labeled high-resolution image (HR) 1105 are then passed to a rapid solver 1104, which determines the convolution parameters. Multiple convolution parameters fitted from many LR/HR pairs form a single-layer convolution group 1106. As shown in fig. 12, after training of the image processing model is complete, the input low-resolution image 1201 is rapidly upsampled 1202, block-based feature extraction 1203 is performed, then filter indexing 1204 obtains the bucket index of each image block, and the corresponding bucket (convolution parameter) is found in the single-layer convolution group 1205 for the convolution processing.
308. And the electronic equipment performs sharpening processing on the third image to obtain a fourth image.
The step 308 is an optional step, and after the step 307, the electronic device may further execute the step 308 to render the image after sharpening, or may render the third image directly without sharpening.
For the sharpening, the electronic device may obtain difference information between the third image and the first image, and obtain the fourth image based on the difference information, a target coefficient, and the first image, where the sharpness of a target area in the fourth image is greater than the sharpness of the target area in the first image. The target coefficient is a coefficient applied to the difference information, controlling how much of the difference information is added to the first image. It will be appreciated that adding the full difference information to the first image reproduces the third image; if the target coefficient is less than 1, the super-division effect of the fourth image is weaker than that of the third image, and if the target coefficient is greater than 1, it is stronger.
The target coefficient can be set by a relevant technician according to the requirement, and can also be determined based on the coefficient setting instruction.
In one specific example, the sharpening process may be implemented through an Unsharp Mask (USM).
In one possible implementation manner, a coefficient setting function may be provided: the user sets the target coefficient according to requirements to adjust the super-division effect, enabling a more flexible super-division function. Specifically, the electronic device may acquire the target coefficient in response to a coefficient setting instruction, and in the subsequent step 309 render the fourth image sharpened with the set target coefficient.
For example, for the sharpening process, as shown in fig. 13, x(n, m) is the input image (i.e., the first image), y(n, m) is the output image (i.e., the third image), and z(n, m) is the correction signal. The difference between the super-divided image and the original image is used as the correction signal, i.e., z(n, m) = y(n, m) - x(n, m), which can be determined by a linear high-pass filter (Linear HP Filter). A coefficient λ, i.e., the target coefficient, is then introduced to control the super-division effect. The fourth image finally presented to the user is x(n, m) + λ·z(n, m).
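Written out as code, the correction formula of fig. 13 could look like the sketch below. It assumes x (the first image) has already been upsampled to the target resolution so the shapes match, and the default λ is illustrative.

```python
# A sketch of the fig. 13 sharpening: fourth = x + lambda * (y - x).
import numpy as np

def usm_sharpen(x: np.ndarray, y: np.ndarray, lam: float = 1.2) -> np.ndarray:
    z = y.astype(np.float32) - x.astype(np.float32)  # correction signal z(n, m)
    return np.clip(x.astype(np.float32) + lam * z, 0, 255).astype(np.uint8)
```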
For example, as shown in fig. 14, the user may select a target resolution, or may adjust the super-division effect via the target coefficient. For the target resolution, several candidate resolutions may be offered, for example "high definition". The target coefficient can be adjusted through a slider option for intelligent image quality: the user drags the slider to adjust the super-division effect, which is measured on a 0-100 scale (currently dragged to 38) that corresponds to the target coefficient λ. As shown in fig. 15, this embodiment produces a remarkable image quality enhancement in practice across multiple quality tiers. With the high-definition tier (540P) as input, the subjective sharpness of the super-resolution result (1080P) is clearly improved, even approaching the sharpness of the Blu-ray tier (1080P). Meanwhile, the objective quality of the super-resolution algorithm was tested on various types of online video data; the test results (shown in table 1 below) show clear improvements in evaluation indexes such as vmaf (Visual Multimethod Assessment Fusion), psnr (Peak Signal-to-Noise Ratio), and SSIM (Structural SIMilarity).
TABLE 1
(Table 1 appears as an image in the original publication; it reports the vmaf, psnr, and SSIM gains of the super-resolution algorithm measured on various types of online video data.)
In table 1, tv in the entry names refers to television, and esr stands for Enhanced Super-Resolution.
The image processing method provided by the embodiment of the application has good applicability; experiments show that it achieves a real-time super-division effect on most device models.
In the implementation of the image processing method implemented by the image processing model, the step 308 may be performed by the image processing model, so that the image processing model outputs the fourth image.
309. The electronic device renders the fourth image.
After the electronic device obtains the fourth image, the fourth image may be rendered and displayed. If the electronic equipment is a terminal, the terminal can directly render and display the fourth image after processing the fourth image, coding is not needed, and the super-resolution effect is realized on the rendering layer. If the electronic device is a server, the server may also compress the fourth image and transmit the fourth image to the terminal for rendering and displaying.
In one possible implementation, the formats of the first image, the second image, the third image, and the fourth image may be YUV formats. An electronic device typically renders an image in RGB format when rendering the image. The electronic device may convert the format of the fourth image to RGB format and render the fourth image in RGB format in step 309.
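The YUV-to-RGB conversion before rendering could be sketched as follows. BT.601 full-range coefficients are assumed here; the patent does not specify which YUV variant is used.

```python
# A sketch of converting YUV planes to an RGB image for rendering.
import numpy as np

def yuv_to_rgb(y: np.ndarray, u: np.ndarray, v: np.ndarray) -> np.ndarray:
    y = y.astype(np.float32)
    u = u.astype(np.float32) - 128.0
    v = v.astype(np.float32) - 128.0
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)
```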
The image processing method provided by the embodiment of the application is a super-division algorithm, and such an algorithm can be implemented on mobile in two ways. Implementing super-division at the rendering layer of the terminal requires no secondary encoding and saves server cost, but the terminal's performance and power consumption are limited, so algorithms with a higher computational load cannot be used. Implementation in the cloud is just the opposite: algorithms with better effect can be used (for example, 1080P can be super-divided to 2K before delivery), but the scheme is still limited by the downlink channel, requires secondary encoding after super-division, and incurs higher server cost. Therefore, a hybrid scheme is finally selected: for 720P and below, super-division to 1080P is performed at the terminal; super-division above 1080P is performed at the server.
Specifically, the flow of the above-described image processing method includes the following two cases.
In one case, the target resolution is less than or equal to the resolution threshold, and the image processing method is applied to the terminal, that is, after the terminal performs the steps 301 to 307, the terminal may render the third image.
In case two, the target resolution is greater than the resolution threshold, and the method is applied to the server. That is, after the server performs the steps 301 to 307, the third image may be compressed to obtain compressed data of the third image, the compressed data is sent to the terminal, and the terminal renders the third image based on the compressed data. Of course, when the image processing process includes the sharpening process, the server may compress the fourth image to obtain compressed data, and then send the compressed data to the terminal for rendering.
The resolution threshold may be set by a related technician according to the performance or the use requirement of the terminal and the server, for example, the resolution threshold is 1080P, which is not limited in the embodiment of the present application.
The above steps 301 to 309 illustrate an image super-division manner, and in the embodiment of the present application, the image processing method may be used to super-divide a video. Specifically, the electronic device may acquire at least two frame images of the target video, perform the steps of upsampling, feature extraction, determining a target convolution parameter, and convolution processing on at least one frame image of the at least two frame images, obtain at least one frame target image corresponding to the at least one frame image, and render images other than the at least one frame image and the at least one frame target image of the at least two frame images. For example, the image processing method may be applied in a live scene, which may be a game live scene, for example. In the scene, the electronic equipment can perform super-processing on the images in the live stream in real time through the image processing method to obtain the images with the target resolution, so that diversified live broadcast requirements of users are met.
In one possible implementation manner, a frame-dropping mechanism can be provided: a first target threshold is set, and when the time interval between two adjacent frames is detected to be too short, one frame is dropped. This avoids mixing super-divided images with images that have not been super-divided, which would make the super-division effect imperceptible. Specifically, the electronic device may acquire in real time the time interval between the rendering times of any two frames of images in the target video, and drop either of the two frames in response to the time interval being less than the first target threshold.
In one possible implementation manner, a frame rate control mechanism may be further configured, and the electronic device may determine, according to the real-time frame rate, whether to perform frame loss processing, so as to ensure smooth video playing. Specifically, the electronic device may acquire the frame rate of the target video in real time, update the rendering time of each frame of image according to the frame rate, and perform frame loss processing on the target video in response to the rendering time being greater than the second target threshold.
For the first target threshold and the second target threshold, both thresholds may be set by the relevant technician as desired, and both thresholds may be the same, e.g., both corresponding to 25 frames/second. The two thresholds may also be different. The embodiments of the present application are not limited in this regard.
For example, fig. 16 (a) shows the GPU (Graphics Processing Unit) occupancy in different cases. Without super-division, single-frame rendering is inexpensive, with an occupancy of only about 4. With the super-division step added (i.e., the complex super-division processing of a related-art super-division algorithm), the GPU occupancy becomes very large. By optimizing the super-division (the optimization can refer to the image processing method provided in this application), the GPU occupancy can be reduced, and with the frame rate control described above it is further optimized: in the experimental data, the GPU occupancy is approximately 46. Specifically, a single-frame rendering time limit can be defined, and when the measured interval between two frames is smaller than this value, the frame is dropped directly (without frame dropping, super-divided frames and ordinary frames would be mixed together, so the super-division effect would not be evident to the naked eye). As shown in fig. 16 (b), the real rendering frame rate is counted once per second, and the single-frame rendering time limit is dynamically increased (above 25 frames) or decreased (below 25 frames) along a quadratic curve according to the frame rate. Experiments show that this scheme keeps the video playback frame rate essentially stable at 25 frames per second.
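The two mechanisms above could be sketched as follows: drop a frame when two frames arrive closer together than the single-frame rendering limit, and adapt that limit from the measured frame rate. The text describes a quadratic adjustment curve; the linear nudge below is a simplification, and all names are illustrative.

```python
# A sketch of the frame-dropping and frame rate control mechanisms.
import time

class FrameController:
    def __init__(self, target_fps: float = 25.0):
        self.target_fps = target_fps
        self.min_interval = 1.0 / target_fps      # single-frame rendering limit
        self.last_render = float("-inf")

    def should_render(self) -> bool:
        now = time.monotonic()
        if now - self.last_render < self.min_interval:
            return False                          # drop this frame
        self.last_render = now
        return True

    def update(self, measured_fps: float) -> None:
        # Above the target fps the limit grows (drop more frames);
        # below it the limit shrinks (drop fewer frames).
        self.min_interval *= measured_fps / self.target_fps
```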
In the embodiment of the application, a new factor of gradient characteristics is introduced, and the target convolution parameters are directly determined from the existing corresponding relation according to the factor, so that the determination process of the target convolution parameters is less in time consumption, fine characteristic extraction and nonlinear mapping of images are not needed, and the time consumption of characteristic extraction and processing is greatly reduced. And the high-resolution image can be obtained by carrying out one-step convolution processing on the target convolution parameters, so that compared with a mode of reconstructing based on the extracted image features, the image processing steps are effectively simplified, and the image processing efficiency can be improved.
All the above optional solutions can be combined to form an optional embodiment of the present application, which is not described in detail herein.
Fig. 17 is a schematic structural diagram of an image processing apparatus provided in an embodiment of the present application, referring to fig. 17, the apparatus includes:
an up-sampling module 1701, configured to up-sample a first image according to a target resolution, to obtain a second image of the target resolution, where the resolution of the first image is smaller than the target resolution;
the feature extraction module 1702 is configured to perform feature extraction on the second image to obtain a gradient feature of the second image, where the gradient feature is used to indicate a relationship between a pixel point in the second image and an adjacent pixel point;
A determining module 1703, configured to determine a target convolution parameter corresponding to the gradient feature of the second image according to a correspondence between the gradient feature and the convolution parameter;
the convolution module 1704 is configured to perform convolution processing on the second image based on the target convolution parameter, so as to obtain a third image, where a resolution of the third image is the target resolution.
In one possible implementation, the feature extraction module 1702 includes a first acquisition unit, a smoothing unit, and a second acquisition unit;
the first acquisition unit is used for acquiring first gradient information of pixel points in the second image;
the smoothing unit is used for carrying out smoothing processing on the first gradient information of the pixel point in the second image to obtain second gradient information of the pixel point;
the second acquisition unit is used for acquiring gradient characteristics corresponding to the second gradient information.
In one possible implementation manner, the first obtaining unit is configured to obtain, according to a pixel value of any pixel point in the second image and a pixel value of a first neighboring pixel point, a pixel value variation at the pixel point, and use the pixel value variation as the first gradient information, where a distance between the first neighboring pixel point and the pixel point is smaller than a first distance threshold.
In one possible implementation manner, the first obtaining unit is configured to obtain a differential at a pixel point according to a pixel value of any pixel point in the second image and a pixel value of a first adjacent pixel point, and use the differential as the first gradient information.
In one possible implementation manner, the first obtaining unit is configured to obtain first gradient information of a pixel point in the second image on the luminance channel.
In one possible implementation manner, the smoothing unit is configured to perform weighted summation on the first gradient information of any pixel point in the second image and the first gradient information of a second adjacent pixel point to obtain second gradient information of the pixel point, where a distance between the second adjacent pixel point and the pixel point is smaller than a second distance threshold.
In one possible implementation, the second gradient information of one pixel point includes gradient information in different directions;
the gradient features include at least one of angle, intensity, and correlation;
the second acquisition unit is used for acquiring at least one of angle, intensity and correlation of the gradient of the pixel point according to gradient information of any pixel point in different directions.
In one possible implementation manner, the smoothing unit is configured to perform gaussian blur processing on the first gradient information of the pixel point in the second image to obtain second gradient information.
In one possible implementation, the determining module 1703 is configured to:
quantifying the gradient feature;
the step of determining the target convolution parameters is performed based on the quantified gradient characteristics.
In one possible implementation, the feature extraction module 1702 is configured to:
windowing the second image to obtain at least one image block;
extracting the characteristics of the at least one image block to obtain gradient characteristics of the at least one image block;
the determining module 1703 is configured to determine a target convolution parameter corresponding to the gradient feature of the at least one image block according to a correspondence between the gradient feature and the convolution parameter.
In one possible implementation, the apparatus further includes:
the sharpening module is used for carrying out sharpening processing on the third image to obtain a fourth image;
and the first rendering module is used for rendering the fourth image.
In one possible implementation, the sharpening module is configured to:
acquiring difference information of the third image and the first image;
and acquiring a fourth image based on the difference information, the target coefficient and the first image, wherein the definition of a target area in the fourth image is larger than that of the target area in the first image.
In one possible implementation, the apparatus further includes:
the first updating module is used for responding to the coefficient setting instruction and updating the target coefficient;
the sharpening module is used for executing the step of acquiring the fourth image based on the updated target coefficient.
In one possible implementation, the apparatus further includes:
the first acquisition module is used for acquiring at least two frames of images of the target video;
the up-sampling module 1701, the feature extraction module 1702, the determination module 1703 and the convolution module 1704 are respectively configured to perform the steps of up-sampling, feature extraction, determining a target convolution parameter and convolution processing on at least one frame of image in the at least two frames of images, so as to obtain at least one frame of target image corresponding to the at least one frame of image;
and the second rendering module is used for rendering images except the at least one frame of image and the at least one frame of target image in the at least two frames of images.
In one possible implementation, the apparatus further includes:
the second acquisition module is used for acquiring the time interval between the rendering time of any two frames of images in the target video in real time;
and the first frame dropping module is used for dropping any frame of the any two frames of images in response to the time interval being smaller than a first target threshold.
In one possible implementation, the apparatus further includes:
the third acquisition module is used for acquiring the frame rate of the target video in real time;
the second updating module is used for updating the rendering time of each frame of image according to the frame rate;
and the second frame loss module is used for carrying out frame loss processing on the target video in response to the rendering time length being greater than a second target threshold value.
In one possible implementation, the target resolution is less than or equal to a resolution threshold, and the apparatus is applied to a terminal;
the apparatus further comprises:
and the third rendering module is used for rendering the third image.
In one possible implementation, the target resolution is greater than a resolution threshold, and the apparatus is applied to a server;
the apparatus further comprises:
the compression module is used for compressing the third image to obtain compressed data of the third image;
and the sending module is used for sending the compressed data to the terminal, and the terminal renders the third image based on the compressed data.
In a possible implementation, the apparatus is configured to input the first image into an image processing model, perform the steps of upsampling, feature extraction, determining target convolution parameters, and convolution processing by the image processing model, and output the third image.
In one possible implementation, the training process of the image processing model includes:
acquiring a first sample image and a target sample image, wherein the resolution of the target sample image is the target resolution, and the resolution of the first sample image is smaller than the target resolution;
upsampling the first sample image to obtain a second sample image of the target resolution;
performing feature extraction on the second sample image to obtain gradient features of the second sample image, wherein the gradient features are used for indicating the relation between the pixel points in the second sample image and adjacent pixel points;
determining a target convolution parameter corresponding to the gradient feature according to the gradient feature of the second image and the sample target image;
and generating the corresponding relation between the gradient characteristic and the convolution parameter based on the target convolution parameter.
In one possible implementation, the image processing model is trained based on a sample first image and at least two sample target images, the resolutions of the at least two sample target images including at least one target resolution.
In one possible implementation, the image processing model includes a plurality of filters in series;
the step of performing the upsampling, feature extraction, determining target convolution parameters, and convolution processing by the image processing model, outputting the third image, comprising:
The step of upsampling is performed by the image processing model and the steps of feature extraction and convolution processing are performed by the plurality of serial filters.
In one possible implementation, the step of performing the feature extraction and convolution processing by the plurality of serial filters includes:
creating at least one object;
the steps of feature extraction, determining target convolution parameters, and convolution processing are performed by the at least one object, the number of objects being less than the number of filters.
According to the device provided by the embodiment of the application, a new factor of the gradient characteristic is introduced, and the target convolution parameter is directly determined from the existing corresponding relation according to the factor, so that the determination process of the target convolution parameter is less in time consumption, fine characteristic extraction and nonlinear mapping are not required, and the time consumption of characteristic extraction and processing is greatly reduced. And the high-resolution image can be obtained by carrying out one-step convolution processing on the target convolution parameters, so that compared with a mode of reconstructing based on the extracted image features, the image processing steps are effectively simplified, and the image processing efficiency can be improved.
It should be noted that: the image processing apparatus provided in the above embodiment is exemplified by the above-described division of the respective functional modules when processing an image, and in practical application, the above-described functional allocation can be performed by different functional modules as needed, that is, the internal structure of the image processing apparatus is divided into different functional modules to perform all or part of the functions described above. In addition, the image processing apparatus and the image processing method provided in the foregoing embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiments and are not described herein again.
The electronic device in the method embodiment described above can be implemented as a terminal. For example, fig. 18 is a block diagram of a terminal according to an embodiment of the present application. The terminal 1800 may be a portable mobile terminal, such as: a smart phone, a tablet, an MP3 (Moving Picture Experts Group Audio Layer III, motion picture expert compression standard audio plane 3) player, an MP4 (Moving Picture Experts Group Audio Layer IV, motion picture expert compression standard audio plane 4) player, a notebook or a desktop. The terminal 1800 may also be referred to as a user device, portable terminal, laptop terminal, desktop terminal, or the like.
In general, the terminal 1800 includes: a processor 1801 and a memory 1802.
Processor 1801 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 1801 may be implemented in at least one hardware form of DSP (Digital Signal Processing ), FPGA (Field-Programmable Gate Array, field programmable gate array), PLA (Programmable Logic Array ). The processor 1801 may also include a main processor and a coprocessor, the main processor being a processor for processing data in an awake state, also referred to as a CPU (Central Processing Unit ); a coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1801 may be integrated with a GPU (Graphics Processing Unit, image processor) for taking care of rendering and rendering of content that the display screen is required to display. In some embodiments, the processor 1801 may also include an AI (Artificial Intelligence ) processor for processing computing operations related to machine learning.
The memory 1802 may include one or more computer-readable storage media, which may be non-transitory. The memory 1802 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1802 is used to store at least one instruction for execution by processor 1801 to implement the image processing methods provided by the method embodiments herein.
In some embodiments, the terminal 1800 may also optionally include: a peripheral interface 1803 and at least one peripheral. The processor 1801, memory 1802, and peripheral interface 1803 may be connected by a bus or signal line. The individual peripheral devices may be connected to the peripheral device interface 1803 by buses, signal lines or circuit boards. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1804, a display screen 1805, a camera assembly 1806, an audio circuit 1807, a positioning assembly 1808, and a power supply 1809.
The peripheral interface 1803 may be used to connect I/O (Input/Output) related at least one peripheral device to the processor 1801 and memory 1802. In some embodiments, processor 1801, memory 1802, and peripheral interface 1803 are integrated on the same chip or circuit board; in some other embodiments, either or both of the processor 1801, memory 1802, and peripheral interface 1803 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The Radio Frequency circuit 1804 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 1804 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 1804 converts electrical signals to electromagnetic signals for transmission, or converts received electromagnetic signals to electrical signals. Optionally, the radio frequency circuit 1804 includes: antenna systems, RF transceivers, one or more amplifiers, tuners, oscillators, digital signal processors, codec chipsets, subscriber identity module cards, and so forth. The radio frequency circuitry 1804 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: the world wide web, metropolitan area networks, intranets, generation mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity ) networks. In some embodiments, the radio frequency circuitry 1804 may also include NFC (Near Field Communication ) related circuitry, which is not limited in this application.
The display 1805 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 1805 is a touch display, the display 1805 also has the ability to collect touch signals at or above the surface of the display 1805. The touch signal may be input as a control signal to the processor 1801 for processing. At this point, the display 1805 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 1805 may be one and disposed on the front panel of the terminal 1800; in other embodiments, the display 1805 may be at least two, disposed on different surfaces of the terminal 1800 or in a folded configuration; in other embodiments, the display 1805 may be a flexible display disposed on a curved surface or a folded surface of the terminal 1800. Even more, the display screen 1805 may be arranged in an irregular pattern other than rectangular, i.e., a shaped screen. The display 1805 may be made of LCD (Liquid Crystal Display ), OLED (Organic Light-Emitting Diode) or other materials.
The camera assembly 1806 is used to capture images or video. Optionally, the camera assembly 1806 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera on the back of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera can be fused with the depth-of-field camera to realize a background blurring function, or with the wide-angle camera to realize panoramic and VR (Virtual Reality) shooting or other fused shooting functions. In some embodiments, the camera assembly 1806 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash; a dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation under different color temperatures.
The audio circuitry 1807 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1801 for processing, or inputting the electric signals to the radio frequency circuit 1804 for realizing voice communication. For stereo acquisition or noise reduction purposes, the microphone may be multiple, and disposed at different locations of the terminal 1800. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is then used to convert electrical signals from the processor 1801 or the radio frequency circuit 1804 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, the audio circuitry 1807 may also include a headphone jack.
The location component 1808 is used to locate the current geographic location of the terminal 1800 to enable navigation or LBS (Location Based Service). The location component 1808 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
A power supply 1809 is used to power the various components in the terminal 1800. The power supply 1809 may use alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 1809 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery (charged through a wired line) or a wireless rechargeable battery (charged through a wireless coil), and may also support fast-charging technology.
In some embodiments, the terminal 1800 also includes one or more sensors 1810. The one or more sensors 1810 include, but are not limited to: an acceleration sensor 1811, a gyroscope sensor 1812, a pressure sensor 1813, a fingerprint sensor 1814, an optical sensor 1815, and a proximity sensor 1816.
The acceleration sensor 1811 may detect the magnitude of acceleration on the three coordinate axes of a coordinate system established with the terminal 1800. For example, the acceleration sensor 1811 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 1801 may control the display 1805 to display the user interface in landscape or portrait view based on the gravitational acceleration signals acquired by the acceleration sensor 1811. The acceleration sensor 1811 may also be used to collect game or user motion data.
The gyro sensor 1812 may detect the body direction and rotation angle of the terminal 1800 and may cooperate with the acceleration sensor 1811 to collect the user's 3D motion on the terminal 1800. Based on the data collected by the gyro sensor 1812, the processor 1801 may implement functions such as motion sensing (e.g., changing the UI according to a tilting operation by the user), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 1813 may be disposed on a side frame of the terminal 1800 and/or below the display 1805. When disposed on a side frame, it can detect the user's grip signal on the terminal 1800, and the processor 1801 performs left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 1813. When disposed below the display 1805, the processor 1801 controls the operability controls on the UI according to the user's pressure operations on the display 1805. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1814 is used to collect the user's fingerprint, and either the processor 1801 identifies the user according to the fingerprint collected by the fingerprint sensor 1814, or the fingerprint sensor 1814 itself identifies the user from the collected fingerprint. Upon recognizing the user's identity as trusted, the processor 1801 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1814 may be disposed on the front, back, or side of the terminal 1800. When a physical key or vendor logo is provided on the terminal 1800, the fingerprint sensor 1814 may be integrated with the physical key or vendor logo.
The optical sensor 1815 is used to collect ambient light intensity. In one embodiment, the processor 1801 may control the display brightness of the display 1805 based on the ambient light intensity collected by the optical sensor 1815: when the ambient light intensity is high, the display brightness is increased; when it is low, the display brightness is decreased. In another embodiment, the processor 1801 may also dynamically adjust the shooting parameters of the camera assembly 1806 based on the ambient light intensity collected by the optical sensor 1815.
The proximity sensor 1816, also known as a distance sensor, is typically disposed on the front panel of the terminal 1800 and is used to collect the distance between the user and the front face of the terminal 1800. In one embodiment, when the proximity sensor 1816 detects that this distance is gradually decreasing, the processor 1801 controls the display 1805 to switch from the screen-on state to the screen-off state; when it detects that the distance is gradually increasing, the processor 1801 controls the display 1805 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the structure shown in Fig. 18 does not constitute a limitation of the terminal 1800, which may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
The electronic device in the above method embodiments can also be implemented as a server. For example, Fig. 19 is a schematic structural diagram of a server provided in an embodiment of the present application. The server 1900 may vary considerably in configuration or performance and can include one or more processors (Central Processing Units, CPU) 1901 and one or more memories 1902, where the memory 1902 stores at least one program code that is loaded and executed by the processor 1901 to implement the image processing method provided in each of the above method embodiments. Of course, the server can also have components such as a wired or wireless network interface and an input/output interface, and can include other components for implementing device functions, which are not described herein.
In an exemplary embodiment, a computer-readable storage medium, for example a memory comprising at least one program code, is also provided; the program code is executable by a processor to perform the image processing method of the above embodiments. For example, the computer-readable storage medium can be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product or computer program is also provided. The computer program product or computer program comprises one or more program codes stored in a computer-readable storage medium. One or more processors of an electronic device can read the one or more program codes from the computer-readable storage medium and execute them, so that the electronic device can perform the above-described image processing method.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
It should be understood that determining B from A does not mean determining B from A alone; B can also be determined from A and/or other information.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above embodiments can be implemented by hardware, or by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing description is only of alternative embodiments of the present application and is not intended to limit the present application, but any modifications, equivalents, improvements, etc. that fall within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (15)

1. An image processing method, the method comprising:
up-sampling a first image according to a target resolution to obtain a second image with the target resolution, wherein the resolution of the first image is smaller than the target resolution;
extracting features of the second image to obtain gradient features of the second image, wherein the gradient features are used for indicating the relation between pixel points in the second image and adjacent pixel points;
determining a target convolution parameter corresponding to the gradient feature of the second image according to the corresponding relation between the gradient feature and the convolution parameter;
and carrying out convolution processing on the second image based on the target convolution parameter to obtain a third image, wherein the resolution of the third image is the target resolution.
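For illustration only, the following Python sketch shows one way the four claimed steps could fit together; the helper names (kernel_table, feature_fn), the grayscale input, and the use of SciPy are assumptions, not part of the claim:

```python
# Minimal sketch of the claimed pipeline; grayscale input assumed.
import numpy as np
from scipy import ndimage

def super_resolve(first_image, scale, kernel_table, feature_fn):
    # Step 1: up-sample the first image to the target resolution.
    second = ndimage.zoom(first_image.astype(np.float64), scale, order=1)
    # Step 2: extract a gradient feature and map it to a table key.
    key = feature_fn(second)
    # Step 3: look up the target convolution parameter (a filter kernel)
    # in an existing correspondence instead of running a deep network.
    kernel = kernel_table[key]
    # Step 4: a single convolution pass over the second image yields
    # the third image at the target resolution.
    return ndimage.convolve(second, kernel, mode="reflect")
```

The design point illustrated here is step 3: the convolution parameter is retrieved from a stored correspondence, so a single convolution over the second image suffices to produce the third image.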
2. The method of claim 1, wherein the feature extraction of the second image to obtain gradient features of the second image comprises:
acquiring first gradient information of pixel points in the second image;
smoothing the first gradient information of the pixel points in the second image to obtain second gradient information of the pixel points;
and acquiring gradient characteristics corresponding to the second gradient information.
3. The method of claim 2, wherein the acquiring the first gradient information of the pixel point in the second image comprises:
and acquiring a pixel value variation of any pixel point in the second image according to the pixel value of the pixel point and the pixel value of a first adjacent pixel point, and taking the pixel value variation as the first gradient information, wherein the distance between the first adjacent pixel point and the pixel point is smaller than a first distance threshold.
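A minimal sketch of this step, assuming the "first adjacent pixel point" is the immediate right/down neighbour (one choice satisfying the distance threshold):

```python
import numpy as np

def first_gradient_info(second):
    # Pixel value variation between each pixel and its adjacent pixel,
    # computed as forward differences horizontally and vertically.
    gx = np.zeros_like(second, dtype=np.float64)
    gy = np.zeros_like(second, dtype=np.float64)
    gx[:, :-1] = second[:, 1:] - second[:, :-1]   # horizontal variation
    gy[:-1, :] = second[1:, :] - second[:-1, :]   # vertical variation
    return gx, gy
```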
4. The method according to claim 2, wherein the smoothing the first gradient information of the pixel point in the second image to obtain the second gradient information of the pixel point includes:
and carrying out weighted summation on the first gradient information of any pixel point in the second image and the first gradient information of a second adjacent pixel point to obtain second gradient information of the pixel point, wherein the distance between the second adjacent pixel point and the pixel point is smaller than a second distance threshold value.
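One plausible realization of the weighted summation is a Gaussian-weighted window over neighbouring gradients; the window shape and sigma below are assumptions:

```python
from scipy import ndimage

def second_gradient_info(gx, gy, sigma=1.0):
    # Weighted summation of each pixel's first gradient information with
    # that of its neighbouring pixels, using Gaussian weights.
    return (ndimage.gaussian_filter(gx, sigma),
            ndimage.gaussian_filter(gy, sigma))
```

Smoothing suppresses pixel-level noise in the raw differences before the angle, intensity, and correlation features are read off.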
5. The method of claim 2, wherein the second gradient information of one pixel point includes gradient information in different directions;
the gradient features include at least one of angle, intensity, and correlation;
the obtaining the gradient characteristics corresponding to the second gradient information includes:
and acquiring at least one of the angle, the intensity, and the correlation of the gradient of any pixel point according to the gradient information of the pixel point in different directions.
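A common way to obtain angle, intensity, and correlation from directional gradient information is an eigen-analysis of a locally accumulated 2x2 structure tensor; the sketch below works under that assumption, and the window size is also an assumption:

```python
import numpy as np
from scipy import ndimage

def gradient_features(gx, gy, window=5):
    # Accumulate the 2x2 structure tensor over a local window, then read
    # angle, intensity (strength), and correlation (coherence) from its
    # eigenvalues and dominant eigenvector.
    gxx = ndimage.uniform_filter(gx * gx, window)
    gxy = ndimage.uniform_filter(gx * gy, window)
    gyy = ndimage.uniform_filter(gy * gy, window)
    root = np.sqrt((gxx - gyy) ** 2 + 4.0 * gxy ** 2)
    lam1 = (gxx + gyy + root) / 2.0                   # larger eigenvalue
    lam2 = np.maximum((gxx + gyy - root) / 2.0, 0.0)  # smaller eigenvalue
    angle = 0.5 * np.arctan2(2.0 * gxy, gxx - gyy)    # dominant direction
    intensity = np.sqrt(lam1)                         # edge strength
    s1, s2 = np.sqrt(lam1), np.sqrt(lam2)
    correlation = (s1 - s2) / (s1 + s2 + 1e-8)        # coherence in [0, 1]
    return angle, intensity, correlation
```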
6. The method according to claim 1, wherein determining the target convolution parameter corresponding to the gradient feature of the second image according to the correspondence between the gradient feature and the convolution parameter comprises:
quantizing the gradient features;
performing, based on the quantized gradient features, the step of determining the target convolution parameter.
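Quantization makes the continuous features usable as a table index. A sketch follows, with bin counts and bin edges chosen arbitrarily for illustration:

```python
import numpy as np

def quantize_features(angle, intensity, correlation,
                      n_angle=24, i_edges=(8.0, 32.0), c_edges=(0.25, 0.5)):
    # Map each continuous feature to a discrete bin, then combine the
    # bins into one flat index into the convolution-parameter table.
    a = np.floor((angle % np.pi) / np.pi * n_angle).astype(int) % n_angle
    i = np.digitize(intensity, i_edges)     # bin 0, 1, or 2
    c = np.digitize(correlation, c_edges)   # bin 0, 1, or 2
    return (a * 3 + i) * 3 + c
```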
7. The method of claim 1, wherein the feature extraction of the second image to obtain gradient features of the second image comprises:
windowing the second image to obtain at least one image block;
extracting the characteristics of the at least one image block to obtain gradient characteristics of the at least one image block;
The determining the target convolution parameter corresponding to the gradient feature of the second image according to the corresponding relation between the gradient feature and the convolution parameter comprises:
and determining a target convolution parameter corresponding to the gradient feature of the at least one image block according to the corresponding relation between the gradient feature and the convolution parameter.
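A sketch of the windowing variant, assuming non-overlapping square blocks of an arbitrary size:

```python
def window_blocks(second, size=8):
    # Split the second image into blocks; each block then gets its own
    # gradient feature and hence its own convolution parameter.
    h, w = second.shape
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            yield (y, x), second[y:y + size, x:x + size]
```

Per-block lookup lets differently textured regions of one image use different kernels.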
8. The method according to claim 1, wherein the method further comprises:
sharpening the third image to obtain a fourth image;
and rendering the fourth image.
9. The method of claim 8, wherein sharpening the third image results in a fourth image, comprising:
acquiring difference information of the third image and the first image;
and acquiring a fourth image based on the difference information, a target coefficient, and the first image, wherein the definition (sharpness) of a target area in the fourth image is greater than that of the target area in the first image.
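A sketch of this unsharp-mask style step, assuming the first image has first been brought to the target resolution so the difference is well defined, and with an arbitrary coefficient value:

```python
def sharpen(third, first_up, coefficient=1.5):
    # Difference information between the third image and the up-sampled
    # first image, scaled by the target coefficient and added back; a
    # coefficient above 1 makes the target area sharper than the input.
    return first_up + coefficient * (third - first_up)
```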
10. The method according to claim 9, wherein the method further comprises:
updating the target coefficient in response to a coefficient setting instruction;
and executing the step of acquiring the fourth image based on the updated target coefficient.
11. The method according to claim 1, wherein the method further comprises:
acquiring at least two frames of images of a target video;
performing the steps of up-sampling, feature extraction, target convolution parameter determination, and convolution processing on at least one frame of image in the at least two frames of images, so as to obtain at least one frame of target image corresponding to the at least one frame of image;
and rendering the images except for the at least one frame of image in the at least two frames of images and the at least one frame of target image.
12. The method of claim 11, wherein the method further comprises:
acquiring, in real time, the time interval between the rendering times of any two frames of images in the target video;
and discarding any one of the two frame images in response to the time interval being less than a first target threshold.
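A sketch of the frame-dropping rule, keeping a frame only when its rendering time is at least the first target threshold away from the previously kept frame:

```python
def drop_close_frames(render_times, first_target_threshold):
    # render_times: rendering timestamps of consecutive frames, in order.
    kept, last = [], None
    for index, t in enumerate(render_times):
        if last is None or t - last >= first_target_threshold:
            kept.append(index)   # keep this frame
            last = t
        # otherwise the frame is discarded, as in claim 12
    return kept
```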
13. An image processing apparatus, characterized in that the apparatus comprises:
the up-sampling module is used for up-sampling the first image according to the target resolution to obtain a second image with the target resolution, and the resolution of the first image is smaller than the target resolution;
the feature extraction module is used for extracting features of the second image to obtain gradient features of the second image, wherein the gradient features are used for indicating the relation between pixel points in the second image and adjacent pixel points;
The determining module is used for determining a target convolution parameter corresponding to the gradient characteristic of the second image according to the corresponding relation between the gradient characteristic and the convolution parameter;
and the convolution module is used for carrying out convolution processing on the second image based on the target convolution parameter to obtain a third image, and the resolution of the third image is the target resolution.
14. An electronic device comprising one or more processors and one or more memories, the one or more memories having stored therein at least one piece of program code loaded and executed by the one or more processors to implement the image processing method of any of claims 1-12.
15. A computer readable storage medium having stored therein at least one program code, the at least one program code being loaded and executed by a processor to implement the image processing method of any one of claims 1 to 12.
CN202010872723.6A 2020-08-26 2020-08-26 Image processing method, device, equipment and storage medium Active CN111932463B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010872723.6A CN111932463B (en) 2020-08-26 2020-08-26 Image processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111932463A CN111932463A (en) 2020-11-13
CN111932463B true CN111932463B (en) 2023-05-30

Family

ID=73305772

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010872723.6A Active CN111932463B (en) 2020-08-26 2020-08-26 Image processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111932463B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112597916B (en) * 2020-12-24 2021-10-26 中标慧安信息技术股份有限公司 Face image snapshot quality analysis method and system
CN113734197A (en) * 2021-09-03 2021-12-03 合肥学院 Unmanned intelligent control scheme based on data fusion
CN114827723B (en) * 2022-04-25 2024-04-09 阿里巴巴(中国)有限公司 Video processing method, device, electronic equipment and storage medium
CN116385260B (en) * 2022-05-19 2024-02-09 上海玄戒技术有限公司 Image processing method, device, chip, electronic equipment and medium
CN116055802B (en) * 2022-07-21 2024-03-08 荣耀终端有限公司 Image frame processing method and electronic equipment

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN108133456A (en) * 2016-11-30 2018-06-08 京东方科技集团股份有限公司 Face super-resolution reconstruction method, reconstructing apparatus and computer system
WO2019128660A1 (en) * 2017-12-29 2019-07-04 清华大学 Method and device for training neural network, image processing method and device and storage medium
US10540749B2 (en) * 2018-03-29 2020-01-21 Mitsubishi Electric Research Laboratories, Inc. System and method for learning-based image super-resolution

Patent Citations (13)

Publication number Priority date Publication date Assignee Title
WO2018006095A2 (en) * 2016-07-01 2018-01-04 Digimarc Corporation Image-based pose determination
CN106204489A (en) * 2016-07-12 2016-12-07 四川大学 Single image super resolution ratio reconstruction method in conjunction with degree of depth study with gradient conversion
CN106157249A (en) * 2016-08-01 2016-11-23 西安电子科技大学 Based on the embedded single image super-resolution rebuilding algorithm of optical flow method and sparse neighborhood
CN107527321A (en) * 2017-08-22 2017-12-29 维沃移动通信有限公司 A kind of image rebuilding method, terminal and computer-readable recording medium
CN109903221A (en) * 2018-04-04 2019-06-18 华为技术有限公司 Image oversubscription method and device
CN109118432A (en) * 2018-09-26 2019-01-01 福建帝视信息科技有限公司 A kind of image super-resolution rebuilding method based on Rapid Circulation convolutional network
WO2020062191A1 (en) * 2018-09-29 2020-04-02 华为技术有限公司 Image processing method, apparatus and device
CN110428378A (en) * 2019-07-26 2019-11-08 北京小米移动软件有限公司 Processing method, device and the storage medium of image
CN110599402A (en) * 2019-08-30 2019-12-20 西安理工大学 Image super-resolution reconstruction method based on multi-feature sparse representation
CN111182254A (en) * 2020-01-03 2020-05-19 北京百度网讯科技有限公司 Video processing method, device, equipment and storage medium
CN111325726A (en) * 2020-02-19 2020-06-23 腾讯医疗健康(深圳)有限公司 Model training method, image processing method, device, equipment and storage medium
CN111429347A (en) * 2020-03-20 2020-07-17 长沙理工大学 Image super-resolution reconstruction method and device and computer-readable storage medium
CN111402143A (en) * 2020-06-03 2020-07-10 腾讯科技(深圳)有限公司 Image processing method, device, equipment and computer readable storage medium

Non-Patent Citations (5)

Title
Deep Super-Resolution Network for Single Image Super-Resolution with Realistic Degradations; Rao Muhammad Umer; ICDSC 2019: Proceedings of the 13th International Conference on Distributed Smart Cameras; full text *
Sparse-representation super-resolution restoration based on gradient features; Yu Jian; Guo Chunsheng; Journal of Hangzhou Dianzi University, Issue 03; full text *
Image super-resolution reconstruction based on deep feature learning; Hu Changsheng; Zhan Shu; Wu Congzhong; Acta Automatica Sinica, Issue 05; full text *
Super-resolution reconstruction of infrared salient regions based on sparse coding; Huang Shuo; Hu Yong; Gong Cailan; Zheng Fuqiang; Journal of Infrared and Millimeter Waves, Issue 03; full text *
Single-image super-resolution reconstruction combining multiple features; Huang Jianhua; Wang Dandan; Jin Ye; Journal of Harbin Institute of Technology, Issue 11; full text *

Also Published As

Publication number Publication date
CN111932463A (en) 2020-11-13

Similar Documents

Publication Publication Date Title
CN110136136B (en) Scene segmentation method and device, computer equipment and storage medium
CN111932463B (en) Image processing method, device, equipment and storage medium
CN110288518B (en) Image processing method, device, terminal and storage medium
WO2020224479A1 (en) Method and apparatus for acquiring positions of target, and computer device and storage medium
CN110555839A (en) Defect detection and identification method and device, computer equipment and storage medium
CN111091576A (en) Image segmentation method, device, equipment and storage medium
CN111091166B (en) Image processing model training method, image processing device, and storage medium
CN112598686B (en) Image segmentation method and device, computer equipment and storage medium
CN110290426B (en) Method, device and equipment for displaying resources and storage medium
CN110796248A (en) Data enhancement method, device, equipment and storage medium
CN110991457B (en) Two-dimensional code processing method and device, electronic equipment and storage medium
CN111258467A (en) Interface display method and device, computer equipment and storage medium
CN111178343A (en) Multimedia resource detection method, device, equipment and medium based on artificial intelligence
CN114820633A (en) Semantic segmentation method, training device and training equipment of semantic segmentation model
CN110675412A (en) Image segmentation method, training method, device and equipment of image segmentation model
CN111836073B (en) Method, device and equipment for determining video definition and storage medium
CN111915481A (en) Image processing method, image processing apparatus, electronic device, and medium
CN113706440A (en) Image processing method, image processing device, computer equipment and storage medium
CN110807769B (en) Image display control method and device
CN112115900B (en) Image processing method, device, equipment and storage medium
CN114283299A (en) Image clustering method and device, computer equipment and storage medium
CN113642359B (en) Face image generation method and device, electronic equipment and storage medium
CN113570510A (en) Image processing method, device, equipment and storage medium
CN110232417B (en) Image recognition method and device, computer equipment and computer readable storage medium
CN112818979A (en) Text recognition method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant