CN111932463A - Image processing method, device, equipment and storage medium - Google Patents

Image processing method, device, equipment and storage medium

Info

Publication number
CN111932463A
Authority
CN
China
Prior art keywords
image
gradient
target
resolution
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010872723.6A
Other languages
Chinese (zh)
Other versions
CN111932463B (en)
Inventor
宋晨光
熊诗尧
龙泉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010872723.6A priority Critical patent/CN111932463B/en
Publication of CN111932463A publication Critical patent/CN111932463A/en
Application granted granted Critical
Publication of CN111932463B publication Critical patent/CN111932463B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 - Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/70 - Denoising; Smoothing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing method, apparatus, device, and storage medium, belonging to the technical field of multimedia. Embodiments of the application can be implemented through computer vision technology in artificial intelligence, and in particular the image processing method can be implemented using image processing and video processing technology. In the embodiments of the application, gradient features are introduced as a new factor, and the target convolution parameters are determined directly from an existing correspondence according to this factor, so determining the target convolution parameters takes little time; the image does not need fine feature extraction or nonlinear mapping, which greatly reduces the time spent on feature extraction and processing. Moreover, the high-resolution image can be obtained with a single convolution step using the target convolution parameters. Compared with reconstruction based on extracted image features, this effectively simplifies the image processing steps and thus improves image processing efficiency.

Description

Image processing method, device, equipment and storage medium
Technical Field
The present application relates to the field of multimedia technologies, and in particular, to an image processing method, apparatus, device, and storage medium.
Background
With the development of multimedia technology, image processing technology is more and more widely applied. For example, a low-resolution image can be processed into a high-resolution image for display by performing super-resolution processing on it, so as to improve image quality.
At present, image processing methods are generally implemented by super-resolution algorithms, such as interpolation-based super-resolution algorithms and learning-based super-resolution algorithms. Taking a machine-learning-based super-resolution algorithm such as SRCNN (Super-Resolution Convolutional Neural Network) as an example, its structure is simple, with only three layers: the first convolutional layer has 64 convolution kernels and is responsible for extracting features from the interpolated low-resolution image, the second layer performs nonlinear mapping on the features extracted by the first layer, and the third convolutional layer performs feature reconstruction to generate the final high-resolution image.
In this method, the process of extracting features from the image, performing nonlinear mapping, and then reconstructing based on the features is time-consuming: even on a device with good performance such as a server, on the order of seconds is needed to compute one frame. Image processing is therefore slow and inefficient, and in particular real-time processing cannot be achieved when super-resolving the images in a video.
Disclosure of Invention
The embodiment of the application provides an image processing method, device, equipment and storage medium, which can reduce time consumption and improve image processing efficiency. The technical scheme is as follows:
in one aspect, an image processing method is provided, and the method includes:
according to a target resolution, performing up-sampling on a first image to obtain a second image of the target resolution, wherein the resolution of the first image is smaller than the target resolution;
extracting features of the second image to obtain gradient features of the second image, wherein the gradient features are used for indicating the relationship between pixel points and adjacent pixel points in the second image;
determining a target convolution parameter corresponding to the gradient feature of the second image according to the corresponding relation between the gradient feature and the convolution parameter;
and performing convolution processing on the second image based on the target convolution parameter to obtain a third image, wherein the resolution of the third image is the target resolution.
In a possible implementation manner, the smoothing processing on the first gradient information of the pixel point in the second image to obtain the second gradient information includes:
and performing Gaussian blur processing on the first gradient information of the pixel points in the second image to obtain second gradient information.
In a possible implementation manner, the obtaining a variation of a pixel value at a pixel point according to a pixel value of any pixel point in the second image and a pixel value of a first adjacent pixel point includes:
and acquiring a differential at the pixel point according to the pixel value of any pixel point in the second image and the pixel value of the first adjacent pixel point, and taking the differential as the first gradient information.
In a possible implementation manner, the obtaining first gradient information of a pixel point in the second image includes:
and acquiring first gradient information of the pixel points in the second image on a brightness channel.
In one possible implementation manner, the target resolution is less than or equal to a resolution threshold, and the method is applied to a terminal;
the method further comprises the following steps:
rendering the third image.
In one possible implementation, the target resolution is greater than a resolution threshold, and the method is applied to a server;
the method further comprises the following steps:
compressing the third image to obtain compressed data of the third image;
and sending the compressed data to a terminal, and rendering the third image by the terminal based on the compressed data.
In one possible implementation, the method includes:
inputting the first image into an image processing model, executing the steps of up-sampling, feature extraction, target convolution parameter determination and convolution processing by the image processing model, and outputting the third image.
In one possible implementation, the training process of the image processing model includes:
acquiring a sample first image and a sample target image, wherein the resolution of the sample target image is a target resolution, and the resolution of the sample first image is smaller than the target resolution;
up-sampling the sample first image to obtain a sample second image with the target resolution;
extracting features of the sample second image to obtain gradient features of the second image, wherein the gradient features are used for indicating the relation between pixel points and adjacent pixel points in the second image;
determining a target convolution parameter corresponding to the gradient feature according to the gradient feature of the second image and the sample target image;
and generating the corresponding relation between the gradient feature and the convolution parameter based on the target convolution parameter.
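The training procedure above essentially builds a lookup table from gradient features to convolution parameters. Below is a minimal illustrative Python sketch, assuming (these are assumptions, not the application's prescribed implementation) that the gradient feature is a quantized per-pixel gradient angle and that each bucket's convolution kernel is fitted by ridge regression against the sample target image; all function and variable names are hypothetical.

    import numpy as np

    def train_correspondence(sample_second_img, sample_target_img,
                             patch=5, n_buckets=8, lam=1e-3):
        # Hypothetical sketch: learn one convolution kernel per quantized
        # gradient bucket by ridge regression, producing the correspondence
        # between gradient features and convolution parameters.
        second = sample_second_img.astype(np.float64)
        target = sample_target_img.astype(np.float64)
        h, w = second.shape
        r = patch // 2
        ata = np.zeros((n_buckets, patch * patch, patch * patch))  # sum of A^T A per bucket
        atb = np.zeros((n_buckets, patch * patch))                 # sum of A^T b per bucket
        gy, gx = np.gradient(second)
        angle = np.arctan2(gy, gx)  # simplified per-pixel gradient feature (assumption)
        for y in range(r, h - r):
            for x in range(r, w - r):
                a = second[y - r:y + r + 1, x - r:x + r + 1].ravel()
                b = target[y, x]
                key = int((angle[y, x] + np.pi) / (2 * np.pi) * n_buckets) % n_buckets
                ata[key] += np.outer(a, a)
                atb[key] += a * b
        kernels = np.zeros((n_buckets, patch * patch))
        for k in range(n_buckets):
            kernels[k] = np.linalg.solve(ata[k] + lam * np.eye(patch * patch), atb[k])
        return kernels  # row k holds the convolution parameters for gradient bucket k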
In one possible implementation, the image processing model is trained based on a sample first image and at least two sample target images, the resolutions of the at least two sample target images including at least one target resolution.
In one possible implementation, the image processing model includes a plurality of filters in series;
the step of performing the upsampling, feature extraction, determining target convolution parameters, and convolution processing by the image processing model, and outputting the third image, includes:
the step of upsampling is performed by the image processing model, and the steps of feature extraction and convolution processing are performed by the plurality of serial filters.
In one possible implementation, the step of performing the feature extraction and convolution processing by the plurality of serial filters includes:
creating at least one object;
the steps of feature extraction, determining target convolution parameters and convolution processing are performed by the at least one object, the number of objects being less than the number of filters.
In one possible implementation, the method further includes:
acquiring the frame rate of the target video in real time;
updating the rendering duration of each frame of image according to the frame rate;
and in response to the rendering duration being greater than a second target threshold, performing frame dropping processing on the target video.
In one aspect, an image processing apparatus is provided, the apparatus including:
the up-sampling module is used for up-sampling a first image according to a target resolution to obtain a second image of the target resolution, wherein the resolution of the first image is smaller than the target resolution;
the characteristic extraction module is used for extracting the characteristics of the second image to obtain the gradient characteristics of the second image, and the gradient characteristics are used for indicating the relationship between pixel points and adjacent pixel points in the second image;
the determining module is used for determining a target convolution parameter corresponding to the gradient feature of the second image according to the corresponding relation between the gradient feature and the convolution parameter;
and the convolution module is used for performing convolution processing on the second image based on the target convolution parameter to obtain a third image, and the resolution of the third image is the target resolution.
In one possible implementation, the feature extraction module includes a first obtaining unit, a smoothing unit, and a second obtaining unit;
the first obtaining unit is used for obtaining first gradient information of a pixel point in the second image;
the smoothing unit is used for smoothing first gradient information of a pixel point in the second image to obtain second gradient information of the pixel point;
the second obtaining unit is configured to obtain a gradient feature corresponding to the second gradient information.
In a possible implementation manner, the first obtaining unit is configured to obtain a pixel value variation at a pixel point according to a pixel value of any one pixel point in the second image and a pixel value of a first adjacent pixel point, and use the pixel value variation as the first gradient information, where a distance between the first adjacent pixel point and the pixel point is smaller than a first distance threshold.
In a possible implementation manner, the first obtaining unit is configured to obtain a differential at the pixel point according to a pixel value of any one pixel point in the second image and a pixel value of a first adjacent pixel point, and use the differential as the first gradient information.
In a possible implementation manner, the first obtaining unit is configured to obtain first gradient information of a pixel point in the second image on a luminance channel.
In a possible implementation manner, the smoothing unit is configured to perform weighted summation on first gradient information of any pixel in the second image and first gradient information of a second adjacent pixel to obtain second gradient information of the pixel, where a distance between the second adjacent pixel and the pixel is smaller than a second distance threshold.
In a possible implementation manner, the second gradient information of one pixel point includes gradient information in different directions;
the gradient feature comprises at least one of an angle, an intensity, and a correlation;
the second obtaining unit is used for obtaining at least one of the angle, the strength and the correlation of the gradient of any pixel point according to the gradient information of the pixel point in different directions.
In a possible implementation manner, the smoothing unit is configured to perform gaussian blurring processing on the first gradient information of the pixel point in the second image to obtain second gradient information.
In one possible implementation, the determining module is configured to:
quantifying the gradient features;
performing the step of determining target convolution parameters based on the quantified gradient features.
In one possible implementation, the feature extraction module is to:
windowing the second image to obtain at least one image block;
performing feature extraction on the at least one image block to obtain a gradient feature of the at least one image block;
the determining module is used for determining a target convolution parameter corresponding to the gradient feature of the at least one image block according to the corresponding relation between the gradient feature and the convolution parameter.
In one possible implementation, the apparatus further includes:
the sharpening module is used for sharpening the third image to obtain a fourth image;
and the first rendering module is used for rendering the fourth image.
In one possible implementation, the sharpening module is to:
acquiring difference information of the third image and the first image;
and acquiring a fourth image based on the difference information, the target coefficient and the first image, wherein the definition of the target area in the fourth image is greater than that of the target area in the first image.
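For illustration only, the following Python sketch shows one way the sharpening module could form the fourth image from the difference information, the target coefficient, and the first image, in the spirit of unsharp masking. Resizing the first image to the target resolution before differencing, the use of scipy, and the value coeff = 0.5 are assumptions made for this sketch, not requirements of the application.

    import numpy as np
    from scipy.ndimage import zoom

    def sharpen(third_img, first_img, coeff=0.5):
        # Hypothetical sketch of the sharpening module: the fourth image is
        # built from the first image plus coefficient-scaled difference
        # information. Resizing the first image to the target resolution is
        # assumed here only so that the shapes match.
        scale = (third_img.shape[0] / first_img.shape[0],
                 third_img.shape[1] / first_img.shape[1])
        up_first = zoom(first_img.astype(np.float32), scale, order=1)
        up_first = up_first[:third_img.shape[0], :third_img.shape[1]]
        diff = third_img.astype(np.float32) - up_first      # difference information
        fourth = np.clip(up_first + coeff * diff, 0, 255)   # coeff is the target coefficient
        return fourth.astype(np.uint8)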
In one possible implementation, the apparatus further includes:
the first updating module is used for responding to a coefficient setting instruction and updating the target coefficient;
the sharpening module is configured to perform the step of obtaining the fourth image based on the updated target coefficient.
In one possible implementation, the apparatus further includes:
the first acquisition module is used for acquiring at least two frames of images of a target video;
the up-sampling module, the feature extraction module, the determination module and the convolution module are respectively used for performing the steps of up-sampling, feature extraction, target convolution parameter determination and convolution processing on at least one frame of image in the at least two frames of images to obtain at least one frame of target image corresponding to the at least one frame of image;
and the second rendering module is used for rendering images except the at least one frame of image in the at least two frames of images and the at least one frame of target image.
In one possible implementation, the apparatus further includes:
the second acquisition module is used for acquiring the time interval between the rendering times of any two frames of images in the target video in real time;
and the first frame dropping module is used for dropping either one of the two frames in response to the time interval being smaller than a first target threshold.
In one possible implementation, the apparatus further includes:
the third acquisition module is used for acquiring the frame rate of the target video in real time;
the second updating module is used for updating the rendering duration of each frame of image according to the frame rate;
and the second frame dropping module is used for performing frame dropping processing on the target video in response to the rendering duration being greater than a second target threshold.
In one possible implementation manner, the target resolution is smaller than or equal to a resolution threshold, and the device is applied to a terminal;
the device further comprises:
and the third rendering module is used for rendering the third image.
In one possible implementation, the target resolution is greater than a resolution threshold, and the apparatus is applied to a server;
the device further comprises:
the compression module is used for compressing the third image to obtain compressed data of the third image;
and the sending module is used for sending the compressed data to a terminal, and the terminal renders the third image based on the compressed data.
In one possible implementation, the apparatus is configured to input the first image into an image processing model, perform the steps of upsampling, feature extraction, determining a target convolution parameter and convolution processing by the image processing model, and output the third image.
In one possible implementation, the training process of the image processing model includes:
acquiring a sample first image and a sample target image, wherein the resolution of the sample target image is a target resolution, and the resolution of the sample first image is smaller than the target resolution;
up-sampling the sample first image to obtain a sample second image with the target resolution;
extracting features of the sample second image to obtain gradient features of the second image, wherein the gradient features are used for indicating the relation between pixel points and adjacent pixel points in the second image;
determining a target convolution parameter corresponding to the gradient feature according to the gradient feature of the second image and the sample target image;
and generating the corresponding relation between the gradient feature and the convolution parameter based on the target convolution parameter.
In one possible implementation, the image processing model is trained based on a sample first image and at least two sample target images, the resolutions of the at least two sample target images including at least one target resolution.
In one possible implementation, the image processing model includes a plurality of filters in series;
the step of performing the upsampling, feature extraction, determining target convolution parameters, and convolution processing by the image processing model, and outputting the third image, includes:
the step of upsampling is performed by the image processing model, and the steps of feature extraction and convolution processing are performed by the plurality of serial filters.
In one possible implementation, the step of performing the feature extraction and convolution processing by the plurality of serial filters includes:
creating at least one object;
the steps of feature extraction, determining target convolution parameters and convolution processing are performed by the at least one object, the number of objects being less than the number of filters.
In one aspect, an electronic device is provided that includes one or more processors and one or more memories having at least one program code stored therein, the at least one program code being loaded into and executed by the one or more processors to implement various alternative implementations of the above-described image processing method.
In one aspect, a computer-readable storage medium is provided, in which at least one program code is stored, which is loaded and executed by a processor to implement various alternative implementations of the image processing method described above.
In one aspect, a computer program product or computer program is provided that includes one or more program codes stored in a computer-readable storage medium. One or more processors of the electronic device can read the one or more program codes from the computer-readable storage medium, and the one or more processors execute the one or more program codes, so that the electronic device can execute the image processing method of any one of the above possible embodiments.
In the embodiments of the application, gradient features are introduced as a new factor, and the target convolution parameters are determined directly from an existing correspondence according to this factor, so determining the target convolution parameters takes little time; the image does not need fine feature extraction or nonlinear mapping, which greatly reduces the time spent on feature extraction and processing. Moreover, the high-resolution image can be obtained with a single convolution step using the target convolution parameters. Compared with reconstruction based on extracted image features, this effectively simplifies the image processing steps and thus improves image processing efficiency.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a schematic diagram of an implementation environment of an image processing method according to an embodiment of the present application;
fig. 2 is a flowchart of an image processing method provided in an embodiment of the present application;
fig. 3 is a flowchart of an image processing method provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of an upsampling method provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of an upsampling method provided by an embodiment of the present application;
fig. 6 is a schematic diagram of a pixel location according to an embodiment of the present disclosure;
FIG. 7 is a diagram illustrating execution times of various filter rendering commands according to an embodiment of the present disclosure;
fig. 8 is an overall architecture diagram of an image processing method according to an embodiment of the present application;
FIG. 9 is an exploded view of a filter layer according to an embodiment of the present disclosure;
FIG. 10 is a schematic diagram illustrating an image display effect after each image processing step according to an embodiment of the present application;
FIG. 11 is a schematic diagram illustrating a training process of an image processing model according to an embodiment of the present disclosure;
FIG. 12 is a schematic diagram illustrating an image processing model using process according to an embodiment of the present application;
FIG. 13 is a diagram illustrating a sharpening process according to an embodiment of the present disclosure;
FIG. 14 is a schematic diagram of a terminal interface provided by an embodiment of the present application;
FIG. 15 is a schematic illustration of a different gear interface display provided by an embodiment of the present application;
fig. 16 is a schematic diagram illustrating a GPU occupation and frame rate control method according to an embodiment of the present disclosure;
fig. 17 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 18 is a block diagram of a terminal according to an embodiment of the present disclosure;
fig. 19 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The terms "first," "second," and the like in this application are used for distinguishing between similar items and items that have substantially the same function or similar functionality, and it should be understood that "first," "second," and "nth" do not have any logical or temporal dependency or limitation on the number or order of execution. It will be further understood that, although the following description uses the terms first, second, etc. to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first image can be referred to as a second image, and similarly, a second image can be referred to as a first image without departing from the scope of various described examples. The first image and the second image can both be images, and in some cases, can be separate and distinct images.
The term "at least one" is used herein to mean one or more, and the term "plurality" is used herein to mean two or more, e.g., a plurality of packets means two or more packets.
It is to be understood that the terminology used in the description of the various described examples herein is for the purpose of describing particular examples only and is not intended to be limiting. As used in the description of the various described examples and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. The term "and/or" describes an association relationship between associated objects and means that three relationships can exist; for example, "A and/or B" can mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" in the present application generally indicates that the associated objects before and after it are in an "or" relationship.
It should also be understood that, in the embodiments of the present application, the size of the serial number of each process does not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
It should also be understood that determining B from a does not mean determining B from a alone, but can also determine B from a and/or other information.
It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also understood that the term "if" may be interpreted to mean "when" or "upon" or "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a stated condition or event] is detected" may be interpreted to mean "upon determining" or "in response to determining" or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]", depending on the context.
The following is a description of terms involved in the present application.
Image super-resolution: image super-resolution refers to recovering a high-resolution image from a low-resolution image.
LR (Low Resolution): low resolution; in the embodiments of the present application, a low-resolution image is referred to by LR.
HR (High Resolution): high resolution; in the embodiments of the present application, a high-resolution image is referred to by HR.
Super-Resolution (SR) is a low-level image processing task that maps a low-resolution image to a high resolution in order to enhance image details. There are many causes of image blurring, such as various types of noise, lossy compression, and down-sampling. Super-resolution is a classic application of computer vision. SR means reconstructing a corresponding high-resolution image from an observed low-resolution image (that is, improving its resolution) by software or hardware methods, and it has important application value in fields such as surveillance equipment, satellite remote-sensing imagery, digital high definition, microscopic imaging, video coding and communication, video restoration, and medical imaging.
Super-resolution tasks may include Image Super-Resolution (ISR) and Video Super-Resolution (VSR), among others. Video super-resolution can be achieved by performing image super-resolution on each frame or on some of the frames of a video, or by performing super-resolution on combinations of multiple frames of the video.
Image quality enhancement: including resolution enhancement and color enhancement, i.e., improving the image quality through algorithms.
On-device super-resolution: compared with a super-resolution algorithm running on a server, a super-resolution algorithm running on a mobile terminal needs to balance the algorithm's effect against performance and power consumption, and is therefore more difficult.
SRCNN (Super-Resolution Convolutional Neural Network) algorithm: a classic super-resolution algorithm implemented with a three-layer convolutional neural network.
The image processing method provided by the embodiment of the application can be realized through artificial intelligence, and the related contents of the artificial intelligence are explained below.
Artificial Intelligence (AI) is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human Intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and produce a new intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence is the research of the design principle and the realization method of various intelligent machines, so that the machines have the functions of perception, reasoning and decision making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, and mechatronics. Artificial intelligence software technologies mainly include computer vision technology, speech processing technology, natural language processing technology, and machine learning/deep learning.
Computer Vision (CV) technology: computer vision is a science that studies how to make machines "see"; it refers to using cameras and computers instead of human eyes to identify, track, and measure targets, and to further process images so that they become more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and techniques in an attempt to build artificial intelligence systems that can capture information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, simultaneous localization and mapping, and other technologies, and also include common biometric technologies such as face recognition and fingerprint recognition.
Machine Learning (ML) is a multi-disciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and other disciplines. It specifically studies how a computer can simulate or implement human learning behavior so as to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve its performance. Machine learning is the core of artificial intelligence, the fundamental way to make computers intelligent, and it is applied in all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from instruction.
With the research and progress of artificial intelligence technology, the artificial intelligence technology is developed and applied in a plurality of fields, such as common smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, automatic driving, unmanned aerial vehicles, robots, smart medical care, smart customer service, and the like.
The scheme provided by the embodiment of the application relates to the technologies of image processing, video processing, machine learning and the like in the computer vision of artificial intelligence, and is specifically explained by the following embodiment.
The following describes an embodiment of the present application.
Fig. 1 is a schematic diagram of an implementation environment of an image processing method according to an embodiment of the present application. The implementation environment includes a terminal 101, or the implementation environment includes a terminal 101 and an image processing platform 102. The terminal 101 is connected to the image processing platform 102 through a wireless network or a wired network.
The terminal 101 can be at least one of a smart phone, a game console, a desktop computer, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, and a laptop computer. An application program supporting image processing is installed and running on the terminal 101; it can be, for example, a system application, an instant messaging application, a news push application, a shopping application, an online video application, or a social application.
Illustratively, the terminal 101 can have an image processing function: it can process an image and render the image according to the processing result. Illustratively, in this embodiment, the terminal 101 is capable of receiving an image or video sent by the server and processing one or more frames of the image or video. The terminal 101 can complete this work independently, or the image processing platform 102 can provide data services for it. Illustratively, the image processing platform 102 is capable of processing images and sending the processed images to the terminal 101 for rendering by the terminal 101. The embodiments of the present application do not limit this.
The image processing platform 102 includes at least one of a server, a plurality of servers, a cloud computing platform, and a virtualization center. The image processing platform 102 is used to provide background services for image processing applications. Optionally, the image processing platform 102 undertakes the primary processing work and the terminal 101 undertakes the secondary processing work; or the image processing platform 102 undertakes the secondary processing work and the terminal 101 undertakes the primary processing work; or either the image processing platform 102 or the terminal 101 can undertake the processing work alone. Alternatively, the image processing platform 102 and the terminal 101 perform cooperative computing using a distributed computing architecture.
Optionally, the image processing platform 102 includes at least one server 1021 and a database 1022, where the database 1022 is used to store data, and in this embodiment, the database 1022 can store a sample first image or a sample target image to provide data services for the at least one server 1021.
The server can be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms. The terminal can be, but is not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart watch, and the like.
Those skilled in the art will appreciate that the number of the terminals 101 and the servers 1021 can be greater or smaller. For example, the number of the terminals 101 and the servers 1021 may be only one, or the number of the terminals 101 and the servers 1021 may be several tens or several hundreds, or more, and the number of the terminals or the servers and the device types are not limited in the embodiment of the present application.
Fig. 2 is a flowchart of an image processing method provided in an embodiment of the present application, where the method is applied to an electronic device, where the electronic device is a terminal or a server, and referring to fig. 2, the method includes the following steps.
201. The electronic equipment performs up-sampling on the first image according to the target resolution to obtain a second image of the target resolution, wherein the resolution of the first image is smaller than the target resolution.
The first image has a low resolution, and when the first image is to be processed into a high-resolution image, the second image with high resolution can be obtained by resampling image features of the first image in an up-sampling manner.
Understandably, the density of the pixel points in the high-resolution second image is higher than that of the pixel points in the first image, and the number of the pixel points in the second image is greater than that of the pixel points in the first image. The resolution of the first image can be improved by increasing pixel points in the first image in an upsampling mode, and a second image is obtained.
In the embodiments of the application, the second image is obtained directly from the first image by upsampling, so a pixel obtained by interpolation may not, together with the original pixels, reflect the original image content well, or its pixel value may not transition well to the adjacent pixels, and so on. The electronic device can further process the second image to obtain a third image with a better display effect. The specific processing procedure can be seen in steps 202 to 204 described below.
202. And the electronic equipment extracts the features of the second image to obtain the gradient features of the second image, wherein the gradient features are used for indicating the relationship between the pixel points and the adjacent pixel points in the second image.
The electronic device needs to fine-tune the pixel values of the pixels in the second image so that the relationships between pixels better present the image content. Therefore, the relationships between pixels can be characterized by the gradient features of the second image, and how to process the second image is determined based on these gradient features, so as to optimize the relationships between adjacent pixels and optimize the gradient features, making the super-resolved image more natural and giving it a better display effect.
203. And the electronic equipment determines a target convolution parameter corresponding to the gradient feature of the second image according to the corresponding relation between the gradient feature and the convolution parameter.
The corresponding relation between the gradient characteristics and the convolution parameters is stored in the electronic equipment, and when the convolution parameters need to be determined, the corresponding relation is inquired through the gradient characteristics, so that the corresponding target convolution parameters can be determined. The convolution parameter is used for performing convolution processing on the image so as to change the pixel value of the pixel point.
204. And the electronic equipment performs convolution processing on the second image based on the target convolution parameter to obtain a third image, wherein the resolution of the third image is the target resolution.
The convolution processing is a process of updating the pixel values of the pixels in the second image; the resolution of the third image obtained in this way is the same as that of the second image, namely the target resolution. By looking up the corresponding target convolution parameters through the gradient features, the pixel values of the pixels can be processed to obtain a third image that is more natural and has a better display effect.
In the embodiments of the application, gradient features are introduced as a new factor, and the target convolution parameters are determined directly from an existing correspondence according to this factor, so determining the target convolution parameters takes little time; the image does not need fine feature extraction or nonlinear mapping, which greatly reduces the time spent on feature extraction and processing. Moreover, the high-resolution image can be obtained with a single convolution step using the target convolution parameters. Compared with reconstruction based on extracted image features, this effectively simplifies the image processing steps and thus improves image processing efficiency.
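As a non-authoritative illustration of steps 201 to 204, the following Python sketch upsamples the first image, computes a per-pixel gradient feature, looks up a convolution kernel in a precomputed correspondence table, and applies the convolution. The specific feature (a quantized gradient angle), the table layout, and the use of bilinear zoom are simplifying assumptions; the actual gradient features in the embodiments (for example angle, strength, and correlation) and the table construction may differ.

    import numpy as np
    from scipy.ndimage import zoom

    def super_resolve(first_img, scale, kernel_table, patch=5, n_buckets=8):
        # Hypothetical sketch of steps 201-204.
        # Step 201: upsample the first image to the target resolution.
        second = zoom(first_img.astype(np.float32), scale, order=1)  # bilinear-like
        h, w = second.shape
        r = patch // 2
        # Step 202: per-pixel gradient feature (simplified to a quantized angle).
        gy, gx = np.gradient(second)
        keys = ((np.arctan2(gy, gx) + np.pi) / (2 * np.pi) * n_buckets).astype(int) % n_buckets
        padded = np.pad(second, r, mode='edge')
        third = np.empty_like(second)
        for y in range(h):
            for x in range(w):
                # Step 203: look up the target convolution parameters in the table,
                # replacing fine feature extraction and nonlinear mapping.
                kernel = kernel_table[keys[y, x]]
                # Step 204: a single convolution (dot product with the local patch).
                a = padded[y:y + patch, x:x + patch].ravel()
                third[y, x] = float(a @ kernel)
        return third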
In the embodiments of the application, the electronic device can acquire an image, perform image super-resolution on it, and process the low-resolution image into a high-resolution image so as to improve image quality. In one possible implementation, the image may be any frame in a video; that is, the electronic device may obtain multiple frames of a video and perform image super-resolution on one or more of those frames, thereby improving video quality. The flow of the image processing method is explained through the embodiment shown in fig. 3. Fig. 3 is a flowchart of an image processing method provided in an embodiment of the present application; referring to fig. 3, the method includes the following steps.
301. The electronic device acquires a first image and a target resolution.
The resolution of the first image is less than the target resolution. The target resolution is the resolution of the processed image, that is, the resolution that is desired to be achieved after the processing.
In one possible implementation, the target resolution may be set by a skilled person as required; for example, the target resolution may be 1080P, where P stands for Progressive (progressive scanning). In a specific scenario, the target resolution is set by the relevant technical personnel, and the electronic device processes the first image into an image of the target resolution when it acquires the first image.
In another possible implementation manner, the target resolution may be set by a user according to a use requirement, for example, the user sets a target resolution (such as 1080P) desired to be viewed in an electronic device, and the electronic device may process the first image after acquiring the first image, process the first image into an image with the target resolution desired by the user, and then display the image to meet the user requirement.
In one possible implementation, the number of the target resolutions may be one or more. That is, the electronic device may obtain a target resolution, and process the first image to obtain an image of the target resolution. The electronic device may also acquire a plurality of target resolutions, and process the first image based on each target resolution to obtain an image of each target resolution. The electronic equipment processes the first image based on a plurality of target resolutions to obtain a plurality of images, wherein the resolutions of the plurality of images are the target resolutions respectively.
The above provides several possible setting modes and numbers of the target resolution, and the setting modes and numbers are not particularly limited in the embodiments of the present application.
For the electronic device, the electronic device may be a terminal or a server. That is, the image processing method provided by the embodiment of the present application may be applied to a terminal, and may also be applied to a server. It can be understood that the server has better processing performance and can realize an algorithm with high computation amount compared with the terminal. Compared with a server, the terminal is directly oriented to the user and can directly present the processing result to the user.
In one possible implementation, the electronic device may provide an image processing switch control for turning the image processing function on and off. For example, the image processing method can be understood as a super-resolution algorithm, and the image processing switch control can also be referred to as a super-resolution switch.
In this implementation, the electronic device may determine whether to execute the steps of the image processing method according to the state of the image processing switch control. Specifically, when the electronic device acquires the first image, the state of the image processing switch control may be detected, and in response to the state of the image processing switch control being the on state, the following steps 302 to 309 are performed. In response to the state of the image processing switch control being the off state, the terminal can also directly render the first image without executing the subsequent steps.
302. The electronic equipment performs up-sampling on the first image according to the target resolution to obtain a second image of the target resolution, wherein the resolution of the first image is smaller than the target resolution.
Upsampling (or image enlarging) is essentially the process of enlarging an image; its primary purpose is to enlarge the original image so that it can be displayed on a higher-resolution display device. In one possible implementation, the upsampling process may be implemented by image interpolation (interpolating), i.e., by inserting values. Image interpolation inserts new pixels between the original pixels using a suitable interpolation algorithm. Understandably, through upsampling, the number of pixels in the image can be increased to improve image quality.
The upsampling process may be implemented by a plurality of interpolation methods, and several possible interpolation methods are provided below, and the embodiment of the present application does not limit which method is specifically adopted.
In one possible implementation, the electronic device can upsample the first image using the nearest-neighbor method to obtain the second image. The nearest-neighbor method requires no computation: among the four neighboring pixels of the pixel to be added, the gray value of the neighboring pixel closest to it is assigned to the pixel to be added. For example, as shown in fig. 4, let (i + u, j + v) be the coordinates of the pixel to be added, where i and j are positive integers and u and v are decimals greater than zero and less than 1, and the gray value of the pixel to be added is f(i + u, j + v). If (i + u, j + v) falls in area A, i.e., u < 0.5 and v < 0.5, the gray value of the pixel at the upper left corner is assigned to the pixel to be added; similarly, if it falls in area B, the gray value of the pixel at the upper right corner is assigned to it; if it falls in area C, the gray value of the pixel at the lower left corner is assigned to it; and if it falls in area D, the gray value of the pixel at the lower right corner is assigned to it.
In another possible implementation, the electronic device can upsample the first image using bilinear interpolation to obtain the second image. Specifically, the electronic device can linearly interpolate the gray values of the four neighboring pixels of the pixel to be added in two directions. As shown in fig. 5, assuming the gray scale changes linearly from f(i, j) to f(i, j + 1), we obtain f(i, j + v) = [f(i, j + 1) - f(i, j)] × v + f(i, j). Similarly, for (i + 1, j + v), we obtain f(i + 1, j + v) = [f(i + 1, j + 1) - f(i + 1, j)] × v + f(i + 1, j). The gray scale change from f(i, j + v) to f(i + 1, j + v) is also linear, so the gray value of the pixel to be added can be obtained as f(i + u, j + v) = (1 - u) × (1 - v) × f(i, j) + (1 - u) × v × f(i, j + 1) + u × (1 - v) × f(i + 1, j) + u × v × f(i + 1, j + 1).
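The bilinear formula above can be written directly as code. The following Python sketch evaluates the interpolated gray value for a single pixel to be added; the function name and the small example array are illustrative only.

    import numpy as np

    def bilinear_sample(img, i, j, u, v):
        # Gray value at (i + u, j + v), following the bilinear formula above.
        return ((1 - u) * (1 - v) * img[i, j]
                + (1 - u) * v * img[i, j + 1]
                + u * (1 - v) * img[i + 1, j]
                + u * v * img[i + 1, j + 1])

    # Example: f(0.25, 0.25) on a 2 x 2 block of gray values.
    block = np.array([[10.0, 20.0], [30.0, 40.0]])
    print(bilinear_sample(block, 0, 0, 0.25, 0.25))  # 17.5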
Of course, the upsampling process may also be implemented by other interpolation methods, such as cubic interpolation, "Inverse Distance to a Power" (inverse distance weighted interpolation), "Kriging", "Minimum Curvature", "Modified Shepard's Method", "Natural Neighbor Interpolation", "Nearest Neighbor Interpolation", multiple regression, "Radial Basis Function", "Triangulation with Linear Interpolation", "Moving Average", and "Local Polynomial"; the embodiments of the present application do not specifically limit how the upsampling process is implemented.
303. The electronic equipment acquires first gradient information of a pixel point in the second image.
After the electronic equipment obtains the second image through rapid up-sampling, appropriate target convolution parameters can be determined for the second image according to the gradient characteristics of the second image, so that convolution processing is performed on the second image, and a third image with a better display effect is obtained. The electronic device may first obtain the gradient feature of the pixel point in the second image through the step 303 and the following steps 304 and 305, and then perform the step of determining the target convolution parameter based on the gradient feature.
The electronic device may first obtain first gradient information of the pixel point, and then process the first gradient information to extract the gradient feature. The first gradient information of each pixel point is used for representing the change situation of the pixel value at the pixel point. Therefore, when the first gradient information is obtained, the pixel value of the currently calculated pixel point and the pixel value of the pixel point adjacent to the pixel point can be referred to.
Specifically, the electronic device may obtain a pixel value variation at the pixel point according to a pixel value of any pixel point in the second image and a pixel value of a first adjacent pixel point, and use the pixel value variation as the first gradient information, where a distance between the first adjacent pixel point and the pixel point is smaller than a first distance threshold. The first gradient information represents a change in pixel value between adjacent pixels, and thus, the first gradient information may also be referred to as a neighborhood gradient.
As for the first distance threshold, the first distance threshold may be set by a related technical person as required, and this is not limited in this embodiment of the application. For example, the first distance threshold may be one pixel, or a value greater than the first pixel and less than two pixels.
In one possible implementation, the first distance threshold is only used to explain what the first adjacent pixel is; the electronic device need not store a first distance threshold or calculate distances between pixels, and the distance is used only to describe the position of the first adjacent pixel. The electronic device can take the pixels positionally adjacent to a pixel as its first adjacent pixels. For example, as shown in fig. 6, for pixel 601, the electronic device uses the pixels 602 above, below, to the left of, and to the right of pixel 601 as the first adjacent pixels.
Accordingly, in step 303, the electronic device may obtain a differential at the pixel according to the pixel value of any pixel in the second image and the pixel values of its first adjacent pixels, and use the differential as the first gradient information. For example, the electronic device may calculate the neighborhood gradient of the Y channel of the first image, resulting in a differential (dxdx, dydy, dxdy), which is the first gradient information. The differential is expressed in terms of gradients and describes the gradient statistics at a pixel, and therefore it can also be called a gradient statistic component.
In a possible implementation manner, in step 303, the electronic device obtains the first gradient information of the pixel points in the second image on the luminance channel. The first image is in YUV format, that is, the image data of the first image is data in YUV format, where Y is the luminance channel and U and V are the chrominance channels. The pixel value on the luminance channel is a gray value; the shape of each object in the second image can be clearly recognized from the gray values of its pixel points, with only the color information missing, which has little influence on the presentation of the final image content. Therefore, performing the calculation based on the information on the luminance channel can accurately realize super-resolution while effectively reducing the amount of calculation, thereby improving the image processing efficiency.
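The following Python sketch illustrates, under the assumption that the Y plane of the second image is available as a NumPy array, how the differentials (dxdx, dydy, dxdy) described above could be computed; the function and variable names are illustrative only.

import numpy as np

def first_gradient_information(y_plane: np.ndarray):
    """Per-pixel differentials (dxdx, dydy, dxdy) computed from the luminance channel."""
    y = y_plane.astype(np.float32)
    dy, dx = np.gradient(y)      # finite differences toward the adjacent pixel points
    dxdx = dx * dx               # gradient statistical components
    dydy = dy * dy
    dxdy = dx * dy
    return dxdx, dydy, dxdy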
304. And the electronic equipment performs smoothing processing on the first gradient information of the pixel point in the second image to obtain second gradient information of the pixel point.
After the electronic equipment obtains the first gradient information, the electronic equipment can further process the first gradient information, so that pixel points in the image corresponding to the second gradient information are more coherent, and transition between the pixel points is more natural.
Smoothing, which is also called blurring, is a simple and frequently used image processing method. The process of smoothing can be understood as follows: for one pixel point, the contents expressed by adjacent pixel points in an image are usually similar, so the similarity between them is high. Smoothing an image can therefore be a process of determining the pixel value of a pixel point according to the pixel values of its adjacent pixel points, so that the correlation between the pixel value of the pixel point and those of its adjacent pixel points becomes stronger, the transition between pixel points is more natural, and the connection is more coherent.
The smoothing process may include a variety of processing approaches, such as gaussian blurring, normalized block filtering, median filtering, bilateral filtering, and the like. The embodiment of the present application does not specifically limit which manner is used.
In one possible implementation, the smoothing process may be a Gaussian blurring process. Gaussian blur (also called Gaussian smoothing) is a common smoothing method, and is widely used in image processing software such as Adobe Photoshop, GIMP (GNU Image Manipulation Program) and Paint, where GNU is a recursive acronym for "GNU's Not Unix!", the name of a free operating system. Gaussian blur can reduce image noise and reduce the level of detail. In general, after Gaussian blurring, the visual effect of an image is as if the image were observed through frosted glass. From a mathematical point of view, Gaussian blurring of an image is convolving the image with a normal distribution; since the normal distribution is also called the Gaussian distribution, this image processing technique is called Gaussian blur. Gaussian blur can be understood as a low-pass filter applied to the image.
Specifically, the smoothing process may be: the electronic device performs weighted summation on the first gradient information of any pixel point in the second image and the first gradient information of second adjacent pixel points to obtain second gradient information of the pixel point, where the distance between a second adjacent pixel point and the pixel point is smaller than a second distance threshold. The second distance threshold may be set by a person skilled in the art as required, for example, two pixels, which is not limited in this embodiment of the application. For example, as shown in fig. 6, for a pixel point 601, the electronic device may take the pixel points 602 above, below, to the left of and to the right of the pixel point 601, together with the pixel points 603 to the upper left, lower left, upper right and lower right of the pixel point 601, as the second adjacent pixel points. Fig. 6 is merely an exemplary illustration; the second adjacent pixel points may also include only the pixel points 602, or may further include other pixel points. It should be noted that, in an implementation using Gaussian blur, the second distance threshold may be understood as the Gaussian blur radius, and the weighted summation refers to the values of the pixel points within the Gaussian blur radius centered on the pixel point. The weights of the second adjacent pixel points may be the same or different; in Gaussian blur, the weights satisfy a normal distribution.
In step 303, after the electronic device obtains the first gradient information, the first gradient information of all pixel points may form a gradient map, and the electronic device may perform Gaussian blurring on the obtained gradient map. If the image is divided into image blocks of dimension d × d, the corresponding dimension of the Gaussian blur kernel is (d−1) × (d−1), where d is a positive integer.
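A minimal sketch of this smoothing step, assuming OpenCV's Gaussian blur and example values for the block size d and the standard deviation sigma (both assumptions), is shown below.

import cv2

def smooth_gradients(dxdx, dydy, dxdy, d: int = 8, sigma: float = 2.0):
    """Gaussian-blur each gradient map; the (d-1) x (d-1) kernel size must be odd."""
    k = d - 1
    blur = lambda g: cv2.GaussianBlur(g, (k, k), sigma)
    return blur(dxdx), blur(dydy), blur(dxdy)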
305. And the electronic equipment acquires the gradient characteristics corresponding to the second gradient information.
After the second gradient information is obtained through smoothing processing, the electronic equipment can convert the second gradient information into a gradient feature capable of clearly expressing the gradient, and the gradient feature can better embody the change feature of the pixel point.
In an implementation in which the second gradient information of each pixel point includes gradient information in different directions, the electronic device may obtain the gradient feature from that gradient information. In one possible implementation, the gradient feature includes at least one of an angle, a strength and a coherence (also referred to as a correlation); that is, the gradient feature may be any one of the three, any two of the three, or all three. Accordingly, in this step 305, the electronic device may obtain at least one of the angle, strength and coherence of the gradient of any pixel point according to the gradient information of the pixel point in different directions.
The manner in which these three gradient features are acquired is illustrated below. In a specific possible embodiment, for a pixel point k, the gradients dx and dy of the pixel point in the x and y directions are obtained when calculating the first gradient information. After the smoothing process, the second gradient information in different directions (dxdx, dydy, dxdy) is obtained. From these components, λ1 and λ2, the larger and smaller eigenvalues of the 2 × 2 matrix formed by dxdx, dxdy and dydy, are obtained through the following formula one and formula two.
λ1 = ((dxdx + dydy) + √((dxdx − dydy)² + 4·dxdy²))/2    Equation one

λ2 = ((dxdx + dydy) − √((dxdx − dydy)² + 4·dxdy²))/2    Equation two
Next, the electronic device may continue to obtain the gradient features through formula three, formula four and formula five: the angle θk, the strength sk and the coherence ck. These three gradient features reflect the gradient situation near the pixel point, and therefore may also be called local gradient statistics.
θk = tan⁻¹((λ1 − dxdx)/dxdy)    Equation three

sk = λ1    Equation four

ck = (√λ1 − √λ2)/(√λ1 + √λ2)    Equation five
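The following Python sketch evaluates equations one to five for a single pixel from the smoothed components dxdx, dydy and dxdy; the small eps term and the use of atan2 are assumptions added for numerical robustness and are not part of the original formulas.

import math

def gradient_features(dxdx: float, dydy: float, dxdy: float, eps: float = 1e-8):
    trace = dxdx + dydy
    root = math.sqrt((dxdx - dydy) ** 2 + 4.0 * dxdy ** 2)
    lam1 = (trace + root) / 2.0                       # equation one
    lam2 = (trace - root) / 2.0                       # equation two
    theta = math.atan2(lam1 - dxdx, dxdy + eps)       # equation three (angle)
    strength = lam1                                   # equation four (strength)
    coherence = (math.sqrt(max(lam1, 0.0)) - math.sqrt(max(lam2, 0.0))) / \
                (math.sqrt(max(lam1, 0.0)) + math.sqrt(max(lam2, 0.0)) + eps)  # equation five
    return theta, strength, coherence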
Steps 303 to 305 are processes of extracting features of the second image to obtain gradient features of the second image. In the process, gradient features can be extracted by means of obtaining first gradient information, smoothing and feature transformation. The process can also be implemented in other ways, for example, the electronic device performs convolution processing on the second image to obtain the gradient feature. The embodiments of the present application do not limit this.
In a possible implementation manner, in the feature extraction process, the electronic device may divide the image into different image blocks, and perform feature extraction on the image blocks respectively to obtain a gradient feature of each image block. Specifically, the electronic device may perform windowing on the second image to obtain at least one image block, and perform feature extraction on the at least one image block to obtain a gradient feature of the at least one image block. Accordingly, in step 306, the electronic device may determine a target convolution parameter corresponding to the gradient feature of the at least one image block according to the correspondence between the gradient feature and the convolution parameter. By dividing the image blocks, the features of the image can be more finely extracted, the extracted gradient features are more accurate, and further, a better processing effect can be achieved by executing a subsequent image processing method according to the gradient features.
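A minimal sketch of the windowing step, assuming non-overlapping d × d blocks (the stride and border handling are assumptions), could look as follows.

import numpy as np

def windowed_blocks(y_plane: np.ndarray, d: int = 8):
    """Return ((top, left), block) pairs covering the image with d x d windows."""
    h, w = y_plane.shape
    blocks = []
    for top in range(0, h - d + 1, d):
        for left in range(0, w - d + 1, d):
            blocks.append(((top, left), y_plane[top:top + d, left:left + d]))
    return blocks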
306. And the electronic equipment determines a target convolution parameter corresponding to the gradient feature of the second image according to the corresponding relation between the gradient feature and the convolution parameter.
Different gradient features correspond to different convolution parameters, and it can be understood that when the gradient features of the second image are different, the convolution processing to be performed on the pixel values of the pixel points is different, and the convolution parameters used in the convolution processing are different. After the electronic device acquires the gradient feature of the second image, the target convolution parameter can be determined according to the gradient feature, that is, how to process the second image is determined, so that an image with a good display effect can be obtained.
The corresponding relationship between the gradient feature and the convolution parameter may be stored in the electronic device, and when the convolution parameter corresponding to a certain gradient feature needs to be determined, the corresponding relationship may be queried. Therefore, the electronic equipment can quickly inquire by taking the corresponding relation as a reference without adopting other complex calculation modes, and the image processing efficiency can be effectively improved.
The corresponding relationship can be obtained in various ways, and the corresponding relationship can be set by related technicians according to experience, or can be obtained by analyzing a large number of images.
In one possible implementation, the correspondence may be determined during an image processing model training process. In this implementation, the image processing method is implemented by an image processing model.
In this implementation, after step 301, the electronic device may input the acquired first image into an image processing model, perform an image processing step by the image processing model, and output a third image. Accordingly, the image processing steps may be: the electronic equipment inputs the first image into an image processing model, the steps of up-sampling, feature extraction, target convolution parameter determination and convolution processing are executed by the image processing model, and the third image is output.
Alternatively, the image processing model can process the input image into an image of one particular target resolution, that is, the model is used for processing an input image and outputting an image of that specific resolution. In this case, the electronic device may only acquire the first image in step 301 and input it into the image processing model for processing, the target resolution having been determined during model training.
Alternatively, the image processing model can process the input image into images of multiple target resolutions, that is, the image processing model is used for processing the input image based on the input target resolution and outputting the image of the target resolution. That is, after acquiring the first image and the target resolution in step 301, the electronic device may input both the first image and the target resolution into the image processing model, where the target resolution may be continuously set by a relevant technician as required, or may be set in response to a resolution setting instruction.
During image processing model training, the electronic device can obtain the convolution parameters based on the gradient features extracted from the low-resolution sample images and the corresponding high-resolution sample images, so as to establish the correspondence between the gradient features and the convolution parameters.
Each convolution parameter may be identified by corresponding identification information. The corresponding relationship may store identification information of the convolution parameter and corresponding gradient characteristics. In step 306, the electronic device may obtain identification information of the target convolution parameter through the gradient feature, and obtain the target convolution parameter according to the identification information to perform convolution processing.
In the mode of dividing the image into image blocks for feature extraction, the electronic device acquires the gradient feature of each image block, can classify the image blocks accordingly, determines the identification information of the convolution parameter corresponding to each image block, and further acquires the corresponding convolution parameter to perform convolution processing on the corresponding image block.
For example, in one specific example, image blocks may be analyzed according to gradient features of the image blocks, similar image blocks may be determined, and classified as a type for convolution processing. It can be understood that: and classifying the image blocks into a bucket for processing, wherein each bucket corresponds to a convolution parameter. Then, the identification information of the convolution parameter can be understood as a bucket index, and in step 306, the electronic device can calculate the bucket index corresponding to the image block according to the obtained local gradient statistic. The convolution parameters (which may also be referred to as convolution kernel parameters) corresponding to the bucket index may be obtained by fitting in the above-mentioned training of the image processing model. For example, in one specific example, the convolution parameters may be obtained by solving a linear fitting problem using a least squares method.
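As a hedged illustration of the least-squares fitting mentioned above (the data layout below is an assumption, not the patent's own), one bucket's convolution kernel could be fit as follows, where each row of `patches` is a flattened upsampled patch assigned to that bucket and `hr_pixels` holds the corresponding high-resolution center pixels.

import numpy as np

def fit_bucket_kernel(patches: np.ndarray, hr_pixels: np.ndarray) -> np.ndarray:
    """patches: (N, k*k) flattened upsampled patches; hr_pixels: (N,) HR targets."""
    kernel_flat, *_ = np.linalg.lstsq(patches, hr_pixels, rcond=None)
    k = int(round(patches.shape[1] ** 0.5))
    return kernel_flat.reshape(k, k)     # e.g. a 5 x 5 convolution kernel for this bucket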
In one possible implementation, the correspondence may be stored by way of a convolution table. The above-mentioned target convolution parameter determining step may be implemented by table lookup. The table look-up mode is quick and convenient, and the image processing efficiency can be improved.
In a possible implementation manner, after the electronic device determines the gradient features, it may further process the gradient features and then perform the step of determining the target convolution parameters. Specifically, the electronic device quantizes the gradient features, and performs the step of determining the target convolution parameters based on the quantized gradient features. In a specific possible embodiment, the angle in the gradient features may be quantized to one of 24 angles, the strength to one of 9 strength levels, and the coherence to one of 9 coherence levels. That is, the quantization step can partition the gradient features using a 24 × 9 × 9 structure. By dividing the gradient features more finely, image detail edges can be processed better.
For example, the angle is denoted by A, the strength by B, and the coherence by C. The angles include A1, A2, … …, A24; the strengths include B1, B2, … …, B9; and the coherences include C1, C2, … …, C9. Different permutations and combinations of the gradient features may correspond to different convolution parameters, i.e. to different buckets.
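A sketch of the quantization and bucket-index computation is given below, using the 24 × 9 × 9 partition described above; the specific bin boundaries for strength and coherence are assumptions for illustration.

import numpy as np

ANGLE_BINS, STRENGTH_BINS, COHERENCE_BINS = 24, 9, 9

def bucket_index(theta: float, strength: float, coherence: float) -> int:
    """Map a quantized (angle, strength, coherence) triple to one bucket index."""
    a = int((theta % np.pi) / np.pi * ANGLE_BINS) % ANGLE_BINS
    s = min(int(strength / 32.0), STRENGTH_BINS - 1)           # assumed strength scale
    c = min(int(coherence * COHERENCE_BINS), COHERENCE_BINS - 1)
    return (a * STRENGTH_BINS + s) * COHERENCE_BINS + c        # 24 * 9 * 9 buckets in total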
307. And the electronic equipment performs convolution processing on the second image based on the target convolution parameter to obtain a third image, wherein the resolution of the third image is the target resolution.
After the electronic device obtains the target convolution parameter, convolution processing can be performed on the second image, and the pixel value of the pixel point in the second image can be updated through the convolution processing, so that the image quality is improved. In a possible implementation manner, the convolution parameter may be a convolution matrix, and the image block is subjected to convolution processing by the convolution matrix, so that the third image is obtained.
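The convolution step can be pictured with the following sketch, in which each pixel of the upsampled second image is filtered with the kernel selected for its bucket; `conv_table` and `bucket_map` are assumed inputs, and the per-pixel Python loop is written for clarity rather than speed.

import numpy as np

def convolve_with_buckets(second_y: np.ndarray, bucket_map: np.ndarray, conv_table: dict) -> np.ndarray:
    """Produce the third image's Y plane by filtering each pixel with its bucket's kernel."""
    k = next(iter(conv_table.values())).shape[0]
    r = k // 2
    padded = np.pad(second_y.astype(np.float32), r, mode="edge")
    third = np.empty_like(second_y, dtype=np.float32)
    h, w = second_y.shape
    for i in range(h):
        for j in range(w):
            kernel = conv_table[bucket_map[i, j]]      # target convolution parameter
            patch = padded[i:i + k, j:j + k]
            third[i, j] = float((patch * kernel).sum())
    return np.clip(third, 0, 255).astype(np.uint8)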
In the manner in which the image processing method is implemented by an image processing model, the convolution processing is also performed by the image processing model.
The following explains the training process of the image processing model and the manner of determining the correspondence relationship in the training process.
Specifically, the training process of the image processing model can be realized through steps one to five.
The method comprises the steps that firstly, electronic equipment obtains a sample first image and a sample target image, the resolution of the sample target image is target resolution, and the resolution of the sample first image is smaller than the target resolution.
In the first step, the sample first image is a low resolution image, which may be referred to as an LR image herein. The sample target image is the high resolution image corresponding to the low resolution image, which may be referred to herein as the HR image.
And secondly, the electronic equipment performs up-sampling on the sample first image to obtain a sample second image with the target resolution.
And thirdly, the electronic equipment extracts the characteristics of the sample second image to obtain the gradient characteristics of the second image, wherein the gradient characteristics are used for indicating the relationship between the pixel points and the adjacent pixel points in the second image.
And step four, the electronic equipment determines a target convolution parameter corresponding to the gradient feature according to the gradient feature of the second image and the sample target image.
The steps two to four are similar to the steps 303 to 305, and are not described herein again.
And fifthly, the electronic equipment generates the corresponding relation between the gradient feature and the convolution parameter based on the target convolution parameter.
In one possible implementation, the image processing model is trained based on a sample first image and at least two sample target images, the resolutions of the at least two sample target images including at least one target resolution.
In this fifth step, the convolution parameters determined by the electronic device may be used by a single convolutional layer for convolution processing, that is, in this step 307, the convolution processing step may be implemented by a single convolutional layer.
Since there are a plurality of sample first images, the electronic device can fit different convolution parameters through step five, thereby forming a convolution table or a single-layer convolution group. The single-layer convolution group includes a plurality of single convolutional layers, each of which represents one convolution parameter and can perform one convolution processing on an image.
In one possible implementation, the size of the single convolutional layer may be 5 × 5. Although the convolution process has been simplified to a single layer, the convolution is still the most computationally intensive part. Fig. 7 is a schematic diagram of the execution time of each filter drawing command according to an embodiment of the present application. As shown in fig. 7, it is found through experiments that a convolution with a size of 7 × 7 takes approximately 30 milliseconds (ms), while the other processing procedures require approximately 3 ms in total; with the other operations unchanged, changing the convolution kernel to a size of 5 × 5 directly reduces the operation time by about 30%. By changing the size of the convolution layer, the algorithm effect is preserved as far as possible while the amount of computation is effectively reduced, which can improve the image processing efficiency. In fig. 7, eos refers to the image processing software used, rgbtoyuv refers to converting the RGB format to the YUV format, and yuvtorgb refers to converting the YUV format to the RGB format.
In one possible implementation, the image processing model includes a plurality of serial filters, and after the image processing model performs the upsampling step, the steps of feature extraction and convolution processing may be performed by the plurality of serial filters. For example, as shown in fig. 8, the overall architecture of image processing includes: a player 801, a decoding layer 802, a filter layer 803, and a rendering layer 804. The player 801 can acquire an image, decode the image through the decoding layer 802, filter the decoded image data through the filter layer 803, and render and display the filtered image through the rendering layer 804. Taking 540P YUV data obtained after decoding as an example, as shown in fig. 9 (a decomposition of the filter layer 803 in fig. 8), each processing unit is referred to as a filter. The plurality of serial filters may include a gradient filter 901, a Gaussian filter 902, a feature filter 903, and a convolution filter 904. The decoded 540P YUV data 905 (i.e., the first image) is enlarged to obtain 1080P YUV data 906 (i.e., the second image), which is then split into two paths: one path passes through the gradient filter 901, the Gaussian filter 902 and the feature filter 903, which respectively perform the first gradient information acquisition step, the Gaussian blurring step and the gradient feature acquisition step on the second image, and the other path goes directly to the convolution filter 904. The convolution filter 904 is configured to query the convolution table 907 based on the extracted gradient features to obtain the corresponding target convolution parameters, so as to perform convolution processing on the input 1080P YUV data 906, thereby obtaining the third image. Before rendering, the electronic device may also convert the format of the convolved image through the RGB conversion filter 908 and render it onto the screen 909, where "onto the screen" means displayed on the display screen. As can be seen from fig. 9, the filter layer provided by the present application realizes multiple inputs and multiple outputs and is more flexible in application. As shown in fig. 10, LR, YUV, gradient, Gaussian, feature and HR in fig. 10 are the image display effects obtained before processing and after the processing of each filter, respectively.
It should be noted that, the plurality of filters process the image in a pipeline manner, so that the filters can be conveniently plugged and pulled out, and the effect of each filter can be debugged, which is a convenient and fast implementation manner.
In one possible implementation, during the above process of performing the feature extraction and convolution processing by the plurality of serial filters, the electronic device may create at least one object, and perform the steps of feature extraction, determining the target convolution parameters and convolution processing through the at least one object, where the number of objects is smaller than the number of filters. In this way, an object does not need to be created for each filter; by combining the filters, the number of object creations can be reduced, time consumption can be lowered, and image processing efficiency can be improved. As shown in fig. 10, the intermediate result generated by each filter can be obtained, and the pipeline manner allows filters to be conveniently plugged in and unplugged so as to debug the effect of each filter, which is a convenient and fast implementation.
For example, in a specific example, the filter layer is optimized by combining some or all of the plurality of filters; for the combined filter, the objects required for rendering, such as the pipeline controller, frame buffer and textures, are created once, so that the functions of the combined filters are implemented with one object. By combining multiple filter renderers, the number of pipeline controller, frame buffer and texture creations and texture submissions on the GPU is reduced, which effectively reduces GPU occupation.
A specific example is provided below with reference to fig. 11 and 12. As shown in fig. 11, in the training process of the image processing model, an input low-resolution image (LR) 1101 may be subjected to fast upsampling 1102 and block-based feature extraction 1103; the input LR and the labeled high-resolution image (HR) 1105 are then solved by a fast solver 1104 to determine the convolution parameters. A plurality of convolution parameters are obtained by fitting over a plurality of LR-HR pairs, and these convolution parameters form a single-layer convolution group 1106. As shown in fig. 12, after the training of the image processing model is completed, fast upsampling 1202 and block-based feature extraction 1203 can be performed on an input low-resolution image 1201, then a filtering index 1204 is computed to obtain the bucket index of each image block, and the corresponding bucket (convolution parameter) is found from the single-layer convolution group 1205 for convolution processing.
308. And the electronic equipment sharpens the third image to obtain a fourth image.
The step 308 is an optional step, and after the step 307, the electronic device may further execute the step 308, and render the image after sharpening, or directly render the third image without sharpening.
For the sharpening process, the electronic device may obtain difference information between the third image and the first image, and obtain the fourth image based on the difference information, a target coefficient and the first image, where the sharpness of a target region in the fourth image is greater than the sharpness of the target region in the first image. The target coefficient is a coefficient applied to the difference information and controls how much of the difference information is added to the first image. Understandably, the third image can be obtained by adding the difference information to the first image; if the target coefficient is less than 1, the super-resolution effect of the fourth image is weaker than that of the third image, and if the target coefficient is greater than 1, the super-resolution effect of the fourth image is stronger than that of the third image.
The target coefficient may be set by a person skilled in the art as needed, or may be determined based on the coefficient setting instruction.
In one specific example, the sharpening process may be implemented by an unsharp Mask (USM).
In a possible implementation manner, a coefficient setting function can be provided, and a user can set a target coefficient according to requirements to adjust the super-resolution effect, so that a more flexible super-resolution function can be realized. Specifically, the electronic device may acquire the target coefficient in response to the coefficient setting instruction. Further, in subsequent step 309, the electronic device may render a sharpened fourth image based on the set target coefficients.
For example, as shown in fig. 13, x(n, m) is the input image (i.e., the first image), y(n, m) is the output image (i.e., the third image), and z(n, m) is the correction signal. Here the difference between the super-resolved image and the original image is used as the correction signal, i.e., z(n, m) = y(n, m) − x(n, m), which can be determined by a linear high-pass filter (Linear HP Filter). Then λ is introduced as the coefficient for controlling the super-resolution effect, i.e., the target coefficient. The fourth image ultimately presented to the user is x(n, m) + λ·z(n, m).
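A minimal sketch of this correction step is given below, assuming x and y are same-sized arrays (for example, the upsampled input and the super-resolved output) and that λ is the user-controlled target coefficient; the names and the example value of λ are assumptions.

import numpy as np

def sharpen_with_coefficient(x: np.ndarray, y: np.ndarray, lam: float = 1.2) -> np.ndarray:
    z = y.astype(np.float32) - x.astype(np.float32)   # correction signal z(n, m) = y - x
    fourth = x.astype(np.float32) + lam * z           # x(n, m) + lambda * z(n, m)
    return np.clip(fourth, 0, 255).astype(np.uint8)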
For example, as shown in fig. 14, the user may select the target resolution, or may adjust the super-resolution effect by adjusting the target coefficient. For the target resolution, several candidate resolutions may be provided for selection, for example, high definition. The target coefficient may be adjusted through the horizontal bar option of the smart image quality setting: the user drags the slider in the bar option to adjust the super-resolution effect; for example, the effect is measured on a scale of 0–100 and is currently dragged to 38, which has a corresponding relationship with the target coefficient λ. As shown in fig. 15, this solution produces a very significant image quality enhancement effect in practice across a plurality of quality levels. It can be seen that, with the high-definition file (540p) as input, the subjective sharpness of the super-resolution result (1080p) is improved significantly, even approaching the sharpness of the Blu-ray file (1080p). Meanwhile, the objective quality of the super-resolution algorithm has also been tested on various types of online video data; from the test results (shown in table 1 below), the super-resolution algorithm shows significant improvements in VMAF (Video Multi-method Assessment Fusion), PSNR (Peak Signal-to-Noise Ratio), SSIM (Structural SIMilarity) and other evaluation indexes.
TABLE 1
(Table 1, provided as an image in the original publication, lists the VMAF, PSNR and SSIM scores measured for each super-resolution type.)
In the super-resolution type names in table 1, tv refers to television and esr refers to Enhanced Super-Resolution.
The image processing method provided by the embodiment of the application has good applicability, and through experiments, the method can achieve a real-time super-resolution effect in most machine types.
In the above-mentioned implementation of the image processing method by the image processing model, step 308 may be executed by the image processing model, so that the image processing model outputs the fourth image.
309. The electronic device renders the fourth image.
After the electronic device acquires the fourth image, the fourth image may be rendered and displayed. If the electronic equipment is a terminal, the terminal can directly render and display the fourth image after processing the fourth image, coding is not needed, and the super-resolution effect is achieved on a rendering layer. If the electronic device is a server, the server may also compress the fourth image and transmit the compressed fourth image to the terminal for rendering and displaying.
In a possible implementation manner, the formats of the first image, the second image, the third image and the fourth image may be YUV formats. When rendering an image, an electronic device typically renders the image in RGB format. Then the electronic device may convert the format of the fourth image into RGB format and render the fourth image in RGB format in step 309.
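A minimal sketch of the final format conversion before rendering is given below, assuming OpenCV's color conversion and a packed 3-channel YUV representation; the appropriate conversion flag depends on the actual YUV layout used (for example, I420 data would use cv2.COLOR_YUV2RGB_I420 instead).

import cv2

def to_rgb_for_rendering(fourth_image_yuv):
    # Convert the YUV fourth image to RGB so it can be rendered and displayed.
    return cv2.cvtColor(fourth_image_yuv, cv2.COLOR_YUV2RGB)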
The image processing method provided by the embodiment of the application is a super-resolution algorithm, and there are two modes of deployment. One is implementation on the mobile terminal, with the advantage that super-resolution is performed at the rendering layer, no secondary encoding is needed, and server cost is saved; the disadvantage is that terminal performance and power consumption are limited, so algorithms with a higher computation load cannot be used. The opposite is implementation in the cloud, where algorithms with better effect can be used, for example super-resolving 1080P to 2K before delivery; the disadvantages are that the downlink channel is still a constraint, secondary encoding is needed after super-resolution, and the server cost is higher. Therefore, a hybrid scheme is finally selected: resolutions of 720P and below are super-resolved to 1080P at the terminal, and super-resolution above 1080P is performed at the server.
Specifically, the flow of the above-described image processing method includes the following two cases.
In case one, the target resolution is less than or equal to the resolution threshold, and the image processing method is applied to the terminal, that is, after the terminal performs the steps 301 to 307, the terminal may render the third image.
In case two, the target resolution is greater than the resolution threshold, the method is applied to the server. That is, after the server performs the above steps 301 to 307, the server may compress the third image to obtain compressed data of the third image, transmit the compressed data to the terminal, and render the third image based on the compressed data by the terminal. Certainly, when the image processing process includes a sharpening process, the server may compress the fourth image to obtain compressed data, and then send the compressed data to the terminal for rendering.
The resolution threshold may be set by a person skilled in the art according to performance or usage requirements of the terminal and the server, for example, the resolution threshold is 1080P, which is not limited in this embodiment of the application.
The foregoing steps 301 to 309 describe an image super-resolution method, and in the embodiment of the present application, the image processing method may be used to super-resolve a video. Specifically, the electronic device may obtain at least two frames of images of the target video, perform the steps of upsampling, feature extraction, determining a target convolution parameter and convolution processing on at least one frame of image of the at least two frames of images, obtain at least one frame of target image corresponding to the at least one frame of image, and render images other than the at least one frame of image and the at least one frame of target image of the at least two frames of images. For example, the image processing method may be applied in a live scene, which may be a game live scene, for example. In the scene, the electronic equipment can perform the super-resolution processing on the images in the live stream in real time through the image processing method to obtain the images with the target resolution, so that the diversified live broadcast requirements of users are met.
In a possible implementation manner, a frame dropping mechanism may be provided. By configuring the frame dropping mechanism with the first target threshold, when it is detected that the time interval between two adjacent frames is too short, a frame can be dropped, so as to avoid the situation in which super-resolved images and non-super-resolved images are mixed together and the super-resolution effect therefore cannot be perceived. Specifically, the electronic device may obtain, in real time, the time interval between the rendering times of any two frames of images in the target video, and discard either one of the two frames in response to the time interval being smaller than the first target threshold.
In a possible implementation manner, a frame rate control mechanism may be further configured, and the electronic device may determine whether frame dropping processing is required according to a real-time frame rate, so as to ensure smooth video playing. Specifically, the electronic device may obtain the frame rate of the target video in real time, update the rendering duration of each frame of image according to the frame rate, and perform frame dropping processing on the target video in response to the rendering duration being greater than the second target threshold.
For the first target threshold and the second target threshold, the two thresholds may be set by the skilled person as required, and the two thresholds may be the same, e.g. both corresponding to 25 frames/second. The two thresholds may also be different. The embodiments of the present application do not limit this.
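The two mechanisms above can be sketched as follows, assuming both thresholds correspond to 25 frames per second; the variable names and the quadratic adjustment formula are illustrative assumptions.

TARGET_FPS = 25.0
first_target_threshold = 1.0 / TARGET_FPS      # minimum interval between rendered frames
render_budget = 1.0 / TARGET_FPS               # per-frame rendering duration (second threshold)
last_render_time = 0.0

def should_drop(now: float) -> bool:
    """Drop a frame when the interval to the previously rendered frame is too short."""
    global last_render_time
    if now - last_render_time < first_target_threshold:
        return True
    last_render_time = now
    return False

def update_budget(measured_fps: float) -> None:
    """Adjust the allowed single-frame rendering duration from the real-time frame rate."""
    global render_budget
    ratio = measured_fps / TARGET_FPS
    render_budget *= ratio * ratio              # quadratic adjustment toward the target rate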
For example, as shown in fig. 16, (a) in fig. 16 shows the GPU (Graphics Processing Unit) occupation in different situations. It can be seen that, without super-resolution, single-frame rendering takes little time and the GPU occupation is only about 4. After a super-resolution step implemented with a complex related-art algorithm is added, the GPU occupation becomes very large. By optimizing the super-resolution processing (the optimization refers to the image processing method provided by the present application), the GPU occupation can be reduced, and with the frame rate control described above, the GPU occupation is further optimized; in the experimental data, it is about 46. Specifically, a single-frame rendering time limit can be defined, and when the measured interval between two frames is smaller than this value, the frame is dropped directly (if frames were not dropped, super-resolved pictures and normal pictures would be mixed together and the super-resolution effect would not be obvious to the naked eye). As shown in (b) of fig. 16, the actually rendered frame rate is counted once per second, and the rendering time of a single frame is dynamically increased (when more than 25 frames are rendered) or decreased (when fewer than 25 frames are rendered) along a quadratic curve according to the frame rate. Experiments show that this scheme can keep the video playing frame rate basically stable at 25 frames per second.
In the embodiment of the application, a new factor of gradient characteristics is introduced, and the target convolution parameters are directly determined from the existing corresponding relation according to the new factor, so that the time consumption of the determination process of the target convolution parameters is low, the image does not need to be subjected to fine characteristic extraction, the nonlinear mapping is also not needed, and the time consumption of characteristic extraction and processing is greatly reduced. And the high-resolution image can be obtained by performing one-step convolution processing on the target convolution parameter, and compared with a mode of reconstructing based on the extracted image characteristics, the method effectively simplifies the image processing steps, thereby improving the image processing efficiency.
All the above optional technical solutions can be combined arbitrarily to form optional embodiments of the present application, and are not described herein again.
Fig. 17 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application, and referring to fig. 17, the apparatus includes:
an upsampling module 1701, configured to upsample a first image according to a target resolution to obtain a second image of the target resolution, where the resolution of the first image is smaller than the target resolution;
a feature extraction module 1702, configured to perform feature extraction on the second image to obtain a gradient feature of the second image, where the gradient feature is used to indicate a relationship between a pixel point in the second image and an adjacent pixel point;
a determining module 1703, configured to determine a target convolution parameter corresponding to the gradient feature of the second image according to a corresponding relationship between the gradient feature and the convolution parameter;
a convolution module 1704, configured to perform convolution processing on the second image based on the target convolution parameter to obtain a third image, where a resolution of the third image is the target resolution.
In one possible implementation, the feature extraction module 1702 includes a first obtaining unit, a smoothing unit, and a second obtaining unit;
the first obtaining unit is used for obtaining first gradient information of a pixel point in the second image;
the smoothing unit is used for smoothing the first gradient information of the pixel point in the second image to obtain second gradient information of the pixel point;
the second obtaining unit is configured to obtain a gradient feature corresponding to the second gradient information.
In a possible implementation manner, the first obtaining unit is configured to obtain a pixel value variation at the pixel point according to a pixel value of any one pixel point in the second image and a pixel value of a first adjacent pixel point, and use the pixel value variation as the first gradient information, where a distance between the first adjacent pixel point and the pixel point is smaller than a first distance threshold.
In a possible implementation manner, the first obtaining unit is configured to obtain a differential at the pixel point according to a pixel value of any one pixel point in the second image and a pixel value of a first adjacent pixel point, and use the differential as the first gradient information.
In a possible implementation manner, the first obtaining unit is configured to obtain first gradient information of a pixel point in the second image on the luminance channel.
In a possible implementation manner, the smoothing unit is configured to perform weighted summation on the first gradient information of any pixel in the second image and the first gradient information of a second adjacent pixel to obtain second gradient information of the pixel, where a distance between the second adjacent pixel and the pixel is smaller than a second distance threshold.
In a possible implementation manner, the second gradient information of one pixel point includes gradient information in different directions;
the gradient feature comprises at least one of an angle, an intensity, and a correlation;
the second obtaining unit is used for obtaining at least one of the angle, the strength and the correlation of the gradient of any pixel point according to the gradient information of the pixel point in different directions.
In a possible implementation manner, the smoothing unit is configured to perform gaussian blurring processing on the first gradient information of the pixel point in the second image to obtain second gradient information.
In one possible implementation, the determining module 1703 is configured to:
quantifying the gradient feature;
based on the quantified gradient features, the step of determining target convolution parameters is performed.
In one possible implementation, the feature extraction module 1702 is configured to:
windowing the second image to obtain at least one image block;
performing feature extraction on the at least one image block to obtain gradient features of the at least one image block;
the determining module 1703 is configured to determine a target convolution parameter corresponding to the gradient feature of the at least one image block according to the corresponding relationship between the gradient feature and the convolution parameter.
In one possible implementation, the apparatus further includes:
the sharpening module is used for sharpening the third image to obtain a fourth image;
and the first rendering module is used for rendering the fourth image.
In one possible implementation, the sharpening module is to:
acquiring difference information of the third image and the first image;
and acquiring a fourth image based on the difference information, the target coefficient and the first image, wherein the sharpness of the target area in the fourth image is greater than the sharpness of the target area in the first image.
In one possible implementation, the apparatus further includes:
the first updating module is used for responding to a coefficient setting instruction and updating the target coefficient;
the sharpening module is configured to perform the step of obtaining the fourth image based on the updated target coefficient.
In one possible implementation, the apparatus further includes:
the first acquisition module is used for acquiring at least two frames of images of a target video;
the upsampling module 1701, the feature extraction module 1702, the determination module 1703 and the convolution module 1704 are respectively configured to perform the steps of upsampling, feature extraction, determining a target convolution parameter and performing convolution processing on at least one frame of image of the at least two frames of images to obtain at least one frame of target image corresponding to the at least one frame of image;
and the second rendering module is used for rendering images except the at least one frame of image in the at least two frames of images and the at least one frame of target image.
In one possible implementation, the apparatus further includes:
the second acquisition module is used for acquiring the time interval between the rendering times of any two frames of images in the target video in real time;
and the first frame dropping module is used for dropping any one of the any two frame images in response to the time interval being smaller than the first target threshold.
In one possible implementation, the apparatus further includes:
the third acquisition module is used for acquiring the frame rate of the target video in real time;
the second updating module is used for updating the rendering duration of each frame of image according to the frame rate;
and the second frame loss module is used for responding to the rendering duration being greater than a second target threshold value and performing frame loss processing on the target video.
In one possible implementation, the target resolution is less than or equal to a resolution threshold, and the apparatus is applied to a terminal;
the device also includes:
and the third rendering module is used for rendering the third image.
In one possible implementation, the target resolution is greater than a resolution threshold, and the apparatus is applied to a server;
the device also includes:
the compression module is used for compressing the third image to obtain compressed data of the third image;
and the sending module is used for sending the compressed data to a terminal, and the terminal renders the third image based on the compressed data.
In one possible implementation, the apparatus is configured to input the first image into an image processing model, perform the steps of upsampling, feature extraction, determining target convolution parameters and convolution processing by the image processing model, and output the third image.
In one possible implementation, the training process of the image processing model includes:
acquiring a sample first image and a sample target image, wherein the resolution of the sample target image is a target resolution, and the resolution of the sample first image is smaller than the target resolution;
up-sampling the sample first image to obtain a sample second image with the target resolution;
extracting the characteristics of the sample second image to obtain the gradient characteristics of the second image, wherein the gradient characteristics are used for indicating the relationship between pixel points and adjacent pixel points in the second image;
determining a target convolution parameter corresponding to the gradient feature according to the gradient feature of the second image and the sample target image;
and generating the corresponding relation between the gradient feature and the convolution parameter based on the target convolution parameter.
In one possible implementation, the image processing model is trained based on a sample first image and at least two sample target images, the resolutions of the at least two sample target images including at least one target resolution.
In one possible implementation, the image processing model includes a plurality of filters in series;
the steps of performing the upsampling, feature extraction, determining target convolution parameters and convolution processing by the image processing model, and outputting the third image include:
the step of upsampling is performed by the image processing model and the steps of feature extraction and convolution processing are performed by the plurality of serial filters.
In one possible implementation, the step of performing the feature extraction and convolution processing by the plurality of serial filters includes:
creating at least one object;
the steps of feature extraction, determining target convolution parameters and convolution processing are performed by the at least one object, the number of objects being less than the number of filters.
According to the device provided by the embodiment of the application, a new factor of gradient characteristics is introduced, and the target convolution parameters are directly determined from the existing corresponding relation according to the new factor, so that the time consumption of the determination process of the target convolution parameters is low, the image does not need to be subjected to fine characteristic extraction, the nonlinear mapping is also not needed, and the time consumption of characteristic extraction and processing is greatly reduced. And the high-resolution image can be obtained by performing one-step convolution processing on the target convolution parameter, and compared with a mode of reconstructing based on the extracted image characteristics, the method effectively simplifies the image processing steps, thereby improving the image processing efficiency.
It should be noted that: in the image processing apparatus provided in the above embodiment, when processing an image, only the division of the above functional modules is taken as an example, and in practical applications, the above function allocation can be completed by different functional modules according to needs, that is, the internal structure of the image processing apparatus is divided into different functional modules so as to complete all or part of the above described functions. In addition, the image processing apparatus and the image processing method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments in detail and are not described herein again.
The electronic device in the above method embodiment can be implemented as a terminal. For example, fig. 18 is a block diagram of a terminal according to an embodiment of the present disclosure. The terminal 1800 may be a portable mobile terminal such as: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer or a desktop computer. The terminal 1800 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and the like.
Generally, the terminal 1800 includes: a processor 1801 and a memory 1802.
The processor 1801 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 1801 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1801 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1801 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed by the display screen. In some embodiments, the processor 1801 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1802 may include one or more computer-readable storage media, which may be non-transitory. Memory 1802 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1802 is used to store at least one instruction for execution by processor 1801 to implement the image processing methods provided by the method embodiments herein.
In some embodiments, the terminal 1800 may further optionally include: a peripheral interface 1803 and at least one peripheral. The processor 1801, memory 1802, and peripheral interface 1803 may be connected by a bus or signal line. Each peripheral device may be connected to the peripheral device interface 1803 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1804, display 1805, camera assembly 1806, audio circuitry 1807, positioning assembly 1808, and power supply 1809.
The peripheral interface 1803 may be used to connect at least one peripheral associated with I/O (Input/Output) to the processor 1801 and the memory 1802. In some embodiments, the processor 1801, memory 1802, and peripheral interface 1803 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1801, the memory 1802, and the peripheral device interface 1803 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The Radio Frequency circuit 1804 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 1804 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 1804 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals. Optionally, the radio frequency circuitry 1804 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 1804 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 1804 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1805 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1805 is a touch display screen, the display screen 1805 also has the ability to capture touch signals on or over the surface of the display screen 1805. The touch signal may be input to the processor 1801 as a control signal for processing. At this point, the display 1805 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 1805 may be one, disposed on a front panel of the terminal 1800; in other embodiments, the number of the display screens 1805 may be at least two, and each of the display screens is disposed on a different surface of the terminal 1800 or is in a foldable design; in other embodiments, the display 1805 may be a flexible display disposed on a curved surface or a folded surface of the terminal 1800. Even more, the display 1805 may be arranged in a non-rectangular irregular figure, i.e. a shaped screen. The Display 1805 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 1806 is used to capture images or video. Optionally, the camera assembly 1806 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1806 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1807 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1801 for processing or inputting the electric signals to the radio frequency circuit 1804 to achieve voice communication. The microphones may be provided in a plurality, respectively, at different positions of the terminal 1800 for the purpose of stereo sound collection or noise reduction. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1801 or the radio frequency circuitry 1804 to sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 1807 may also include a headphone jack.
The positioning component 1808 is used to locate the current geographic position of the terminal 1800 for navigation or LBS (Location Based Service). The positioning component 1808 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
The power supply 1809 is used to power the various components within the terminal 1800. The power supply 1809 may be an alternating-current supply, a direct-current supply, a disposable battery, or a rechargeable battery. When the power supply 1809 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast-charge technology.
In some embodiments, the terminal 1800 also includes one or more sensors 1810. The one or more sensors 1810 include, but are not limited to: acceleration sensor 1811, gyro sensor 1812, pressure sensor 1813, fingerprint sensor 1814, optical sensor 1815, and proximity sensor 1816.
The acceleration sensor 1811 may detect the magnitude of acceleration on three coordinate axes of a coordinate system established with the terminal 1800. For example, the acceleration sensor 1811 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 1801 may control the display 1805 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1811. The acceleration sensor 1811 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1812 may detect a body direction and a rotation angle of the terminal 1800, and the gyro sensor 1812 may cooperate with the acceleration sensor 1811 to collect a 3D motion of the user on the terminal 1800. The processor 1801 may implement the following functions according to the data collected by the gyro sensor 1812: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensors 1813 may be disposed on the side bezel of the terminal 1800 and/or on the lower layer of the display 1805. When the pressure sensor 1813 is disposed on a side frame of the terminal 1800, a user's grip signal on the terminal 1800 can be detected, and the processor 1801 performs left-right hand recognition or shortcut operation according to the grip signal collected by the pressure sensor 1813. When the pressure sensor 1813 is disposed at the lower layer of the display screen 1805, the processor 1801 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 1805. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1814 is used to collect the fingerprint of the user, and the processor 1801 identifies the user according to the fingerprint collected by the fingerprint sensor 1814, or the fingerprint sensor 1814 identifies the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 1801 authorizes the user to perform relevant sensitive operations, including unlocking a screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 1814 may be disposed at the front, rear, or side of the terminal 1800. When a physical key or vendor Logo is provided on the terminal 1800, the fingerprint sensor 1814 may be integrated with the physical key or vendor Logo.
The optical sensor 1815 is used to collect the ambient light intensity. In one embodiment, the processor 1801 may control the display brightness of the display screen 1805 based on the ambient light intensity collected by the optical sensor 1815. Specifically, when the ambient light intensity is high, the display brightness of the display screen 1805 is increased; when the ambient light intensity is low, the display brightness of the display 1805 is reduced. In another embodiment, the processor 1801 may also dynamically adjust the shooting parameters of the camera assembly 1806 according to the intensity of the ambient light collected by the optical sensor 1815.
A proximity sensor 1816, also known as a distance sensor, is typically provided on the front panel of the terminal 1800. The proximity sensor 1816 is used to collect the distance between the user and the front surface of the terminal 1800. In one embodiment, when the proximity sensor 1816 detects that the distance between the user and the front surface of the terminal 1800 gradually decreases, the processor 1801 controls the display 1805 to switch from the screen-on state to the screen-off state; when the proximity sensor 1816 detects that the distance between the user and the front surface of the terminal 1800 gradually increases, the processor 1801 controls the display 1805 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 18 is not intended to be limiting of terminal 1800 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
The electronic device in the above method embodiment can be implemented as a server. For example, fig. 19 is a schematic structural diagram of a server according to an embodiment of the present application. The server 1900 may vary considerably depending on configuration or performance, and may include one or more processors (CPUs) 1901 and one or more memories 1902, where the memory 1902 stores at least one program code, and the at least one program code is loaded and executed by the processors 1901 to implement the image processing methods provided by the above method embodiments. Of course, the server can also have components such as a wired or wireless network interface and an input/output interface to facilitate input and output, and the server can also include other components for implementing the functions of the device, which are not described herein again.
In an exemplary embodiment, there is also provided a computer-readable storage medium, such as a memory, including at least one program code, the at least one program code being executable by a processor to perform the image processing method in the above-described embodiments. For example, the computer-readable storage medium can be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product or a computer program is also provided, which includes one or more program codes stored in a computer-readable storage medium. One or more processors of the electronic device can read the one or more program codes from the computer-readable storage medium and execute them, so that the electronic device can perform the image processing method described above.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
It should be understood that determining B from A does not mean determining B from A alone; B can also be determined from A and/or other information.
Those skilled in the art will appreciate that all or part of the steps for implementing the above embodiments can be implemented by hardware, or by a program instructing relevant hardware, and the program can be stored in a computer-readable storage medium; the above-mentioned storage medium can be a read-only memory, a magnetic disk, an optical disk, or the like.
The above description is intended only to be an alternative embodiment of the present application, and not to limit the present application, and any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (15)

1. An image processing method, characterized in that the method comprises:
according to a target resolution, performing up-sampling on a first image to obtain a second image of the target resolution, wherein the resolution of the first image is smaller than the target resolution;
extracting features of the second image to obtain gradient features of the second image, wherein the gradient features are used for indicating the relationship between pixel points and adjacent pixel points in the second image;
determining a target convolution parameter corresponding to the gradient feature of the second image according to the corresponding relation between the gradient feature and the convolution parameter;
and performing convolution processing on the second image based on the target convolution parameter to obtain a third image, wherein the resolution of the third image is the target resolution.
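For orientation, the following is a minimal Python/NumPy sketch of the four steps recited in claim 1. It is illustrative only: the 3x3 kernel size, the lookup key computed by the hypothetical helper quantize_feature, and the conv_params table are assumptions, not details taken from the claim.

```python
import cv2  # used here only for bicubic up-sampling
import numpy as np


def quantize_feature(gx: float, gy: float, bins: int = 24) -> int:
    """Hypothetical feature hash: bucket the gradient angle into `bins` bins."""
    angle = np.arctan2(gy, gx) % np.pi
    return int(angle / np.pi * bins) % bins


def super_resolve(first_image: np.ndarray, target_hw: tuple, conv_params: dict) -> np.ndarray:
    """Sketch of claim 1: up-sample, extract gradient features, look up target
    convolution parameters, convolve. Grayscale float32 input in [0, 1] assumed."""
    target_h, target_w = target_hw

    # Step 1: up-sample the first image to the target resolution -> second image.
    second = cv2.resize(first_image, (target_w, target_h), interpolation=cv2.INTER_CUBIC)

    # Step 2: gradient features describe each pixel's relation to its neighbours.
    gy, gx = np.gradient(second)

    # Steps 3 and 4: per pixel, fetch the kernel that corresponds to the
    # (quantized) gradient feature and apply it to the local 3x3 patch.
    padded = np.pad(second, 1, mode='edge')
    third = np.empty_like(second)
    for i in range(second.shape[0]):
        for j in range(second.shape[1]):
            kernel = conv_params[quantize_feature(gx[i, j], gy[i, j])]  # 3x3 array
            third[i, j] = float(np.sum(padded[i:i + 3, j:j + 3] * kernel))
    return third
```

Under these assumptions, conv_params could be a dict mapping each of the 24 bucket indices to a 3x3 NumPy array prepared offline; super_resolve(img, (1080, 1920), conv_params) would then return the third image at the target resolution.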
2. The method of claim 1, wherein the extracting the features of the second image to obtain gradient features of the second image comprises:
acquiring first gradient information of pixel points in the second image;
smoothing first gradient information of pixel points in the second image to obtain second gradient information of the pixel points;
and acquiring the gradient characteristic corresponding to the second gradient information.
3. The method of claim 2, wherein the obtaining the first gradient information of the pixel point in the second image comprises:
and acquiring the pixel value variation quantity at the pixel point according to the pixel value of any pixel point in the second image and the pixel value of a first adjacent pixel point, wherein the pixel value variation quantity is used as the first gradient information, and the distance between the first adjacent pixel point and the pixel point is smaller than a first distance threshold value.
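As one hedged reading of claim 3, assuming the "first adjacent pixel" is simply the pixel immediately to the right or below (i.e. the first distance threshold admits only direct neighbours), the first gradient information reduces to a finite difference:

```python
import numpy as np


def first_gradient_info(second_image: np.ndarray):
    """Pixel-value variation relative to the immediately adjacent pixel
    (assumption: the first distance threshold admits only direct neighbours)."""
    img = second_image.astype(np.float32)
    gx = np.diff(img, axis=1, append=img[:, -1:])  # I(x+1, y) - I(x, y)
    gy = np.diff(img, axis=0, append=img[-1:, :])  # I(x, y+1) - I(x, y)
    return gx, gy
```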
4. The method according to claim 2, wherein the smoothing of the first gradient information of the pixel point in the second image to obtain the second gradient information of the pixel point comprises:
and carrying out weighted summation on the first gradient information of any pixel point in the second image and the first gradient information of a second adjacent pixel point to obtain second gradient information of the pixel point, wherein the distance between the second adjacent pixel point and the pixel point is smaller than a second distance threshold value.
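Claim 4's smoothing is a weighted summation over nearby pixels. A common concrete choice (an assumption here, not mandated by the claim) is Gaussian weighting, whose support plays the role of the second distance threshold:

```python
from scipy.ndimage import gaussian_filter


def second_gradient_info(gx, gy, sigma: float = 1.0):
    """Weighted summation of each pixel's first gradient information with that
    of its neighbours; Gaussian weights are one possible weighting."""
    return gaussian_filter(gx, sigma=sigma), gaussian_filter(gy, sigma=sigma)
```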
5. The method of claim 2, wherein the second gradient information of a pixel point comprises gradient information in different directions;
the gradient feature comprises at least one of an angle, an intensity, and a correlation;
the obtaining of the gradient feature corresponding to the second gradient information includes:
and acquiring at least one of the angle, the strength and the correlation of the gradient of any pixel point according to the gradient information of the pixel point in different directions.
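Claim 5 names angle, strength, and correlation as gradient features. One plausible realisation, given here purely as an assumption in the spirit of lookup-filter super-resolution, derives all three from the eigen-decomposition of the local gradient structure tensor:

```python
import numpy as np


def gradient_features(gx_patch: np.ndarray, gy_patch: np.ndarray):
    """Angle, strength and correlation of the gradient around one pixel,
    computed from the 2x2 structure tensor of its neighbourhood (assumption)."""
    gxx = float(np.sum(gx_patch * gx_patch))
    gxy = float(np.sum(gx_patch * gy_patch))
    gyy = float(np.sum(gy_patch * gy_patch))

    eigvals, eigvecs = np.linalg.eigh(np.array([[gxx, gxy], [gxy, gyy]]))
    l1, l2 = max(eigvals[1], 0.0), max(eigvals[0], 0.0)  # l1 >= l2 >= 0
    v1 = eigvecs[:, 1]                                    # dominant direction

    angle = float(np.arctan2(v1[1], v1[0])) % np.pi       # orientation in [0, pi)
    strength = np.sqrt(l1)                                # edge strength
    denom = np.sqrt(l1) + np.sqrt(l2)
    correlation = (np.sqrt(l1) - np.sqrt(l2)) / denom if denom > 0 else 0.0
    return angle, strength, correlation
```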
6. The method according to claim 1, wherein the determining the target convolution parameter corresponding to the gradient feature of the second image according to the corresponding relationship between the gradient feature and the convolution parameter comprises:
quantizing the gradient features;
and performing the step of determining the target convolution parameter based on the quantized gradient features.
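A minimal sketch of the quantization step of claim 6, so that the continuous gradient features can index a finite correspondence table. The bin counts and thresholds below are illustrative assumptions:

```python
import numpy as np


def quantize_features(angle: float, strength: float, correlation: float,
                      angle_bins: int = 24,
                      strength_edges=(0.02, 0.08),
                      corr_edges=(0.3, 0.6)) -> int:
    """Map (angle, strength, correlation) to a single discrete index that can be
    used to look up the target convolution parameter directly."""
    a = int(angle / np.pi * angle_bins) % angle_bins
    s = int(np.digitize(strength, strength_edges))   # 0, 1 or 2
    c = int(np.digitize(correlation, corr_edges))    # 0, 1 or 2
    return (a * 3 + s) * 3 + c
```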
7. The method of claim 1, wherein the extracting the features of the second image to obtain gradient features of the second image comprises:
windowing the second image to obtain at least one image block;
performing feature extraction on the at least one image block to obtain a gradient feature of the at least one image block;
determining a target convolution parameter corresponding to the gradient feature of the second image according to the corresponding relationship between the gradient feature and the convolution parameter, including:
and determining a target convolution parameter corresponding to the gradient feature of the at least one image block according to the corresponding relation between the gradient feature and the convolution parameter.
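Claim 7 determines features, and hence target convolution parameters, per image block rather than per pixel. A simple windowing sketch follows; the block size and stride are assumptions:

```python
import numpy as np


def windowed_blocks(second_image: np.ndarray, block: int = 32, stride: int = 32):
    """Split the up-sampled image into image blocks; each block can then be given
    its own gradient feature and its own target convolution parameter."""
    h, w = second_image.shape[:2]
    blocks = []
    for top in range(0, h - block + 1, stride):
        for left in range(0, w - block + 1, stride):
            blocks.append(((top, left), second_image[top:top + block, left:left + block]))
    return blocks
```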
8. The method of claim 1, further comprising:
sharpening the third image to obtain a fourth image;
rendering the fourth image.
9. The method of claim 8, wherein the sharpening the third image to obtain a fourth image comprises:
acquiring difference information of the third image and the first image;
and acquiring a fourth image based on the difference information, the target coefficient and the first image, wherein the definition of the target area in the fourth image is greater than that of the target area in the first image.
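Claim 9 reads like an unsharp-masking style sharpening step. The sketch below assumes the first image is first up-sampled so that the resolutions match before the difference is taken, and that the fourth image is obtained by adding the scaled difference back onto that base; both details are assumptions the claim leaves open.

```python
import cv2
import numpy as np


def sharpen(third_image: np.ndarray, first_image: np.ndarray,
            target_coefficient: float = 1.5) -> np.ndarray:
    """Fourth image = base + target_coefficient * difference; values above 1
    amplify the high-frequency detail carried by the difference information."""
    h, w = third_image.shape[:2]
    base = cv2.resize(first_image, (w, h), interpolation=cv2.INTER_CUBIC)
    difference = third_image - base                 # the 'difference information'
    fourth = base + target_coefficient * difference
    return np.clip(fourth, 0.0, 1.0)
```

The coefficient setting instruction of claim 10 below then amounts to calling sharpen again with an updated target_coefficient.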
10. The method of claim 9, further comprising:
updating the target coefficient in response to a coefficient setting instruction;
and executing the step of acquiring the fourth image based on the updated target coefficient.
11. The method of claim 1, further comprising:
acquiring at least two frames of images of a target video;
performing the steps of up-sampling, feature extraction, target convolution parameter determination and convolution processing on at least one frame of image in the at least two frames of images to obtain at least one frame of target image corresponding to the at least one frame of image;
rendering images except the at least one frame of image in the at least two frames of images and the at least one frame of target image.
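A sketch of the per-frame flow of claim 11, assuming every other frame is super-resolved (the claim does not fix which frames are selected) and reusing the super_resolve sketch given under claim 1:

```python
def process_video_frames(frames, target_hw, conv_params, render):
    """Super-resolve a subset of the frames and render every frame."""
    for index, frame in enumerate(frames):
        if index % 2 == 0:                         # assumed selection rule
            frame = super_resolve(frame, target_hw, conv_params)
        render(frame)
```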
12. The method of claim 11, further comprising:
acquiring the time interval between the rendering times of any two frames of images in the target video in real time;
in response to the time interval being less than a first target threshold, discarding either one of the two frames of images.
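Claim 12's frame-dropping rule, sketched with a hypothetical first target threshold expressed in seconds:

```python
import time


class FrameDropper:
    """Discard a frame when two consecutive rendering times are too close together."""

    def __init__(self, first_target_threshold: float = 1.0 / 120):
        self.threshold = first_target_threshold
        self.last_render_time = None

    def should_drop(self) -> bool:
        now = time.monotonic()
        if self.last_render_time is not None and now - self.last_render_time < self.threshold:
            return True          # interval below the threshold: drop this frame
        self.last_render_time = now
        return False
```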
13. An image processing apparatus, characterized in that the apparatus comprises:
the up-sampling module is used for up-sampling a first image according to a target resolution to obtain a second image of the target resolution, wherein the resolution of the first image is smaller than the target resolution;
the characteristic extraction module is used for extracting the characteristics of the second image to obtain the gradient characteristics of the second image, and the gradient characteristics are used for indicating the relationship between pixel points and adjacent pixel points in the second image;
the determining module is used for determining a target convolution parameter corresponding to the gradient feature of the second image according to the corresponding relation between the gradient feature and the convolution parameter;
and the convolution module is used for performing convolution processing on the second image based on the target convolution parameter to obtain a third image, and the resolution of the third image is the target resolution.
14. An electronic device, comprising one or more processors and one or more memories having at least one program code stored therein, the at least one program code being loaded and executed by the one or more processors to implement the image processing method of any one of claims 1 to 12.
15. A computer-readable storage medium, characterized in that at least one program code is stored in the storage medium, which is loaded and executed by a processor to implement the image processing method according to any one of claims 1 to 12.
CN202010872723.6A 2020-08-26 2020-08-26 Image processing method, device, equipment and storage medium Active CN111932463B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010872723.6A CN111932463B (en) 2020-08-26 2020-08-26 Image processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010872723.6A CN111932463B (en) 2020-08-26 2020-08-26 Image processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111932463A true CN111932463A (en) 2020-11-13
CN111932463B CN111932463B (en) 2023-05-30

Family

ID=73305772

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010872723.6A Active CN111932463B (en) 2020-08-26 2020-08-26 Image processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111932463B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112597916A (en) * 2020-12-24 2021-04-02 中标慧安信息技术股份有限公司 Face image snapshot quality analysis method and system
CN113734197A (en) * 2021-09-03 2021-12-03 合肥学院 Unmanned intelligent control scheme based on data fusion
CN114827723A (en) * 2022-04-25 2022-07-29 阿里巴巴(中国)有限公司 Video processing method and device, electronic equipment and storage medium
CN116055802A (en) * 2022-07-21 2023-05-02 荣耀终端有限公司 Image frame processing method and electronic equipment
CN116385260A (en) * 2022-05-19 2023-07-04 上海玄戒技术有限公司 Image processing method, device, chip, electronic equipment and medium
CN118279181A (en) * 2024-05-31 2024-07-02 杭州海康威视数字技术股份有限公司 Training method of adjustable parameter image restoration model and image restoration method of adjustable parameter

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106157249A (en) * 2016-08-01 2016-11-23 西安电子科技大学 Based on the embedded single image super-resolution rebuilding algorithm of optical flow method and sparse neighborhood
CN106204489A (en) * 2016-07-12 2016-12-07 四川大学 Single image super resolution ratio reconstruction method in conjunction with degree of depth study with gradient conversion
CN107527321A (en) * 2017-08-22 2017-12-29 维沃移动通信有限公司 A kind of image rebuilding method, terminal and computer-readable recording medium
WO2018006095A2 (en) * 2016-07-01 2018-01-04 Digimarc Corporation Image-based pose determination
US20180374197A1 (en) * 2016-11-30 2018-12-27 Boe Technology Group Co., Ltd. Human face resolution re-establishing method and re-establishing system, and readable medium
CN109118432A (en) * 2018-09-26 2019-01-01 福建帝视信息科技有限公司 A kind of image super-resolution rebuilding method based on Rapid Circulation convolutional network
CN109903221A (en) * 2018-04-04 2019-06-18 华为技术有限公司 Image oversubscription method and device
US20190206095A1 (en) * 2017-12-29 2019-07-04 Tsinghua University Image processing method, image processing device and storage medium
US20190304063A1 (en) * 2018-03-29 2019-10-03 Mitsubishi Electric Research Laboratories, Inc. System and Method for Learning-Based Image Super-Resolution
CN110428378A (en) * 2019-07-26 2019-11-08 北京小米移动软件有限公司 Processing method, device and the storage medium of image
CN110599402A (en) * 2019-08-30 2019-12-20 西安理工大学 Image super-resolution reconstruction method based on multi-feature sparse representation
WO2020062191A1 (en) * 2018-09-29 2020-04-02 华为技术有限公司 Image processing method, apparatus and device
CN111182254A (en) * 2020-01-03 2020-05-19 北京百度网讯科技有限公司 Video processing method, device, equipment and storage medium
CN111325726A (en) * 2020-02-19 2020-06-23 腾讯医疗健康(深圳)有限公司 Model training method, image processing method, device, equipment and storage medium
CN111402143A (en) * 2020-06-03 2020-07-10 腾讯科技(深圳)有限公司 Image processing method, device, equipment and computer readable storage medium
CN111429347A (en) * 2020-03-20 2020-07-17 长沙理工大学 Image super-resolution reconstruction method and device and computer-readable storage medium

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018006095A2 (en) * 2016-07-01 2018-01-04 Digimarc Corporation Image-based pose determination
CN106204489A (en) * 2016-07-12 2016-12-07 四川大学 Single image super resolution ratio reconstruction method in conjunction with degree of depth study with gradient conversion
CN106157249A (en) * 2016-08-01 2016-11-23 西安电子科技大学 Based on the embedded single image super-resolution rebuilding algorithm of optical flow method and sparse neighborhood
US20180374197A1 (en) * 2016-11-30 2018-12-27 Boe Technology Group Co., Ltd. Human face resolution re-establishing method and re-establishing system, and readable medium
CN107527321A (en) * 2017-08-22 2017-12-29 维沃移动通信有限公司 A kind of image rebuilding method, terminal and computer-readable recording medium
US20190206095A1 (en) * 2017-12-29 2019-07-04 Tsinghua University Image processing method, image processing device and storage medium
US20190304063A1 (en) * 2018-03-29 2019-10-03 Mitsubishi Electric Research Laboratories, Inc. System and Method for Learning-Based Image Super-Resolution
CN109903221A (en) * 2018-04-04 2019-06-18 华为技术有限公司 Image oversubscription method and device
CN109118432A (en) * 2018-09-26 2019-01-01 福建帝视信息科技有限公司 A kind of image super-resolution rebuilding method based on Rapid Circulation convolutional network
WO2020062191A1 (en) * 2018-09-29 2020-04-02 华为技术有限公司 Image processing method, apparatus and device
CN110428378A (en) * 2019-07-26 2019-11-08 北京小米移动软件有限公司 Processing method, device and the storage medium of image
CN110599402A (en) * 2019-08-30 2019-12-20 西安理工大学 Image super-resolution reconstruction method based on multi-feature sparse representation
CN111182254A (en) * 2020-01-03 2020-05-19 北京百度网讯科技有限公司 Video processing method, device, equipment and storage medium
CN111325726A (en) * 2020-02-19 2020-06-23 腾讯医疗健康(深圳)有限公司 Model training method, image processing method, device, equipment and storage medium
CN111429347A (en) * 2020-03-20 2020-07-17 长沙理工大学 Image super-resolution reconstruction method and device and computer-readable storage medium
CN111402143A (en) * 2020-06-03 2020-07-10 腾讯科技(深圳)有限公司 Image processing method, device, equipment and computer readable storage medium

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
RAO MUHAMMAD UMER: "Deep Super-Resolution Network for Single Image Super-Resolution with Realistic Degradations", 《ICDSC 2019: PROCEEDINGS OF THE 13TH INTERNATIONAL CONFERENCE ON DISTRIBUTED SMART CAMERAS》 *
YU JIAN; GUO CHUNSHENG: "Sparse representation super-resolution restoration based on gradient features", Journal of Hangzhou Dianzi University
HU CHANGSHENG; ZHAN SHU; WU CONGZHONG: "Image super-resolution reconstruction based on deep feature learning", Acta Automatica Sinica
HUANG JIANHUA; WANG DANDAN; JIN YE: "Single-image super-resolution reconstruction algorithm combining multiple features", Journal of Harbin Institute of Technology
HUANG SHUO; HU YONG; GONG CAILAN; ZHENG FUQIANG: "Super-resolution reconstruction algorithm for infrared salient regions based on sparse coding", Journal of Infrared and Millimeter Waves

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112597916A (en) * 2020-12-24 2021-04-02 中标慧安信息技术股份有限公司 Face image snapshot quality analysis method and system
CN112597916B (en) * 2020-12-24 2021-10-26 中标慧安信息技术股份有限公司 Face image snapshot quality analysis method and system
CN113734197A (en) * 2021-09-03 2021-12-03 合肥学院 Unmanned intelligent control scheme based on data fusion
CN114827723A (en) * 2022-04-25 2022-07-29 阿里巴巴(中国)有限公司 Video processing method and device, electronic equipment and storage medium
CN114827723B (en) * 2022-04-25 2024-04-09 阿里巴巴(中国)有限公司 Video processing method, device, electronic equipment and storage medium
CN116385260A (en) * 2022-05-19 2023-07-04 上海玄戒技术有限公司 Image processing method, device, chip, electronic equipment and medium
CN116385260B (en) * 2022-05-19 2024-02-09 上海玄戒技术有限公司 Image processing method, device, chip, electronic equipment and medium
CN116055802A (en) * 2022-07-21 2023-05-02 荣耀终端有限公司 Image frame processing method and electronic equipment
CN116055802B (en) * 2022-07-21 2024-03-08 荣耀终端有限公司 Image frame processing method and electronic equipment
CN118279181A (en) * 2024-05-31 2024-07-02 杭州海康威视数字技术股份有限公司 Training method of adjustable parameter image restoration model and image restoration method of adjustable parameter
CN118279181B (en) * 2024-05-31 2024-08-27 杭州海康威视数字技术股份有限公司 Training method of adjustable parameter image restoration model and image restoration method of adjustable parameter

Also Published As

Publication number Publication date
CN111932463B (en) 2023-05-30

Similar Documents

Publication Publication Date Title
CN110136136B (en) Scene segmentation method and device, computer equipment and storage medium
CN111932463B (en) Image processing method, device, equipment and storage medium
CN111091166B (en) Image processing model training method, image processing device, and storage medium
CN110288518B (en) Image processing method, device, terminal and storage medium
CN110555839A (en) Defect detection and identification method and device, computer equipment and storage medium
CN112598686B (en) Image segmentation method and device, computer equipment and storage medium
CN114820633A (en) Semantic segmentation method, training device and training equipment of semantic segmentation model
CN111541907A (en) Article display method, apparatus, device and storage medium
CN110796248A (en) Data enhancement method, device, equipment and storage medium
CN110290426B (en) Method, device and equipment for displaying resources and storage medium
CN111836073B (en) Method, device and equipment for determining video definition and storage medium
CN115205164B (en) Training method of image processing model, video processing method, device and equipment
CN113706440A (en) Image processing method, image processing device, computer equipment and storage medium
CN110675412A (en) Image segmentation method, training method, device and equipment of image segmentation model
CN110991457A (en) Two-dimensional code processing method and device, electronic equipment and storage medium
CN110807769B (en) Image display control method and device
CN113822955B (en) Image data processing method, image data processing device, computer equipment and storage medium
CN112818979A (en) Text recognition method, device, equipment and storage medium
CN114283299A (en) Image clustering method and device, computer equipment and storage medium
CN114359225A (en) Image detection method, image detection device, computer equipment and storage medium
CN112115900A (en) Image processing method, device, equipment and storage medium
CN116757970B (en) Training method of video reconstruction model, video reconstruction method, device and equipment
CN112528760B (en) Image processing method, device, computer equipment and medium
CN112489006A (en) Image processing method, image processing device, storage medium and terminal
CN113570510A (en) Image processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant